Garg (writesonic): How ChatGPT 5.4 searches differently - site: queries & more
- March 12
- 2 min read

Key Takeaways:
Writesonic analyzed 50 SaaS-related prompts in ChatGPT, comparing GPT-5.3 Instant (the new default) and GPT-5.4 Thinking (the new premium model) against GPT-5.2 Instant and GPT-5.2 Thinking, and found:
In GPT-5.4 Thinking, 53% of citations came from brand websites, compared to only 8% in GPT-5.3 Instant, which cited reddit.com, techradar.com, forbes.com, and similar sites instead - a major shift
GPT-5.4 Thinking cites very different sources than GPT-5.3, most of them not surfaced via Google/Bing Search
Thinking models trigger far more query fan-out: GPT-5.4 Thinking averages 8.5 fan-out queries per prompt, well above GPT-5.2 Thinking's 5.2, while Instant models generate at most one fan-out query on average
GPT-5.4 generates site: queries that search for results on specific (brand) websites - the decision whether you are cited may be made before search-based retrieval (possibly based on training data?)
GPT-5.3 sends one broad query and gets ~27 results; GPT-5.4 sends ~8.5 specific queries and gets ~13 results per query.
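A back-of-the-envelope calculation, using only the averages reported in the Writesonic study above, shows what this means for the pool of results a model can draw citations from:

```python
# Rough retrieval-volume comparison per prompt, based on the
# study's reported averages (illustrative arithmetic only).

def retrieval_volume(queries_per_prompt: float, results_per_query: float) -> float:
    """Average number of search results available as citation candidates."""
    return queries_per_prompt * results_per_query

gpt_5_3_instant = retrieval_volume(1, 27)     # one broad query, ~27 results
gpt_5_4_thinking = retrieval_volume(8.5, 13)  # ~8.5 specific queries, ~13 results each

print(f"GPT-5.3 Instant:  ~{gpt_5_3_instant:.0f} candidate results per prompt")
print(f"GPT-5.4 Thinking: ~{gpt_5_4_thinking:.0f} candidate results per prompt")
```

So GPT-5.4 Thinking ends up with roughly four times as many candidate results, spread across much narrower queries.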
Wells (PeecAI) confirms this based on their study of 50 consumer-electronics prompts:
In 54% of non-branded prompts, the model internally searched for specific brands or products during its reasoning process.
(example: "Logitech MX Brio official specs 4K 60fps low light Logitech")
Thinking models seem to narrow down domains early in the process, concentrating on a limited set of them
With ChatGPT 5.4, fan-out queries are only exposed to paid subscribers, no longer visible to all users
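To make the pattern concrete, here is an illustrative sketch (not OpenAI's actual mechanism, and the query templates and domains are invented) of what a fan-out set mixing broad queries with brand-targeted site: queries might look like for one non-branded prompt:

```python
# Hypothetical illustration of the fan-out pattern described above:
# a few broad queries plus site:-restricted queries aimed at
# specific brand domains.

def fan_out(topic: str, brand_domains: list[str]) -> list[str]:
    """Build an example fan-out query set for a non-branded topic."""
    broad = [f"best {topic}", f"{topic} comparison"]
    # site: queries restrict results to one (brand) website each:
    targeted = [f"site:{d} {topic} specs" for d in brand_domains]
    return broad + targeted

for q in fan_out("4K webcam", ["logitech.com", "elgato.com"]):
    print(q)
```

The point of the sketch: if the model already knows which brand domains to target with site: queries, being in that shortlist matters more than ranking in a broad web search.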
Measures to optimize:
Identify the data sources that AI chats such as ChatGPT rely on in your niche
Make sure your brand is prominently mentioned on those sources
Monitor this over time, or at least re-assess as soon as a new model drops
Follow industry leaders to keep up with new research and findings on how AI chat search and retrieval evolves
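The first two measures boil down to counting which domains actually get cited in your niche. A minimal sketch, using hypothetical collected citation URLs (the URLs below are placeholders, not data from either study):

```python
# Count the share of citations per domain across a batch of
# AI-chat answers you have collected for tracked prompts.
from collections import Counter
from urllib.parse import urlparse

def domain_share(cited_urls: list[str]) -> dict[str, float]:
    """Return each domain's share of all citations, most-cited first."""
    domains = Counter(
        urlparse(u).netloc.removeprefix("www.") for u in cited_urls
    )
    total = sum(domains.values())
    return {d: n / total for d, n in domains.most_common()}

# Hypothetical citations gathered from ChatGPT answers:
urls = [
    "https://www.reddit.com/r/SaaS/example-thread",
    "https://techradar.com/reviews/example-review",
    "https://www.reddit.com/r/software/example-thread",
    "https://yourbrand.com/docs/example-page",
]
for domain, share in domain_share(urls).items():
    print(f"{domain}: {share:.0%}")
```

Re-running this after each model release shows whether the source mix shifts the way it did between GPT-5.3 and GPT-5.4.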
