AI Primer
release

ChatGPT ships GPT-5.5 Instant by default with Memory Sources

OpenAI is rolling GPT-5.5 Instant into ChatGPT as the default model and exposing it in the API as gpt-5.5-chat-latest, alongside Memory Sources for personalized replies. OpenAI also claims 52.5% fewer hallucinated claims on high-stakes prompts, so watch for behavior changes in production prompts.

5 min read

TL;DR

  • OpenAI is rolling GPT-5.5 Instant into ChatGPT as the new default experience, per OpenAI's rollout post and ChatGPTapp's rollout note, with API access exposed as gpt-5.5-chat-latest in the same rollout window.
  • According to OpenAI's launch thread and the official product post, GPT-5.5 Instant is pitched around tighter answers, stronger factuality, better image and STEM handling, and more useful web-search decisions.
  • The biggest hard numbers live in the official product post: OpenAI says GPT-5.5 Instant made 52.5% fewer hallucinated claims on high-stakes prompts, while TheRealAdamG's quote post highlights a separate 37.3% reduction on user-flagged factual-error conversations.
  • Personalization is shipping alongside the model update, and OpenAI's memory thread says ChatGPT can now draw more explicitly from saved memories, past chats, files, and connected Gmail accounts, while exposing those inputs through Memory Sources.
  • A buried product detail in the ChatGPT help page says the new Instant surface can automatically route some harder requests to GPT-5.5 Thinking, which makes the default model less singular than the name suggests.

You can read the official launch post, the ChatGPT help page, and the older GPT-5.5 launch post for the broader model family context. There is also an active Hacker News thread on GPT-5.5's earlier rollout, where the discussion tilted toward Codex, efficiency, and access details more than the consumer-facing Memory Sources layer.

Default model

The shipping fact is simple: GPT-5.5 Instant becomes the default ChatGPT model for all users, with OpenAI's rollout post saying the consumer rollout will happen over two days and the official product post framing it as an upgrade to the everyday model rather than a new premium tier.

OpenAI's own phrasing is unusually productized. Eric Mitchell's post says the writing style is now "plainer and more straightforward," while Rohan Paul's screenshot thread surfaced one example chart claiming 30.2% fewer words and 29.2% fewer lines on a comparison answer.

Factuality numbers

The headline metric is factuality. In the official product post, OpenAI says GPT-5.5 Instant produced 52.5% fewer hallucinated claims than GPT-5.3 Instant on high-stakes prompts in medicine, law, and finance.

A second internal metric got less attention: TheRealAdamG's quote post pulled out OpenAI's claim that GPT-5.5 Instant also cut inaccurate claims by 37.3% on especially difficult conversations users had already flagged for factual errors.

OpenAI also published benchmark deltas on everyday capability slices in the official product post.

Memory Sources

The more novel product change is not the model slug but the observability around personalization. OpenAI's memory thread says ChatGPT can now use saved memories, past chats, uploaded files, and connected Gmail context more effectively when generating replies.

Memory Sources is the visible control layer on top of that. According to OpenAI's memory thread, ChatGPT will show what relevant context influenced a response, then let users update, delete, or disconnect that source. OpenAI's rollout post adds a staggered release detail: personalization upgrades start with Plus and Pro users on web, while Memory Sources goes out across all consumer plans on web before mobile catches up.

That makes this rollout partly a UI and trust feature. OpenAI is not only broadening what ChatGPT can remember; it is exposing a per-response audit trail for some of that personalization.

API slug

For developers, the immediate operational detail is naming and surface area. OpenAI's rollout post says the model is available in the API as gpt-5.5-chat-latest, which keeps the consumer launch and API alias aligned.
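In practice, targeting the new default from code should be a one-line change: swap the model slug in the request body. A minimal sketch of the request payload, assuming the standard /v1/chat/completions request shape (the slug gpt-5.5-chat-latest is the one named in OpenAI's rollout post; the field names follow OpenAI's existing chat-completions convention):

```python
import json

# Build a chat-completions request body targeting the new alias.
# Only the "model" value changes relative to an older integration;
# the rest of the payload shape stays the same.
payload = {
    "model": "gpt-5.5-chat-latest",
    "messages": [
        {"role": "user", "content": "Summarize this release note in two sentences."}
    ],
}

body = json.dumps(payload)
print(body)
```

Because the alias is a "latest" pointer rather than a pinned snapshot, teams that need reproducible behavior in production may prefer to pin a dated model version once one is published, and reserve the alias for surfaces that should track the consumer default.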

That alias lands after the broader GPT-5.5 family had already appeared elsewhere. The earlier GPT-5.5 launch post said GPT-5.5 and GPT-5.5 Pro hit ChatGPT, Codex, and later the API in April, so this week's update is specifically about bringing the Instant variant into the default chat surface instead of introducing the 5.5 line from scratch.

Community reaction also suggests the UX change may matter as much as the model card. BorisMPower's first-use post said this was the first Instant model he had started using regularly, while Sam Altman's follow-up post explicitly nudged people who had been staying on thinking models to try the new default.

Auto-switching Instant

The most interesting buried detail is outside the launch thread. The ChatGPT help page says GPT-5.5 Instant is part of a "single auto-switching system" and that, when users select Instant, ChatGPT can decide to use GPT-5.3 Instant or GPT-5.5 Thinking depending on task complexity.

That means the new default is partly a router, not just a single model endpoint. The help page also says GPT-5.5 Thinking can show a short preamble before reasoning begins and can accept follow-up steering while it is still thinking, a new interaction detail that does not appear in OpenAI's main launch thread or the official product post.
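The shape of such a router can be sketched in a few lines. This is purely illustrative: the three model names come from the help page, but the complexity heuristic below is an assumption, since OpenAI has not published its routing criteria.

```python
# Hypothetical sketch of a "single auto-switching system": the user selects
# Instant, but a router may pick a different backing model based on an
# estimate of task complexity. The heuristic here is invented for
# illustration only.

def route_instant_request(prompt: str) -> str:
    """Pick a backing model for a request made under the Instant label."""
    hard_markers = ("prove", "derive", "step by step", "multi-step")
    if any(marker in prompt.lower() for marker in hard_markers):
        # Escalate apparently hard requests to the reasoning model.
        return "gpt-5.5-thinking"
    if len(prompt) < 40:
        # Trivial requests can fall back to the older, cheaper Instant.
        return "gpt-5.3-instant"
    # Everyday requests go to the new default.
    return "gpt-5.5-instant"

print(route_instant_request("What's the capital of France?"))
print(route_instant_request("Derive the closed form of this recurrence step by step."))
```

The operational takeaway for anyone monitoring ChatGPT behavior is that responses under the "Instant" label may come from more than one underlying model, so per-response variance is expected by design.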
