OpenAI said Codex reached 3 million weekly users and reset usage limits, with another reset planned for each additional million users up to 10 million. Codex will also retire the gpt-5.2 and gpt-5.1-era model lineup for ChatGPT-sign-in users on April 14, so teams should watch for model-default changes.

Codex now has an official growth number, an official quota reset, and an official model-picker cleanup, all on the same day. OpenAI's February post on scaling past rate limits covers the infrastructure side, the new April 7 Codex changelog entry covers the model changes, and the pricing picture got another tweak last week with pay-as-you-go Codex seats for Business and Enterprise. The product page, Introducing Codex, still describes Codex as a cloud agent that can write features, answer codebase questions, fix bugs, and propose PRs in parallel in the ChatGPT app.
The hard number here is new. Sottiaux said Codex reached 3 million weekly users, up from 2 million a little under a month ago.
The quota policy is also unusually explicit. Altman said OpenAI is resetting usage limits now, then repeating that reset every time Codex adds another million weekly users until it reaches 10 million.
That turns a one-off celebration into a public growth ladder. It also gives engineers a concrete signal that Codex demand is growing faster than OpenAI wants static limits to imply.
The retiring models are the gpt-5.2 and gpt-5.1-era ChatGPT-sign-in lineup noted above, with a replacement lineup taking over after April 14.
The tweet thread from OpenAI Devs matched the GitHub discussion, and the Codex changelog adds one extra operational detail: starting April 7, those retiring models no longer appear in the model picker for ChatGPT-sign-in users, even before the full April 14 removal.
If a team still needs another API-supported model, both the tweet thread and the changelog point to the same escape hatch: sign in to Codex with an API key instead of a ChatGPT account.
The reset makes more sense next to OpenAI's own infrastructure and pricing changes. In Beyond rate limits, OpenAI said Codex and Sora both saw usage push beyond early expectations, and described a real-time access engine that counts usage and lets users keep going by purchasing credits after they exceed rate limits.
That framing already showed up in product packaging. On April 2, OpenAI said in Codex now offers pay-as-you-go pricing for teams that Business and Enterprise workspaces can buy Codex-only seats with no fixed seat fee, no rate limits, and billing based on token consumption.
The current Codex rate card describes the same shift in plainer terms: usage is priced directly on input, cached input, and output tokens rather than rough per-message estimates. The quota reset looks less like a random giveaway and more like a pressure release inside a system OpenAI is still rebalancing.
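The arithmetic behind that rate card is easy to sketch. The per-million-token prices and the function name below are illustrative placeholders, not OpenAI's actual rates or API:

```python
# Sketch of token-based billing as the rate card describes it:
# separate prices for fresh input, cached input, and output tokens.
# These rates are hypothetical placeholders, not real Codex prices.
RATES_PER_MILLION = {
    "input": 1.25,          # USD per 1M fresh input tokens (hypothetical)
    "cached_input": 0.125,  # USD per 1M cached input tokens (hypothetical)
    "output": 10.00,        # USD per 1M output tokens (hypothetical)
}

def usage_cost(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request under per-token pricing."""
    counts = {
        "input": input_tokens,
        "cached_input": cached_input_tokens,
        "output": output_tokens,
    }
    return sum(counts[k] * RATES_PER_MILLION[k] / 1_000_000 for k in counts)

# e.g. a request with 40k fresh input, 200k cached input, 5k output tokens
cost = usage_cost(40_000, 200_000, 5_000)
```

The point of the structure is visible in the cache line: a large cached context costs an order of magnitude less than the same tokens sent fresh, which is why per-token billing and per-message estimates diverge so sharply for long agent sessions.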
For a tool story, the most revealing community signal was not benchmark bragging. It was people talking about session length and interruption patterns.
Kolt Regaskes said Codex seemed more efficient than Claude Code and could work for hours without interruption, which is exactly the kind of usage pattern that makes a quota reset immediately noticeable. In the other direction, the OpenAI community thread on Codex rate limits was opened after moderators said they had received a significant number of reports that limits were too restrictive.
A few lighter reactions point in the same direction. One reposted comparison praised Codex for disagreeing with the user more often than Claude Code, while another reposted workflow list described doing coding, browser testing, and cleanup inside the Codex app itself.
The last useful detail came from complaints about how Codex actually executes work. zeeg's WSL post said the product's Windows support still amounted to wrapping commands in wsl.exe, then argued that Codex leans too hard on shell interaction and inherits the usual escaping bugs.
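That escaping complaint is easy to reproduce in miniature. The wrapper below is a simplified stand-in for whatever Codex actually does on Windows, but it shows the bug class: naive string interpolation into a quoted `wsl.exe -- bash -c` command breaks on quotes and shell metacharacters, while `shlex.quote` passes the inner command through intact:

```python
import shlex

def wrap_naive(cmd: str) -> str:
    # Naive wrapping: the inner command is pasted into a double-quoted
    # string, so any quotes or $-expansion inside cmd escape the quoting.
    return f'wsl.exe -- bash -c "{cmd}"'

def wrap_quoted(cmd: str) -> str:
    # shlex.quote escapes the command so the inner shell
    # receives it verbatim as a single argument.
    return f"wsl.exe -- bash -c {shlex.quote(cmd)}"

# Quotes plus a variable expansion inside the command to be wrapped.
hostile = 'echo "done"; rm -rf $HOME'

naive = wrap_naive(hostile)    # inner quotes collide with the outer ones
safe = wrap_quoted(hostile)    # whole command safely single-quoted
```

In the naive version the inner `"done"` terminates the outer double quote early and `$HOME` would be expanded by the wrong shell; the quoted version hands bash exactly one argument.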
A screenshot from the same user showed another flavor of that problem. Codex tried a native GitHub connector call to create a draft PR, hit a 403 "Resource not accessible by integration" error, then fell back to a successful gh pr create command in the shell.
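The screenshot describes a generic control flow: try the native integration first, and fall back to the shell when it is refused. A minimal sketch of that pattern, with hypothetical function names and error type (the connector call is stubbed to always fail with the 403 from the screenshot):

```python
class ConnectorError(Exception):
    """Hypothetical error for a refused native-connector call."""
    def __init__(self, status: int, message: str):
        super().__init__(f"{status}: {message}")
        self.status = status

def create_pr_via_connector(title: str) -> str:
    # Stand-in for the native GitHub connector call; here it always
    # fails, mirroring the error in the screenshot.
    raise ConnectorError(403, "Resource not accessible by integration")

def create_pr_via_shell(title: str) -> list[str]:
    # Fallback: build the GitHub CLI command Codex ran in the shell.
    return ["gh", "pr", "create", "--draft", "--title", title]

def create_draft_pr(title: str):
    try:
        return create_pr_via_connector(title)
    except ConnectorError as err:
        if err.status != 403:
            raise
        # Permission refusal: drop to the shell path.
        # In real use this command would run via subprocess.run(cmd, check=True).
        return create_pr_via_shell(title)
```

The design choice worth noting is that the fallback only fires on a permission-style 403; any other connector failure still surfaces, so the shell path does not silently mask unrelated bugs.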
That is a pretty good snapshot of Codex right now: fast growth, looser quotas, a cleaned-up model roster, and a product surface that still sometimes escapes back into the terminal when the nicer path breaks.