Users report OpenAI Codex raises limits 10x on May 5 reset
Users report OpenAI increased Codex limits about 10x on the May 5 reset, with much longer /goal sessions and more computer-use demos. That should extend unattended runs for app migrations and visual prototyping.

TL;DR
- Users went into May 5 expecting a normal Codex quota reset because LLMJunky's pre-reset post framed it as a "token party"; LLMJunky's follow-up then reported the reset landed with limits that looked roughly 10x higher.
- OpenAI's own Codex pricing page already says the $100 Pro tier gets 2x promotional Codex usage through May 31, 2026, which turns the standard 5x Plus allowance into 10x, and the first reset after that change is what users appear to be noticing in practice.
- Long unattended runs are the immediate visible effect: thekitze's migration update said /goal kept refactoring an app for more than four hours, and thekitze's later screenshot showed the same goal still active after 14 hours.
- The new headroom is showing up in creative workflows too, where bas_fijneman's Peekaboo demo used GPT Image 2.0 for assets, GPT-5.5 for product logic, and Codex for an interactive app prototype with animations and state.
- Codex's broader April update also matters here: OpenAI said in its launch post that Codex can run ongoing work, generate images, and use your computer, and LLMJunky's macOS demo turned that computer-use feature into a mouse-only digital painting.
OpenAI's launch post bundled computer use, image generation, memories, plugins, and ongoing work into one big Codex expansion. The 0.128.0 changelog is where /goal first shows up as persisted workflows with pause and resume controls. The pricing page quietly spells out the part users are now celebrating: Pro at $100 gets 2x promo usage through May 31, 2026, which makes it 10x Plus instead of the usual 5x.
May 5 reset
The cleanest read is that users did not discover a surprise loophole; they hit the first reset where OpenAI's promo math became visible in the product.
According to LLMJunky's pre-reset prediction, people were already expecting a May 5 rollover. After the reset, LLMJunky's follow-up said Codex limits had "10x'd," while LLMJunky's correction acknowledged that May 5 was already a scheduled reset day.
OpenAI's official documentation points in the same direction. The Codex pricing page says Pro $100 normally gets 5x Plus usage, but also says that tier is doubled through May 31, 2026, which makes it 10x Plus during the promo window. The same page adds that limits vary a lot by model and task size, and that long-running sessions burn more allowance because they hold more context.
That last caveat explains why screenshots can look contradictory. In om_patel5's usage screenshot, one $20 plan user said they had not hit Codex limits even though Claude Max was already 38% used for the week, whereas steipete's rate-limit screenshot showed a GPT-5.5 tokens-per-minute cap being exhausted on the API side. Message quotas, weekly windows, and TPM caps are different bottlenecks.
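The context caveat above can be sketched numerically. This is an illustrative model, not OpenAI's actual billing: it just assumes each turn resends the accumulated context, so one long session costs far more tokens than the same number of turns split across short sessions. The turn counts and per-turn token figure are made up for illustration.

```javascript
// Toy model of why long-running sessions burn more allowance:
// each turn is billed on the whole accumulated context, so total
// cost grows roughly quadratically with turn count.
function sessionTokens(turns, tokensPerTurn) {
  let context = 0;
  let total = 0;
  for (let i = 0; i < turns; i++) {
    context += tokensPerTurn; // context grows every turn
    total += context;         // the full context is processed each turn
  }
  return total;
}

// One 40-turn session vs. four 10-turn sessions (illustrative numbers):
sessionTokens(40, 500);     // 410000 tokens
4 * sessionTokens(10, 500); // 110000 tokens
```

Same number of turns, nearly 4x the spend, which is why a single 14-hour /goal run eats allowance in a way four short sessions would not.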
/goal runs
The bigger story for builders is not the banner; it is how much longer /goal jobs can stay alive.
OpenAI's 0.128.0 release notes describe /goal as persisted workflows with app-server APIs, model tools, runtime continuation, and TUI controls for create, pause, resume, and clear. OpenAI's pricing page also says local messages and cloud tasks share a five-hour window, with extra weekly limits on top, so a higher allowance directly changes how ambitious these runs can get.
Thekitze's benji.so migration is the strongest public example in this evidence set. His first post showed a /goal prompt to port an app into a new monorepo, compare old and new routes visually, and keep going until the whole thing matched. A few hours later, his update said Codex was still running, comparing every route for visual differences, and had built a dashboard to track migration progress.
By the next morning, thekitze's later screenshot showed the same goal still pursuing the objective after more than 14 hours. That is the kind of run people used to describe as quota roulette.
Visual prototypes
More quota does not only buy longer refactors. It also makes mixed-media prototyping less precious.
Bas Fijneman's Peekaboo project is a good example because the workflow is split cleanly across models. According to his demo post, the stack looked like this:
- GPT Image 2.0 for mascot poses, sticker rewards, backgrounds, and mockups
- GPT-5.5 for timers, rewards, parent controls, and kid-friendly microcopy
- Codex for the HTML, CSS, and JavaScript prototype with screens, transitions, animations, and persistent state
The attached prompt in the detailed prompt thread reads like a miniature product spec. It defines the phone frame size, tab behavior, animation physics, required asset list, file structure, localStorage behavior, and even a parent PIN flow for extra screen time. That level of prompt sprawl is exactly the kind of thing that stops feeling expensive when the usage ceiling moves up.
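The localStorage piece of that spec is the part that makes a throwaway prototype feel like an app. A minimal sketch of how such a prototype might persist state across reloads, assuming a key name and state shape I am inventing here (the actual Peekaboo prompt's details are not public in this evidence set):

```javascript
// Hedged sketch: persisting prototype state across reloads.
// KEY and the state shape are hypothetical, not from the real prompt.
const KEY = "peekaboo-state";

// `storage` is window.localStorage in a browser; passing it in
// keeps the sketch testable outside one.
function loadState(storage) {
  const raw = storage.getItem(KEY);
  return raw ? JSON.parse(raw) : { stickers: 0, minutesLeft: 30 };
}

function saveState(storage, state) {
  storage.setItem(KEY, JSON.stringify(state));
}

function awardSticker(storage) {
  const state = loadState(storage);
  state.stickers += 1; // e.g. a reward for finishing a timer
  saveState(storage, state);
  return state;
}
```

In the browser this would be called as `awardSticker(window.localStorage)`; the point is that a one-shot prompt asking for "persistent state" resolves to a few dozen lines like these.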
A small but useful comparison comes from bas_fijneman's tool split post, where he said he still uses Claude Code heavily but reaches for Codex to one-shot prototypes and visuals with images-2.0. That is less a winner-take-all claim than a workload split.
Computer use
The other visible consequence of more headroom is more people letting Codex stay on the desktop longer.
OpenAI's computer use guide says Codex can control normal app UIs by seeing, clicking, and typing, and the main launch post says the feature works as background computer use. LLMJunky's demo pushed that past ordinary automation by having Codex paint on macOS using only the mouse.
That demo matters because it shows the feature operating at the UI layer, not through an app-specific integration. The same launch cycle that added /goal persistence also gave Codex an in-app browser, SSH devbox support, image generation, memories, and plugins, per OpenAI's launch post. More generous limits make that bundle feel less like a checklist and more like permission to leave the agent running.
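The see, click, and type loop behind UI-layer automation can be sketched in miniature. Everything here is hypothetical illustration, not the Codex computer-use API: the "screen" is a toy list of labeled elements standing in for what the model would extract from screenshots.

```javascript
// Hedged sketch of a see -> decide -> act loop at the UI layer.
// All names are illustrative; the real feature works from pixels.
function findElement(screen, label) {
  return screen.find((el) => el.label === label) ?? null;
}

// Walk a sequence of target labels, emitting click actions,
// and stop early if the UI no longer matches expectations.
function clickThrough(screen, labels) {
  const actions = [];
  for (const label of labels) {
    const el = findElement(screen, label);
    if (!el) break; // the agent would re-"see" and replan here
    actions.push({ type: "click", x: el.x, y: el.y });
  }
  return actions;
}
```

The point of the sketch is the layer it operates at: nothing here knows which app owns the buttons, which is what lets the same loop drive a paint program, a browser, or a settings dialog.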