Codex users report one-shot fixes and 1.7B-token days vs Claude Code
Developers posted side-by-side reports of faster one-shot fixes, 1.7B-token workdays, and fewer limit warnings with GPT-5.5 fast mode after OpenAI added Claude Code import. The comparisons matter because they turn migration talk into a concrete workflow choice.

TL;DR
- OpenAI turned the Codex migration pitch into a product feature: OpenAI's import announcement says the app and CLI can now import settings, plugins, agents, and project configuration, while the rust-v0.128.0 release adds external-agent config and session import.
- The strongest hands-on signal is not a benchmark chart but a pile of side-by-side bug-fix reports, where bridgemindai's backend report called GPT-5.5 xhigh in Codex a "one shot machine" and TheRealAdamG's repost of tonnoz said Codex fixed a bug that had kept Claude Code busy for hours.
- Usage ceilings are part of the story: arankomatsuzaki said 1.7B Codex tokens in a day triggered no warning while 80M Claude Code tokens did, and bridgemindai's limit screenshot showed 97% of a Codex session left against 21% Claude usage after the same morning workload.
- Codex 0.128.0 also shipped a more persistent agent loop, where mattlam_ described /goal as a goal-oriented workflow that nudges the model toward the next concrete action, and nummanali's 9-hour run argued GPT-5.5 stayed coherent through repeated compactions in long-running work.
- The rollout still had caveats, because danshipper's follow-up said /goal was not supported in the desktop app yet, while testingcatalog's app teardown claimed browser use and computer use appeared disabled in the EU.
OpenAI's own GPT-5.5 launch post started with coding, research, and agentic work, but the weirder reveals came one layer down. You can read the Codex 0.128.0 changelog, inspect the migration import work, and browse a giant HN thread where builders focused less on OpenAI's headline positioning than on access paths, GPU scheduling, and how long the new model can actually stay useful.
Imports
OpenAI spent launch week making switching itself part of the product. OpenAI's import announcement pitched one-click migration in the app and CLI, and the underlying Codex release notes show that import means more than copying a config file.
According to the rust-v0.128.0 release, Codex added:
- external-agent config import
- external agent session import
- marketplace plugin install and uninstall flows
- expanded permission profiles
- a new codex update command
That lines up with testingcatalog's settings screenshot, which showed an "External migration" toggle in beta settings, and with the migration PR, which says Codex can detect and convert MCP server config, hooks, commands, and subagents from external agents into Codex-native forms.
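To make the "detect and convert" step concrete, here is a minimal sketch of what mapping an external agent's MCP server config into a Codex-style TOML block might involve. The field names and TOML layout are assumptions for illustration, not the actual migration code in the PR:

```python
import json

# Hypothetical input: a Claude Code-style JSON config declaring MCP servers.
# Both the JSON shape and the emitted TOML keys are assumed, not confirmed.
external_config = json.loads("""
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
""")

def to_codex_toml(cfg: dict) -> str:
    """Convert an mcpServers mapping into TOML-style [mcp_servers.*] tables."""
    lines = []
    for name, server in cfg.get("mcpServers", {}).items():
        lines.append(f"[mcp_servers.{name}]")
        lines.append(f'command = "{server["command"]}"')
        args = ", ".join(f'"{a}"' for a in server.get("args", []))
        lines.append(f"args = [{args}]")
        lines.append("")  # blank line between tables
    return "\n".join(lines)

print(to_codex_toml(external_config))
```

The point is only that import here is a structural translation of server definitions, hooks, and commands, not a file copy, which is why the release notes treat it as a feature rather than a migration script.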
One-shot fixes
The loudest early Codex posts were brutally simple: fewer turns, faster fixes. bridgemindai's backend report said bugs that previously took three to four prompts were now solved on the first attempt, and TheRealAdamG's repost of tonnoz made the same claim about a bug Claude Code had been chewing on for hours.
Other hands-on reports stacked onto the same pattern. TheRealAdamG's repost of farzyness described GPT-5.5 Codex as "insanely reliable" for dev work, while davidgomes' bugbot screenshot showed Codex choosing to add a regression test before touching the implementation.
Even OpenAI's side chatter leaned in that direction. sama said 5.5 xhigh in fast mode was "really good," and the main HN thread quickly filled with benchmark comparisons and claims that Codex had become the practical access path before the API rollout was ready.
Limits
The migration story only works if the usage model holds up. Here the Codex posts were unusually concrete.
The recurring claims:
- arankomatsuzaki said 1.7B Codex tokens in a day produced no warning, while 80M Claude Code tokens did.
- bridgemindai's limit screenshot showed 97% of a Codex five-hour session left after a morning of coding, versus 21% used on Claude Code.
- TheRealAdamG's repost of AlexMasmej said hitting Claude limits was a reason he used Codex about 10x more.
- theo joked that one heavy user had burned through a $200 Codex subscription badly enough to need extra subscriptions.
That does not mean the limit story was clean. In the HN thread, commenters linked Codex pricing and usage pages while arguing the new tiers implied tighter limits, and the Codex repo collected multiple bug reports about early GPT-5.5 limit hits, fast depletion, and weekly usage dropping too quickly during unstable compaction. The useful distinction is that public user sentiment moved toward "more headroom than Claude," even while Codex itself still had rate-limit bugs.
Goal workflows
Codex 0.128.0 looks like the release where OpenAI stopped treating long-running agent behavior as a side effect. mattlam_ described /goal as a persistent objective that survives turns, maps requirements to evidence, and injects the next-action nudge when the user stays silent.
The rust-v0.128.0 release says /goal shipped with persisted workflows, runtime continuation, and create, pause, resume, and clear controls. That is why gdb called it a built-in "Ralph loop++," and why GeoffreyHuntley's repost of fcoury summarized the feature as keeping a goal alive across turns until it is achieved.
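The behavior these posts describe, an objective that survives turns, maps requirements to evidence, and nudges the model when the user goes quiet, can be sketched as a small state machine. Every name below is hypothetical; this is not Codex's implementation, just the loop shape the release notes and mattlam_'s description imply:

```python
from dataclasses import dataclass, field
from enum import Enum

class GoalState(Enum):
    ACTIVE = "active"
    PAUSED = "paused"
    CLEARED = "cleared"

@dataclass
class Goal:
    """Hypothetical persistent goal: survives turns, tracks evidence per requirement."""
    objective: str
    # requirement -> whether evidence of completion has been seen
    requirements: dict = field(default_factory=dict)
    state: GoalState = GoalState.ACTIVE

    # The create/pause/resume/clear controls the release notes mention.
    def pause(self): self.state = GoalState.PAUSED
    def resume(self): self.state = GoalState.ACTIVE
    def clear(self): self.state = GoalState.CLEARED

    def achieved(self) -> bool:
        return all(self.requirements.values())

    def next_action_nudge(self, user_said_something: bool):
        """If the user stays silent while the goal is active and unmet,
        inject a prompt steering the model toward the next concrete action."""
        if user_said_something or self.state is not GoalState.ACTIVE or self.achieved():
            return None
        pending = next(r for r, done in self.requirements.items() if not done)
        return f"Goal still active: {self.objective}. Next concrete action: {pending}."

goal = Goal("fix flaky CI", {"reproduce failure": True, "land fix": False})
print(goal.next_action_nudge(user_said_something=False))
```

The "Ralph loop" comparison fits because the loop keeps re-injecting the objective until the evidence map is fully satisfied, rather than letting the goal decay out of context.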
The hands-on example came from nummanali's 9-hour run, which claimed GPT-5.5 worked coherently for 9 hours on an ML library using AGENTS.md, CONTINUITY.md, MEMORY.md, PLAN.md, and custom skills. Nummanali's list of ingredients was unusually concrete:
- clear outcome instructions with freedom on solutions
- dynamic memory workspace and task management
- a verifiable repo with gates in place
- skills that the agent can create and improve through use
- sub-agents for validating hypotheses
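The setup above is essentially a file-based continuity pattern: durable state lives on disk so it survives context compaction. A minimal sketch of that pattern, using the filenames from nummanali's post but with entirely hypothetical read/write logic:

```python
from pathlib import Path

# Filenames come from nummanali's post; the checkpoint/rehydrate logic is
# a hypothetical illustration of surviving compaction via files on disk.
WORKSPACE = Path("agent_workspace")
MEMORY_FILES = ["AGENTS.md", "CONTINUITY.md", "MEMORY.md", "PLAN.md"]

def init_workspace() -> None:
    WORKSPACE.mkdir(exist_ok=True)
    for name in MEMORY_FILES:
        (WORKSPACE / name).touch()

def checkpoint(summary: str) -> None:
    """Before a compaction, append what matters to CONTINUITY.md so the
    next context window can rehydrate from disk instead of lost tokens."""
    with (WORKSPACE / "CONTINUITY.md").open("a") as f:
        f.write(f"- {summary}\n")

def rehydrate() -> str:
    """After compaction, re-read the memory files into the fresh context."""
    return "\n".join((WORKSPACE / n).read_text() for n in MEMORY_FILES)

init_workspace()
checkpoint("Implemented tokenizer; tests green; next: batching in dataloader")
print(rehydrate())
```

Whatever the model actually does internally, this is why the 9-hour claim is plausible: coherence across compactions comes from the workspace, not from the context window alone.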
App shift
Some of the strongest Codex praise was about the harness, not just the model. dejavucoder said OpenAI had finally matched the product feel Claude Code had in the CLI, but in a polished app with an in-app browser and comments, and emollick argued frontier labs are opening a gap between what the raw API can do and what the native harness can do.
The app refresh also widened the target user. embirico's feature list framed Codex as useful for finance, data science, marketing, slides, sheets, and docs, while gdb and thsottiaux both pushed the line that Codex was moving beyond coders. reach_vb's CI screenshot showed another small but real harness gain: chat-native CI status inside the conversation.
OpenAI's own momentum post claimed GPT-5.5 was already its strongest model launch and that Codex revenue doubled in under seven days. The more interesting read is that the company spent the same week improving migration, app UX, and long-running agent controls, which is exactly where the Codex versus Claude Code comparison is now being fought.
Rough edges
The new loop was not fully everywhere yet. danshipper's benchmark run tried /goal on a "Senior Engineer bench," got about 25 minutes of work, and danshipper's follow-up later said the desktop app did not support /goal yet.
A few other rollout edges surfaced in public:
- testingcatalog's app teardown claimed browser use and computer use appeared disabled in the EU.
- BEBischof complained that the Codex app file browser did not show PNGs.
- theo's comparison thread said GPT-5.5 could still "strangle itself with context sometimes," even while feeling faster than Claude Code.
- The GPT-5.5 launch post originally said API access was coming soon, then added an April 24 update saying GPT-5.5 and GPT-5.5 Pro were available in the API.