AI Primer

OpenAI fixes two GPT-5.5 issues in Codex after users report looping runs

OpenAI said Codex’s GPT-5.5 degradation over the prior 48 hours came from two issues, and that it would reset usage limits after the fix. Users had reported looping runs, higher cache burn, and unstable sessions in active coding workflows.


TL;DR

The useful bits were unusually concrete. thsottiaux's earlier reply tied one failure mode to cache hit rate. steipete's review log showed how degraded runs can waste review passes while still surfacing real bugs. And altryne's screenshot revealed that OpenAI is testing a "control other devices" connection path inside Codex at the same time users were reporting instability.

The failure pattern

The first crisp symptom was token burn. In thdxr's post, GPT-5.5 in Codex was using about 2.5x as many input tokens as a week earlier, and thdxr's follow-up argued that kind of increase was bad for OpenAI too because it implied wasted compute, not a deliberate downgrade.

OpenAI partially confirmed that diagnosis before the full fix. In thsottiaux's reply, the Codex team said a bad rollout had hurt cache hit rate and had already been rolled back.
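Why would a cache regression show up as a roughly 2.5x jump in input tokens? In an agentic coding loop, each turn resends a growing conversation prefix, and a prompt cache lets the server skip reprocessing the part it has already seen. The sketch below is a hypothetical back-of-envelope model (not OpenAI's actual caching or billing logic; the function names, turn counts, and hit rates are illustrative assumptions) showing how a drop in cache hit rate inflates the fresh input tokens a session consumes:

```python
def uncached_input_tokens(prefix_tokens: int, new_tokens: int, cache_hit_rate: float) -> int:
    """Input tokens that must be processed fresh on one turn.

    The cached fraction of the resent prefix is skipped; the rest,
    plus the turn's new tokens, is paid for in full.
    """
    cached = int(prefix_tokens * cache_hit_rate)
    return (prefix_tokens - cached) + new_tokens

def session_uncached_tokens(turns: int, tokens_per_turn: int, cache_hit_rate: float) -> int:
    """Total fresh input tokens across a session whose prefix grows each turn."""
    total = 0
    prefix = 0
    for _ in range(turns):
        total += uncached_input_tokens(prefix, tokens_per_turn, cache_hit_rate)
        prefix += tokens_per_turn  # next turn resends everything so far
    return total

# Illustrative numbers only: a 20-turn session adding 2,000 tokens per turn.
healthy = session_uncached_tokens(turns=20, tokens_per_turn=2_000, cache_hit_rate=0.9)
degraded = session_uncached_tokens(turns=20, tokens_per_turn=2_000, cache_hit_rate=0.6)
print(f"healthy={healthy:,} degraded={degraded:,} ratio={degraded / healthy:.2f}")
```

Under these made-up parameters, dropping the hit rate from 0.9 to 0.6 multiplies fresh input tokens by about 2.5x, which is why a cache regression can look exactly like a token-burn spike from the outside, and why it wastes compute on OpenAI's side rather than saving it.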

By the next day, the complaints had converged on degraded day-to-day performance. thdxr's complaint said a previously good several-week run had turned miserable, while mweinbach's report said the model felt significantly worse and asked whether something was going on.

Two fixes, then a reset

OpenAI's public timeline moved fast. thsottiaux's investigation post said the team was aware of reports that GPT-5.5 was performing worse for some users, had nothing conclusive yet, and was still treating its systems as healthy.

A few hours later, thsottiaux's fix update said the team had found and fixed two issues that could explain the degradation over roughly 48 hours. The post did not name the two issues, but it paired the fix with monitoring and a promise to reset usage limits that evening.

The reset matters because Codex usage was already a live budget topic. TimSuchanek's support email screenshot showed OpenAI correcting a one-time Codex credit boost that had failed to land, and OpenAIDevs' enterprise promo link pointed users to a Codex enterprise promo form earlier in the week.

Loops, long review chains, and extra spend

The most useful failure reports came from people running real coding loops, not toy prompts. steipete's post showed Codex looping during an automated issue-to-fix workflow that depended on autoreview and crabbox.

A separate run from steipete's review log made the cost of that kind of instability tangible. Codex had reached review pass 11, with 10 completed passes already producing actionable findings.

The bug list in that run was long enough to read like a fuzzing session:

  • schema migration failures on older local DBs
  • media-path and cache-file handling bugs
  • symlink acceptance in imports and publish flows
  • attachment fetch limits applied in the wrong order
  • DM and guild scope mistakes in media sync
  • cache repair and hash verification failures

That same thread also showed why users obsess over spend during bad days. The CodexBar screenshot in steipete's thread context displayed 603B tokens, 7.6M requests, and more than $1.3M in 30-day OpenAI API spend across the tracked account, with GPT-5.5-2026-04-23 as the top model.

Usage resets became part of the response

Rate-limit resets were visible enough this week that users started reading them as competitive signals. ClaudeDevs' post announced a reset of Claude Code's 5-hour and weekly limits on Friday, and kimmonismus' reaction immediately framed it as either more available compute or pressure from OpenAI and Codex.

OpenAI used the same playbook once the Codex fix landed. In thsottiaux's update, the team promised to reset usage limits after the repair, and dexhorthy's reply zeroed in on that line rather than the technical explanation.

Even the CEO discourse turned into expectation management. sama's post joked that some reports reduce to users getting accustomed to the current level of magic and asking for more, which is funny right up until cache misses and loops start eating tokens.

Connections and session weirdness

One thing that surfaced alongside the degradation reports was a new Codex connection surface. altryne's screenshot showed a "Connections" modal for authorizing a Mac to control other devices signed into the same ChatGPT account, with options to resume chats, receive notifications, and start tasks remotely.

That feature was not working cleanly for everyone. altryne's post reported constant "failed to authorize remote control" errors, while kimmonismus' outage post described a separate Codex access issue that a relogin appeared to fix.

The result is a messy but useful picture of Codex operations in the wild this week: a cache regression bad enough to spike token use, two undisclosed fixes, usage-limit resets as compensation, and a still-shaky remote-control path arriving at the same time.
