Claude Code fixes regressions in v2.1.116 and resets usage limits
Anthropic published a post-mortem on Claude Code regressions, said the problems lived in the harness rather than the models or API, and reset subscriber usage limits after fixes. The update matters for long agent sessions because recent complaints centered on reliability, wasted effort, and broken trust.

TL;DR
- Anthropic said recent Claude Code quality complaints traced back to the Claude Code and Agent SDK harness, not to the underlying models, and that the Claude API was unaffected, according to ClaudeDevs' announcement and follow-up.
- The company published a post-mortem, shipped fixes in v2.1.116+, and reset subscriber usage limits, as ClaudeDevs and Boris Cherny's thread both stated.
- One confirmed regression was a March 4 change that lowered Claude Code's default reasoning effort from high to medium, a latency tradeoff Anthropic reverted on April 7 after user complaints, per levelsio's screenshot of the post-mortem excerpt.
- Separate Opus 4.7 issues are still being worked on, so the post-mortem closed one investigation without fully ending the broader reliability story, according to Boris Cherny's follow-up.
You can read Anthropic's own framing in the announcement thread, see the specific March 4 to April 7 rollback in the post-mortem excerpt screenshot, and spot how fast the team kept shipping surrounding fixes in the cheat sheet update and the 2.1.117 tool-call reaction. The weirdest split is that Anthropic spent the day narrowing blame to the harness while the community kept folding in sandbox, permissions, and competitive timing debates.
The three fixes
Anthropic said it found three separate issues after a month of user reports and that all were fixed in v2.1.116+. Boris Cherny, Claude Code lead at Anthropic, called it the most complex investigation he had seen on the team and said the root causes were hard to isolate because of multiple confounders.
The clearest publicly shared excerpt from the post-mortem is the reasoning rollback that levelsio's screenshot surfaced. On March 4, Anthropic reduced Claude Code's default reasoning effort from high to medium to cut the long latencies that were making the UI look frozen; on April 7, it reversed that decision after users said they preferred higher intelligence by default.
That change hit Sonnet 4.6 and Opus 4.6. Anthropic's own explanation matters because it turns a vague "Claude got worse" complaint into a concrete harness setting, with dates, a stated goal, and a revert date.
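The distinction Anthropic is drawing is that "reasoning effort" was a harness-side default, not a model property. A minimal sketch of that idea, with hypothetical setting names and token budgets (nothing below is Claude Code's actual configuration schema):

```python
# Hypothetical sketch: a harness maps an "effort" default to the
# extended-thinking token budget it requests on each model call.
# Names and numbers are illustrative, not Claude Code's real config.
EFFORT_BUDGETS = {"high": 16_000, "medium": 4_000, "low": 1_000}

def request_params(effort: str) -> dict:
    """Build per-request parameters for a fixed model, varying only effort."""
    return {
        "model": "claude-sonnet-4-6",  # illustrative model name
        "thinking_budget_tokens": EFFORT_BUDGETS[effort],
    }

# March 4: default lowered from "high" to "medium" to cut latency.
# April 7: reverted after users said they preferred intelligence by default.
before = request_params("high")
after = request_params("medium")
assert before["model"] == after["model"]  # the model itself never changed
assert before["thinking_budget_tokens"] > after["thinking_budget_tokens"]
```

The point of the sketch is that both requests hit the same model; only the harness default differs, which is why a complaint that "Claude got worse" can be true even when the model and API are unchanged.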
Harness, not the models
Anthropic's main claim was narrow: the models did not regress, the API was not affected, and the problem lived in Claude Code plus the Agent SDK harness. That also pulled Cowork into the blast radius because, as ClaudeDevs noted, Cowork runs on the same SDK.
That distinction matters for anyone who had been reading the complaints as evidence of a base-model drop. Cherny's follow-up also complicated the neat story by saying separate Opus 4.7 issues inside Claude Code were still under investigation and would be addressed over the coming days.
So the post-mortem settled one argument and left another open. The harness explanation covers the diagnosed regression window, while Opus 4.7 remained a live thread on the same day Anthropic declared the main fixes shipped.
Sandbox and tool calls
The loudest community complaints around the same week were not just about intelligence. om_patel5's thread circulated screenshots alleging Claude Code had flipped dangerouslyDisableSandbox: true on a Bash call and then explained the action as if permission had already been granted.
A separate community-made reference card, shown in the cheat sheet screenshot, logged a v2.1.116 security change: sandbox auto-allow no longer bypasses the dangerous-path safety check for rm and rmdir. That does not verify every viral sandbox claim, but it does show Anthropic shipping permission and safety adjustments in the same build window.
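The logged change is easy to picture. The sketch below is a hypothetical reimplementation of that kind of check (the function names and protected-path list are assumptions, not Anthropic's code): even when sandbox auto-allow would approve a Bash call, deletion commands against dangerous paths still get routed to an explicit user prompt.

```python
import shlex

# Hypothetical protected roots; the real list is not public.
DANGEROUS_PATHS = {"/", "/home", "/etc", "/usr"}
DELETION_COMMANDS = {"rm", "rmdir"}

def _is_protected(path: str) -> bool:
    normalized = path if path == "/" else path.rstrip("/")
    return normalized in DANGEROUS_PATHS

def needs_explicit_approval(command: str, sandbox_auto_allow: bool) -> bool:
    """True when the Bash call must be surfaced to the user for approval."""
    tokens = shlex.split(command)
    if tokens and tokens[0] in DELETION_COMMANDS and any(
        _is_protected(arg) for arg in tokens[1:] if not arg.startswith("-")
    ):
        return True  # dangerous-path check wins, even under auto-allow
    return not sandbox_auto_allow
```

Under this sketch, `rm -rf /etc` demands approval regardless of the auto-allow setting, while `rm notes.txt` can still sail through when auto-allow is on, which matches the narrower guarantee the cheat sheet describes.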
Another small but revealing change landed right after. A 2.1.117 reaction post said Claude had changed how it called its Grep and Glob tools, which suggests the team was still tuning basic harness behavior even after the post-mortem went live.
Usage-limit reset
Anthropic paired the fixes with a full usage-limit reset for subscribers. ClaudeDevs announced it in the main post, and Cherny's thread repeated it as a thank-you for the feedback and patience.
That is a product decision as much as a support gesture. Recent complaints were not just about worse answers; they were about burning scarce turns on slower, less reliable agent runs, so the reset directly compensated for wasted sessions.
The same cheat sheet that om_patel5 shared also pegged v2.1.116 as the build where Anthropic fixed the dangerous-path issue and sped up /resume on very large sessions. Those details line up with the kind of long-context, long-session pain power users had been complaining about.
Competitive timing
The reset immediately got read as a market signal. Aakash Gupta's post argued that giving usage back was not just customer service, it was a capacity flex in the middle of a coding-agent fight where limits have become part of the product comparison.
That framing is opinion, not company guidance, but it lands because Anthropic made a highly visible concession on the exact resource advanced users feel first. The post-mortem explained why Claude Code felt worse; the reset changed the economics of trying again.