AI Primer
update

Anthropic doubles Claude Code 5-hour limits after SpaceX Colossus 1 compute deal

Anthropic said a SpaceX compute deal will add 300+ MW and 220,000+ NVIDIA GPUs, and it doubled Claude Code 5-hour limits across paid plans. It also raised Opus API ceilings; users should still watch the unchanged weekly caps.


TL;DR

  • In claudeai's change list, Anthropic says Claude Code's 5-hour limits doubled for Pro, Max, and Team, peak-hour reductions were removed for Pro and Max, and Opus API rate limits increased immediately.
  • According to bcherny's capacity post, the capacity bump comes from a SpaceX deal that adds 300+ megawatts and 220,000+ NVIDIA GPUs within the month, while nottombrown's post says Claude inference will ramp on Colossus over the next few days.
  • scaling01's rate-limit table shows the biggest API jump at Opus tier 1, from 30,000 to 500,000 input tokens per minute, while tier 4 rises from 2 million to 10 million.
  • Several users immediately noticed that the announcement only changed the 5-hour bucket, not the weekly cap, according to btibor91's screenshot and kimmonismus's follow-up.
  • The official post also slips in a stranger detail: as scaling01's screenshot of the SpaceX section shows, Anthropic says it has already expressed interest in building multiple gigawatts of orbital AI compute with SpaceX.

You can read Anthropic's post, inspect the new Opus limit table, and see Theo's screenshots of user reactions tying the whole move back to plain old capacity constraints. The same day also brought RLanceMartin's roundup of Managed Agents updates, but the immediate shipping news was simple: more compute, higher ceilings, fewer peak-hour slowdowns.

What changed

Anthropic's announcement had three concrete changes, all effective the same day, per Anthropic's post and bcherny's rollout post:

  • Claude Code 5-hour limits doubled for Pro, Max, Team, and seat-based Enterprise.
  • Peak-hour limit reductions were removed for Pro and Max.
  • Opus API rate limits were raised substantially.

The company also acknowledged the reason for the change. In bcherny's apology post, Anthropic said demand had outpaced capacity and that the last few weeks had been frustrating for users.

Opus API ceilings

The API change is larger than the headline implies. scaling01's screenshot of Anthropic's table lists the old and new Opus per-minute ceilings:

  • Tier 1 input: 30,000 → 500,000 tokens.
  • Tier 1 output: 8,000 → 80,000 tokens.
  • Tier 2 input: 450,000 → 2,000,000 tokens.
  • Tier 2 output: 90,000 → 200,000 tokens.
  • Tier 3 input: 800,000 → 5,000,000 tokens.
  • Tier 3 output: 160,000 → 400,000 tokens.
  • Tier 4 input: 2,000,000 → 10,000,000 tokens.
  • Tier 4 output: 400,000 → 800,000 tokens.

That is why rohanpaul_ai's summary reads the new tier 4 ceiling as infrastructure for large agent workloads, not just bigger chat sessions.
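The scale of that jump is easier to see with a back-of-envelope calculation. In this sketch, the 2M and 10M input-tokens-per-minute figures come from the table above, but the 100,000-token agent context is a hypothetical workload size chosen purely for illustration:

```python
# Back-of-envelope: how many large agent requests fit under the old vs. new
# Opus tier 4 input ceilings. The per-minute limits come from Anthropic's
# table; the per-request context size is an assumed workload, not a quoted one.
OLD_TIER4_INPUT_TPM = 2_000_000   # old tier 4 input tokens per minute
NEW_TIER4_INPUT_TPM = 10_000_000  # new tier 4 input tokens per minute
AGENT_CONTEXT_TOKENS = 100_000    # hypothetical input tokens per agent request

def requests_per_minute(tpm_limit: int, tokens_per_request: int) -> int:
    """Whole requests of a given size that fit under a tokens-per-minute cap."""
    return tpm_limit // tokens_per_request

print(requests_per_minute(OLD_TIER4_INPUT_TPM, AGENT_CONTEXT_TOKENS))  # 20
print(requests_per_minute(NEW_TIER4_INPUT_TPM, AGENT_CONTEXT_TOKENS))  # 100
```

Under that assumption, a fleet goes from roughly 20 such requests per minute to 100, which is the "large agent workloads" reading in one number.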

Colossus 1 capacity

The compute side of the announcement is unusually explicit. In Anthropic's post, the company says it signed an agreement to use all compute capacity at SpaceX's Colossus 1 data center, which scaling01's screenshot of the SpaceX section transcribes as more than 300 megawatts and over 220,000 NVIDIA GPUs within the month.

nottombrown's timeline post adds the operational detail that Anthropic expects to ramp Claude inference on Colossus in the next few days, which helps explain why the user-facing limits changed before the full monthly capacity number is online.

The post also places SpaceX inside a broader supply stack. As rohanpaul_ai's thread link notes from the same announcement, Anthropic is already spreading training and inference across NVIDIA GPUs, AWS Trainium, and Google TPUs, alongside separate Amazon, Google, Broadcom, Microsoft, NVIDIA, and Fluidstack deals.

Weekly caps and lingering bottlenecks

The first community question was not about SpaceX. It was whether Anthropic had only enlarged the short window. In btibor91's question, users zeroed in on the phrase "five-hour rate limits," and kimmonismus's follow-up explicitly asked why weekly limits appeared unchanged.
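The distinction matters mechanically. A minimal sketch of the users' concern, assuming (hypothetically) that usage must fit under both a 5-hour bucket and a weekly bucket; the limit values below are illustrative and are not Anthropic's actual numbers:

```python
from dataclasses import dataclass

@dataclass
class Window:
    """A fixed-window usage cap. Limits here are illustrative only."""
    limit: int
    used: int = 0

    def allows(self, cost: int) -> bool:
        return self.used + cost <= self.limit

    def spend(self, cost: int) -> None:
        self.used += cost

def try_request(five_hour: Window, weekly: Window, cost: int) -> bool:
    """A request must fit under BOTH windows: enlarging only the 5-hour
    window does nothing once the weekly window is exhausted."""
    if five_hour.allows(cost) and weekly.allows(cost):
        five_hour.spend(cost)
        weekly.spend(cost)
        return True
    return False
```

With a roomy 5-hour window and a tight weekly one, requests start failing on the weekly cap even though the short window still has plenty of budget, which is exactly the scenario the follow-up questions were probing.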

That caveat matters because some users were already hitting overload and hard caps before the announcement. bridgemindai's 529 screenshot showed Claude Code returning server overload errors earlier in the day, and even after the rollout sbmaruf's reply reported still seeing rate limiting.
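For callers hitting those errors, the usual client-side mitigation is retry with exponential backoff and jitter. This is a generic sketch of that convention, not Anthropic's documented retry policy; the only details taken from this story are the status codes themselves, 429 for rate limiting and 529 for overload:

```python
import random

# Status codes treated as transient here: 429 (rate limited) and 529
# (server overloaded, as in bridgemindai's screenshot). Treating both as
# retryable is a common client convention, assumed rather than official.
RETRYABLE_STATUSES = {429, 529}

def should_retry(status: int, attempt: int, max_attempts: int = 5) -> bool:
    """Retry only transient statuses, and only up to max_attempts tries."""
    return status in RETRYABLE_STATUSES and attempt < max_attempts

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: delay in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))
```

A caller would sleep for `backoff_delay(attempt)` between tries and give up once `should_retry` returns False, rather than hammering an already overloaded endpoint.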

The community read on the change was blunt. In Theo's screenshots, several users took the SpaceX deal as evidence that capacity, not pricing theory, had been the binding constraint all along.

Orbital AI compute

Buried in Anthropic's post, and visible in scaling01's screenshot of the SpaceX section, is a line saying Anthropic has already expressed interest in partnering with SpaceX on multiple gigawatts of orbital AI compute capacity.

That is not part of the shipped limit increase. But it is a concrete extra fact from the same announcement, and koltregaskes's summary captures the current status accurately: interested, not committed.
