AI Primer
release

Claude Code adds /ultrareview with 3 free cloud reviews through May 5

Claude Code introduced /ultrareview in research preview: it sends parallel bug-hunting agents to scan critical changes and returns findings in the CLI or Desktop. That matters because Pro and Max users get three free runs through May 5, and analysis threads frame it as a lower-noise alternative to conventional AI review tools and their false positives.


TL;DR

  • Claude Code shipped /ultrareview as a research preview that sends a fleet of cloud agents to review a PR, then drops findings back into the CLI or Desktop, according to ClaudeDevs' launch post.
  • Pro and Max subscribers get three free runs through May 5, while the update path is just claude update, per ClaudeDevs' follow-up post.
  • aakashgupta's analysis thread frames the pitch around false-positive reduction: multiple agents scan different failure modes, and findings are reproduced before they reach the user.
  • The command is already surfacing in community-made Claude Code reference material, where om_patel5's cheat sheet post lists /ultrareview next to newer workflow commands like /simplify, /loop, and /chrome.

You can open the official ultrareview docs, skim Anthropic's broader Claude Opus 4.7 launch post, and check the still-active Hacker News thread, where commenters are already connecting Claude Code's newer review tooling to pricing, defaults, and workflow friction.

/ultrareview

Anthropic's core claim is compact: /ultrareview runs a fleet of bug-hunting agents in the cloud, then returns the results inside Claude Code or the Desktop app. ClaudeDevs' launch post explicitly positions it for risky merges, naming auth flows and data migrations as example targets.

The rollout is lightweight on the client side. ClaudeDevs' follow-up post says existing users can get it with claude update, and the launch post points straight to the docs rather than a separate product page.
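Per those two posts, the client-side flow is just the updater plus the new slash command. A sketch of that flow (the `/ultrareview [PR#]` argument syntax comes from om_patel5's cheat sheet and is not independently verified; a research-preview command may change):

```shell
# Update an existing Claude Code install to pick up the preview command
claude update

# Start an interactive session, then point the review at a pull request
claude
> /ultrareview 1234
```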

Verification layer

The sharpest detail in the evidence pool comes from aakashgupta's analysis thread, which says standard AI review tools often lose teams on noise rather than coverage. His summary of Anthropic's approach breaks into three steps:

  1. Parallel agents inspect different angles, including logic, security, edge cases, and performance.
  2. Findings are independently reproduced before they reach the user.
  3. Unverified bugs are filtered out.
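As a toy illustration of that three-step filter (every name below is hypothetical; this is a sketch of the described workflow, not Anthropic's implementation), the pipeline amounts to fan-out, reproduce, and discard:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the parallel review agents described above,
# each scanning the same diff from a different angle.
def logic_agent(diff):    return [("logic", "off-by-one in pagination")]
def security_agent(diff): return [("security", "token logged in plaintext")]
def perf_agent(diff):     return [("perf", "N+1 query in loop")]

def reproduce(finding):
    # Step 2: independently re-check each finding. This stub "confirms"
    # everything except the performance guess, standing in for a real
    # reproduction run.
    kind, _ = finding
    return kind != "perf"

def ultrareview(diff, agents):
    # Step 1: run the agents in parallel over the same diff.
    with ThreadPoolExecutor() as pool:
        per_agent = pool.map(lambda agent: agent(diff), agents)
    findings = [f for batch in per_agent for f in batch]
    # Steps 2-3: keep only findings that reproduce; drop the rest,
    # so unverified bugs never reach the user.
    return [f for f in findings if reproduce(f)]

verified = ultrareview("<pr diff>", [logic_agent, security_agent, perf_agent])
print(verified)  # only the reproduced findings survive
```

The point of the structure, per aakashgupta's framing, is that the expensive human step (triaging false alarms) is replaced by the machine-side reproduction step before anything is surfaced.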

That is a cleaner product story than "AI code review," because the workflow claim is specifically about cutting triage time. aakashgupta contrasts that with conventional tools that may flag seven or eight issues on a large PR, then force a human to sort through false alarms for half an hour or more.

Pricing window

The official offer is narrow and concrete: Pro and Max users get three free reviews through May 5. ClaudeDevs' launch post treats that as a preview incentive, not a permanent entitlement.

The pricing discussion around Claude tooling is already bleeding into the launch. In the main HN thread and a fresher HN summary, commenters tie newer review features to a broader pattern of premium packaging, while aakashgupta's thread claims Anthropic is signaling a likely paid range of roughly $5 to $20 per review.

Cheat sheet evidence

The most useful side-channel evidence is om_patel5's cheat sheet post, which screenshots a one-page Claude Code reference updated to v2.1.116 on April 20. It lists /ultrareview [PR#] inside the tools section, but the bigger reveal is what shipped around it:

  • /simplify [focus], described as parallel refactoring with three agents
  • /loop [interval] [prompt] for recurring tasks
  • /chrome for browser access
  • xhigh effort, placed between high and max
  • --dangerously-skip-permissions as a CLI flag
  • auto mode availability for Max subscribers using Opus 4.7

That screenshot makes /ultrareview look less like a one-off command and more like part of a wider push toward multi-agent and higher-autonomy Claude Code workflows.
