Anthropic releases Claude Opus 4.7 with xhigh effort and adaptive thinking
Anthropic released Claude Opus 4.7 with adaptive thinking, xhigh effort, and new Claude Code controls including auto mode, recaps, and ultrareview. The update also raises thinking-token use, ships a tokenizer update, and patches rate limits for longer coding and creative sessions.

TL;DR
- Anthropic's launch post introduced Claude Opus 4.7 as the new generally available flagship, kept list pricing flat at $5 input and $25 output per million tokens, and positioned it for harder coding work plus creative production like interfaces, slides, and docs.
- In Claude Code, Boris Cherny's launch thread and his follow-up tips tied 4.7 to auto mode, xhigh effort, longer autonomous runs, and better handling of ambiguous multi-file work.
- Under the hood, Cherny's rate-limit note said 4.7 uses more thinking tokens, while the main HN discussion and Anthropic's migration guide both pointed to adaptive thinking, tokenizer changes, and a different reasoning display model.
- Day-one reaction split fast: one Anthropic employee described a smoother brainstorm-to-spec-to-implementation loop, while early side-by-side tests and the car-wash prompt screenshot circulated as instant counterexamples.
- Anthropic spent the rest of launch day patching the experience: a Claude Code maintainer acknowledged a rate-limit bug for longer requests, and another team update launched a new docs digest plus an official @ClaudeDevs updates channel.
Anthropic's own materials say Opus 4.7 brings higher-resolution vision, adaptive thinking, and a migration guide full of behavior changes. On the Claude Code side, you can read the new permission modes doc, browse the new What's New digest, and see that Anthropic is already using it to funnel launch-week features into one running changelog.
What shipped
Introducing Claude Opus 4.7
1.7k upvotes · 1.2k comments
Anthropic's official pitch is straightforward: Opus 4.7 is the strongest generally available Claude, with better long-running software work, stronger instruction following, and better output on visual professional tasks. The launch post says it improves the resolution rate on Anthropic's internal 93-task coding benchmark by 13 percent over Opus 4.6, adds higher-resolution image support, and keeps the same list pricing and 1M context positioning across Claude, API, Bedrock, Vertex AI, and Foundry.
The concrete launch-week changes break down like this:
- Model: claude-opus-4-7, now live across Anthropic's first-party surfaces and major clouds, according to the launch post.
- Vision: maximum image resolution rises to 2,576 px, according to Anthropic's What's new in Claude Opus 4.7.
- Creative positioning: Anthropic explicitly says 4.7 is better for interfaces, slides, and docs in the announcement.
- Effort control: xhigh sits between high and max, according to minchoi's summary and Anthropic's 4.7 feature page.
- Task budgets: Anthropic added budget controls for longer agentic runs in the same feature page.
- Safety: the launch post says 4.7 is the first model with automatic blocking for high-risk cyber requests as Anthropic stages toward Mythos-class releases.
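For API callers, the effort levels above presumably surface as a request field. The sketch below builds a Messages-style request body with an xhigh setting; the `effort` field name and the level list are assumptions inferred from the summaries cited here, not a documented signature, so check Anthropic's 4.7 feature page before relying on either.

```python
# Sketch of a Messages-style request body with the new effort setting.
# ASSUMPTION: the "effort" field name and the set of levels below are
# inferred from launch summaries ("xhigh sits between high and max"),
# not taken from Anthropic's API reference.

def build_request(prompt: str, effort: str = "xhigh") -> dict:
    allowed = {"low", "medium", "high", "xhigh", "max"}
    if effort not in allowed:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 4096,
        "effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Refactor the billing module and keep tests green.")
```

The point of the shape, regardless of the final field name, is that effort becomes a per-request dial rather than a model choice.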
Auto mode and long runs
The most important Claude Code shift is not the model name. It is the push toward letting a session run longer with less supervision.
According to Cherny's tips thread, auto mode routes permission prompts to a model-based classifier that decides whether a command is safe enough to auto-approve. Anthropic's permission modes doc adds the interface details: Shift+Tab cycles modes in the CLI, and the same selector appears in VS Code, Desktop, and claude.ai.
That release bundle also included several smaller controls that make long sessions easier to resume or hide:
- Auto mode: fewer interruptions for shell commands, edits, and network work, per Cherny's thread.
- /fewer-permission-prompts: scans session history and suggests allowlist entries, according to Cherny's walkthrough.
- Recaps: short summaries of what an agent did and what comes next, per the recap post.
- Focus mode: hides intermediate work and leaves the final result on screen, according to the same thread.
- Spec-first workflow: trq212's example describes brainstorming, rewinding, interviewing the user, then writing a spec before letting auto mode implement.
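Allowlist suggestions like the ones /fewer-permission-prompts generates end up in Claude Code's settings files. A minimal sketch of a project-level .claude/settings.json with a default mode and a small allowlist; the specific rule entries here are illustrative examples of the Tool(pattern) form, not copied from Anthropic's docs:

```json
{
  "permissions": {
    "defaultMode": "acceptEdits",
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)",
      "Read(src/**)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
```

Checking a file like this into a repo is what makes the long, low-supervision sessions above reproducible across a team rather than dependent on one person's accumulated approvals.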
Adaptive thinking changes the token math
Discussion around Claude Opus 4.7
Anthropic changed the control surface as much as the model. For Opus 4.7 and later, the company now recommends adaptive thinking instead of the old fixed thinking budgets, and its migration guide says manual extended thinking is no longer supported on 4.7.
Three details matter immediately:
- Higher spend per session. Cherny said 4.7 "thinks more," he later said rate limits were raised to offset that, and HN commenters focused on the likelihood that real agentic sessions will cost more than the unchanged price card suggests.
- Tokenizer inflation. Anthropic's migration guide says the updated tokenizer can map the same input to 1.0 to 1.35 times more tokens, depending on content type.
- Different reasoning visibility. HN commenters noted that 4.7 no longer defaults to a human-readable reasoning summary, and Anthropic's adaptive thinking docs describe summarized reasoning as a separate display choice.
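Combining the unchanged price card with the tokenizer multiplier gives a quick feel for the "higher spend per session" point. This is plain arithmetic over the figures cited above ($5 input / $25 output per million tokens, 1.0 to 1.35x input-token inflation), not an official cost calculator:

```python
# Back-of-envelope session cost under 4.7's updated tokenizer, using only
# figures cited in this article: $5 input / $25 output per million tokens,
# and a 1.0-1.35x multiplier on how many tokens the same input maps to.
# Output tokens are left uninflated: the multiplier describes input
# retokenization, not generation length.

def session_cost(input_tokens: int, output_tokens: int,
                 tokenizer_multiplier: float = 1.35) -> float:
    """Dollar cost of one session after input-token inflation."""
    inflated_input = input_tokens * tokenizer_multiplier
    return (inflated_input * 5 + output_tokens * 25) / 1_000_000

# A 200k-input / 50k-output agentic session:
baseline = session_cost(200_000, 50_000, tokenizer_multiplier=1.0)  # $2.25
worst = session_cost(200_000, 50_000)  # 1.35x inflation -> $2.60
```

The list price never moves, but the worst case here is about 16 percent more per session on identical work, which is exactly the gap HN commenters were pointing at.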
Anthropic also had to patch rate limits after launch. trq212's note said a quick fix went out for higher usage, and the follow-up repost specified a bug where subscription rate limits were not properly adjusted for long-context Opus 4.7 requests.
Early regressions showed up in public
Launch-day praise came with immediate counterexamples. stevibe's tree-growth test reported that Opus 4.6 produced the intended animation twice, while 4.7 returned a static result twice, even though 4.7 looked faster. Another viral screenshot showed 4.7 answering "walk" to the car-wash prompt, then later admitting it had pattern-matched the scenario instead of reasoning from the goal.
The harsher critique came from om_patel5's roundup, which claimed the new tokenizer makes 4.7 materially more expensive in practice and paired that with screenshots of prompt-following failures and fabricated search claims. A separate repost in the prompt-injection complaint thread added another early report of 4.7 surfacing internal safety framing in a user-visible answer.
Anthropic's own channels implicitly acknowledge the adjustment period. Cherny's first impressions said it took him a few days to learn how to work with 4.7 effectively, and QualitativAi's launch-day post focused less on raw quality than on losing the old sense of explicit control over reasoning budgets.
Anthropic is now shipping a digest for Claude Code itself
The last interesting detail is organizational, not model-level. Anthropic used this launch to add a dedicated @ClaudeDevs account, a curated What's New page, and a monthly webinar series for Claude Code updates.
That matters because Claude Code's feature set is now moving as fast as the models. In the same week as Opus 4.7, Anthropic had already pushed a rebuilt desktop app via the desktop relaunch post, cloud-hosted routines that can trigger on schedules, API calls, or GitHub events, plus the permission, recap, and effort controls wrapped around 4.7. The product is starting to need its own newsroom.