Anthropic began charging pay-as-you-go rates for third-party harnesses like OpenClaw instead of covering their usage under Claude subscriptions. Affected users should review the available wrappers and migration paths, including the Claude Code CLI, OpenAI authentication, Hugging Face's guides, and local tooling.

According to Hugging Face's "Liberate your OpenClaw" guide, the fastest replacement path was hosted inference, while local models were pitched as the privacy-first option. Anthropic's Claude Code auth docs still describe direct login with a Claude.ai account. OpenClaw's OpenAI provider page says ChatGPT sign-in works for Codex-based access, and sonami-tech's proxy appeared the same day to turn the Claude Code CLI into an OpenAI-compatible endpoint.
The policy change landed as an email, not a product blog post. The screenshot in Moritz Kremb's post says that starting April 4 at 12pm PT, Claude subscriptions would no longer cover third-party harnesses including OpenClaw, though users could keep using those tools with separately billed extra usage.
The same email made three concrete points. It said third-party harnesses put an outsized strain on Anthropic's systems, that enforcement would start with OpenClaw and roll out to more harnesses shortly, and that affected users could redeem a one-time credit equal to their monthly subscription price by April 17.
That split lines up with Anthropic's help article on subscription versus API billing, which already described Claude plans and API usage as separate products, and with Anthropic's Claude Code plan page, which says Pro and Max now cover Claude Code itself.
The quickest workaround was to stop calling Anthropic as an API provider and start calling the local Claude Code binary. In Kremb's post, OpenClaw is told to use Claude Code indirectly, "through the cli," so the request looks like terminal usage instead of third-party harness traffic.
That idea showed up in several forms within hours. A public migration gist walks users from anthropic/ to claude-cli/, and sonami-tech's Rust proxy exposes Claude Code CLI as an OpenAI-compatible Chat Completions server for tools like OpenClaw, aider, LiteLLM, and Open WebUI.
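From the client side, that proxy route makes the swap almost invisible: any OpenAI-compatible tool just points at a local URL. A minimal sketch of what such a request looks like, assuming the proxy listens on localhost:8080 with the usual Chat Completions path (the port, path, and model name here are illustrative, not documented values from sonami-tech's proxy):

```python
import json
import urllib.request

# Hypothetical local endpoint exposed by the CLI proxy; the real port
# and path depend on how the proxy is actually configured.
PROXY_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-style Chat Completions request for the proxy.

    From the client's point of view this is ordinary API traffic; the proxy
    translates it into Claude Code CLI invocations behind the scenes.
    """
    body = json.dumps({
        "model": model,  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        PROXY_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("claude-sonnet", "Summarize this repo.")
# urllib.request.urlopen(req)  # would only succeed with the proxy running
```

The point of the pattern is that tools like aider or Open WebUI never learn they are talking to a CLI wrapper; they see a stock Chat Completions server.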
The docs support the premise. Anthropic's authentication page for Claude Code says individual users can log in with a Claude.ai account on Pro or Max, which is exactly the surface these wrappers are borrowing.
Hugging Face moved fast with a migration narrative of its own. In the linked guide, the company says Anthropic is limiting access to Claude models in open agent platforms for Pro and Max subscribers, then offers two replacements: a hosted open model through Hugging Face Inference Providers, or a fully local model running on the user's own hardware.
OpenClaw was already built for that kind of swap. Its provider directory lists Anthropic, Claude CLI, Google, Groq, Hugging Face Inference, OpenAI, local gateways, and a long tail of other backends, all selected with the same provider/model pattern.
There was also a closed-model detour. Kremb's instructions show openclaw models auth login --provider openai-codex, and OpenClaw's OpenAI provider docs say Codex supports ChatGPT sign-in for subscription access, with explicit support for external tools and workflows like OpenClaw.
One of the more specific community explanations came from a Reddit post claiming that OpenClaw sends a consistent system prompt at the start of each Claude API request, including obvious OpenClaw strings that Anthropic could detect and block.
That post is also where the story gets weirder. The same author says their proxy includes a text-replacement layer that can swap words in prompts and responses transparently before they hit Claude Code CLI, which turns a billing change into an instant subgenre of wrapper engineering.
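The claimed replacement layer is easy to picture. A minimal sketch, assuming a simple substitution table applied to outgoing prompts and inverted on responses (the marker strings below are made up for illustration, not OpenClaw's actual fingerprint):

```python
# Hypothetical substitution table: identifying strings -> neutral stand-ins.
SUBSTITUTIONS = {
    "OpenClaw": "the client",
    "openclaw-agent": "cli-agent",
}

def scrub(text: str) -> str:
    """Rewrite outgoing prompt text before it reaches the Claude Code CLI."""
    for needle, replacement in SUBSTITUTIONS.items():
        text = text.replace(needle, replacement)
    return text

def unscrub(text: str) -> str:
    """Undo the substitutions so the harness sees its own terminology again.

    The round trip is only lossless when the stand-ins never occur
    naturally in the model's output, which a real proxy would have to
    guard against with more distinctive placeholder tokens.
    """
    for needle, replacement in SUBSTITUTIONS.items():
        text = text.replace(replacement, needle)
    return text

prompt = "You are running inside OpenClaw (openclaw-agent)."
assert unscrub(scrub(prompt)) == prompt
```

Whether or not the fingerprint theory holds, this is the kind of transparent rewrite the post describes: neither the harness nor the model ever sees the other side's original wording.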
Even without that fingerprint theory, the direction of travel is clear in the surrounding products. OpenClaw's own repo now foregrounds OpenAI OAuth in its README, and Hugging Face's guide treats open and local models as the cleanest exit ramp from a subscription surface the app no longer controls.