A source map in Claude Code’s npm package exposed the CLI source, prompts, and unreleased features; GitHub later reversed overbroad fork takedowns. Vendors should treat build packaging, safety scaffolding, and repo enforcement as active risk areas.

The weird bits surfaced fast: the npm package page showed 2.1.88 live while the leak spread; the official GitHub repo moved to v2.1.89 soon after; Alex Kim's write-up pulled out fake tool injection and KAIROS references; and GitHub's own DMCA paperwork, the original notice plus a partial retraction, showed how a repo cleanup spilled into a much broader fork takedown.
The basic fact pattern is simple. A source map in the public @anthropic-ai/claude-code npm package let outsiders reconstruct readable CLI source. The npm package listing shows 2.1.88 published during the incident window, and the official GitHub repo shows v2.1.89 landing shortly after.
That matters because the incident looks like a packaging failure, not an intrusion. HN commenters pointed at a possible Bun production source-map issue (the "Bun build bug" comment), though that remains a community theory, not a confirmed root cause. Either way, the engineering lesson is plain: your published artifact is part of your security boundary.
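A minimal sketch of the kind of prepublish gate that would have caught this, assuming you can enumerate the tarball contents first (for example via `npm pack --dry-run --json`); the function name and file list here are illustrative, and the filter itself is the whole idea:

```typescript
// Prepublish gate sketch: refuse to ship if any source map would land
// in the published artifact. The file list is assumed to come from
// `npm pack --dry-run --json`; it is hard-coded here for illustration.
export function findShippedSourceMaps(files: string[]): string[] {
  // Bundlers emit `.map` siblings next to their output files; an inline
  // `sourceMappingURL` data URI would need a content scan as well.
  return files.filter((f) => f.endsWith(".map"));
}

const tarball = ["package.json", "cli.js", "cli.js.map", "vendor.wasm"];
const maps = findShippedSourceMaps(tarball);
if (maps.length > 0) {
  console.error(`source maps in publish artifact: ${maps.join(", ")}`);
}
```

Wiring this into a `prepublishOnly` script turns an embarrassing postmortem into a failed CI job.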
The real story here is the anti-distillation mechanism. According to both the HN thread and Alex Kim's analysis, Claude Code can send anti_distillation: ['fake_tools'] in API requests, which causes the server to inject decoy tool definitions into the prompt path (HN comment on anti-distillation; Alex Kim's write-up).
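From the public reports, the client side of that mechanism would look roughly like this. Only the `anti_distillation: ['fake_tools']` field is attested; the surrounding request shape, interface, and function are hypothetical, invented for illustration:

```typescript
// Hypothetical sketch of the reported mechanism. Only the
// `anti_distillation` field is attested in public reports; the rest of
// this request shape is invented for illustration.
interface AgentRequest {
  model: string;
  tools: { name: string; description: string }[];
  anti_distillation?: string[];
}

function buildRequest(tools: AgentRequest["tools"]): AgentRequest {
  return {
    model: "example-model",
    tools,
    // The server reportedly responds by mixing decoy tool definitions
    // into the prompt, so captured transcripts are poisoned.
    anti_distillation: ["fake_tools"],
  };
}

const req = buildRequest([{ name: "read_file", description: "Read a file" }]);
console.log(req.anti_distillation); // prints: [ 'fake_tools' ]
```

The interesting design choice is that the poisoning happens server-side: a wrapper recording traffic cannot tell decoy tools from real ones.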
That is more revealing than the usual leaked prompt snippets. It shows Anthropic treating prompt capture and agent imitation as an active adversarial problem. If you build agent products, that design choice is worth studying: the company appears to assume competitors or wrappers will record interactions and try to clone the behavior, so the system can poison the observation layer itself.
One of the more credible practitioner reactions came from the HN commenter who called out src/cli/print.ts at roughly 3,167 lines, handling the agent loop, signals, rate limits, AWS auth, MCP lifecycle, plugin install and refresh, and worktree bridging (HN comment on CLI internals).
That is interesting for two reasons: it shows how much operational machinery a production agent CLI carries in one place, and it gives a baseline for comparing the shipped wiring against what Anthropic documents publicly.
Anthropic’s public docs make part of this visible already. The hooks reference and hooks guide describe extensive lifecycle automation, shell hooks, HTTP hooks, and MCP tool hooks. The leak seems to have shown how much more of that wiring exists behind the public surface.
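For the documented part, a hook registration in Claude Code's settings file looks roughly like this (the shape follows the public hooks reference; the matcher and the script path are illustrative, not from the leak):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/audit-shell-call.sh"
          }
        ]
      }
    ]
  }
}
```

Even this public surface already lets a shell script intercept every tool call; the leak suggests the internal lifecycle wiring goes considerably further.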
Public discussion zeroed in on a line that tells the agent not to mention Claude Code or say it is an AI in commit messages and PR descriptions (HN comment on undercover mode). Read plainly, that suggests a mode for blending into normal developer output.
There is also reported logic for detecting user frustration, plus subscription and compaction-related internals in the leaked code (Alex Kim's write-up). None of that is shocking on its own. Put together, it shows that serious coding agents now have product instincts baked into the runtime: when to stay quiet, how to manage user trust, how to compress context, and how to avoid looking clumsy in version control.
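The frustration-detection claim is a one-liner in the reports; no actual pattern has been published. The sketch below only illustrates the technique, regex heuristics over user messages, and every pattern in it is invented:

```typescript
// Purely illustrative: a regex-based frustration heuristic of the kind
// reported. The actual patterns in Claude Code are not public.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\bwhy (isn'?t|won'?t) (this|it) work(ing)?\b/i,
  /\b(ugh|wtf|this is (stupid|broken))\b/i,
  /!{3,}/, // runs of exclamation marks
];

function looksFrustrated(message: string): boolean {
  return FRUSTRATION_PATTERNS.some((re) => re.test(message));
}

console.log(looksFrustrated("why won't this work???")); // prints: true
console.log(looksFrustrated("thanks, looks good")); // prints: false
```

The product-design point is less the regex than what fires on a match: the agent can change tone, slow down, or escalate before the user gives up.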
For engineers, this is a useful correction to the usual “agent equals prompt plus tools” mental model. Shipping one of these products means building a behavior layer around the model, not just a command wrapper.
The roadmap leakage may be more strategically painful than the code leak. HN commenters and secondary analysis called out references to KAIROS, described as an always-on or proactive agent mode, plus other features like AFK mode, fast mode, and task budgeting (tweet on unreleased features; HN comment on KAIROS).
Treat those specifics carefully, because most came from public reverse engineering rather than an official changelog. Still, the pattern is believable and consistent: Claude Code appears to have a much larger product surface in flight than what is publicly documented. Competitors did not just get implementation details. They got hints about where terminal agents are headed next.
The repo enforcement sequence is almost its own incident. Theo reported that GitHub disabled a repo that did not contain the leaked Claude Code source, only a prior skill edit (Theo on mistaken takedown). Gergely Orosz then traced the public paperwork: Anthropic's original DMCA notice named the nirholas/claude-code repo and 96 listed forks, but GitHub's notice says it processed the takedown against the entire network of 8.1K repositories, because the reported network exceeded 100 repositories and the submitter alleged that most forks were infringing (Gergely on network-wide takedown; Gergely on original notice).
Anthropic then filed a partial retraction stating that all repositories except the named repo and the 96 individually listed forks should be reinstated. That does not answer every procedural question, but it does settle the narrow factual one: the overbroad removals were reversed in public paperwork within hours.
This story will be remembered as a leak, but engineers should probably file it under operational exposure. One shipped artifact exposed implementation details, hidden safeguards, prompt behavior, and unreleased features. One cleanup workflow then swept in unrelated repos before being narrowed back down.
That combination is the useful lesson. If you ship AI developer tools, you need release checks for debug artifacts, tighter separation between runtime logic and internal notes, and takedown processes that do not expand faster than humans can audit. Claude Code’s model capabilities may remain the moat. The surrounding system is what the public got to inspect.
Posted by treexs
The useful engineering angle is the accidental exposure of a large AI coding CLI through npm source maps, plus the apparent build/runtime issue in Bun that may have caused it. Commenters also extract implementation clues from the leaked code: anti-distillation defenses, a very large monolithic CLI function, and unreleased feature flags, all of which are relevant to how AI tools are packaged and operated.
Posted by treexs
Chaofan Shou (@Fried_rice) posted on March 31, 2026: 'Claude code source code has been leaked via a map file in their npm registry! Code: https://pub-aea8527898604c1bbb12468b1581d95e.r2.dev/src.zip'. The tweet links to a ZIP file containing the leaked Claude Code source.
Posted by treexs
Thread discussion highlights:
- cedws on anti-distillation / fake tools: ANTI_DISTILLATION_CC ... injects anti_distillation: ['fake_tools'] into every API request, which causes the server to silently slip decoy tool definitions into the model's system prompt.
- mohsen1 on Claude Code internals: src/cli/print.ts ... 3,167 lines long ... handles: agent run loop, SIGINT, rate-limits, AWS auth, MCP lifecycle, plugin install/refresh, worktree bridging ... This should be at minimum 8–10 separate modules.
- jakegmaths on Bun production build bug: This is ultimately caused by a Bun bug ... source maps are exposed in production ... Claude code uses (and Anthropic owns) Bun, so my guess is they're doing a production build, expecting it not to output source maps, but it is.
Posted by alex000kim
Relevant as a look at Claude Code’s internal architecture and operational mistakes: leaked source maps, anti-distillation fake tools, hook-based extensibility, compaction/session handling, and how release packaging can accidentally expose sensitive implementation details.
Posted by alex000kim
Alex Kim analyzes Anthropic's Claude Code CLI source code leaked via a .map file in their npm package. Key findings: ANTI_DISTILLATION_CC flag injects fake tools; undercover mode hides AI traces and internal codenames; regex detects user frustration; cch=00000 enforces subscription checks; 250,000 wasted API calls daily from compaction failures; KAIROS unreleased autonomous agent mode. Second leak after model spec exposure.
Posted by alex000kim
Thread discussion highlights:
- mzajc on undercover mode: "NEVER include in commit messages or PR descriptions: - The phrase 'Claude Code' or any mention that you are an AI ..." This very much sounds like it does what it says on the tin, i.e. stays undercover and pretends to be a human.
- Eisenstein on tooling hooks and lifecycle: I think its just a case of dealing with something that has no precedent... we now have a tool that can produce results that are independent of our ability to produce them with any former class of tools.
- causal on operational leakage: It's like they discarded all release harnesses and project tracking and just YOLO'd everything into the codebase itself... This is just revealing operational details the agent doesn't need to know.
🚨 Anthropic’s Claude Code Source Leak — What It Actually Exposes
A careless build mistake just laid bare one of the most advanced AI coding tools, and the lessons are huge. Insights from Zhihu contributor deephub 👇
Anthropic DMCA’d my Claude code fork. …which did not have the Claude Code source. It was only for a PR where I edited a skill a few weeks ago. Absolutely pathetic.
this was a communication mistake, see retraction here: github.com/github/dmca/bl… should be reinstated but not sure what the process is here