
A 59.8 MB JavaScript source map file — a debugging artifact that translates compressed, minified code back into readable source — was accidentally bundled into version 2.1.88 of the @anthropic-ai/claude-code package on the public npm registry earlier today.
The irony is almost too neat: the same company that quietly engineered an "Undercover Mode" to make sure its AI never leaks internal codenames in public git commits just accidentally shipped its entire codebase to every developer with an internet connection.
On March 31, 2026, Chaofan Shou posted publicly to X: "Claude code source code has been leaked via a map file in their npm registry!" The published package reportedly contained a source map file referencing the complete, unminified TypeScript source, which was directly downloadable as a ZIP archive from Anthropic's own R2 cloud storage bucket. That X thread has already crossed 3.1 million views.
The leaked codebase was quickly archived to a public GitHub repository, which picked up more than 1,100 stars and 1,900 forks within hours. To be clear, this is not a leak of Claude's core intelligence — no model weights, no training data. What surfaced is the client-side CLI layer: the terminal agent developers install and run locally. But it's still a significant exposure.
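To see why a stray .map file amounts to a full source disclosure: a Source Map v3 document is plain JSON, and its optional sourcesContent field can embed every original file verbatim. The sketch below uses an invented file name and contents for illustration; real maps are emitted by bundlers like esbuild or webpack.

```javascript
// Hypothetical, minimal Source Map v3 document. When "sourcesContent"
// is populated, the original files ship inside the map verbatim, so
// recovering them is a plain JSON walk — no reverse engineering needed.
const map = {
  version: 3,
  file: "cli.js",
  sources: ["src/query-engine.ts"], // original file paths (illustrative)
  sourcesContent: [
    "export async function query(prompt: string) { /* ... */ }",
  ],
  mappings: "AAAA",                 // VLQ-encoded position data
};

map.sources.forEach((name, i) => {
  console.log(`--- ${name} ---`);
  console.log(map.sourcesContent[i]); // the unminified source, as written
});
```

Bundlers populate sourcesContent by default precisely so debugging works without access to the original tree, which is also what makes publishing the map equivalent to publishing the source.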
## What's Inside
The base tool definition alone runs to 29,000 lines of TypeScript, and the Query Engine — the brain handling all LLM API calls, streaming, caching, and orchestration — is a 46,000-line module. Beyond the architecture, the leak exposed things Anthropic clearly wasn't ready to announce.
*Image by @amaan8429*
The code confirms that "Capybara" is the internal codename for a Claude 4.6 variant, with Fennec mapping to Opus 4.6 and the unreleased Numbat still in testing. Internal comments reveal Anthropic is iterating on Capybara v8, which still carries a 29–30% false claims rate — a regression from the 16.7% seen in v4.
One of the most talked-about finds is BUDDY — a Tamagotchi-style AI companion that sits in a speech bubble next to the user's input box, complete with 18 species, rarity tiers, and five personality stats including CHAOS and SNARK. The plan was a teaser rollout from April 1–7, going live in May, starting with Anthropic employees.
Also exposed is KAIROS — an autonomous, always-on background agent mode that runs memory consolidation tasks while the user is idle, cleaning context and merging observations into verified facts through a process called autoDream.
The full system prompt is also out in the open, along with telemetry logic that tracks frustration signals — including when users swear at Claude — and repeated "continue" prompts routed through Datadog.
The timing makes this worse than a simple packaging mistake. Hours before the leak, a separate supply chain attack hit the axios npm package. If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in malicious axios versions (1.14.1 or 0.30.4) containing a Remote Access Trojan, as VentureBeat reported.
Check your `package-lock.json`, `yarn.lock`, or `bun.lockb` for axios versions 1.14.1 or 0.30.4, or for the dependency `plain-crypto-js`. If found, treat the machine as fully compromised and rotate all secrets. Anthropic now recommends the native installer (`curl -fsSL https://claude.ai/install.sh | bash`), which ships a standalone binary and bypasses the npm dependency chain entirely. If you must stay on npm, downgrade to the last confirmed safe version, 2.1.86, and rotate your Anthropic API keys via the developer console regardless.
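The lockfile check can be scripted. The sketch below assumes the npm lockfile v2/v3 `packages` layout; the sample data, and the assumption that `plain-crypto-js` would appear as its own package entry, are illustrative.

```javascript
// Scan a parsed package-lock.json (v2/v3 "packages" shape) for the
// compromised axios versions and the planted dependency name reported
// in the incident. Hypothetical sketch, not an official tool.
const BAD_AXIOS = new Set(["1.14.1", "0.30.4"]);
const BAD_DEPS = new Set(["plain-crypto-js"]);

function findCompromised(lockfile) {
  const hits = [];
  for (const [path, meta] of Object.entries(lockfile.packages || {})) {
    // "node_modules/foo" or "node_modules/bar/node_modules/foo" -> "foo"
    const name = path.split("node_modules/").pop();
    if (name === "axios" && BAD_AXIOS.has(meta.version)) {
      hits.push(`${path}@${meta.version}`);
    }
    if (BAD_DEPS.has(name)) {
      hits.push(`${path}@${meta.version}`);
    }
  }
  return hits;
}

// Example lockfile fragment with one compromised entry:
const sample = {
  packages: {
    "node_modules/axios": { version: "1.14.1" },
    "node_modules/chalk": { version: "5.3.0" },
  },
};
console.log(findCompromised(sample)); // flags the bad axios entry
```

The same version set applies to `yarn.lock` and `bun.lockb`, though their formats differ; for an interactive check, `npm ls axios` will show every resolved axios version in the tree.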
Anthropic has not issued a public statement as of publication. This story will be updated.
