Anthropic accidentally published the internal source code of Claude Code, its agentic coding tool and one of its most commercially important products, via a source map file bundled into a routine npm package update. The leak was first reported by CNBC on March 31. By April 1, Bloomberg had confirmed the extent of the exposure: roughly 163,000 lines of TypeScript code, covering the full internals of a product on which the company has staked considerable commercial and reputational weight. Anthropic's response, which included sending copyright takedown requests to thousands of GitHub repositories, was subsequently described by the company itself as having gone further than intended.
The mechanism of the leak is worth understanding because it is a common failure mode that other companies in similar positions should note. When publishing JavaScript or TypeScript packages to npm, build toolchains typically generate source map files: auxiliary files that map minified or compiled production code back to the original source, primarily to help developers debug errors. Claude Code uses Bun's bundler, which generates source maps by default unless explicitly disabled. Whoever prepared the package for publication did not suppress the source map, and the map file referenced the full, unminified TypeScript source, which was accessible from Anthropic's R2 storage bucket once the mapping was followed.
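The exposure path described above can be made concrete with a small sketch. Source maps follow a well-known JSON format (version 3) whose `sources` array lists the original file paths and whose optional `sourcesContent` array can embed the full original text of each file; when `sourcesContent` is absent, the `sources` paths point at files the map's consumer is expected to fetch, which is how a map can lead back to a storage bucket. The function and file names below are illustrative, not Anthropic's; this is only a sketch of how little work recovering embedded source takes.

```typescript
// Minimal shape of a v3 source map, as emitted by most JS/TS bundlers.
interface SourceMapV3 {
  version: number;
  sources: string[];                   // original file paths
  sourcesContent?: (string | null)[];  // original file text, if embedded
  mappings: string;                    // VLQ-encoded position mappings
}

// Recover every original source whose text is embedded in the map,
// keyed by its original path. No reverse engineering required.
function extractEmbeddedSources(map: SourceMapV3): Record<string, string> {
  const out: Record<string, string> = {};
  map.sources.forEach((path, i) => {
    const content = map.sourcesContent?.[i];
    if (typeof content === "string") out[path] = content;
  });
  return out;
}

// Hypothetical example map, like one a bundler would write next to its output.
const exampleMap: SourceMapV3 = {
  version: 3,
  sources: ["src/cli.ts"],
  sourcesContent: ["// original, commented TypeScript\nexport const main = () => {};\n"],
  mappings: "AAAA",
};

const recovered = extractEmbeddedSources(exampleMap);
console.log(Object.keys(recovered)); // original file paths, readable as written
```

In practice a leaked `.map` file is either shipped inside the npm tarball itself or referenced by a `//# sourceMappingURL=` comment at the end of the bundled output; either way, anyone who downloads the package can walk it back to the source.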
No credentials or customer data were exposed. Anthropic confirmed this and characterised the incident as human error in the packaging process rather than a security breach in the conventional sense. That distinction matters for customer risk, but it does not change what was exposed: the complete implementation of a proprietary tool that developers, and Anthropic's competitors, now have access to. Reverse engineering a minified bundle is slow and uncertain; reading clearly commented TypeScript is not.
The takedown response became its own story. Anthropic issued DMCA copyright takedown notices to GitHub targeting repositories that had copied, archived, or mirrored the exposed code. The notices were broad, and according to Bloomberg, thousands of repositories were removed, including some that were primarily analytical or educational in nature rather than simple copies. The company later acknowledged that the takedowns had been wider than intended and said the scope had been scaled back significantly. This is a delicate position: a company that has publicly emphasised openness, safety research, and trust as core values was simultaneously attempting to scrub third-party research that drew on code it inadvertently published.
The timing adds an edge to the story. Bloomberg's framing was direct: the leak was a blow to a company that has built its brand on prioritising safety and responsible development. Whether operational security around a packaging process is meaningfully connected to AI safety research is a philosophical question, but the reputational adjacency is real. Anthropic is a company that asks its users and partners to trust it with their most sensitive code and data. A significant internal code leak caused by a default build setting left unchecked does not help that case.
Claude Code has become one of Anthropic's most visible consumer and developer products, with a substantial user base among professional developers who run it against their own codebases daily. For those users, the question is less one of liability than of confidence: the tool they use to process their most sensitive work is built and operated by a team that missed a source map file in a production package. The answer to that question is probably "it happens, and it does not change the underlying product quality." The more interesting long-term question is what the exposed code reveals about how the tool actually works, and how that information will be used by the researchers and competitors who now have access to it.