An Anthropic employee accidentally included the full source code of Claude Code, nearly 2,000 files and 500,000 lines, in a routine npm software update around March 31, according to The Guardian. Security researcher Shou Chaofan discovered the code, decrypted it, and posted it on X, where it drew over 33 million views. A rewritten version became the fastest-downloaded repository in GitHub’s history before Anthropic issued over 8,000 copyright takedown requests to contain the spread.
The leak triggered an outsized reaction in China, where Anthropic blocks access to its services entirely; China sits on the same restricted list as Russia, North Korea, Afghanistan, Iran, and Cuba. According to the South China Morning Post, Chinese developers scrambled to download copies and analyze everything from the tool’s architecture and agent design to its memory mechanisms. A forum thread titled “Claude Code source code leak incident” drew millions of views, with developers sharing what they had learned and proposing ways to build on it.
What the Code Reveals
The leak exposed Claude Code’s internal architecture — the engineering decisions behind how Anthropic turns its Claude model into an autonomous coding agent — not the model weights themselves. Zhang Ruiwang, a Beijing-based IT system architect, told SCMP that “the code batches are indeed a treasure for AI companies or developers, as they revealed all the key engineering decisions Anthropic made.” Within the code, developers also spotted blueprints for a Tamagotchi-style coding assistant and an always-on AI agent, according to The Guardian, citing The Verge.
Anthropic called it “a release packaging issue caused by human error, not a security breach,” and said no sensitive customer data or credentials were involved. But the leak contained commercially sensitive material, including the tools and instructions that make Claude work as a coding agent, the Wall Street Journal reported, as cited by The Guardian.
The Geopolitical Irony
The timing compounds the embarrassment. Less than a year ago, CEO Dario Amodei publicly called China an “adversarial nation” and pushed for restricting Chinese access to American AI capabilities. Anthropic recently accused three Chinese AI companies — DeepSeek, Moonshot AI, and MiniMax — of setting up more than 24,000 fraudulent accounts and prompting Claude over 16 million times to train their own models, according to the Times of India.
Now the company has handed Chinese developers a detailed map of how its most commercially valuable coding tool works — through a misconfigured .npmignore file.
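The public reporting leaves the mechanism at “a misconfigured .npmignore,” but the general failure mode is well understood: when package.json has no “files” allowlist, npm falls back to .npmignore (or .gitignore, if no .npmignore exists) to decide what goes into the published tarball. Because it is a denylist, omissions fail silently. A hypothetical sketch of how a single bad pattern can leak an entire source tree:

```
# Hypothetical .npmignore. npm consults this denylist when
# package.json has no "files" allowlist.
node_modules/
*.log
test/
# One mistyped pattern, say "scr/" where "src/" was meant, and
# everything under src/ ships with the package, with no warning
# at publish time.
scr/
```

Running npm pack --dry-run before publishing prints every file that would be uploaded, which is the simplest way to catch this class of mistake before release.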
The Broader Security Question
This was Anthropic’s second data leak in recent weeks. Fortune previously reported, as noted by The Guardian, a separate exposure in which the company stored thousands of internal files on publicly accessible systems, including references to upcoming models codenamed “Mythos” and “Capybara.” For a company whose entire brand rests on AI safety and responsible development, the pattern raises a straightforward operational security question: if Anthropic cannot secure its own build pipeline, what does that signal about the infrastructure its agent tools run on?
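The packaging half of that question, at least, has a mundane answer: tarball contents can be audited mechanically before every release. Below is a minimal sketch of such a pre-publish gate in TypeScript, built on npm’s real pack command; the script name, the allowlist, and the dist/ layout are illustrative assumptions, not Anthropic’s actual setup.

```typescript
// prepublish-audit.ts (hypothetical name): fail the release if the
// npm tarball would contain anything outside an explicit allowlist.
// "npm pack --dry-run --json" reports, without writing a tarball,
// every file that a publish would upload.
import { execSync } from "node:child_process";

const out = execSync("npm pack --dry-run --json", { encoding: "utf8" });
const [report] = JSON.parse(out); // one report per packed package

// Illustrative allowlist: only built output and package metadata.
const allowed = /^(dist\/|package\.json$|README\.md$|LICENSE$)/;

const unexpected = report.files
  .map((f: { path: string }) => f.path)
  .filter((p: string) => !allowed.test(p));

if (unexpected.length > 0) {
  console.error("Unexpected files in npm package:", unexpected);
  process.exit(1); // non-zero exit fails the CI step
}
console.log(`OK: ${report.files.length} files, all inside the allowlist.`);
```

Wired into CI as a required step ahead of npm publish, a check like this turns a silent packaging mistake into a failed build.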