This is a developing story. NCT previously covered the original Claude Code source leak in “Anthropic Accidentally Leaked 512,000 Lines of Claude Code Source via npm, Revealing Anti-Distillation Traps and Undercover Mode.” This article covers the new DMCA enforcement fallout.
Anthropic’s attempt to scrub its leaked Claude Code source from GitHub backfired on Tuesday when a DMCA takedown notice swept up approximately 8,100 repositories, including legitimate forks of Anthropic’s own public Claude Code repository. The company retracted the takedown for all but a handful of repositories within hours, according to TechCrunch.
The takedown was triggered by last week’s accidental exposure of 512,000 lines of Claude Code source code via an npm packaging error. After the leaked code was mirrored to GitHub and accumulated over 84,000 stars, Anthropic filed a takedown notice under the U.S. Digital Millennium Copyright Act (DMCA) requesting removal of repositories hosting the code.
What Went Wrong With the Takedown
The problem was scope. The repository named in Anthropic’s DMCA notice was part of a fork network connected to Anthropic’s own public Claude Code repo, so GitHub’s automated enforcement cascaded across the entire fork tree. Developers who had legitimately forked the public repository found their code blocked, according to TechCrunch.
Boris Cherny, Anthropic’s head of Claude Code, confirmed on X that the overbroad takedown was accidental. “The repo named in the notice was part of a fork network connected to our own public Claude Code repo, so the takedown reached more repositories than intended,” an Anthropic spokesperson told TechCrunch. Anthropic retracted all notices except for one repository and 96 forks containing the leaked source, and GitHub restored access to the affected forks.
GitHub’s own DMCA records confirm the notice was executed against approximately 8,100 repositories.
The Copyright Irony
The DMCA enforcement carries a particular irony. Anthropic has been on the receiving end of multiple copyright lawsuits alleging it used copyrighted material to train its AI models without authorization. In September 2025, Anthropic agreed to pay $1.5 billion to resolve a class-action lawsuit brought by authors and publishers over allegations it used pirated books from shadow libraries to train Claude, according to Business Insider. Reddit has also sued Anthropic for scraping user-generated content, and Universal Music Group filed suit last month over allegations it downloaded 20,000 copyrighted songs for training.
Now Anthropic is invoking the same copyright framework to protect its own code.
Security Fallout Compounds the Cleanup
The DMCA debacle is only one layer of the post-leak fallout. Security firm Straiker warned that the exposed source gives attackers a blueprint to craft payloads targeting Claude Code’s four-stage context management pipeline, according to The Hacker News. Attackers have also begun typosquatting internal npm package names to target developers attempting to compile the leaked source. Five such malicious packages have already been published to npm; they currently contain only empty stubs, but could be weaponized in a dependency confusion attack.
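Dependency confusion works because a package manager asked for an internal package name may happily resolve a same-named package from the public registry instead. As a defensive sketch (the scope name and registry URL below are illustrative, not Anthropic’s actual configuration), teams pulling internal packages can pin their scope to a private registry and disable install scripts in `.npmrc`:

```ini
; .npmrc — illustrative example; @yourco and the registry URL are hypothetical
; Route every @yourco-scoped package to the private registry only,
; so a look-alike package on the public npm registry is never resolved.
@yourco:registry=https://npm.internal.yourco.example/

; Refuse to run lifecycle install scripts, the usual payload vehicle
; in typosquatting and dependency confusion attacks.
ignore-scripts=true
```

With an explicit scope-to-registry mapping, a public package published under the same internal name simply never matches; `ignore-scripts=true` blunts the empty-stub-turned-payload scenario described above even if a bad package does get installed.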
Paul Price, founder of ethical hacking firm Code Wall, told Business Insider the leak was “more embarrassing than detrimental,” noting that Anthropic’s internal model weights were not exposed. “Claude Code is one of the best-designed agent harnesses out there, and now we can see how they approach the hard problems,” Price said.
Why This Matters for Builders
The botched DMCA cleanup adds to a difficult stretch for Anthropic, which is reportedly planning an IPO. As TechCrunch noted, leaking source code as a public company typically triggers shareholder litigation. For teams building on Claude Code or similar agent harnesses, the episode is a reminder that DMCA enforcement on fork networks can cascade unpredictably, and that npm packaging pipelines remain a single point of failure for source exposure.
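The npm packaging error that exposed the source is exactly the kind of failure a publish-time audit catches. One minimal precaution (the package name and layout below are illustrative) is to declare an explicit `files` allowlist in package.json so only intended build output ships:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": [
    "dist/"
  ]
}
```

Running `npm pack --dry-run` against such a package prints every file that would land in the published tarball, so stray source directories, credentials, or build artifacts surface in review before `npm publish` makes them public.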