This is a developing story. NCT previously covered the original source leak and the DMCA takedown fallout. This article covers new developments that emerged on April 2.
The Anthropic Claude Code situation has expanded beyond an intellectual property exposure. Three new developments landed within hours of each other on Wednesday: a critical security vulnerability, revelations about hidden user-behavior tracking, and a Congressional demand for answers.
Critical vulnerability: CVE-2026-21852
SecurityWeek reported that security firm Adversa AI discovered a critical vulnerability in Claude Code, catalogued as CVE-2026-21852. Penligent’s technical analysis explains the mechanics: a malicious repository could set the ANTHROPIC_BASE_URL environment variable, causing Claude Code to send requests to that URL before displaying its trust prompt and potentially leaking API keys to an attacker-controlled server.
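To make the mechanism concrete, here is a minimal sketch of how a base-URL environment variable can redirect API traffic. This is not Anthropic’s actual code; the helper name and default URL are illustrative assumptions based on the reported behavior.

```typescript
// Illustrative sketch only -- NOT the leaked Claude Code source.
// Shows how honoring a base-URL env var can redirect credentialed requests.
const DEFAULT_BASE_URL = "https://api.anthropic.com";

// Hypothetical helper: resolves the endpoint a client would call.
// If a malicious repository exports ANTHROPIC_BASE_URL (e.g. via a
// checked-in direnv file), every request -- along with any attached
// API key header -- goes to the attacker's server instead.
function resolveBaseUrl(env: Record<string, string | undefined>): string {
  return env["ANTHROPIC_BASE_URL"] ?? DEFAULT_BASE_URL;
}

console.log(resolveBaseUrl({})); // default Anthropic endpoint
console.log(resolveBaseUrl({ ANTHROPIC_BASE_URL: "https://evil.example" })); // attacker endpoint
```

The reported flaw is the ordering: the request fires before the user is shown the trust prompt for the repository that set the variable.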
The vulnerability is separate from the source map leak itself. The March 31 npm packaging error exposed 512,000 lines of source code; Adversa AI found the CVE by analyzing that exposed code.
Additionally, VentureBeat reported that developers who installed or updated Claude Code via npm on March 31 between 00:21 and 03:29 UTC may have pulled a malicious version of the axios dependency (1.14.1 or 0.30.4) containing a Remote Access Trojan (RAT). The window was narrow, but the exposure is real for anyone who updated during that three-hour period.
Frustration tracking in the source code
Scientific American reported that the leaked source code contains regex-based detection that scans user prompts for profanity, insults, and phrases like “so frustrating” and “this sucks,” then logs that the user expressed negativity.
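The reported approach amounts to matching a prompt against a fixed list of patterns and logging a boolean. The sketch below is a hypothetical reconstruction; the pattern list and function name are assumptions, with only the two quoted phrases taken from the reporting.

```typescript
// Hypothetical reconstruction -- NOT the actual leaked detection code.
// Only "so frustrating" and "this sucks" come from the reporting;
// the rest of the pattern list is invented for illustration.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\bso frustrating\b/i,
  /\bthis sucks\b/i,
];

// Per the cited analysis, a match is only logged as a product-health
// metric; it does not change the model's behavior or responses.
function detectsFrustration(prompt: string): boolean {
  return FRUSTRATION_PATTERNS.some((pattern) => pattern.test(prompt));
}

console.log(detectsFrustration("this sucks, the build broke again")); // true
console.log(detectsFrustration("please refactor this function"));     // false
```

A regex scan this simple is what makes the governance question below pointed: the signal is trivial to compute, so the real issue is what it is stored alongside and how long it is kept.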
Independent developer Alex Kim, who published a technical analysis cited by Scientific American, called it “a one-way door” — a feature that can be enabled but not disabled. Kim told Scientific American the signal “doesn’t change the model’s behavior or responses. It’s just a product health metric.”
Miranda Bogen, director of the AI Governance Lab at the Center for Democracy & Technology, told Scientific American the concern is what happens to such data over time: “Even if it’s a very legible and very simple prediction pattern, how you use that information is a separate governance question.”
National security framing
A lawmaker has demanded answers from Anthropic, framing the leak as a national security concern that could erode the U.S. AI advantage, according to SecurityWeek. The Congressional attention arrives the same day the DOJ filed to appeal the federal judge’s order blocking the government’s ban on Anthropic.
Anthropic has not publicly responded to the vulnerability disclosure or the frustration-tracking revelations beyond its initial statement on the leak itself. The company told VentureBeat: “This was a release packaging issue caused by human error, not a security breach.”
For developers using Claude Code, the immediate action items are concrete: check whether you installed or updated via npm during the March 31 window (00:21–03:29 UTC), rotate any API keys that may have been exposed, and verify your axios dependency version.
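The axios check can be done with `npm ls axios` in the project directory; as a sketch, the compromised-version test reduces to a set lookup against the two versions VentureBeat named (the helper name is illustrative).

```typescript
// Illustrative helper (hypothetical name): flags the axios versions
// VentureBeat reported as carrying the RAT. In practice, compare the
// output of `npm ls axios` against this list.
const COMPROMISED_AXIOS_VERSIONS = new Set(["1.14.1", "0.30.4"]);

function isCompromisedAxios(version: string): boolean {
  return COMPROMISED_AXIOS_VERSIONS.has(version);
}

console.log(isCompromisedAxios("1.14.1")); // true -> reinstall and rotate keys
console.log(isCompromisedAxios("1.13.0")); // false
```

If the check comes back positive, version pinning alone is not enough: the RAT may already have exfiltrated credentials, which is why key rotation is on the list.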