Anthropic on Tuesday announced Claude Mythos Preview, a frontier AI model the company says is too dangerous for public release. During internal testing, Mythos discovered thousands of high-severity zero-day vulnerabilities across major operating systems, web browsers, and open-source projects. Anthropic is restricting access to 40+ organizations through a new cybersecurity initiative called Project Glasswing.
Project Glasswing: Controlled Release, Not Open Access
Twelve core partners, including Apple, Google, Microsoft, Amazon, Broadcom, Cisco, CrowdStrike, and Palo Alto Networks, will deploy Mythos for defensive security work. An additional 40 organizations have been granted access beyond that core group. All participants build or maintain critical software infrastructure, according to CNBC.
Anthropic is committing up to $100 million in usage credits for the initiative, plus $4 million in donations to open-source security efforts. Partners will share their findings so the broader tech industry can benefit, TechCrunch reported.
“There was a lot of internal deliberation,” Dianne Penn, Anthropic’s head of research product management, told CNBC. “We really do view this as a first step for giving a lot of cyber defenders a head start on a topic that will be increasingly important.”
What Mythos Found
The model identified thousands of zero-day vulnerabilities, many of them critical, according to TechCrunch. Among the discoveries: a 27-year-old vulnerability in OpenBSD, a security-focused open-source operating system. It also found a flaw in the multimedia framework FFmpeg that had survived 5 million previous automated tests, and “several” exploitable vulnerabilities in the Linux kernel, according to Platformer.
Mythos is a general-purpose model, not specifically trained for cybersecurity. Anthropic says its capabilities emerged from improvements to coding and reasoning abilities. The critical difference from current models: Mythos can identify multiple separate vulnerabilities in a single piece of software and chain them together into compound attacks, according to an Anthropic video accompanying the announcement.
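To see why chaining matters, it helps to think of each flaw as granting a new capability once its preconditions are met; a compound attack is then a path from an initial foothold to a high-value capability. The sketch below is purely illustrative and is not Anthropic's method: the findings, capability names, and search approach are all invented for this example.

```python
# Hypothetical sketch: vulnerability chaining modeled as a path search.
# Each (made-up) finding grants a capability if its preconditions are
# already held; a "compound attack" is the sequence of findings that
# escalates from an initial foothold to a target capability.
from collections import deque

# Illustrative, invented findings: (name, preconditions, capability gained).
FINDINGS = [
    ("input-validation bypass", {"network-access"}, "parser-reached"),
    ("heap overflow in parser", {"parser-reached"}, "code-exec-sandboxed"),
    ("sandbox escape", {"code-exec-sandboxed"}, "code-exec-host"),
]

def find_chain(start_caps, goal):
    """BFS over capability sets; returns the chained findings, or None."""
    start = frozenset(start_caps)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        caps, chain = queue.popleft()
        if goal in caps:
            return chain
        for name, pre, gained in FINDINGS:
            if pre <= caps and gained not in caps:
                nxt = frozenset(caps | {gained})
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, chain + [name]))
    return None

print(find_chain({"network-access"}, "code-exec-host"))
```

No single finding here reaches host-level code execution on its own; only the three-step chain does, which is the qualitative difference Anthropic describes between spotting isolated bugs and composing them into an attack.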
“Claude Mythos Preview’s large increase in capabilities has led us to decide not to make it generally available,” Anthropic wrote in the model’s system card, per Business Insider.
The Six-Month Window
External cybersecurity experts echoed Anthropic’s sense of urgency. Alex Stamos, chief product officer at Corridor and former head of security at Facebook and Yahoo, told Platformer the initiative is “a big deal, and really necessary.”
“We only have something like six months before the open-weight models catch up to the foundation models in bug finding,” Stamos said. “At which point every ransomware actor will be able to find and weaponize bugs without leaving traces for law enforcement to find (and with minimal cost).”
Anthony Grieco, Cisco’s chief security and trust officer, said in a statement that “AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back.”
Pentagon Backdrop
The launch comes weeks after Anthropic’s public clash with the Pentagon over autonomous weapons and domestic surveillance. A judge blocked the Defense Department’s attempt to designate Anthropic as a supply-chain risk, and the Trump administration has appealed the ruling. Anthropic says it briefed senior U.S. government officials, including those at CISA and the Center for AI Standards and Innovation, about Mythos’ capabilities before launch, per CNBC.
Whether the government is engaging with the offer remains unclear. As Platformer noted, “A functioning government would take a strong interest in what Anthropic is up to here, if only out of self-preservation.”
The Leak That Started It
Mythos’ existence was first revealed through an accidental data leak in late March. Fortune discovered a draft blog post about the model, then called “Capybara,” in a publicly accessible data cache. The leaked document described it as “by far the most powerful AI model we’ve ever developed.” Cybersecurity stocks fell on the report. Anthropic attributed the leak to human error.
The accidental disclosure was part of a rough stretch for the company, which also exposed Claude Code source files via an npm packaging mistake and then accidentally took down thousands of GitHub repositories while trying to clean up.
The Race to Patch Before Proliferation
Newton Cheng, Anthropic’s Frontier Red Team cyber lead, framed the restricted release as a time-limited defensive advantage. “Cybersecurity is just going to be an area where this broad increase in capabilities has potential for risk, and thus we have to keep a really close eye on what’s going on there,” he told CNBC.
Stamos’ six-month estimate puts a clock on the advantage. These capabilities did not emerge from specialized cybersecurity training. They emerged from general reasoning improvements that every frontier lab is pursuing. When open-weight models reach the same threshold, the defensive head start Project Glasswing provides will be the margin that matters.