Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned chief executives from America’s largest banks to an emergency meeting this week, according to Bloomberg and TechCrunch. The message: deploy Anthropic’s Claude Mythos Preview to scan financial infrastructure for vulnerabilities. Goldman Sachs, Citigroup, Bank of America, Morgan Stanley, and Wells Fargo are now testing the model alongside JPMorgan Chase, which was already a partner in Anthropic’s Project Glasswing initiative.
The urgency is genuine. Mythos has identified thousands of zero-day vulnerabilities in major software systems, many of them one to two decades old, according to Anthropic’s own disclosure. Government officials at the meeting raised the possibility that AI-driven cyberattacks could target banking platforms, potentially forcing them offline or allowing unauthorized actors to tamper with financial data, National Today reported.
But the same administration pushing Mythos into the financial system is simultaneously locked in federal litigation with Anthropic. That contradiction sits at the center of this story, and it reveals something important about how AI security agents are being absorbed into critical infrastructure: faster than the legal and regulatory frameworks meant to govern them can adapt.
The Glasswing Backstory
Anthropic released Mythos Preview on April 7 as part of Project Glasswing, a $100 million coalition with 12 partner organizations including Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. The stated goal: defensive security work, scanning first-party and open-source software for code vulnerabilities.
Mythos is not a cybersecurity-specific model. It is a general-purpose frontier model that Anthropic describes as having strong agentic coding and reasoning capabilities. The cybersecurity results, according to Anthropic, came from applying those general capabilities to vulnerability discovery. The company claims Mythos far exceeds what its previous flagship model, Opus, could accomplish in finding exploitable flaws.
The model’s distribution is deliberately restricted. Only about 40 organizations outside the Glasswing partnership will receive access to the preview, per Anthropic. The company framed this as a safety decision: Mythos could be weaponized by bad actors to find and exploit vulnerabilities rather than patch them.
Not everyone buys that framing. David Crawshaw, CEO of exe.dev, argued on Bluesky that restricted release is “marketing cover for the fact that top-end models are now gated by enterprise agreements and no longer available to small labs to distill.” TechCrunch’s analysis noted that AI cybersecurity startup Aisle claims it replicated much of Mythos’s results using smaller, open-weight models, suggesting the security case for restriction may be overstated.
The Legal Contradiction
The banking push is striking because Anthropic and the Trump administration are locked in an active legal dispute. In March 2026, the Department of Defense designated Anthropic as a supply-chain risk, a classification usually reserved for foreign adversaries. The designation requires any company or agency doing business with the Pentagon to certify it does not use Anthropic’s models.
The conflict started when Anthropic drew two red lines: no mass surveillance of Americans, and no fully autonomous weapons targeting without human decision-making. Defense Secretary Pete Hegseth argued the Pentagon should have access to AI systems for “any lawful purpose” without contractor-imposed restrictions. When negotiations collapsed, the Pentagon applied the supply-chain risk label.
Anthropic filed lawsuits in California and Washington, D.C., calling the government’s actions “unprecedented and unlawful” and arguing the administration was retaliating against the company’s protected speech about AI safety limitations. The General Services Administration subsequently terminated Anthropic’s “OneGov” contract, ending the availability of Anthropic’s services across all three branches of the federal government.
The Trump administration appealed in early April. The case is ongoing.
So one arm of the federal government has labeled Anthropic a threat to national security supply chains. Another arm is now actively encouraging the deployment of Anthropic’s most powerful model inside the world’s most systemically important financial institutions. Both things are happening at the same time, under the same administration.
What “Testing” Means at a G-SIB
When a globally systemically important bank “tests” a frontier AI model, the operational implications are significant. G-SIBs operate under Basel III capital requirements, OCC heightened standards, and Federal Reserve supervisory frameworks that treat technology changes as material risk events. Model risk management guidelines (SR 11-7, the Fed’s foundational supervisory guidance) require banks to validate, document, and govern any model used in decision-support or operational processes.
Deploying Mythos to scan infrastructure for vulnerabilities means giving an AI agent access to internal code repositories, network architectures, and potentially production systems. The vulnerability information Mythos produces would itself become sensitive threat intelligence requiring secure handling under federal banking examination standards.
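None of this maps to a published product interface, but the handling requirement can be sketched in miniature: findings from any scanner, AI-driven or otherwise, should carry access restrictions from the moment they are created rather than having restrictions bolted on downstream. Everything below, including the `Finding` record and the `fake_model_scan` stub, is a hypothetical illustration, not Anthropic’s API or any bank’s actual pipeline.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Finding:
    """One vulnerability report, restricted by default so downstream
    systems must explicitly opt in to wider distribution."""
    repo: str
    description: str
    severity: str
    handling: str = "RESTRICTED"

def scan_repository(repo: str,
                    model_scan: Callable[[str], list[tuple[str, str]]]) -> list[Finding]:
    """Wrap raw scanner output in Finding records that carry a handling tag."""
    return [Finding(repo=repo, description=desc, severity=sev)
            for desc, sev in model_scan(repo)]

# Stand-in for a model-driven scanner; real output would come from the vendor.
def fake_model_scan(repo: str) -> list[tuple[str, str]]:
    return [("hardcoded credential in config loader", "high")]

findings = scan_repository("payments-core", fake_model_scan)
```

The design choice worth noting is the default: marking findings `RESTRICTED` at creation is what makes “secure handling” auditable, because any wider sharing requires an explicit, loggable decision.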
The emergency meeting’s framing, as National Today reported, included concerns about “account balances being digitally wiped or reduced to zero.” This suggests federal officials are treating AI-discoverable vulnerabilities as potential systemic risk vectors, not just IT security issues.
For the banks, the compliance question is immediate: does deploying a model from a company the Pentagon has labeled a supply-chain risk trigger any vendor risk management obligations? The supply-chain risk designation technically applies to defense contracting, not financial services, but the legal ambiguity is real. No bank general counsel wants to explain to the OCC why they deployed a vendor flagged by another federal agency.
The UK Dimension
The Financial Times reported that UK financial regulators are separately discussing the systemic risk posed by Mythos. While details are limited, the fact that the Financial Conduct Authority (FCA) is engaging on this suggests the banking deployment question is not isolated to the U.S. If Mythos finds vulnerabilities in software used by global banks, those findings have cross-border implications. A zero-day in a payment processing system used by JPMorgan and Barclays simultaneously is not a U.S. regulatory problem; it is a global financial stability problem.
The UK’s AI regulatory approach differs from the U.S. model. The UK has opted for sector-specific regulation rather than comprehensive AI legislation, meaning the FCA would handle financial AI risks directly rather than waiting for a horizontal AI law. This creates the possibility of regulatory divergence: the U.S. pushing banks to deploy Mythos while UK regulators impose constraints on how that deployment operates across jurisdictions.
The Enterprise Sales Question
There is a business dimension to this story that deserves scrutiny. Anthropic restricted Mythos to about 40 organizations plus Glasswing partners. The Treasury and Fed are now actively pushing the largest banks in the country to join that restricted group. Whether intentional or not, federal officials are functioning as enterprise sales accelerators for Anthropic’s most commercially valuable product tier.
TechCrunch noted the anti-distillation angle: gating top-end models behind enterprise agreements prevents smaller labs from using them to train competing models, maintaining the commercial moat. The selective release creates scarcity, and scarcity drives enterprise contract value. When the U.S. Treasury tells G-SIBs they should be using Mythos, the commercial implications for Anthropic’s revenue pipeline are substantial.
This does not mean the security concerns are fabricated. It means the incentives align in ways that make it difficult to separate genuine cybersecurity urgency from market-making for a frontier AI lab’s enterprise business.
The Precedent Being Set
The Treasury and Fed meeting establishes a new pattern: federal officials actively directing private-sector adoption of specific AI models from specific companies for infrastructure security purposes. This has not happened before at this scale. Previous government cybersecurity guidance (CISA advisories, NIST frameworks, NSA technical bulletins) has been technology-agnostic, recommending practices rather than products.
The implications extend beyond Anthropic and banking. If the model works, if Mythos finds exploitable vulnerabilities in G-SIB infrastructure that existing security tools missed, the precedent creates a template for every critical infrastructure sector. Energy, healthcare, telecommunications, defense contracting: all face the same vulnerability discovery problem, and the government has now signaled that deploying frontier AI agents is the expected response.
If it fails, or if Mythos produces false positives that trigger unnecessary remediation at massive scale, the precedent is equally significant. Banks spending billions to patch non-existent vulnerabilities based on AI hallucinations would set back the case for AI security agents by years.
The Systemic Risk Calculation
The deeper question is whether deploying a single AI model across all major U.S. banks creates a new category of systemic risk rather than reducing it. Concentration risk in financial technology is a known regulatory concern: when all major banks rely on the same vendor for a critical function, a failure in that vendor’s product becomes a correlated shock across the entire system.
If Mythos becomes the standard vulnerability scanner for G-SIBs, and Mythos has a blind spot (every model does), then every major bank shares the same blind spot simultaneously. If Mythos is compromised, every bank’s vulnerability data is compromised simultaneously. If access to Anthropic’s services is disrupted (by, say, fallout from the ongoing federal litigation), every bank’s security scanning capability is disrupted simultaneously.
This is the same pattern that made SolarWinds a systemic event. The technology was deployed everywhere precisely because it was trusted everywhere. The breadth of deployment was the vulnerability.
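The arithmetic behind that concern is simple. Under the illustrative, made-up assumption that a scanner misses any given flaw 5% of the time, one shared scanner across six banks leaves a 5% chance that all six miss the same flaw; six independent tools would all have to fail at once. The numbers below are assumptions for the sake of the example, not measured model performance.

```python
def prob_systemwide_miss(miss_rate: float, n_banks: int, shared: bool) -> float:
    """Probability that every bank misses the same flaw.
    Shared scanner: one blind spot is everyone's blind spot.
    Independent tools: all the misses must coincide."""
    return miss_rate if shared else miss_rate ** n_banks

# Illustrative parameters: 5% per-flaw miss rate, 6 G-SIBs.
shared_risk = prob_systemwide_miss(0.05, 6, shared=True)        # 0.05
independent_risk = prob_systemwide_miss(0.05, 6, shared=False)  # 0.05 ** 6
```

The point is directional, not quantitative: it is the correlation, not the miss rate itself, that turns a local failure into a systemic one.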
The Fed and Treasury appear to be calculating that the cybersecurity upside of Mythos outweighs the concentration risk of deploying it universally. That calculation may be correct. But it is being made in a meeting room, under emergency conditions, without the kind of public deliberation that usually accompanies decisions affecting the stability of the global financial system.
What Comes After the Emergency
Three things will determine whether this becomes a turning point or a cautionary tale.
First, the legal case. If the Pentagon’s supply-chain risk designation survives appeal and Anthropic remains classified as a risk to national security supply chains, the Treasury and Fed will face pressure to explain why they encouraged banks to adopt the same company’s technology. The interagency contradiction cannot persist indefinitely.
Second, the results. If Mythos finds critical vulnerabilities in banking infrastructure that existing tools missed, the case for AI security agents in critical infrastructure becomes overwhelming. If it produces noise, the case collapses.
Third, the commercial terms. Anthropic has not disclosed pricing for Mythos access. The difference between “the government encouraged banks to test a free security tool” and “the government encouraged banks to enter into enterprise contracts with a specific AI vendor” is the difference between cybersecurity policy and industrial policy.
The answers will shape not just Anthropic’s trajectory, but the rules of engagement for deploying AI agents in every critical system that keeps modern economies functioning.