White House chief of staff Susie Wiles met Anthropic CEO Dario Amodei on Friday to discuss how the U.S. government could safely use Mythos, the AI model whose autonomous vulnerability exploitation capabilities have forced a week of emergency engagement across the federal government. AP News reported the administration described the meeting as “productive and constructive,” with discussions covering cybersecurity collaboration, America’s lead in the AI race, and AI safety.
A Week of Escalation
The Wiles-Amodei meeting was the culmination of multiple government engagements. On April 28, National Cyber Director Sean Cairncross chaired a meeting with tech and cybersecurity firms focused broadly on concerns about Mythos, Politico reported. The same day, OpenAI and Anthropic briefed the House Homeland Security Committee behind closed doors on “recent frontier model developments” and their implications for critical infrastructure cybersecurity, according to Axios.
By April 30, the White House had sent tech and cybersecurity companies formal questions about AI-driven cybersecurity threats, specifically those posed by Mythos, Politico reported. The progression from agency-level meeting to formal questionnaire to chief-of-staff-level engagement in five days signals how seriously the administration views Mythos’s autonomous capabilities.
The Political Tension
The meetings took place against a backdrop of open conflict between the Trump administration and Anthropic. President Trump previously tried to bar all federal agencies from using Anthropic’s Claude chatbot, posting on social media that the administration “will not do business with them again.” When asked Friday about the White House meeting, Trump told reporters he had “no idea,” according to AP News.
Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk, an unprecedented move against a U.S. company. U.S. District Judge Rita Lin issued a ruling in March blocking enforcement of Trump’s directive ordering agencies to stop using Anthropic products.
External Validation
The concern is not limited to Washington. The UK AI Security Institute evaluated Mythos and called it a “step up” over previous models, stating that “Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed,” AP News reported. Anthropic has also been in talks with the European Union about its AI models, including advanced models not yet released in Europe.
David Sacks, the former White House AI and crypto czar and a frequent Anthropic critic, acknowledged the seriousness of Mythos’s capabilities on the “All-In” podcast. “With cyber, I actually would give them credit in this case and say this is more on the real side,” Sacks said. “As the coding models become more and more capable, they are more capable at finding bugs. That means they’re more capable at finding vulnerabilities. That means they’re more capable at stringing together multiple vulnerabilities and creating an exploit.”
The Collaboration Paradox
The administration is simultaneously trying to restrict Anthropic’s government access and seeking the company’s help in managing the autonomous threat posed by Anthropic’s own model. The White House recently drafted executive guidance allowing federal agencies to bypass the Pentagon’s supply-chain-risk designation to access Mythos. Anthropic said it is “looking forward to continuing these discussions” with the administration.
For organizations running autonomous agents in production, the signal is clear: the capability of frontier models to autonomously discover and chain exploits has reached the level where it commands chief-of-staff attention. The policy infrastructure to govern these capabilities is being built in real time, under political conditions that make coherent governance difficult.