The Trump administration is discussing a government review process for AI models before they reach the public, according to the New York Times, which cited U.S. officials and people briefed on the deliberations. The proposal marks a reversal from the administration’s previously noninterventionist approach to AI and would be the first formal federal effort to vet autonomous agent capabilities before deployment.

The Proposal

White House officials briefed executives from Anthropic, Google, and OpenAI last week on the emerging policy framework, according to the NYT. One proposal would give the government first access to AI models before public release, though it would not block their deployment. Bloomberg reported that the administration is considering an executive order to create a working group on artificial intelligence to formalize this oversight.

Reuters confirmed the scope: the review would apply to new AI models broadly, not just to a single company or product. Forbes characterized the move as a direct reversal in Trump’s approach to AI, triggered by the fallout between the administration and Anthropic over the company’s Mythos model.

Why Mythos Changed the Calculus

Anthropic introduced Mythos earlier this year with autonomous agent capabilities that the company itself flagged as requiring restricted access. Anthropic limited the initial rollout to approximately 50 organizations due to the model’s advanced cybersecurity capabilities, which included the ability to orchestrate attacks comparable to state-sponsored hacking operations.

The model’s disclosure set off a chain of events. The Commerce Department’s Center for AI Standards and Innovation began actively testing Mythos’ capabilities, according to Politico. Staff on at least three congressional committees held or requested briefings from Anthropic. The administration then moved to formally oppose Anthropic’s plan to expand Mythos access from 50 to 120 organizations.

On April 29, Axios reported that the White House was already developing guidance to allow federal agencies to bypass Anthropic’s supply chain risk designation and onboard Mythos, suggesting the administration wanted access to the model even as it sought to restrict broader distribution.

The Broader Pattern

The pre-release vetting proposal goes beyond the Mythos-specific dispute. It would create a standing mechanism for government review of any AI model with autonomous capabilities before public deployment. That scope covers not just Anthropic but also OpenAI, Google, Meta, and any company releasing models with agentic functionality.

For companies building agent platforms, including OpenClaw with its 3.2 million users, the practical question is whether a vetting requirement would delay agent deployment after a model’s release. If the government reviews models before public availability, the gap between a model’s internal development and the moment developers can build agents on top of it could widen from days to weeks or months.

The proposal also creates a disclosure precedent. Anthropic published Mythos’ capability limitations voluntarily. A formal vetting process would make that kind of disclosure mandatory for all labs, not just those that choose transparency. The difference matters: companies that might otherwise ship quietly would face government review of their models’ autonomous capabilities before those models reach production.