Defense Secretary Pete Hegseth threatened to invoke the Defense Production Act against Anthropic earlier this year during the Pentagon contract dispute, reportedly suggesting the Cold War-era law would let him force the company to hand over its technology on whatever terms the government wanted. According to The Atlantic, that threat was one of several legal levers the administration has explored to direct, or commandeer, America’s three leading AI labs: OpenAI, Anthropic, and Google DeepMind.

The Atlantic’s analysis, based on a dozen conversations with former Pentagon and Trump administration officials, AI policy experts, and legal scholars, maps out a spectrum of government control scenarios ranging from full nationalization to utility-style regulation. Full seizure, where top researchers would work out of Pentagon SCIFs and computational capacity would be centralized under military control, is “in all likelihood not going to happen,” the report concludes. The Fifth Amendment bars the government from taking private property without just compensation, and the industry is collectively worth trillions.

The Spectrum Is Not Binary

The more relevant question for the industry is where the government lands between “do nothing” and “seize everything.” The Atlantic identifies three intermediate positions, each with distinct implications for enterprise AI buyers.

Utility regulation would treat AI companies like electricity providers. The government could cap pricing to cost-plus margins, impose service reliability requirements, and mandate baseline access levels for all customers. Sam Altman has already described a future where “intelligence is a utility like electricity or water and people buy it from us on a meter,” according to The Atlantic. Jensen Huang has made similar statements about AI as national infrastructure. Dean Ball, a former Trump administration AI adviser, told The Atlantic that AI could follow the path of internet service providers, cable, and telephone services, where “the government basically says how the providers can do business.”

Soft nationalization involves equity stakes, board placements, and embedded personnel. This is not a hypothetical scenario. It is current policy. The Trump administration already holds a 10 percent stake in Intel, per The Atlantic. OpenAI appointed retired NSA director Paul Nakasone to its board. The Army’s new tech detachment recruited its first four members from Meta, Palantir, and OpenAI. OpenAI has committed to deploying its own engineers alongside the military after securing the Pentagon contract that fell apart with Anthropic.

Emergency powers represent the tail risk. Samuel Hammond, acting director of AI policy at the Foundation for American Innovation, told The Atlantic that if AI models displace large portions of the labor market such that a handful of companies run most of the economy, “then some kind of nationalization becomes potentially imperative.” The DPA provides broad authority during declared emergencies. The article notes historical precedent: Wilson nationalized the railroads during World War I, and Fannie Mae and Freddie Mac were placed under conservatorship during the 2008 financial crisis.

The Anthropic Situation Crystallizes the Risk

The Trump-Anthropic conflict provides the clearest view of how government pressure operates in practice. After Hegseth’s DPA threat and the Pentagon’s “supply-chain risk” designation of Anthropic, POLITICO reported that the administration began backing off the confrontation. But the damage was already visible: The Washington Post reported that the White House forced out AI researcher Collin Burns from a Commerce Department position days after his hiring, apparently because of his Anthropic background.

The chilling effect extends beyond one company. Multiple senators have proposed legislation directing federal agencies to explore “potential nationalization” of AI, according to The Atlantic. Elon Musk, Sam Altman, and Palantir CEO Alex Karp have all publicly discussed nationalization scenarios. Lawyers representing Silicon Valley’s biggest AI firms are, per the report, “paying attention.”

The Vendor Concentration Problem

For enterprise buyers running production workloads on OpenAI, Anthropic, or Google APIs, the political risk is now a procurement variable. Consider what utility-style regulation would mean in practice: if the government caps what AI companies can charge, their incentive to invest in the enterprise features, SLAs, and dedicated capacity that large customers depend on could erode. If soft nationalization deepens, government priorities could reshape product roadmaps. A lab that embeds engineers with the Pentagon may deprioritize commercial API improvements when defense timelines conflict with commercial ones.

The OpenAI-Microsoft partnership revision announced this week, ending exclusive cloud licensing and allowing OpenAI to serve products across any cloud provider, adds another variable. Multi-cloud availability reduces one form of lock-in but does nothing to address the underlying concentration of model development in three labs whose relationships with the U.S. government are increasingly entangled.

The Hedging Playbook

None of this means enterprise buyers should abandon frontier AI APIs. The technology is too embedded and the switching costs too high for a mass exodus. But the risk calculus has shifted. Organizations building critical workflows on a single provider’s API now face a form of political counterparty risk that did not exist 18 months ago.

The practical responses are straightforward: abstract model calls behind an orchestration layer that supports provider switching, maintain evaluation pipelines for at least two frontier providers, and build contracts that account for service disruption from regulatory action. These are not exotic hedging strategies. They are the standard playbook for any category where government intervention is plausible. AI just joined that list.
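To make the first of those concrete, here is a minimal sketch of a provider-abstraction layer with failover, in Python. The ModelProvider interface, the EchoProvider stub, and FailoverRouter are illustrative names invented for this sketch, not any vendor's SDK; in a real deployment each adapter would wrap the corresponding provider's own client library.

```python
# Minimal provider-abstraction sketch. Names here are hypothetical;
# real adapters would wrap each vendor's SDK behind this interface.
from dataclasses import dataclass
from typing import Protocol


class ModelProvider(Protocol):
    name: str

    def complete(self, prompt: str) -> str:
        """Return a completion for the prompt, or raise on failure."""
        ...


@dataclass
class EchoProvider:
    """Stand-in adapter; a real one would call a vendor API here."""
    name: str

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"


class FailoverRouter:
    """Try providers in priority order, falling back on failure."""

    def __init__(self, providers: list[ModelProvider]):
        if not providers:
            raise ValueError("at least one provider is required")
        self.providers = providers

    def complete(self, prompt: str) -> str:
        errors: list[str] = []
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as exc:  # real code would narrow this
                errors.append(f"{provider.name}: {exc}")
        raise RuntimeError("all providers failed: " + "; ".join(errors))


if __name__ == "__main__":
    router = FailoverRouter(
        [EchoProvider("primary"), EchoProvider("secondary")]
    )
    print(router.complete("Summarize the quarterly report."))
```

The point of the design is that a forced provider switch, whether triggered by pricing caps, a contract dispute, or regulatory action, becomes a configuration change rather than an application rewrite.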