The U.S. Department of Defense filed its first formal rebuttal in the Anthropic lawsuit late Tuesday, and the language left no room for interpretation. Anthropic’s refusal to agree to “any lawful use” contract terms makes the company an “unacceptable risk to national security,” the DOD argued in its filing.

The filing marks a sharp escalation in what began as a contract dispute and has become a constitutional showdown over whether AI companies can set usage constraints on government customers.

How the Pentagon Got Here

The arc is now well-documented. Anthropic held firm on red lines that prohibited certain military applications of Claude. The DOD terminated a contract reportedly worth up to $200 million, then went further — applying a formal national security designation that Anthropic claims amounts to punishment for exercising First Amendment rights over the content guidelines embedded in its products.

Anthropic CEO Dario Amodei has maintained that the military, not Anthropic, makes operational decisions — the company simply defines what its AI will and won’t do. The DOD’s counter: those guardrails function as a veto over sovereign military authority, and a contractor that can selectively refuse lawful government orders cannot be trusted with national security infrastructure.

OpenAI Took the Seat

While Anthropic litigates, OpenAI moved in. Through an expanded AWS deal, OpenAI now sells AI to U.S. government agencies including the DOD through Amazon’s classified cloud infrastructure. The contract directly fills the gap left by Anthropic’s termination.

The deal was not without internal friction. OpenAI’s robotics team lead Caitlin Kalinowski resigned over the arrangement, publicly stating it was “rushed without guardrails defined.” But the contract is live and classified workloads are flowing.

The competitive split is now locked in. OpenAI agreed to “any lawful purpose” terms. Anthropic refused them. One company is the Pentagon’s AI partner. The other is its legal adversary.

The Industry Picks a Side

What makes this more than a two-company spat is the breadth of support rallying behind Anthropic’s position.

Nearly 150 retired federal judges filed a brief supporting Anthropic, arguing that while the DOD can choose its contractors, it cannot “punish Anthropic on its way out” with a national security designation. The judges’ argument centers on First Amendment protections: a government contractor cannot be penalized for the content policies embedded in its products.

Microsoft filed an amicus brief. Over 30 employees from OpenAI, Google, and other labs signed supporting statements — some of them working for the company that took the contract Anthropic lost.

It is arguably the most significant judicial, industry, and technical coalition yet assembled behind an AI safety position in U.S. courts.

Two Bets on How AI Companies Survive

Strip away the legal filings and the national security framing, and the dispute reduces to a strategic wager.

Anthropic is betting that maintaining usage constraints — even when it costs hundreds of millions in government revenue — builds the kind of trust that makes it the preferred AI partner for the broader market. Enterprises, healthcare systems, and financial institutions watching this fight will note which company stood by its safety commitments under maximum pressure.

OpenAI is betting that access wins. Government contracts open classified networks, unlock defense budgets, and create lock-in that no amount of brand goodwill can compete with. The Pentagon doesn’t sign contracts with companies it may end up facing in court.

Both bets have historical precedent. Google famously walked away from Project Maven in 2018, and its cloud business grew anyway. But Palantir leaned into government work and built a $60 billion company on it.

What Comes Next

The case is heading toward a federal court ruling that will set precedent on whether AI developers have First Amendment protection over model behavior constraints. If Anthropic wins, every AI company gains legal cover to set red lines. If the DOD wins, “any lawful use” becomes the standard clause in government AI procurement — and companies that refuse it risk the same national security designation Anthropic is now fighting.

The retired judges, the Microsoft brief, the cross-company employee signatures — none of this is performative. The AI industry clearly understands that this ruling will define the terms under which every lab operates for the next decade.

The trial is expected to proceed later this year.

Sources: TechCrunch, NYT, Business Insider