The Department of Defense now argues that Anthropic could intentionally disable its own AI tools during a military conflict, according to new court filings reported by Wired on March 21. The sabotage allegation represents the Pentagon’s third distinct legal theory against the AI safety lab, following earlier arguments centered on national security risk and foreign workforce concerns.
A federal hearing in the case is set for Tuesday, March 24, before Judge Rita Lin in the U.S. District Court for the Northern District of California in San Francisco. The Times of India and Wired both confirm the date. Any injunction or ruling from Judge Lin could come the same day.
The Sabotage Theory
The DoD’s legal strategy has shifted rapidly since the case began. The initial argument was straightforward: Anthropic posed a national security risk because of its safety-first stance on military applications. That evolved into a claim that Anthropic’s globally diverse engineering workforce created security vulnerabilities. Now the Pentagon is arguing something broader: that Anthropic possesses the technical capability to remotely degrade or disable Claude in deployed military systems during wartime.
Anthropic has flatly denied the claim, per Wired’s reporting. The denial is unsurprising. The allegation itself is more notable for what it reveals about the Pentagon’s litigation posture than for its technical merits. Every SaaS vendor with access to production systems technically has the ability to push updates, revoke keys, or throttle service. The DoD’s argument, taken to its logical conclusion, would apply to any cloud-deployed software in defense contexts.
What’s At Stake on Tuesday
The March 24 hearing arrives just days after TechCrunch reported on sworn declarations filed by Anthropic. According to those declarations, Pentagon officials privately told the company the two sides were “nearly aligned” on contract terms, roughly a week before the Trump administration publicly terminated the relationship.
That timeline creates an uncomfortable narrative for the DoD: private conciliation followed by public rejection, now escalated to a federal sabotage accusation. Judge Lin will need to weigh whether the government’s shifting theories reflect genuine, evolving security concerns or post-hoc rationalization of a political decision.
Precedent Risk for the AI Industry
Each legal theory the Pentagon has deployed in this case is broader than the last. If “foreign workforce” sticks as grounds for excluding an AI vendor from defense contracts, it implicates every major US AI lab with international engineering teams. If “potential for sabotage” becomes a valid objection, it creates a precedent that any cloud-deployed AI system is inherently untrustworthy for government use unless the vendor concedes source code access, on-premises deployment, or both.
OpenAI, Google DeepMind, and Meta all employ engineers across multiple countries and deliver AI capabilities through cloud APIs. None of them would survive the precedent the Pentagon is trying to set against Anthropic.
Tuesday’s hearing will determine whether these arguments advance to full litigation or are dismissed early. NCT will cover the proceedings as they develop.