On Tuesday, March 24, Judge Rita Lin will convene a hearing in the U.S. District Court for the Northern District of California to decide whether Anthropic gets emergency relief from the Pentagon’s supply-chain risk designation — or whether the government’s ban on the company’s AI technology stands while the full case plays out.
The hearing caps three weeks of escalating filings and sworn declarations, during which the federal government has been unable to agree on what the ban actually requires. Anthropic argues it’s facing unconstitutional retaliation for its AI safety policies. The government says it made a routine national security call. The evidence trail suggests something messier than either narrative.
This is what Judge Lin is weighing, what the filings actually say, and why the outcome will reshape how every AI company does business with Washington.
The Timeline: From Negotiation to Courtroom in 27 Days
The dispute’s compressed timeline shows how quickly a contract disagreement became a constitutional fight.
On February 24, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline: accept “all lawful uses” language for Claude’s military deployment by 5:01 PM on February 27, or face consequences. Anthropic refused on February 26, releasing a public statement that it would not allow unrestricted military use of its AI models, citing concerns about autonomous weapons and mass surveillance of Americans.
On February 27, President Trump directed federal agencies to “immediately cease” using Anthropic’s technology. Hegseth followed by designating Anthropic a supply-chain risk on March 3 — a label previously reserved for foreign companies posing national security threats, according to Federal News Network. The designation gave the Pentagon 180 days to remove Claude from all defense systems.
Then the timeline gets strange. On March 4, one day after the designation was finalized, Under Secretary Emil Michael emailed Amodei to say the two sides were “very close” on the two issues the government now cites as evidence that Anthropic is a national security threat: autonomous weapons and mass surveillance. That email is now an exhibit in the court record.
On March 5, Amodei published a statement saying the company had been having “productive conversations” with the Pentagon. On March 6, Michael posted on X that “there is no active Department of War negotiation with Anthropic.” A week later, he told CNBC there was “no chance” of renewed talks.
Anthropic sued on March 9. The DOJ filed its response on March 17. Anthropic filed its reply brief with sworn declarations on March 20. The hearing is Tuesday.
The Government’s Three Legal Theories
The Pentagon has cycled through three distinct justifications for the designation, each broader than the last.
Theory one: AI safety red lines as national security risk. The initial argument, reported by the New York Times on March 17, was straightforward: Anthropic’s refusal to allow “all lawful uses” of Claude for military purposes made the company an unreliable partner. The DOJ’s March 17 filing stated that “the First Amendment is not a license to unilaterally impose contract terms on the government,” per Wired’s coverage.
Theory two: foreign workforce vulnerability. The Pentagon pivoted to arguing that Anthropic’s globally diverse engineering workforce constituted a security risk. This theory, if accepted, would implicate every major AI lab in the U.S. — OpenAI, Google DeepMind, Meta, and Microsoft all employ significant numbers of non-U.S.-citizen engineers.
Theory three: potential wartime sabotage. The most aggressive claim appeared in a filing reported by Wired on March 20: Anthropic could “attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations” if the company felt its “corporate ‘red lines’ are being crossed.” The DOJ wrote that the Pentagon “is not required to tolerate the risk that critical military systems will be jeopardized at pivotal moments for national defense.”
The escalation pattern matters: each successive theory requires less evidence specific to Anthropic and sweeps in more of the industry. The sabotage argument, taken at face value, would apply to any cloud-deployed software provider with production access to military systems — which describes every enterprise software vendor the Pentagon uses.
The Technical Rebuttal: No Kill Switch, No Backdoor, No Access
Anthropic’s March 20 reply brief included two sworn declarations that directly address the sabotage theory with technical specifics.
Thiyagu Ramasamy, Anthropic’s Head of Public Sector, spent six years at AWS managing AI deployments for government customers before joining Anthropic. His declaration, filed with the court, states: “Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations.” He explains that once Claude is deployed inside air-gapped, government-secured systems operated by third-party contractors, Anthropic has no access. No remote kill switch. No backdoor. No mechanism to push unauthorized updates.
Ramasamy’s declaration also addresses the foreign workforce theory: Anthropic employees have undergone U.S. government security clearance vetting, and to his knowledge, Anthropic is the only AI company where cleared personnel actually built the models running on classified networks.
Sarah Heck, Anthropic’s Head of Policy and a former National Security Council official under the Obama administration, filed the second declaration. She was present at the February 24 meeting between Amodei, Hegseth, and Michael. Her sworn statement asserts: “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted” an approval role over military operations. She adds that the sabotage concern was never raised during negotiations — it appeared for the first time in the government’s court filings.
Heck’s declaration reveals one more detail: on March 4, Anthropic proposed contract language that explicitly stated the license “does not grant or confer any right to control or veto lawful Department of War operational decision-making.” Negotiations broke down anyway.
The Federal Government’s Compliance Chaos
While the legal teams prepare for Tuesday, the rest of the federal government is trying to figure out what Trump’s directive actually requires.
The Hill reported on March 18 that agencies received no formal guidance beyond Trump’s Truth Social post. The response has been inconsistent. The General Services Administration and the Department of Health and Human Services removed Claude within hours. Other agencies are still “reviewing” their Anthropic exposure.
At HHS, thousands of employees using Anthropic products had just a few hours to save their work. An agency leader told The Hill that “staff were really upset with how quickly” the shutdown happened, with “no spin-down time.” Employees lost chat histories, coding projects, and work in progress. HHS’s chief AI officer sent a message saying Claude Enterprise was “temporarily disabled” and the office was “awaiting more detailed federal guidance.”
The GSA, responsible for most federal technology procurement, went further: it removed Anthropic from its government-wide AI testing tool USAi and terminated its OneGov deal with the company. GSA is also proposing a new contract clause that would confirm the government’s right to use any AI system “as necessary for any lawful Government purpose” — language that mirrors the Pentagon’s demands and would apply to all AI vendors and their subcontractors.
At other agencies, the picture is murkier. One AI advisor at a civilian agency told The Hill there was “a tremendous lack of information” and “nobody has clear answers.” Civilian AI leaders still don’t know whether the order applies to contractors who use Anthropic in their own workflows but not directly in agency work.
The Pentagon Says Replacement Is Easy. Its Own Users Disagree.
Publicly, the Pentagon projects confidence. Emil Michael, the Under Secretary for Research and Engineering, told attendees at the McAleese Defense Programs conference that he is “pretty confident” the Pentagon can phase out Claude within the six-month deadline, according to Federal News Network. He pointed to recent deployments of models from OpenAI, Google, and xAI and said the “workflows are very similar” across providers, making the disruption “minimal.”
Michael also acknowledged a structural problem he’s now trying to solve: “One of the problems, without getting into the specifics about Anthropic, is that we had one primary provider on classified networks. That doesn’t work for the Department of War.”
But defense contractors tell a different story. Reuters reported on March 19 that military personnel and contractors called the six-month timeline “disconnected from operational reality.” Claude was the first AI model approved to operate on classified military networks. Each deployment required a full Authority to Operate — months of security testing, red-teaming, and compliance review. Starting over with a different model means repeating the entire accreditation process.
Joe Saunders, CEO of government contractor RunSafe Security, put it more precisely to Federal News Network: “These models are embedded across workflows, security-accredited environments, and mission-specific processes. Even when other models are available, each one requires validation, and in many cases, re-authorization before it can be used in operational settings.”
Michael’s own filing to the court acknowledged the constraint from the other direction: the Pentagon “cannot simply flip a switch at a time when Anthropic currently is the only AI model cleared for use” on classified systems “and high-intensity combat operations are underway.”
Industry Support: Broad, Deep, and Unprecedented
No amicus brief has been filed in support of the government’s position. Multiple filings support Anthropic.
Nearly 150 retired federal judges argued that while the Pentagon can choose its contractors, it cannot “punish Anthropic on its way out” with a supply-chain risk designation. Microsoft filed an amicus brief backing Anthropic. Over 30 employees from OpenAI, Google DeepMind, and other labs — including Google DeepMind’s chief scientist Jeff Dean — signed supporting statements, per Wired. Some of those signatories work for the company that took the contract Anthropic lost.
The New York Times reported on March 18 that Amazon, Microsoft, and Google — all Anthropic investors — are worried the designation would establish a precedent that makes any tech company vulnerable to political retaliation when contracting with the government.
What Judge Lin Is Actually Deciding Tuesday
The hearing is not a trial. Judge Lin is considering Anthropic’s request for a preliminary injunction — an emergency order to suspend the supply-chain risk designation while the full case proceeds. To grant it, she needs to find that Anthropic is likely to succeed on the merits, that the company faces irreparable harm without relief, that the balance of equities favors Anthropic, and that an injunction serves the public interest.
The irreparable harm element is well-documented. Anthropic’s filings reported by Wired detail enterprise customers pausing or canceling deals after the designation, with the company warning of billions in lost revenue if the label stands. The commercial fallout extends far beyond the Pentagon contract itself — a “supply-chain risk” designation signals to every potential customer, government or private, that doing business with Anthropic carries regulatory risk.
The First Amendment question is the centerpiece. Anthropic argues the designation punishes the company for its publicly stated views on AI safety. The government says it penalized a business decision, not speech. The distinction matters: if Judge Lin agrees that model safety policies constitute protected expression, AI companies gain legal cover to set deployment limits on any customer, including the military. If she sides with the government, “all lawful uses” becomes the default standard for government AI procurement.
The March 4 email from Michael — telling Amodei the two sides were “very close” on the exact issues the government now cites as national security threats — complicates the government’s position. If those stances were nearly acceptable on March 4, it’s harder to argue they constituted an unacceptable risk on March 3.
What Comes After Tuesday
A ruling could come the same day or within days. If Judge Lin grants the injunction, the designation is suspended and the 180-day removal clock stops while the case proceeds to a full trial, likely later this year. If she denies it, the clock keeps ticking and the commercial damage compounds.
Either way, the case has already reshaped the terms of debate. The GSA’s proposed “any lawful purpose” contract clause is moving forward regardless of the court’s ruling. The Pentagon is deploying alternatives to Claude on classified networks. And every AI company negotiating a government contract now knows that safety red lines can trigger a designation normally reserved for foreign adversaries.
The hearing begins Tuesday morning in San Francisco.
Sources: TechCrunch, Wired (March 17), Wired (March 20), The Hill, Federal News Network, Reuters, The New York Times, TechPolicy.Press, court filings via CourtListener