U.S. District Judge Rita Lin granted Anthropic a preliminary injunction on Thursday, blocking the Department of Defense from enforcing its supply chain risk designation against the AI company and barring the Trump administration from implementing its directive that federal agencies stop using Claude. The ruling came two days after a contentious hearing in San Francisco federal court where Lin told government lawyers the Pentagon’s actions “look like an attempt to cripple” Anthropic.
“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Lin wrote in the order.
What the Ruling Says
Lin’s language went further than her remarks at the hearing. She wrote that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” according to CNBC.
The injunction does two things. First, it bars the Trump administration from implementing, applying, or enforcing the president’s directive ordering agencies to stop using Claude. Second, it prevents the Pentagon from moving forward with the supply chain risk designation that would force defense contractors like Amazon, Microsoft, and Palantir to certify they do not use Claude in military work, per CNBC.
The Guardian reported that the judge stayed the order for one week, giving the government time to respond. A final ruling could still be months away.
Anthropic’s Response
Anthropic said it was “grateful to the court for moving swiftly.” In a statement reported by CNBC, the company said: “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
The Dispute
The case traces back to late February, when Defense Secretary Pete Hegseth declared Anthropic a supply chain risk on X, followed by a formal DOD notification letter in early March. President Trump also posted on Truth Social ordering agencies to “immediately cease” all use of Anthropic’s technology.
The underlying conflict is straightforward: the DOD wanted unrestricted access to Claude for all lawful purposes, while Anthropic wanted contractual assurances that its models would not be used for fully autonomous weapons or domestic mass surveillance. The two sides could not resolve that dispute under the $200 million contract signed in July, and the Pentagon responded by designating Anthropic a supply chain risk, a label historically reserved for foreign adversaries.
Anthropic is the first American company to be publicly given the designation. It has filed two separate lawsuits: one in San Francisco federal court pressing the constitutional claims, and one in the U.S. Court of Appeals in Washington seeking formal review of the DOD's determination under the two federal statutes the administration invoked, per CNBC.
What Comes Next
The one-week stay means the government has until approximately April 2 to respond before the injunction takes effect. The broader case will continue, and a final ruling could take months. But the preliminary injunction is a significant early victory: it signals that the court views Anthropic’s First Amendment and administrative law arguments as likely to succeed on the merits.
For enterprise customers and defense contractors who depend on Claude, the immediate impact is a pause on the forced phase-out. The longer-term question is whether the DOD will appeal, negotiate, or escalate. The Guardian noted that the DOD has reportedly been using Claude extensively for military operations, including target selection and analysis of missile strikes in its conflict with Iran, making a clean break from the technology operationally difficult.