U.S. District Judge Rita Lin told Pentagon lawyers Tuesday that the Department of Defense’s decision to blacklist Anthropic’s Claude AI models “looks like an attempt to cripple” the company, pressing them repeatedly on the legal basis for the supply chain risk designation during a preliminary injunction hearing in San Francisco federal court.

The hearing marked the first time both sides argued their cases live before the judge who will decide whether Anthropic gets emergency relief while its broader lawsuit against the Trump administration proceeds.

What the Judge Said

Lin opened by framing the case narrowly: “I see the question in this case as being a very different one, which is whether the government violated the law,” she said, according to CNBC. She told the courtroom her concern was whether Anthropic was being “punished for criticizing the government’s contracting position in the press.”

Her sharpest exchange came when government lawyer Eric Hamilton argued that the DOD had “come to worry that Anthropic may in the future take action to sabotage or subvert IT systems,” citing the hypothetical risk that Anthropic could install a kill switch.

Lin pushed back: “What I’m hearing from you, though, is that it’s enough if an IT vendor is stubborn and insists on certain terms and asks annoying questions, then it can be designated as a supply chain risk because it might not be trustworthy. That seems a pretty low bar.”

The Government’s Defense Fell Apart in Real Time

The hearing exposed a specific contradiction in the government’s position. Defense Secretary Pete Hegseth posted on X last month that any military contractor working with Anthropic was prohibited from doing government business. But in court, Hamilton argued that Hegseth’s post was not a legal action and that no contractor would face noncompliance issues for ignoring it.

“You’re standing here saying, ‘We said it, but we didn’t really mean it,’” Lin told the government lawyer. When she asked why Hegseth would post the claim if it had no legal effect, Hamilton replied: “I don’t know.”

What Anthropic Is Asking For

Anthropic is requesting a preliminary injunction to pause the blacklisting while the full lawsuit plays out. The company argued in court that an injunction would not force the government to use Claude or prevent it from switching to another AI vendor. Without relief, Anthropic has said in filings that it could lose billions in business, as defense contractors including Amazon, Microsoft, and Palantir would need to certify they do not use Claude in military work.

The company’s lawsuit, filed earlier this month, alleges the government violated Anthropic’s First Amendment rights by designating it a supply chain risk as retaliation for the company’s public stance on AI safety and its refusal to loosen safety guardrails on Claude for uses including domestic mass surveillance and fully autonomous lethal weapons.

What Happens Next

Lin said she expects to issue an order on Anthropic’s motion within the next few days. If she grants the preliminary injunction, Anthropic can continue doing business with federal agencies and contractors while the case proceeds. If she denies it, the blacklisting stands and the financial damage Anthropic has warned about begins compounding immediately.

This is the first time the U.S. government has designated a domestic technology company as a supply chain risk. The outcome will set a precedent for whether the executive branch can use national security tools to punish companies for their product safety policies.