On March 20, Anthropic submitted two sworn declarations to a California federal court revealing that Pentagon officials had privately told the company the two sides were “nearly aligned” on contract terms, just one week before the Trump administration publicly designated Anthropic an “unacceptable national security risk,” TechCrunch reported.

The filings directly contradict the Pentagon’s public posture and raise questions about whether the ban was driven by genuine security concerns or political calculations.

The “Nearly Aligned” Timeline

According to the sworn declarations, Pentagon procurement officials communicated to Anthropic that contract negotiations were progressing well and that the remaining gaps were minor and resolvable. This exchange occurred approximately one week before Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk” on March 3 and ordered contractors to phase out Claude within six months.

The timeline creates a stark contradiction: working-level officials saying “we’re almost there” while political leadership was preparing to pull the plug entirely. The declarations, filed under penalty of perjury, are the first time Anthropic has placed these private communications into the court record.

The Foreign Workforce Pivot

Separately, Axios reported that the DoD’s legal argument has shifted to a new theory: Anthropic’s reliance on a globally diverse engineering workforce constitutes the security risk, not the company’s AI safety policies.

This represents a significant pivot from the original justification. The initial designation focused on Anthropic’s ethical red lines — specifically, its refusal to allow Claude to be used for autonomous weapons targeting. The foreign workforce argument is a different legal theory entirely, one grounded in personnel security rather than technology policy.

Implications Beyond Anthropic

The foreign workforce theory, if accepted by the court, would set a precedent with wide-ranging consequences. OpenAI, Google DeepMind, Meta AI, and Microsoft’s AI research divisions all employ significant numbers of non-US-citizen engineers, researchers, and scientists. Many of these employees hold H-1B visas or work from international offices.

If employing foreign-born AI researchers constitutes a supply chain security risk for defense contracts, the pool of eligible AI vendors shrinks to companies with almost exclusively US-citizen workforces — a category that, in practice, barely exists among frontier AI labs. In practice, the theory would affect virtually every major AI company seeking defense work.

Reuters reported earlier that military staff and defense contractors have pushed back on the six-month replacement timeline, noting Claude is embedded in classified and unclassified workflows with no viable substitute available on schedule.

Next Steps in the Case

The sworn declarations were filed late Friday, March 20. Any judicial response, whether a scheduled hearing, a ruling on expedited briefing, or an injunction decision, would likely surface Monday or Tuesday.

Anthropic’s filings frame the ban as politically motivated rather than security-driven, with the government’s own private communications as the central exhibit. The court’s ruling will have direct implications for how AI companies navigate defense procurement.