Google DeepMind CEO Demis Hassabis told staff in a January all-hands meeting that the company was “leaning more” into defense and national security contracts with governments, Business Insider reported on March 20 based on a recording of the meeting.

Tom Lue, Google DeepMind’s VP of global affairs, told employees the company had a “robust process” to evaluate defense partnerships against Google’s AI principles. He said Google was in active conversations with governments around cybersecurity and biosecurity. “The north star for the analysis is whether the benefits substantially exceed the risks,” Lue said.

Hassabis, who in earlier years expressed concern about how Google might use DeepMind’s technology for warfare, said he was “very comfortable” with the current approach. He framed defense work as an obligation: “It’s incumbent on us to work with democratically elected governments.”

Anthropic’s Pentagon Exit

The meeting took place in January, before the Anthropic-Pentagon conflict became public, but the remarks have taken on added significance since. In February, Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk” and ordered a six-month phase-out of Claude from Pentagon networks. Anthropic filed two lawsuits against the DoD on March 9. The Department of Justice responded on March 18 by calling Anthropic an “unacceptable national security risk” in a court brief.

Google’s posture stands in direct contrast. Where Anthropic drew red lines on military use of its models, Google, in a 2025 update to its AI principles, removed its previous pledge not to use its technology for weapons development or surveillance. The updated principles now rest on a benefit-versus-risk calculus rather than categorical exclusions.

What Google Is Actually Doing for the Pentagon

A Google DeepMind spokesperson pointed Business Insider to a blog post published last week describing the company’s most recent Pentagon contract. The tools are used for document drafting and project planning, far from the lethal autonomous weapons that originally triggered Google employee protests during the Project Maven controversy in 2018.

That distinction matters. Google can position its defense work as administrative AI rather than battlefield AI, which makes it easier to reconcile internally. But the direction of travel is clear: as Anthropic exits government contracts over ethics clauses, Google is filling the gap.

Where Each Lab Stands on Military Contracts

Three of the largest AI labs now occupy distinct positions on military work. Anthropic refuses to remove safety guardrails and is now locked in litigation with the federal government over it. OpenAI signed a DoD contract in February, weeks after the Anthropic ban. Google is publicly telling employees to expect more defense deals.

For enterprise buyers, this creates a real decision matrix. Companies that want to signal alignment with government and defense can choose OpenAI or Google. Companies that want to signal safety-first principles, particularly in regulated industries like healthcare and finance, have increasingly been choosing Anthropic, where enterprise adoption grew 7x year-over-year according to Deutsche Bank data.

The Pentagon dispute is sorting AI labs into political and philosophical camps, and those camps are starting to define commercial market segments.

Sources: Business Insider, Wired, The Register