Pentagon AI Chief Cameron Stanley confirmed to CNBC on April 28 that the Department of Defense is expanding its use of Google’s Gemini model for classified projects. Google is the third major AI vendor to sign a classified access agreement with the Pentagon since the DoD designated Anthropic a supply chain risk in February 2026. OpenAI and xAI signed their deals within weeks of the blacklisting. The defense AI market has now reorganized itself around one principle: compliance with unrestricted government use terms.

Three Vendors, Two Months

The timeline is compressed and telling.

In late February, Defense Secretary Pete Hegseth declared Anthropic a supply chain risk after negotiations over a $200 million contract broke down. Anthropic refused to grant the Pentagon use of Claude for “any lawful government purpose,” insisting on guardrails against domestic mass surveillance and autonomous weapons without human oversight. The DoD responded with a designation normally reserved for foreign adversaries.

OpenAI signed a deal with the Pentagon in early March. xAI followed by mid-March, gaining classified network access. Google’s agreement, first reported by The Information and confirmed by Stanley to CNBC, extends to classified workloads under an amendment to an existing unclassified arrangement.

“Overreliance on one vendor is never a good thing,” Stanley told CNBC. “We’re seeing that, especially in software.”

The Contract Language

The agreements share a common structure that reveals the Pentagon’s negotiating position.

Google’s deal allows the DoD to use Gemini for “any lawful government purpose,” according to 9to5Google, citing The Information’s reporting. The agreement requires Google to “assist in adjusting its AI safety settings and filters at the government’s request.” Google retains no right “to control or veto lawful government operational decision-making.”

Google included contract language stating it does not intend for its AI to be used for domestic mass surveillance or autonomous weapons without appropriate human oversight. The Wall Street Journal reported it is unclear whether such provisions are legally binding or enforceable. TechCrunch noted that OpenAI’s contract contains similar language.

This is the exact framework Anthropic rejected. The distinction between a stated intention and an enforceable restriction matters enormously in classified settings, where oversight mechanisms are inherently limited. Google says it does not "intend" its models to be misused; Anthropic demanded contractual guardrails that would prevent misuse.

Two Courts, Two Outcomes

The legal fallout from Anthropic's refusal has produced contradictory rulings in two federal courts.

On April 8, a federal appeals court in Washington, D.C., denied Anthropic’s request to temporarily block the supply chain risk designation. “On one side is a relatively contained risk of financial harm to a single private company,” the court wrote. “On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict.” Reuters confirmed the ruling.

Separately, a judge in San Francisco federal court granted Anthropic a preliminary injunction barring the Trump administration from enforcing a broader ban on Claude across non-defense government agencies. Politico reported that Judge Lin called the Pentagon’s decision “an attempt to cripple” the company.

The net result: Anthropic is locked out of Pentagon contracts and defense contractor work but can continue serving other government agencies while the full lawsuit proceeds. The D.C. appeals court acknowledged Anthropic “will likely suffer some degree of irreparable harm” but found its interests “primarily financial in nature.”

Acting U.S. Attorney General Todd Blanche called the D.C. ruling a “resounding victory for military readiness” in a post on X. “Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company,” Blanche wrote.

Anthropic responded that it is “confident the courts will ultimately agree that these supply chain designations were unlawful,” according to CNBC.

600 Google Employees Object

Google’s decision to proceed came over the objections of its own workforce.

More than 600 employees, primarily staff working on AI systems, signed an open letter to CEO Sundar Pichai urging him to reject classified workloads, according to SiliconANGLE. By the time CNBC published Stanley’s confirmation on April 28, the count had risen to more than 700.

“As people working on AI, we know that these systems can centralize power and that they do make mistakes,” the letter stated, per SiliconANGLE. “We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.”

The signatories argued that contractual language is insufficient; the only way to prevent misuse, they wrote, is to "reject any classified workloads."

Google has a history with this tension. In 2018, employee protests forced the company to drop Project Maven, a Pentagon contract for AI-powered drone imagery analysis. Google subsequently published AI Principles pledging not to design or deploy AI for weapons or surveillance technologies. Those principles have since evolved, SiliconANGLE noted, with Google now framing its position around “responsible” military use rather than categorical exclusion.

“Making the wrong call right now would cause irreparable damage to Google’s reputation, business and role in the world,” the signatories warned.

Google responded with a statement: “We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security.”

The Mythos Factor

Stanley’s comments to CNBC suggest the Pentagon’s urgency has increased since Anthropic’s Mythos model rollout in early April.

Anthropic released Claude Mythos Preview on April 7 with what it called a “step change” in cybersecurity capabilities. The model was made available only to a limited number of organizations through Project Glasswing, Anthropic’s defensive cyber initiative. TechCrunch reported that Anthropic acknowledged the model could potentially be weaponized to find and exploit bugs rather than fix them.

Stanley told CNBC the DoD is “taking this very seriously” so that it can “make sure we are not only matching the moment but are prepared for what comes next, which is a whole raft of AI-enabled capabilities” in challenging areas.

The irony is notable: the company the Pentagon blacklisted for refusing unrestricted access just shipped what may be the most capable cyber-offensive AI model to date, and is controlling its distribution more tightly than any commercial vendor has before. The DoD is now dependent on Google, OpenAI, and xAI to match those capabilities in classified environments.

What This Consolidation Produces

Three structural shifts are now locked in.

First, the defense AI market has bifurcated. Anthropic serves commercial and non-defense government customers. Google, OpenAI, and xAI serve the Pentagon. Defense contractors must now build around this split, maintaining separate AI stacks for classified and commercial work if they want access to Claude’s capabilities alongside defense-cleared models.

Second, the “any lawful government purpose” framework is now the standard contract template for defense AI procurement. Three major vendors have accepted it. Future AI companies seeking Pentagon contracts will face the same terms. The negotiating leverage has shifted permanently toward the government: accept unrestricted use, or accept exclusion.

Third, the employee opposition at Google, while substantial, did not change the outcome. This sets a precedent for future deals. The 2018 Project Maven protests succeeded because Google could afford to walk away from a single Pentagon contract. In 2026, with the defense AI market consolidating rapidly and competitors already signed up, walking away from classified work means ceding a massive revenue stream to rivals. The calculus has changed.

Stanley framed the Pentagon’s approach in practical terms. “There’s a lot of different things that are saving thousands of man hours, literally thousands of man hours on a weekly basis,” he told CNBC.

President Trump told CNBC last week that “it’s possible” there will be a deal allowing Anthropic’s models back into the DoD. Whether that happens depends on the courts and Anthropic’s willingness to accept the terms its competitors already signed. For now, the Pentagon has what it wanted: multiple AI vendors, classified access, adjustable safety filters, and no veto rights for the companies building the models.