Defense Secretary Pete Hegseth’s order to remove Anthropic’s Claude from Pentagon systems is running into a practical wall: the AI is too embedded to rip out. Reuters reported today that military personnel and defense contractors are resisting the six-month wind-down timeline, calling it logistically and technically infeasible.

Hegseth designated Anthropic a “supply chain risk” on March 3, after the company refused to allow Claude to be used for autonomous weapons systems without human oversight or for domestic surveillance operations. The Pentagon ordered all contractors to assess their Anthropic exposure and begin replacing the company’s models. Two weeks later, the replacement effort has stalled at the assessment stage.

Claude on Classified Networks

The core problem is structural. Claude operates on both classified and unclassified military networks, powering workflows that span intelligence analysis, logistics planning, and contractor operations. These integrations were built over months of procurement cycles, security certifications, and custom fine-tuning. Swapping in an alternative model means putting an entirely new system through the same classified accreditation process that took months the first time.

US News confirmed the pushback from multiple defense contractor sources, who described the replacement order as “disconnected from operational reality.” The six-month timeline assumes a drop-in replacement exists. For classified network deployments, it does not.

Industry Rallies Behind Anthropic

The resistance extends beyond the Pentagon’s own workforce. According to CoinCentral’s reporting, technology executives are publicly aligning with Anthropic’s position. This follows last week’s revelation that 150 retired federal judges and over 30 employees from rival AI labs filed amicus briefs supporting Anthropic’s legal fight against the DOD.

The sequence of events over the past three weeks underscores the tension in Hegseth’s approach. Anthropic refused military AI terms it considered unethical. The Pentagon labeled the company a national security risk. OpenAI moved to fill the gap through an AWS classified-network deal, as TechCrunch reported. And now the Pentagon’s own users are saying the switch can’t be executed on the stated timeline.

The Procurement Trap

Military AI procurement follows a pattern that works against rapid vendor changes. Each model deployment on classified infrastructure requires a full Authority to Operate (ATO), which involves months of security testing, red-teaming, and compliance review. The defense contractors who built their workflows around Claude did so after completing that process. Starting over with OpenAI’s models or xAI’s Grok means repeating the entire cycle, with no guarantee the replacement models match Claude’s performance on the specific tasks they were deployed for.

Anthropic’s enterprise adoption hit 24.4% of total business AI usage this month, according to data published by Digit.fyi, growing at 4.9% month-over-month. The Pentagon’s attempt to isolate Anthropic is happening at the exact moment the company’s commercial position is strengthening fastest.

Hegseth’s supply chain risk designation remains in effect. The six-month clock is ticking. But the contractors tasked with executing the order are, by their own account, unable to comply.