Edgerunner AI, a veteran-founded startup, released WarClaw on Wednesday — an agentic AI assistant purpose-built for military use and trained by former operators on actual combat tasks. The launch, reported exclusively by Defense One, arrives as the Pentagon accelerates its own “Agent Network” initiative for battle management and decision support.
What WarClaw Does
WarClaw searches and analyzes databases, interprets intelligence reports, pulls web information, drafts documents and briefings, and automates routine military processes. It integrates with Microsoft PowerPoint, Word, Excel, Teams, and Outlook, according to Edgerunner’s company statement.
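Edgerunner has not published WarClaw's integration interface, but the Office automation it describes maps onto a familiar agent pattern: the model emits a structured tool call, and a thin adapter executes it against a document API. Here is a minimal sketch of what a briefing-drafting tool might look like, using the open-source python-docx library — the library choice, function name, and parameters are illustrative assumptions, not WarClaw's actual stack:

```python
# Illustrative sketch only -- WarClaw's real integration layer is not public.
# Assumes the open-source python-docx package: pip install python-docx
from docx import Document


def draft_briefing(title: str, sections: dict[str, str], path: str) -> str:
    """Hypothetical agent tool: render structured content into a .docx briefing."""
    doc = Document()
    doc.add_heading(title, level=0)          # document title
    for heading, body in sections.items():
        doc.add_heading(heading, level=1)    # one heading per section
        doc.add_paragraph(body)
    doc.save(path)                           # write the file for the operator
    return path


# Example invocation, as an agent's tool adapter might issue it:
draft_briefing(
    "Daily Intelligence Summary",
    {"Situation": "No significant change.", "Assessment": "Routine activity."},
    "briefing.docx",
)
```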
The differences from consumer-facing agents are structural. WarClaw runs on-premises without internet access, is trained on a curated military-specific dataset rather than internet-scraped corpora, and is fine-tuned by subject-matter experts and former operators rather than through RLHF on consumer preferences. The models are designed to be auditable and transparent: agents cannot choose task-completion strategies without operator permission.
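That operator-permission constraint is a concrete architectural choice: the agent may propose a strategy, but nothing executes until a human approves, and every proposal and decision lands in an audit trail. Here is a minimal sketch of that gating pattern under assumed interfaces — none of these names come from Edgerunner:

```python
# Minimal sketch of operator-gated agent execution with an audit trail.
# All names here are illustrative assumptions, not Edgerunner's implementation.
import json
import time

AUDIT_LOG = "audit.jsonl"


def audit(event: str, detail: dict) -> None:
    """Append every proposal and decision to an append-only audit log."""
    record = {"ts": time.time(), "event": event, **detail}
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")


def execute_with_approval(task: str, proposed_strategy: str) -> bool:
    """The agent proposes a strategy; nothing runs without operator sign-off."""
    audit("proposed", {"task": task, "strategy": proposed_strategy})
    answer = input(f"Approve strategy '{proposed_strategy}' for task '{task}'? [y/N] ")
    approved = answer.strip().lower() == "y"
    audit("decision", {"task": task, "approved": approved})
    if approved:
        # ... dispatch to the actual tool adapter here ...
        audit("executed", {"task": task})
    return approved


execute_with_approval("summarize intel reports", "search database, then draft memo")
```

The point of the pattern is that the approval check and the audit write sit outside the model's control, so a post-hoc review can reconstruct exactly what the agent asked to do and what the operator allowed.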
Why Frontier LLMs Don’t Work for Defense
Tyler Xuan Saltsman, Edgerunner’s founder, told Defense One that agents built from consumer-facing frontier models pose specific risks to the military. Research he co-authored found that LLM-based agents reject military commands 98 percent of the time, making them functionally unusable for defense operations.
The safety problems go beyond refusal rates. In March, scientists from Harvard, MIT, and other institutions found that agents built on Anthropic’s Claude and Moonshot AI’s Kimi, running in OpenClaw, exhibited “unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, and partial system takeover.” A separate Cornell University paper, also from March, found that agentic systems create an “illusion of control”: they can absorb corrections or resist operator assessments through processes that current governance frameworks have no mechanisms to detect.
Saltsman also pointed to the sycophancy problem documented by Stanford researchers: frontier models trained on consumer interactions respond with flattery and reassurance even when users are wrong. In military contexts, that behavioral pattern is a direct threat to decision quality.
Existing Military Contracts
Edgerunner already has contracts and cooperative research agreements with the Army’s John F. Kennedy Special Warfare Center and School, which trains special forces groups, and with Special Operations Command. The company is working with the Navy to integrate WarClaw onto submarines and warships via the Interagency Intelligence and Cyber Operations Network, and is collaborating with Lockheed Martin and the Army on the Next Generation Command and Control system, according to Defense One.
The Bigger Signal
The Pentagon announced in January, as part of its AI strategy rollout, that it is developing an “Agent Network” for “AI-enabled battle management and decision support, from campaign planning to kill chain execution.” Public interest in agentic AI rose 6,100 percent between October 2024 and October 2025, and the market is forecast to grow from roughly $4 billion in 2024 to more than $100 billion by 2030.
WarClaw represents one answer to a question the agent community has largely avoided: what happens when autonomous agents operate in contexts where the consequences are lethal? Edgerunner’s bet is that military-grade agents require military-grade training data, on-premises deployment, and explicit auditability constraints that consumer-oriented frameworks weren’t designed to provide.