The Timeline
In January 2024, OpenAI quietly revised its usage policies, removing long-standing language that prohibited use of its models for “military and warfare” purposes, as CNBC reported at the time. When pressed by The Intercept, an OpenAI spokesperson admitted the company harbored “a desire to pursue national security use cases.”
What followed was a systematic hiring spree. A Jacobin investigation published April 5 documents more than a dozen hires with national security backgrounds brought on between February 2024 and late that year:
- February 2024: Katrina Mulligan joined as head of national security partnerships, hired directly from a senior staff position advising the assistant secretary of defense for Special Operations and Low-Intensity Conflict (SO/LIC).
- June 2024: Retired four-star Army General Paul Nakasone, former director of the NSA and commander of US Cyber Command, joined OpenAI’s board of directors.
- August 2024: Morgan Dwyer and Benjamin Schwartz, senior Biden administration officials involved in the CHIPS and Science Act, joined with backgrounds in defense research and Pentagon policy. The same month, Sasha Baker, a former national security adviser to Senator Elizabeth Warren and deputy chief of staff to Obama-era Secretary of Defense Ashton Carter, became head of national security policy.
- April–Fall 2024: Matt Rimkunas and Meghan Dorn, former staffers for Senator Lindsey Graham, joined the federal affairs team. Both are registered to lobby on OpenAI’s behalf.
The payoff, per NPR: OpenAI reportedly secured a $200 million defense contract within hours of the Trump administration icing out rival Anthropic over its refusal to allow military applications.
What It Means for Agent Builders
Two of the largest foundation model providers have now taken opposite positions on military use, and the divergence is structural. Anthropic refused to remove its ethical guardrails and is fighting the Pentagon's blacklisting at the Ninth Circuit. OpenAI removed the guardrails, hired the people, and collected the contract.
For teams building autonomous agents on these APIs, the question is no longer theoretical. The platform you build on reflects the policy posture of the company behind it. OpenAI’s models are now, by extension, defense-contracting infrastructure. Anthropic’s models are now, by extension, the product of a company in active litigation with the US government over its right to refuse military use cases.
Neither posture is risk-free. OpenAI's defense alignment may accelerate regulatory scrutiny in non-US markets. Anthropic's clash with the government may limit its access to the federal government, the single largest technology buyer in the US. Agent builders choosing a primary model provider are also choosing which set of risks they are willing to inherit.
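One practical reading of that last point: if the provider decision is wired directly through an agent codebase, the inherited risk profile travels with every call site. The sketch below is a minimal illustration of confining that decision to a single seam; it assumes the published OpenAI and Anthropic Python SDKs, and the model names, `ChatProvider`, and `run_agent_step` are illustrative placeholders, not anything prescribed in the reporting above.

```python
# Minimal sketch: isolate the model-provider choice behind one interface so the
# policy/regulatory risk discussed above is a swappable dependency, not something
# threaded through the whole agent. SDK calls follow the published OpenAI and
# Anthropic Python clients; model names are placeholders.
from typing import Protocol


class ChatProvider(Protocol):
    def complete(self, system: str, user: str) -> str: ...


class OpenAIProvider:
    def __init__(self, model: str = "gpt-4o"):  # placeholder model name
        from openai import OpenAI
        self._client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self._model = model

    def complete(self, system: str, user: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        )
        return resp.choices[0].message.content or ""


class AnthropicProvider:
    def __init__(self, model: str = "claude-3-5-sonnet-latest"):  # placeholder
        import anthropic
        self._client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
        self._model = model

    def complete(self, system: str, user: str) -> str:
        resp = self._client.messages.create(
            model=self._model,
            max_tokens=1024,
            system=system,
            messages=[{"role": "user", "content": user}],
        )
        return resp.content[0].text


def run_agent_step(provider: ChatProvider, task: str) -> str:
    # Agent logic depends only on the ChatProvider interface; which company's
    # risk profile the team inherits is decided in one constructor call.
    return provider.complete("You are a planning agent.", task)
```

Whichever concrete class gets passed in, the rest of the agent code stays the same; the trade-off this section describes is then made in one place rather than scattered across the stack.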