The UK government has drawn up proposals to lure Anthropic into expanding its London operations and potentially pursuing a dual stock listing, according to the Financial Times. The pitch follows directly from Anthropic’s clash with the US Department of Defense, which designated the company a supply-chain risk in March 2026 after Anthropic refused to allow Claude to be used for autonomous strike targeting or large-scale domestic surveillance.
The proposals, circulated by the UK’s Department for Science, Innovation and Technology, are set to be presented to CEO Dario Amodei during his visit to the UK in late May, Benzinga reported. Prime Minister Keir Starmer has backed the effort. London Mayor Sadiq Khan wrote directly to Amodei, saying, “I believe that London can provide a stable, proportionate, and pro-innovation environment in which this kind of AI can flourish,” according to the Times of India, citing the FT.
What the UK Is Offering
The package includes expanding Anthropic’s London office footprint and pursuing a dual listing on a UK exchange alongside a potential US IPO. One government official described the dual listing to the FT as “the dream,” while acknowledging it remains unlikely. UK Business Secretary Peter Kyle said Anthropic was among several companies the government is engaging with, telling the FT: “I set up the Global Talent Taskforce to assertively get out there and sell all the benefits of investing, innovating and scaling in the UK.”
Anthropic currently has around 200 employees in the UK, including roughly 60 researchers, per the Times of India. The company appointed former UK Prime Minister Rishi Sunak as a senior adviser last year.
Why This Matters for Agent Builders
The underlying dispute is about what autonomous AI agents are and aren’t allowed to do. Anthropic refused to strip safety restrictions from Claude for military applications, and the Pentagon responded by effectively blacklisting the company. Now the UK is positioning itself as the jurisdiction that rewards responsible AI constraints rather than punishing them.
For teams building on Claude or deploying autonomous agents in regulated environments, the jurisdiction where Anthropic lands matters. A company that relocates its research or lists its shares in a market with different regulatory assumptions creates different compliance realities for its downstream users. The UK’s bet is that attracting the lab that said no to autonomous weapons is worth more than the short-term defense-contract revenue the US is trying to leverage.