The UK government signed a memorandum of understanding with OpenAI promising to deploy advanced AI models “throughout government and the private sector.” Eight months later, the Department for Science, Innovation and Technology (DSIT) has conducted zero trials under the agreement, according to a Guardian investigation published today.
A freedom of information request filed by Valliance, an AI consultancy, asked DSIT for information about trials conducted under the memorandum. The department replied that it held none of the information requested and “had not undertaken any trials under the memorandum of understanding with OpenAI.”
The Only Deployment: ChatGPT Licenses
When pressed by the Guardian, DSIT pointed to one concrete outcome: an agreement last October allowing Ministry of Justice civil servants to use ChatGPT with UK-based data storage. That deployment appears to have been part of the MoJ’s separate “AI Action Plan for Justice” rather than a direct result of the OpenAI memorandum.
Tarek Nseir, CEO of Valliance, put it bluntly: “Either there’s been a huge failure in execution, or it was a failure of intent. We use PowerPoint — that doesn’t mean we have a strategic relationship with Microsoft. If this was the intent of the MoU then our government is not taking the impact of AI on our economy seriously.”
Stargate UK Deployment Behind Schedule
The OpenAI deal is not the only UK government AI initiative struggling to materialize. Nscale, the company tasked with building the UK’s largest AI supercomputer by the end of 2026 using Nvidia GPUs, will “almost certainly not complete the project on time,” according to the Guardian’s reporting. Nscale is also collaborating with OpenAI on Stargate UK, a deployment of up to 8,000 Nvidia chips across UK sites. When the Guardian asked OpenAI about progress on that deployment — previously suggested for Q1 2026 — the company said it had “nothing to share.”
DSIT maintained it was “pleased with the progress” and pointed to ongoing work with the UK AI Safety Institute on testing and safeguards.
A Pattern Across Providers
The UK government has signed similar memoranda with Anthropic, Google DeepMind, and Nvidia. The Guardian reports that the Google DeepMind agreement, concluded in December, is still in the "early stages of planning." Anthropic said it was planning to build an AI assistant to help navigate government services and was working with the AI Safety Institute on safety research. Nvidia did not respond to requests for comment.
The Ada Lovelace Institute’s Matt Davies flagged a structural problem: “Voluntary partnerships with big AI companies don’t follow the usual procurement rules, raising real questions about accountability and scrutiny. The memorandum with OpenAI doesn’t clearly explain how progress will be measured or how it will deliver public benefit.”
Why This Matters
Government AI partnerships generate headlines and stock-price-friendly press releases. The UK experience suggests at least some of these deals have no operational substance behind them. Eight months with zero trials points to a partnership that exists on paper only.
The 84% of UK citizens who told the Ada Lovelace Institute they are concerned about the government prioritizing AI sector interests over public protection now have data to point to: the government signed the deals, issued the press releases, and then did nothing with the technology it committed to deploying.
For AI companies, the lesson is equally clear. Government MOUs look great in investor decks, but they are worth exactly as much as the deployment contracts that follow them — and in the UK's case with OpenAI, that amount is currently zero.
Sources: The Guardian, UK Government MoJ AI Action Plan, Ada Lovelace Institute