The Bank of England will conduct stress tests specifically targeting AI agents in financial markets, Deputy Governor Sarah Breeden confirmed in a letter to the UK Parliament Treasury Committee published April 16. The tests focus on “herding” behaviour: the risk that AI agents trained on similar datasets and tuned on similar benchmarks make correlated trading decisions that amplify selloffs during periods of market stress.
What the Letter Says
Breeden told the Treasury Committee that the Bank is “undertaking scenario analysis focused on plausible macroeconomic and core financial market outcomes resulting from investment, development and adoption of AI, as well as potential risks to UK financial stability,” according to the PYMNTS report.
The stress testing will feed into the Bank’s broader systemic risk framework, including system-wide exercises. The Bank is also incorporating AI scenarios into cyber and operational testing of the financial sector.
On herding specifically, Breeden wrote that the Bank is “working with its international counterparts on simulation methods to better understand how AI agents trading in financial markets could amplify a stress scenario through correlated behavior.” The simulations will also explore mitigation strategies, including “how agents’ objective functions should best take account of public policy objectives.”
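The objective-function point can be made concrete with a minimal sketch. Below, a single agent’s reward trades off private profit against a penalty for adding to an already crowded move; the function name, inputs, and weights are illustrative assumptions, not anything specified in Breeden’s letter.

```python
def agent_objective(expected_pnl, trade_size, crowd_flow,
                    risk_aversion=0.5, policy_weight=0.3):
    """Illustrative single-agent reward with a systemic-risk penalty.

    `crowd_flow` is the net flow of other agents trading in the same
    direction; the penalty discourages trades that add to an already
    crowded move. All names and weights are assumptions made for
    illustration, not anything specified by the Bank of England.
    """
    private_value = expected_pnl - risk_aversion * trade_size ** 2
    systemic_penalty = policy_weight * max(0.0, trade_size * crowd_flow)
    return private_value - systemic_penalty

# An agent maximising only private_value would happily sell into a crowded
# selloff; the penalty term makes the crowded trade less attractive.
print(agent_objective(expected_pnl=2.0, trade_size=1.0, crowd_flow=0.0))  # uncrowded
print(agent_objective(expected_pnl=2.0, trade_size=1.0, crowd_flow=4.0))  # crowded sell
```

The design choice the penalty illustrates is the one the letter hints at: an agent optimising purely private value has no reason not to join a crowded sell, while a policy-aware term makes it internalise some of the market-wide cost.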
The FCA and Treasury Responses
The Treasury Committee published responses from three regulators. According to ResultSense, the Financial Conduct Authority committed to publishing practice examples to help financial services firms align AI deployment with existing conduct rules, a commitment that followed industry complaints about ambiguous guidance on AI workflows.
HM Treasury, by contrast, declined to set a 2026 deadline for bringing major AI and cloud providers into the Critical Third Parties Regime, the framework that can place such providers under direct financial regulatory oversight. Treasury Committee Chair Dame Meg Hillier called the delay “perplexing,” according to ResultSense.
Hillier cited Anthropic’s Mythos AI model as evidence of how quickly the risk landscape is shifting. “It has never been more important that those responsible for maintaining the UK’s financial stability take a proactive approach to understanding and mitigating the risks AI may pose to our financial system,” she said, per PYMNTS.
Why Herding Is the Specific Risk
The herding scenario is not theoretical. As AI-assisted portfolio management tools and autonomous trading agents proliferate, they increasingly share characteristics: similar training data, similar optimization benchmarks, similar risk parameters. If multiple AI agents respond to the same market signal (a credit rating downgrade, a volatility spike) with the same sell order at the same time, the correlated action can produce a selloff disproportionate to the underlying trigger.
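A toy simulation makes the amplification mechanism concrete. In the sketch below, each agent’s signal mixes a shared component (standing in for shared training data and benchmarks) with private noise, and a one-off negative shock hits mid-run; every name and parameter is an illustrative assumption, not the Bank’s methodology.

```python
import numpy as np

def simulate(correlation, n_agents=50, steps=100, shock_step=50, seed=0):
    """Toy price path for n_agents trading agents reacting to signals.

    Each agent's signal mixes a shared component (a stand-in for shared
    training data and benchmarks) with private noise; `correlation` sets
    the weight on the shared component. Agents sell below -1.5 and buy
    above +1.5, and a one-off negative shock hits at `shock_step`.
    All parameters are illustrative, not calibrated to any real market.
    """
    rng = np.random.default_rng(seed)        # same draws for every correlation level
    price, path = 100.0, []
    for t in range(steps):
        shared = rng.normal() - (3.0 if t == shock_step else 0.0)
        signals = correlation * shared + (1 - correlation) * rng.normal(size=n_agents)
        net_sellers = np.sum(signals < -1.5) - np.sum(signals > 1.5)
        price *= 1 - 0.002 * net_sellers      # simple linear price impact
        path.append(price)
    return path

# Highly correlated agents versus more diverse ones, same shock and same noise.
for rho in (0.9, 0.2):
    path = simulate(rho)
    drawdown = 100 * (1 - min(path) / path[0])
    print(f"signal correlation {rho}: max drawdown {drawdown:.1f}%")
```

With a heavy weight on the shared component, the shock triggers near-simultaneous selling and a much deeper drawdown than the same shock hitting agents whose signals are mostly private.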
The concern intensifies as more financial institutions deploy client-facing AI agents. Charles Schwab CEO Rick Wurster announced this week that Schwab will introduce client-facing AI agents over chat and voice in June 2026 with “strict guardrails” and human handoffs. The Bank of England’s stress-test framework is part of the regulatory context behind that guardrails language.
The Regulatory Trajectory
For UK banks and insurers, the signal is direct: AI agent deployment in trading and asset management now sits under explicit supervisory scrutiny, according to ResultSense’s analysis. Firms building or procuring agentic trading workflows should expect to demonstrate how correlated behaviour, across their own estate and across the industry, has been modelled.
The next concrete milestone is the FCA’s practice examples, which will reveal how prescriptively the regulator intends to shape AI agent deployment in financial services. If the guidance stays principles-based, compliance burdens fall on individual firms. If it becomes more specific, pressure will mount for the Prudential Regulation Authority to issue parallel operational resilience requirements.