Leapwork, the enterprise test automation company, launched a fully rebuilt Continuous Validation Platform on April 15, designed specifically for organizations deploying AI agents into production. The platform comprises three interconnected products covering test automation, performance testing, and what Leapwork calls “AI-native agentic quality orchestration,” according to the company’s launch announcement via GlobeNewswire.

Leapwork describes the platform as "application agnostic, deterministic by design," and says it targets every stage of AI adoption.

The Validation Problem for AI Agents

Traditional test automation assumes deterministic outputs: the same input produces the same output. AI agents break that assumption. They produce probabilistic outputs that vary across runs, interact with external systems, and execute multi-step workflows spanning multiple services. Existing test frameworks were not built for this class of behavior.
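To make the contrast concrete, here is a minimal sketch (not Leapwork's implementation; the agent is simulated) showing why exact-match assertions work for deterministic code but not for agents whose wording varies across runs, where tests must instead assert properties of the output.

```python
def deterministic_component(x):
    """A traditional component: the same input always yields the same output."""
    return x * 2

def simulated_agent(prompt, run_id):
    """Stand-in for an AI agent: phrasing varies from run to run
    (variation is simulated here by indexing on run_id)."""
    phrasings = [
        "The order ships in 3 days.",
        "Shipping takes 3 days.",
        "Expect delivery in 3 days.",
    ]
    return phrasings[run_id % len(phrasings)]

# Exact-match assertion is fine for the deterministic component...
assert deterministic_component(21) == 42

# ...but the agent's output differs across runs, so the test asserts
# a property every valid answer must satisfy instead of exact equality.
outputs = {simulated_agent("When does my order ship?", r) for r in range(5)}
assert len(outputs) > 1                      # outputs genuinely vary
assert all("3 days" in o for o in outputs)   # but the key fact is preserved
```

The same principle underlies most approaches to agent validation: the assertion moves from "the output equals X" to "the output satisfies property P."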

Leapwork’s research, published in February through a survey of more than 300 software engineers, QA leaders, and IT decision-makers, found that 88% of respondents said AI is now a priority for their testing strategy, but only 12.6% apply AI across key test workflows, InfoQ reported. The gap: 54% cited concerns about quality and reliability as the primary barrier holding back broader AI adoption.

On average, only 41% of testing is automated today. Test creation was identified as the biggest bottleneck by 71% of respondents, followed by test maintenance at 56%, according to InfoQ.

The Missing Infrastructure Layer

The agent infrastructure conversation in 2026 has covered model providers, orchestration platforms, security tools, and observability. What has been largely absent: the quality validation layer that ensures agents deployed to production reliably do what they are supposed to do.

“It is no longer a question of whether testing teams will leverage agentic capabilities in their work. The question is how confidently and predictably they can rely on it,” Kenneth Ziegler, Leapwork’s CEO, told InfoQ in February.

Leapwork’s answer is a platform where the validation framework itself is agentic. Rather than static test scripts, the system uses AI-native orchestration to adapt to the non-deterministic behavior of AI agent outputs while maintaining enough structure to produce reproducible quality signals.
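One common way to extract a reproducible signal from non-deterministic behavior — a hypothetical sketch only, since Leapwork has not published its internals — is to execute each scenario many times and aggregate a pass rate against a threshold, rather than treating a single run as pass/fail.

```python
import random
from dataclasses import dataclass

@dataclass
class QualitySignal:
    """Aggregated result of repeatedly running one agent scenario."""
    scenario: str
    runs: int
    passes: int

    @property
    def pass_rate(self) -> float:
        return self.passes / self.runs

    def meets(self, threshold: float) -> bool:
        return self.pass_rate >= threshold

def validate(scenario, agent, check, runs=20):
    """Run a non-deterministic agent repeatedly; count how often the
    output satisfies the check, yielding a stable quality signal."""
    passes = sum(1 for _ in range(runs) if check(agent()))
    return QualitySignal(scenario, runs, passes)

# Hypothetical usage: the agent and check below are illustrative stand-ins.
agent = lambda: f"ETA {random.choice([3, 3, 3, 7])} days"
signal = validate("shipping-eta", agent, lambda out: "3 days" in out, runs=100)
print(f"{signal.scenario}: {signal.pass_rate:.0%} pass rate, "
      f"{'ship' if signal.meets(0.95) else 'block'}")
```

A single flaky run no longer flips the verdict; the pass rate over many runs is a much more stable release gate for probabilistic systems.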

Production Readiness Signal

Leapwork is an established player in enterprise test automation, not a startup issuing its first press release. The platform launch positions the company as an infrastructure provider for the "agents in production" thesis that has defined the April 2026 news cycle: from Automation Anywhere reporting 80% auto-resolution of IT tickets to Salt Security finding that 48.9% of enterprises are blind to machine-to-machine agent traffic.

For teams shipping agents to production, quality assurance remains one of the last gaps in the deployment stack, and Leapwork is the first established vendor to ship a product explicitly built for that layer.