IBM, Auth0, and Yubico unveiled a joint framework at RSA Conference 2026 on March 25 that requires cryptographically verified human approval before AI agents can execute high-consequence actions. The partnership addresses a specific gap in enterprise AI agent deployments: proving, with hardware-backed evidence, that a real human authorized a specific agent action.

The framework, described as a “Human-in-the-Loop” authorization model, works like this: IBM’s watsonx AI orchestration layer manages the agent workflow. When an agent attempts a high-risk action — a large financial transfer, a production code deployment, access to sensitive data — Auth0 triggers an out-of-band approval request using the Client-Initiated Backchannel Authentication (CIBA) standard. The designated human approver then taps a physical YubiKey to provide cryptographic proof that they, specifically, authorized the decision.
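For readers unfamiliar with CIBA, the decoupled flow looks roughly like the sketch below: the orchestrator pages the approver out of band, then polls for a token that only materializes once the approver completes the tap. The client ID, hint, and binding message here are illustrative placeholders based on the OpenID CIBA specification, not details of the actual Auth0 integration.

```python
# Sketch of the two CIBA legs, per the OpenID CIBA spec.
# Values are hypothetical; the real integration's parameters are not public.

CIBA_GRANT = "urn:openid:params:grant-type:ciba"

def build_backchannel_request(client_id, approver_hint, action_summary):
    """Payload POSTed to the backchannel authentication endpoint
    to page the designated human approver out of band."""
    return {
        "client_id": client_id,
        "scope": "openid",
        "login_hint": approver_hint,        # identifies the human approver
        "binding_message": action_summary,  # shown on the approver's device
    }

def build_token_poll(client_id, auth_req_id):
    """Payload for polling the token endpoint until the approver
    completes the YubiKey tap (or the request expires)."""
    return {
        "client_id": client_id,
        "grant_type": CIBA_GRANT,
        "auth_req_id": auth_req_id,  # returned by the backchannel endpoint
    }

req = build_backchannel_request(
    "agent-orchestrator", "alice@example.com",
    "Approve wire transfer #4711 for $250,000")
poll = build_token_poll("agent-orchestrator", "req-abc123")
```

The key property is that the agent never holds the approval credential: it can only wait on the poll until the human acts.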

“The hard problem in agentic AI security is accountability: can you prove a specific human approved a high-consequence action?” said Albert Biketi, Yubico’s chief product and technology officer, per Biometric Update.

Why hardware, not software

The choice of hardware-backed authentication over software-only approval flows is deliberate. Software tokens, push notifications, and password-based approvals can be intercepted, replayed, or spoofed. A physical YubiKey tap produces a cryptographic assertion tied to a specific hardware device, which creates a non-repudiable audit trail: this person, with this device, at this time, approved this specific agent action.
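What such an audit entry might contain can be sketched as follows. The field names are assumptions for illustration; in practice the signature bytes would come from the YubiKey's FIDO2 assertion and be verified against the credential's registered public key, which the sketch treats as opaque.

```python
# Hypothetical audit record binding person, device, time, and action.
# The assertion signature is opaque here; real verification would check
# it against the hardware credential's public key.

import hashlib
import json
import time

def audit_record(approver_id, credential_id, action, assertion_sig_hex):
    """Bind who, which hardware credential, when, and what into one
    entry whose digest can anchor it in a tamper-evident log."""
    entry = {
        "approver": approver_id,
        "credential": credential_id,        # credential on the approver's key
        "timestamp": int(time.time()),
        "action": action,
        "assertion_sig": assertion_sig_hex, # produced by the YubiKey tap
    }
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

def verify_record(entry):
    """Recompute the digest to detect after-the-fact tampering."""
    body = {k: v for k, v in entry.items() if k != "digest"}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return digest == entry["digest"]
```

The digest makes each entry tamper-evident; the hardware-signed assertion inside it is what makes the approval non-repudiable.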

The timing aligns with alarming industry data. The World Economic Forum’s Global Cybersecurity Outlook 2026 found that 87% of organizations report rising risks from AI vulnerabilities, while most lack foundational AI security practices. The IBM/Auth0/Yubico framework targets the specific failure mode where an AI agent takes a consequential action and no one can prove who approved it — or whether anyone approved it at all.

Routine tasks stay autonomous

The framework does not gate every agent action behind a human tap. Routine tasks — scheduling meetings, pulling reports, processing standard queries — continue to execute autonomously. The human-in-the-loop trigger activates only for actions that cross predefined risk thresholds, which enterprises configure based on their own policies.
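A risk-threshold policy of this kind might look like the sketch below. The action categories and dollar threshold are invented examples of what an enterprise could configure, not values from the announced framework.

```python
# Hypothetical risk policy: categories and thresholds are examples only.

# Actions that always require a human tap, regardless of parameters.
HIGH_RISK_ACTIONS = {"prod_deploy", "sensitive_data_access"}

# Example threshold: transfers at or above this amount get gated.
TRANSFER_APPROVAL_THRESHOLD = 10_000  # dollars

def requires_human_approval(action_type, amount=0):
    """Return True when the action crosses a configured risk threshold;
    everything else stays fully autonomous."""
    if action_type == "financial_transfer":
        return amount >= TRANSFER_APPROVAL_THRESHOLD
    return action_type in HIGH_RISK_ACTIONS

requires_human_approval("schedule_meeting")            # routine: autonomous
requires_human_approval("financial_transfer", 50_000)  # gated: needs a tap
```

Routine actions fall through the policy untouched, which is how the design keeps approval friction proportional to consequence.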

This design choice preserves the speed advantage of AI agents (the whole point of deploying them) while adding friction only where the consequences of an error or compromise are severe enough to justify it.

Yubico and Delinea extend the model

In a separate but related announcement at RSAC, Yubico and Delinea revealed an integration that brings Yubico’s hardware-attested Role Delegation Tokens (RDTs) into the Delinea Platform, which now includes StrongDM’s runtime authorization engine following Delinea’s acquisition of StrongDM completed on March 5, 2026.

The combined stack unifies privileged access management with just-in-time runtime authorization for both human and non-human identities. Yubico’s RDTs add the hardware attestation layer, ensuring that privilege delegations to AI agents can be traced back to a specific hardware credential held by a specific person.
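The internals of Yubico’s RDTs are not public, so the following is only a conceptual model of the chain the integration describes: an agent privilege traced back through a delegation to a specific human’s hardware credential. All field names are hypothetical.

```python
# Conceptual model of delegation traceability; not the RDT format itself.

from dataclasses import dataclass

@dataclass(frozen=True)
class Delegation:
    agent_id: str          # non-human identity receiving the privilege
    privilege: str         # e.g. "db:prod:read"
    delegator: str         # human who granted the privilege
    hardware_cred_id: str  # credential on the delegator's YubiKey
    expires_at: int        # just-in-time: short-lived by design

def trace_to_human(d: Delegation) -> str:
    """Answer the auditor's question: which person's hardware
    credential authorized this agent privilege?"""
    return f"{d.privilege} for {d.agent_id} <- {d.delegator} ({d.hardware_cred_id})"

d = Delegation("agent-7", "db:prod:read", "alice", "cred-9f2a", 1_775_000_000)
```

Runtime enforcement would check `expires_at` and the privilege scope on every access; the hardware credential ID is what closes the loop back to a person.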

“Hardware attestation without runtime enforcement, or runtime enforcement without hardware attestation, leaves organizations exposed,” Biketi said. “This integration solves both sides.”

The broader RSAC signal

These partnerships land in a week in which AI agent identity has dominated RSAC 2026. RSA (the company) expanded its ID Plus platform to support authentication for AI agents alongside human users, including integration with Microsoft 365 E7. Swissbit previewed a FIDO2 key with face biometric verification and post-quantum authentication capabilities at the same conference.

The consistent message across vendors: enterprise AI agent security is moving from software-defined policy enforcement toward hardware-rooted identity verification. The assumption is that as agents gain more autonomy and access more sensitive systems, software-only controls will not meet the evidentiary standard that regulators, auditors, and boards require. Hardware creates a physical anchor that software cannot replicate.

For enterprises running AI agents in production today, the practical question is no longer whether to implement agent governance, but whether their governance stack can produce the cryptographic proof that a human approved the actions their agents take.