Visa’s Payment Ecosystem Risk and Control (PERC) unit tracked a more than 450% increase in dark web community posts mentioning “AI agent” over six months compared to the prior period. The same report found a 25% increase in malicious bot-initiated transactions globally, with the U.S. experiencing a 40% jump. That attacker research is accelerating just as McKinsey projects the U.S. B2C retail market alone could see up to $1 trillion in agentic commerce revenue by 2030, with global projections reaching $3 trillion to $5 trillion.

The fundamental problem: payment infrastructure was built for humans clicking buttons. AI agents don’t click. They call APIs, skip browsing entirely, and complete purchases in seconds. Every fraud detection system trained on human behavioral signals (mouse movements, hesitation patterns, retry rates) becomes partially blind when software is the buyer.

The Market Is Moving Faster Than the Security Stack

Sixty-eight percent of U.S. consumers have used at least one AI tool as part of their shopping experience in the past three months, according to an ICSC and McKinsey report published May 7. Sixty-two percent used AI to compare brands, models, prices, or reviews. The adoption curve is steep enough that both Visa and Mastercard have launched dedicated agentic commerce programs.

Visa has invested more than $13 billion in technology and security over five years and launched its Intelligent Commerce platform with over 100 partners building in the Visa Intelligent Commerce sandbox. Mastercard introduced Agentic Tokens, which format Dynamic Token Verification Codes for standard card payment fields, allowing verified agents to submit transactions through existing checkout infrastructure. Both networks are also working with Cloudflare on Web Bot Auth, a protocol that makes agent requests verifiable, time-based, and non-replayable.
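The three properties Web Bot Auth targets can be illustrated concretely. The sketch below is a simplified, hypothetical illustration, not the actual protocol: it uses a shared HMAC key where the real proposal builds on asymmetric HTTP message signatures, and all header names, the replay window, and the in-memory nonce store are assumptions for demonstration.

```python
import hashlib
import hmac
import time

REPLAY_WINDOW = 300            # seconds a signature stays valid (illustrative)
seen_nonces: set[str] = set()  # in production: a shared store with expiry

def sign_request(secret: bytes, method: str, path: str, nonce: str) -> dict:
    """Agent side: bind the request to a timestamp and a one-time nonce."""
    ts = str(int(time.time()))
    payload = f"{method}|{path}|{ts}|{nonce}".encode()
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {"Agent-Timestamp": ts, "Agent-Nonce": nonce, "Agent-Signature": sig}

def verify_request(secret: bytes, method: str, path: str, headers: dict) -> bool:
    """Merchant side: check signature, freshness, and that the nonce is unused."""
    ts = headers["Agent-Timestamp"]
    nonce = headers["Agent-Nonce"]
    sig = headers["Agent-Signature"]
    if abs(time.time() - int(ts)) > REPLAY_WINDOW:
        return False            # time-based: stale signatures are rejected
    if nonce in seen_nonces:
        return False            # non-replayable: each nonce is spent once
    payload = f"{method}|{path}|{ts}|{nonce}".encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False            # verifiable: signature must match the request
    seen_nonces.add(nonce)
    return True
```

Replaying a captured request fails on the nonce check even when the signature is valid, which is the property that matters when an intercepted agent request could otherwise be resubmitted at scale.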

The industry consensus is clear: agents will buy things autonomously. The question is whether the guardrails get built before the fraud tooling matures.

How Mandate-Based Security Works

Entersekt’s Dewald Nolte, co-founder and Chief Strategy Officer, laid out the core architectural shift in a May 7 analysis: control moves upstream from individual transactions to mandates. Instead of authenticating each purchase, customers authenticate the scope of what an agent is allowed to do.

The framework defines three mandate types. Intent mandates cover set-and-forget tasks where the customer won’t be present when the transaction happens (an agent monitoring coffee supply and reordering when stock is low and prices acceptable). Cart mandates apply when an agent discovers options and assembles a basket, but the customer approves the final purchase in real time. Payment mandates label transactions as agent-initiated so issuers and payment networks know an AI agent is in the loop.

In all three cases, the customer authenticates the mandate rather than each individual transaction. The agent then presents cryptographic proof that it is acting within that mandate when it spends. The existing EMV 3D Secure, KYC, and token frameworks serve as the foundation. Nolte’s argument is that agentic commerce “builds on rails FIs already know, including EMV 3D Secure, delegated authentication and tokenisation.”

This is not a theoretical proposal. Mastercard’s Agentic Tokens already implement a version of this pattern, formatting cryptographic verification codes into standard card payment fields. Google’s Agent Payments Protocol (AP2) uses signed digital receipts where the user attests to specific transaction intent. The infrastructure pieces are shipping.

The Fraud Vectors That Actually Matter

The threats emerging around agentic commerce are familiar attack types — credential theft, fake merchants, social engineering — migrated onto new surfaces.

Visa’s threat intelligence identified fraudulent storefronts engineered specifically to exploit AI shopping agents. These fake merchants pass automated security checks, offer below-market prices to attract agent-driven bargain hunting, and harvest payment credentials when the agent completes a purchase using stored card data. Both the creation of the fake merchant and the exploitation of the agent are automated.

Visa also uncovered a network of fraudulent websites deploying conversational AI agents as part of their operations. The chat interface served two purposes: it made the sites appear legitimate, and the agent actively dissuaded victims from contacting their banks by pretending to offer customer support. By delaying fraud reports, the scam sites operated longer and captured more victims before detection.

Social engineering is the most credible near-term risk, according to Nolte. “Instead of pushing a customer to authorise a one-off payment or load a card into a rogue wallet, a fraudster will try to walk them through setting up an agent and quietly granting a powerful mandate to it,” Nolte told ITWeb.

Then there are the grey-area failures: situations where the agent does exactly what the system allows but not what the customer reasonably meant. Whether these incidents constitute fraud, error, or misconfiguration is an open question with regulatory implications.

Why Traditional Fraud Detection Goes Partially Blind

Signifyd’s Xavi Sheikrojan, director of risk intelligence, described the detection challenge in a February analysis: “What’s changed isn’t the motive behind fraud in agentic commerce, but how it shows up. When agents handle the shopping, the human session disappears. There’s no mouse movement, hesitation or familiar checkout path.”

The specific signals that break down: session duration (agents complete purchases in seconds), browsing behavior (agents don’t browse), retry patterns (agents don’t fail and retry), and IP geolocation (agent requests come from data centers, not consumer locations). A fast, clean, non-interactive session is no longer inherently suspicious because that is exactly how a legitimate agent transaction looks.

This creates a detection paradox: by every traditional measure, a fraudulent agentic transaction looks perfect. Signifyd’s analysis argues that fraud systems need to shift from asking “does this look human?” to asking “does this action align with the customer’s identity and intent, even when the customer never touched the keyboard?”
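The shift Signifyd describes can be made concrete by putting the two scoring approaches side by side. Both heuristics below are hypothetical toy models with made-up thresholds, not any vendor's actual rules; they exist only to show why the old features go blind on a legitimate agent purchase.

```python
def legacy_score(session_seconds: float, page_views: int, retries: int) -> float:
    """Human-behavior heuristic: fast, zero-browse, zero-retry sessions
    accumulate risk, because humans rarely check out that cleanly."""
    score = 0.0
    if session_seconds < 5:
        score += 0.4   # humans don't finish checkout in seconds
    if page_views <= 1:
        score += 0.3   # humans browse before buying
    if retries == 0:
        score += 0.1   # humans mistype and retry
    return score       # high = suspicious under the old model

def intent_score(amount: float, mandate_cap: float,
                 merchant_in_scope: bool) -> float:
    """Agent-aware heuristic: judge the action against the customer's
    mandate, ignoring session speed, browsing, and retries entirely."""
    score = 0.0
    if not merchant_in_scope:
        score += 0.6   # merchant outside what the customer authorized
    if amount > mandate_cap:
        score += 0.4   # spend exceeds the authorized ceiling
    return score

# A legitimate agent purchase: a 2-second session, one page, no retries,
# but fully within the customer's mandate.
legacy = legacy_score(2.0, 1, 0)          # flagged by human-signal rules
intent = intent_score(30.0, 50.0, True)   # clean under intent alignment
```

The same transaction scores as high-risk under behavioral signals and as zero-risk under intent alignment, which is exactly the retraining problem the analysis points to.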

The Infrastructure Gap

Ruston Miles, founder and Chief Strategy & Development Officer at Bluefin, identified a structural problem in an analysis for FinTech Weekly: payment standards define roles for merchants, issuers, acquirers, and service providers, but they do not define how autonomous software should be identified, authorized, or controlled when acting on behalf of a user.

PCI DSS, card network rules, and NACHA operating guidelines all assume a person is present at the moment of authorization. When an AI agent holds delegated authority that allows it to evaluate, decide, and execute across multiple transactions without interruption, “compromising an orchestration layer no longer impacts a single transaction. It can influence entire streams of purchasing activity,” Miles wrote.

The attack surface moves higher in the system architecture. Attackers are experimenting with synthetic delegation (fabricating authorization flows) and prompt injection (manipulating an agent’s decision-making process). The target is no longer a single credential but the environment in which the agent operates.

Miles outlined four foundational controls: structured permission frameworks with spending caps and merchant category restrictions, verifiable identity for AI agents (equivalent to KYC for software), continuous monitoring that can detect rapid operational changes characteristic of AI-driven fraud, and time-bound permissions that expire automatically.
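The third of those controls, continuous monitoring for rapid operational changes, is the least familiar to traditional fraud stacks and worth sketching. The sliding-window monitor below is a hypothetical minimal version; the class name, window size, and threshold are illustrative assumptions, and a production system would baseline each agent individually.

```python
from collections import deque

class VelocityMonitor:
    """Flag the rapid operational changes characteristic of AI-driven
    fraud: an agent suddenly transacting far faster than expected."""

    def __init__(self, window_seconds: float = 60.0, max_per_window: int = 5):
        self.window = window_seconds
        self.max_per_window = max_per_window
        self.events: deque = deque()   # timestamps inside the current window

    def record(self, now: float) -> bool:
        """Record a transaction; return True while the agent stays
        within its normal operating envelope."""
        self.events.append(now)
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()      # drop events outside the window
        return len(self.events) <= self.max_per_window
```

Paired with time-bound permissions, this turns a compromised orchestration layer from an open-ended stream of purchases into a bounded, quickly-flagged burst.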

The Race Conditions

Multiple timelines are running simultaneously. Consumer adoption is accelerating (68% using AI shopping tools). Payment networks are shipping infrastructure (Visa Intelligent Commerce, Mastercard Agentic Tokens). Threat actors are building tooling (450% dark web activity surge). Regulatory frameworks are lagging behind all three.

American Express is pushing for agentic standards, according to Payments Dive. The mandate-based framework that Entersekt, Visa, and Mastercard are converging on addresses authorization scope. But the harder problems remain unsolved: cross-platform agent identity (how does a merchant verify an agent’s credentials across different orchestration layers?), liability allocation (who is responsible when an agent operates within its mandate but causes harm?), and regulatory classification (is an agent-initiated transaction subject to the same consumer protection rules as a human-initiated one?).

Nolte acknowledged the limits of the mandate model: it doesn’t eliminate fraud risk, it relocates it. The attack surface moves from individual transactions to mandate creation and agent registration. If weak Know Your Agent (KYA) checks allow a fraudster to register an agent to a victim’s account, the mandate framework becomes a tool for automating fraud rather than preventing it.

What Actually Needs to Happen Next

The payment industry has a window. Agent-initiated commerce is growing but not yet dominant. The mandate frameworks are being prototyped but not yet standardized. The fraud tooling is being researched but not yet weaponized at scale.

Three things need to happen before that window closes. First, agent identity needs standardization across payment networks, not proprietary solutions that fragment the ecosystem. Second, fraud detection models need retraining on agentic behavioral patterns, where “normal” looks like clean, fast, non-interactive transactions rather than messy human browsing. Third, liability frameworks need to be established before the first major agentic commerce fraud incident forces courts to make those decisions retroactively.

The $5 trillion question is whether the security infrastructure scales at the same rate as adoption. If it does, agentic commerce becomes the most significant expansion of the payment ecosystem since mobile wallets. If it doesn’t, the 450% surge in dark web research becomes the preamble to a fraud crisis that makes credit card skimming look quaint.