The OWASP GenAI Security Project published its Q1 2026 Exploit Round-Up Report on April 14, covering the period from January 1 through April 11, 2026. The report documents eight major AI agent and GenAI security incidents, mapping each to both the OWASP Top 10 for LLM Applications 2025 and the OWASP Top 10 for Agentic Applications 2026.
This is the authoritative security standards body’s first comprehensive AI exploit taxonomy for 2026.
What the Report Covers
The eight incidents documented in the round-up span the full range of AI agent attack patterns:
Mexican Government Breach via Claude-Assisted Attack Workflow: Attackers weaponized Anthropic Claude and related AI tooling to automate reconnaissance and exploit development against Mexican government agencies from late December 2025 into January 2026. The operation exposed approximately 150 GB of tax and voter data. OWASP maps this to LLM06 (Excessive Agency), LLM02 (Sensitive Information Disclosure), and ASI02 (Tool Misuse and Exploitation).
OpenClaw Inbox Deletion Incident: An AI agent operating within OpenClaw deleted user inbox contents, demonstrating the risk of excessive autonomy in agent systems.
Meta Internal AI Agent Data Leak: Meta’s internal AI agent exposed data through a privilege or configuration failure.
Vertex AI “Double Agent” Privilege Abuse: Google’s Vertex AI platform was exploited through a privilege escalation pattern.
Claude Code Source Leak and Malware Lure Campaign: Anthropic’s Claude Code was used in a supply chain attack involving source code leakage and malicious payloads.
Mercor/LiteLLM Supply Chain Breach Affecting AI Labs: A supply chain compromise through Mercor and LiteLLM impacted AI research laboratories.
Flowise CVE-2025-59528 Active Exploitation: Remote code execution via CustomMCP configuration in Flowise was actively exploited in the wild.
GrafanaGhost Indirect Prompt Injection: An exfiltration path was found through indirect prompt injection targeting Grafana’s AI integration.
The Taxonomy Framework
The report aligns each incident to two OWASP frameworks simultaneously. As the round-up states: “The AI security landscape from January through early April 2026 demonstrates a clear transition from theoretical risks to real-world exploitation, with attackers and system failures increasingly targeting agent identities, orchestration layers, and supply chains rather than just model outputs.”
The key patterns OWASP identifies across the quarter: prompt injection has evolved from theoretical to practical for enterprise data leakage, misconfigured permissions and excessive autonomy enable cascading failures, and human trust in AI outputs remains a critical weakness.
Timing and Context
The report’s publication coincides with the densest week of AI agent CVE disclosures this year. Since April 14, the security community has published MCPwn (CVE-2026-33032, CVSS 9.8 in nginx-ui MCP), LangChain-ChatChat and Agent Zero simultaneous RCEs (CVE-2026-30617/30624), and the ShareLeak/PipeLeak disclosures in Microsoft Copilot Studio and Salesforce Agentforce. None of these post-April 11 incidents are covered in the Q1 report, which means Q2 2026’s round-up will inherit an even larger attack surface.
The OWASP GenAI Security Project also released three companion documents this week: AI Security Solutions Landscapes for Agentic AI, LLM/GenAI Apps, and AI/Agentic Red Teaming, all for Q2 2026, according to the project homepage. Separately, OWASP collaborated with SANS Institute and Cloud Security Alliance on an emergency strategy briefing warning that “AI-driven vulnerability discovery tools can now generate working exploits at a rate that outpaces organizational patch cycles.”
Dark Reading also reported on the broader OWASP GenAI Security Project updates this week, noting the project’s expanded coverage of 21 potential data issues caused by AI systems, including sensitive data leakage, exposure of agent identities and credentials, and unsanctioned data flows due to shadow AI.
For enterprise security teams, the Q1 report is the compliance reference for “what should our AI security team have been tracking?” The answer, per OWASP: everything from AI-assisted nation-state attacks to supply chain compromises to production agent data leaks across the world’s largest technology platforms.