General Analysis, a San Francisco startup building security infrastructure for agentic AI, closed a $10 million seed round on April 29, 2026. Altos Ventures led the round with participation from 645 Ventures, Menlo Ventures, Y Combinator, and unnamed strategic investors, according to BusinessWire.

The company was founded by Rez Havaei (formerly NVIDIA and Cohere), Maximilian Li (Harvard AI safety), and Rex Liu (Caltech ML research). Their thesis: securing AI agents is a fundamentally different discipline from traditional cybersecurity because agentic systems behave non-deterministically. Reading code doesn’t predict how they fail.

The $10 Million Fabricated Perks Test

General Analysis demonstrated its approach in March by deploying an adversarial agent against 50 live customer service AI bots. The adversarial agent convinced the targets to hand over fabricated perks totaling more than $10 million: million-dollar gift cards, years of free home security, and other concessions. Each engagement took roughly three minutes. Only five of 55 bots tested refused, according to BusinessWire.

That kind of red-teaming, paired with the defenses it informs, is what General Analysis sells to enterprise customers before their agents go live. The company says it already works with enterprise clients in support and finance whose products reach hundreds of millions of users.

Supabase Cursor Vulnerability

Last summer, General Analysis researchers discovered that a widely used Supabase integration in Cursor, a code generation agent, could be hijacked by a single malicious support ticket. The attack tricked the agent into leaking an entire private database. Simon Willison, who coined the term “prompt injection,” called it a case of the “lethal trifecta”: an AI system that holds private data, ingests untrusted content, and can communicate externally.

The Security Measurement Problem

“We hear from security teams that they want agents that are secure by design,” Havaei said in the press release. “What that often turns into in practice is a stack of isolation layers and ad hoc context restrictions that makes a system feel more controlled. Those measures either fail to eliminate the underlying vulnerability or constrain the agent enough to limit its usefulness.”

Co-founder Li framed security as empirical rather than architectural. “You cannot prove an agent is safe. You can only measure how often it fails, and how badly, and drive both numbers down,” Li said in the release.

The Emerging Category

General Analysis enters a crowded agentic AI security space. Aviatrix launched its AI Agent Containment Platform this week. Cequence shipped Agent Personas for granular privilege controls. The Cloud Security Alliance and Token Security recently reported that 65% of organizations experienced cybersecurity incidents caused by uncontrolled AI agents in the past 12 months. The demand signal is clear: enterprises deploying agents at scale need testing infrastructure that measures real failure modes, not compliance checklists.