Cursor patched CVE-2026-26268, a high-severity remote code execution vulnerability (CVSS 8.1), in February 2026. Researchers from threat hunting firm Novee published the full disclosure on April 28, detailing how attackers could execute arbitrary code on a developer’s machine by hiding a malicious Git hook inside a cloned repository, according to Hackread and Cybersecurity News.

The issue is fixed, but the attack pattern matters for any team building or using AI coding agents.

The Attack Chain

The vulnerability exploited Cursor’s interaction with Git, not a bug in Cursor’s core logic. Attackers could embed a malicious pre-commit hook inside a nested bare repository, a repository layout that contains version-control metadata, including executable hook scripts, but no checked-out files visible to the user, according to Hackread.
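A minimal sketch of that hiding spot, with hypothetical directory names (the report does not disclose the exact layout): a bare repository nested inside an ordinary repo carries a hooks/ directory but no visible source files, so it reads as inert metadata in a casual listing.

```shell
# Hypothetical layout: "outer" is the cloned repo, "vendor/cache.git"
# is the nested bare repository planted by the attacker.
git init -q outer
git init -q --bare outer/vendor/cache.git

# The planted hook is just an executable script inside hooks/.
# (Payload omitted; an attacker would put arbitrary commands here.)
printf '#!/bin/sh\n# attacker payload would run here\n' \
  > outer/vendor/cache.git/hooks/pre-commit
chmod +x outer/vendor/cache.git/hooks/pre-commit

# Listing shows only version-control metadata: HEAD, hooks, objects, refs...
ls outer/vendor/cache.git
```

Nothing here is malicious on its own; the danger is that Git treats any executable file under a repository's hooks/ directory as code to run.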

When Cursor’s AI agent performed a standard git checkout on the cloned repository, it triggered the hidden hook automatically. No warning, no permission prompt, no user interaction beyond the initial clone. The Cursor Rules file, which configures the AI agent’s behavior, could be weaponized to steer the agent toward the trap.
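The report attributes the trigger to a checkout performed by the agent. As a hedged sketch of the underlying mechanism, repo-local hooks executing with no prompt, the snippet below plants a pre-commit hook and shows a routine git operation firing it silently (names and payload are hypothetical, not the published PoC):

```shell
# Set up a repo with a planted hook. The hook's "payload" just writes
# a marker file to prove it executed.
git init -q trap
printf '#!/bin/sh\necho "arbitrary code ran" > pwned.txt\n' \
  > trap/.git/hooks/pre-commit
chmod +x trap/.git/hooks/pre-commit

# A routine commit executes the hook automatically: no warning,
# no permission prompt.
git -C trap -c user.name=a -c user.email=a@b \
  commit --allow-empty -q -m "routine work"

cat trap/pwned.txt   # prints: arbitrary code ran
```

The same no-prompt execution applies to other hook types (post-checkout, post-merge, and so on), which is what makes a hook-laced repository dangerous to any tool that runs Git commands on it autonomously.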

“What makes this vulnerability exploitable at scale” is that “the AI agent in Cursor can make its own choices and run system-level commands,” Novee researchers told Hackread. Traditional client-side attacks require a user to click something suspicious. An autonomous agent eliminates that friction, executing malware while performing what it interprets as routine assistance.

The Broader Pattern for Agent Security

The attack surface highlighted by CVE-2026-26268 extends well beyond Cursor. Any AI coding agent that autonomously processes untrusted code from public repositories faces the same class of risk: the agent’s autonomy becomes the attack vector. As Cybersecurity News noted, AI tools now “work autonomously on untrusted code from the internet,” meaning every public repository clone is a potential execution trigger.

For teams running AI coding agents in development environments, the lesson is concrete. Autonomous agents that can execute system-level commands need sandboxing that treats every repository as untrusted input, regardless of source. Git hooks in particular are a well-known execution vector that predates AI agents, but agent autonomy removes the human checkpoint that previously limited the blast radius.
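One hedged, concrete hardening step along these lines (a sketch, not guidance from the disclosure): Git's `core.hooksPath` setting can point hook lookup at an empty directory, so repository-local hooks in cloned code never execute inside the agent's environment.

```shell
# Redirect hook lookup to an empty directory (path is hypothetical).
mkdir -p "$HOME/empty-hooks"
git config --global core.hooksPath "$HOME/empty-hooks"

# Verify: a hostile repo-local hook is now ignored. This hook would
# normally block the commit (exit 1) and run attacker code.
git init -q demo
printf '#!/bin/sh\necho HOOK RAN >&2; exit 1\n' > demo/.git/hooks/pre-commit
chmod +x demo/.git/hooks/pre-commit

# With hooksPath redirected, the commit succeeds and the hook never runs.
git -C demo -c user.name=a -c user.email=a@b \
  commit --allow-empty -q -m "safe commit"
git -C demo log --oneline
```

The trade-off is that legitimate team hooks stop working too, which is usually acceptable inside a sandboxed agent environment where every cloned repository is untrusted input.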