The week of March 24, 2026 produced three separate attacks on foundational AI agent frameworks within four days. All three have been patched — LangChain, Langflow, and LiteLLM all released fixes in March and early April — but a new analysis by security researcher Deepak Gupta published April 9 argues these incidents aren’t coincidence. They’re the predictable result of frameworks that became critical infrastructure before their security caught up.

The attack cluster hit LangChain, Langflow, and LiteLLM, three of the most widely deployed tools in the AI agent stack. Each exploited a different vector. Together, they exposed a single pattern: the plumbing connecting LLMs to enterprise systems is riddled with vulnerability classes that the broader software industry solved years ago.

The Three Incidents

On March 27, Cyera researchers disclosed three vulnerabilities across LangChain and LangGraph. CVE-2026-34070 (CVSS 7.5) is a path traversal flaw in LangChain’s prompt-loading module allowing arbitrary filesystem access. CVE-2025-68664 (CVSS 9.3), dubbed “LangGrinch,” is a serialization injection in langchain-core that can extract API keys and environment secrets. CVE-2025-67644 (CVSS 7.3) is an SQL injection in LangGraph’s SQLite checkpoint implementation exposing conversation histories.
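Path traversal is the oldest of these classes: a user-supplied filename gets joined to a base directory without checking where the resolved path actually lands. The following is a minimal sketch of the class and its standard fix, not LangChain's actual code; the function and directory names are invented for illustration.

```python
from pathlib import Path

def is_within_base(base_dir: str, user_path: str) -> bool:
    """Return True only if user_path, resolved against base_dir, stays inside it."""
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # Path.is_relative_to requires Python 3.9+.
    return candidate.is_relative_to(base)

# A traversal payload like "../../etc/passwd" resolves outside the
# base directory and must be rejected before any file is opened.
print(is_within_base("/app/prompts", "greeting.txt"))      # True
print(is_within_base("/app/prompts", "../../etc/passwd"))  # False
```

The key detail is resolving the candidate path before the containment check; comparing raw strings misses `..` segments and symlinks.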

LangChain, LangChain-Core, and LangGraph collectively account for over 60 million PyPI downloads per week, according to The Hacker News. Every vulnerability in these core packages ripples through hundreds of downstream libraries and integrations.

Days earlier, Langflow’s CVE-2026-33017 (CVSS 9.3) went from advisory to active exploitation in 20 hours. Sysdig’s Threat Research Team observed attackers building working exploits directly from the advisory description before any public proof-of-concept existed. CISA added it to the Known Exploited Vulnerabilities catalog on March 25. This was the second critical Langflow RCE in the KEV catalog; the first, CVE-2025-3248, exploited the same underlying exec() call in a different endpoint.
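The antipattern underlying both Langflow CVEs is using the interpreter itself to "validate" user-submitted code. The sketch below illustrates the class, not Langflow's actual endpoint (function names are invented): a syntax check via ast.parse inspects the source without ever running it, whereas exec() hands the attacker code execution on the spot.

```python
import ast

def validate_code_unsafe(source: str) -> bool:
    # Vulnerable pattern: "validation" by execution. Any
    # attacker-supplied source runs with server privileges.
    try:
        exec(source)
        return True
    except Exception:
        return False

def validate_code_safe(source: str) -> bool:
    # Safe pattern: parse only. Syntax errors are still caught,
    # but nothing in the payload executes.
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(validate_code_safe("x = 1"))  # True
print(validate_code_safe("def ("))  # False
```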

The LiteLLM incident was the most technically sophisticated. A threat group called TeamPCP compromised LiteLLM’s PyPI publishing pipeline through a poisoned version of Trivy, a popular open-source vulnerability scanner. LiteLLM’s CI/CD pipeline ran Trivy without a pinned version. The compromised scanner exfiltrated LiteLLM’s PyPI publishing token, which TeamPCP used to push two malicious package versions to PyPI. The packages contained a .pth file that auto-executes Python code on interpreter startup, harvesting AWS, GCP, and Azure tokens, SSH keys, and Kubernetes configurations. Approximately 40,000 downloads occurred in the three hours before PyPI quarantined the package.
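The .pth trick deserves a closer look because it is documented CPython behavior rather than an exploit in itself: site.py executes any line in a .pth file that begins with "import", at every interpreter startup. A benign demonstration of the mechanism, using site.addsitedir to simulate what startup does automatically for site-packages directories:

```python
import os
import site
import tempfile

# Create a directory containing a .pth file. Lines beginning with
# "import " are exec()'d by site.py when the directory is processed.
site_dir = tempfile.mkdtemp()
with open(os.path.join(site_dir, "demo.pth"), "w") as f:
    # Benign payload: set an env var to prove the line executed.
    # A malicious package would harvest credentials here instead.
    f.write('import os; os.environ["PTH_DEMO_RAN"] = "1"\n')

site.addsitedir(site_dir)  # interpreter startup does this for real site dirs
print(os.environ.get("PTH_DEMO_RAN"))  # "1"
```

Because the hook fires on every interpreter launch, not on import of the poisoned package, simply installing such a package is enough to compromise the host.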

Path Traversal, Serialization Injection, SQL Injection

Gupta’s analysis makes a point worth sitting with: every vulnerability in this cluster belongs to a well-understood attack class. Path traversal. Deserialization of untrusted data. SQL injection. Unauthenticated code execution via exec(). Supply chain credential theft through CI/CD compromise.
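The SQL injection class is worth one concrete illustration. This is a generic sketch, not LangGraph's actual checkpoint schema or code (table and column names are invented), showing why a string-built query leaks every row while a parameterized query does not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, state TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('t1', 'secret-history')")

thread_id = "x' OR '1'='1"  # attacker-controlled identifier

# Vulnerable pattern: the value is interpolated into the SQL text,
# so the injected OR clause matches -- and dumps -- every checkpoint.
leaked = conn.execute(
    f"SELECT state FROM checkpoints WHERE thread_id = '{thread_id}'"
).fetchall()
print(leaked)  # [('secret-history',)]

# Safe pattern: a placeholder keeps the value out of the SQL grammar.
safe = conn.execute(
    "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
).fetchall()
print(safe)  # []
```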

These are not novel attack patterns. They are textbook entries in OWASP’s Top 10. The difference is where they’re appearing. LangChain sits between applications and LLM providers. Langflow instances typically hold API keys for OpenAI, Anthropic, AWS, and database connections. LiteLLM routes requests across every major LLM provider. Compromising any one of these tools provides lateral access to the entire AI supply chain flowing through it.

As Cyera noted: “LangChain doesn’t exist in isolation. It sits at the center of a massive dependency web that stretches across the AI stack.”

The Patch Status

All three sets of vulnerabilities have patches available. CVE-2026-34070 is fixed in langchain-core 1.2.22 and later. CVE-2025-68664 is fixed in langchain-core 0.3.81 or 1.2.5 and later, depending on release line. CVE-2025-67644 is fixed in langgraph-checkpoint-sqlite 3.0.1. Langflow users should upgrade to version 1.9.0 or later. LiteLLM released version 1.83.0 through a rebuilt CI/CD pipeline.
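One quick way to confirm an environment is on patched releases is to compare installed versions against the minimums above. This is a hypothetical audit snippet, not an official tool: it handles only plain numeric versions, and it treats the two fixed langchain-core release lines separately (on the 1.x line, 1.2.22 is required to cover CVE-2026-34070 as well).

```python
from importlib.metadata import PackageNotFoundError, version

def parse(v: str) -> tuple:
    # Simplistic parser: handles plain numeric versions like "1.2.22";
    # pre-release suffixes would need the third-party "packaging" library.
    return tuple(int(part) for part in v.split("."))

# Patched floors from the advisories; one entry per fixed release line.
MINIMUMS = {
    "langchain-core": [(0, 3, 81), (1, 2, 22)],
    "langgraph-checkpoint-sqlite": [(3, 0, 1)],
    "langflow": [(1, 9, 0)],
    "litellm": [(1, 83, 0)],
}

for pkg, floors in MINIMUMS.items():
    try:
        installed = parse(version(pkg))
    except PackageNotFoundError:
        continue  # not installed, nothing to patch
    # Patched only if the install meets the floor for its own major line.
    ok = any(installed >= floor and installed[0] == floor[0] for floor in floors)
    print(pkg, "OK" if ok else "NEEDS PATCH", installed)
```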

The Infrastructure Security Question

The timing of this attack cluster coincides with LangChain’s launch of Deep Agents Deploy, its new managed agent deployment platform. The framework is simultaneously positioning itself as production infrastructure for autonomous agents while patching path traversal and serialization flaws in its core libraries.

That tension defines the current moment in agent infrastructure. Frameworks built for rapid prototyping are being adopted as enterprise production systems. The security review processes that govern traditional application development often don’t cover AI tooling. And as Gupta’s analysis documents, the security tools meant to catch these issues, like Trivy, can themselves become attack vectors.

Organizations running production agents on LangChain, Langflow, or LiteLLM should patch immediately, audit any workflows that pass untrusted data through serialization layers, pin all CI/CD dependencies to specific versions, and rotate credentials if logs show any suspicious activity during the March 24–27 window.
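For Python dependencies, the pinning recommendation is strongest when combined with pip's hash-checking mode, so that even a re-published artifact under the same version number fails to install. A sketch of the pattern; the digest below is a placeholder, not a real hash:

```text
# requirements.txt -- install with: pip install --require-hashes -r requirements.txt
# Pin the exact version AND its artifact hash (placeholder digest shown).
litellm==1.83.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Real digests can be generated with a lockfile tool such as pip-tools (`pip-compile --generate-hashes`); the same principle applies to pinning CI scanners like Trivy to an exact release rather than "latest".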