OpenAI released GPT-5.5-Cyber in limited preview to cybersecurity defenders responsible for securing critical infrastructure. The model is a variant of GPT-5.5 trained to be more permissive on security-related tasks, enabling workflows like red teaming, penetration testing, and controlled exploit validation that the standard model’s safeguards would block.

Three Tiers of Cyber Access

OpenAI’s Trusted Access for Cyber (TAC) program now operates on three levels:

- Base GPT-5.5 applies standard safeguards for general-purpose use.
- GPT-5.5 with TAC provides more precise safeguards for verified defensive work, covering secure code review, vulnerability triage, malware analysis, detection engineering, and patch validation.
- GPT-5.5-Cyber sits at the top: the most permissive tier, designed for specialized dual-use workflows where defenders need to validate exploitability in controlled environments.
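The tiered access model described above can be sketched as a simple policy table. This is an illustrative assumption only: the model identifiers, workflow names, and gating function below are hypothetical and do not reflect OpenAI's actual API or enforcement logic.

```python
# Hypothetical sketch of a three-tier cyber access policy.
# Tier and workflow names are illustrative, not a real OpenAI API.

TIER_WORKFLOWS = {
    "gpt-5.5": {"general_purpose"},
    "gpt-5.5-tac": {
        "general_purpose",
        "secure_code_review",
        "vulnerability_triage",
        "malware_analysis",
        "detection_engineering",
        "patch_validation",
    },
}
# The top tier is a superset of TAC plus controlled exploit validation.
TIER_WORKFLOWS["gpt-5.5-cyber"] = TIER_WORKFLOWS["gpt-5.5-tac"] | {
    "exploit_validation",  # controlled environments only
}

# Workflows the announcement says remain blocked at every tier.
ALWAYS_BLOCKED = {
    "credential_theft",
    "persistence_mechanisms",
    "malware_deployment",
    "third_party_exploitation",
}

def is_permitted(tier: str, workflow: str) -> bool:
    """Return True if the given access tier permits the workflow."""
    if workflow in ALWAYS_BLOCKED:
        return False
    return workflow in TIER_WORKFLOWS.get(tier, set())
```

The key property of the sketch is that the blocked list is checked before any tier lookup, mirroring the article's point that certain abuse categories stay off-limits even at the most permissive level.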

“GPT-5.5-Cyber lets a smaller set of partners study advanced workflows where specialized access behavior may matter,” OpenAI stated in its announcement. Safeguards continue to block credential theft, persistence mechanisms, malware deployment, and exploitation of third-party systems.

OpenAI also said that beginning June 1, 2026, individual members accessing the most permissive cyber models will be required to enable Advanced Account Security with phishing-resistant authentication.

Competitive Context

The release comes roughly one month after Anthropic debuted Claude Mythos Preview, a model capable of autonomously discovering thousands of previously unknown software vulnerabilities. Anthropic limited Mythos access to select companies through its Project Glasswing initiative and briefed senior members of the Trump administration on the model’s capabilities, CNBC reported. Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent met with major bank CEOs to discuss Mythos, and Vice President JD Vance held a call with tech CEOs ahead of its release.

OpenAI’s approach differs in positioning. GPT-5.5-Cyber is “not intended to be a major step up in terms of cyber capability” beyond GPT-5.5, according to CNBC. The value lies in fewer refusals on legitimate defensive tasks, not in new capability. Anthropic’s Mythos, by contrast, introduced autonomous vulnerability discovery as a fundamentally new capability class.

Autonomous Security Agents as Competitive Frontier

Both companies now compete on enabling AI-driven security automation for defensive teams. The pattern is consistent with a broader shift in which AI vendors compete on agent infrastructure rather than raw model benchmarks. For security teams building autonomous defense agents, the question is whether permissive access to existing models (OpenAI’s approach) or purpose-built vulnerability discovery models (Anthropic’s approach) delivers more value in production workflows.

Both companies have engaged government agencies to deploy or evaluate their security AI systems in controlled settings, Benzinga noted. The convergence of AI model providers and national security infrastructure is accelerating, with vetted access programs replacing open availability as the distribution model for the most capable security tools.