OpenAI is finalizing a dedicated cybersecurity product for restricted release to select partners, Axios reported on April 9. The product is separate from ChatGPT and targets both offensive and defensive security operations. Axios later corrected its initial headline to clarify that OpenAI is releasing a cybersecurity product to select partners, not staggering the release of a new model.

The product builds on OpenAI’s existing “Trusted Access for Cyber” pilot program, which launched in February 2026 after the release of GPT-5.3-Codex. Security Boulevard reported that the pilot provides vetted organizations with “permissive, high-capability models designed specifically to accelerate defensive research,” backed by $10 million in API credits.

The Competitive Context

The timing follows Anthropic’s restricted rollout of Claude Mythos Preview, announced earlier this week. Anthropic withheld Mythos from public release due to its autonomous vulnerability discovery capabilities, instead launching Project Glasswing with 40+ enterprise partners and $100 million in compute credits. OpenAI’s Codex head Thibault Sottiaux hinted on social media that OpenAI was working on comparable capabilities, replying “Uhm” to a post claiming it would be “months before we use a model of this level of capability.”

Both companies are adopting what Security Boulevard described as a “defensive blueprint” modeled on decades-old responsible disclosure practices in cybersecurity: give defenders a head start before capabilities become widely available.

The Capability Threshold Question

Industry leaders cited by Security Boulevard questioned whether restricted access can hold. Wendi Whitmore, chief security intelligence officer at Palo Alto Networks, warned that similar capabilities “will inevitably leak or be replicated in open-source models within weeks.” Rob T. Lee of the SANS Institute noted that the ability to find flaws in aging codebases is a “fundamental feature of modern LLMs that cannot be easily unlearned.”

OpenAI is also working on its next flagship model, codenamed "Spud." It is unclear whether Spud will carry the same cybersecurity capabilities or face the same access restrictions. India Today reported that OpenAI President Greg Brockman has described Spud as the product of two years of research and "a big step towards artificial general intelligence," though it is not confirmed whether the cybersecurity product Axios reported on is related to Spud.

For security teams building agent-powered defensive tooling, the practical takeaway is that both OpenAI and Anthropic now offer, or plan to offer, restricted-access cybersecurity models. The question is no longer whether AI can autonomously discover vulnerabilities. It is who gets access first, and how long the capability gap between restricted partners and the broader market will last.