Sam Altman posted on X at 2:33 a.m. on May 2: “you can sign in to openclaw with your chatgpt account now and use your subscription there! happy lobstering.” The tone was casual. The strategic significance is not. OpenAI has made its ChatGPT subscription the authentication and billing layer for OpenClaw, the open-source AI agent framework with 346,000 GitHub stars and 3.2 million users. ChatGPT Plus subscribers can now log in via OAuth, access GPT-5.4 through the Codex endpoint, and run autonomous AI agents on their own hardware for $23 per month total.
The move arrives exactly 28 days after Anthropic did the opposite. On April 4, Anthropic blocked Claude Pro and Max subscribers from using their flat-rate plans with OpenClaw and all other third-party agent frameworks. Two companies looked at the same product, the same 3.2 million users, and reached opposite conclusions about what to do with them.
What OpenAI Actually Did
The integration makes ChatGPT the default authentication and billing backend for OpenClaw. Users authenticate via OAuth, the same login flow they use for ChatGPT on the web. Their existing subscription covers usage. The agent runs locally on the user’s own hardware, sends inference requests to GPT-5.4 through the Codex endpoint, and operates autonomously across WhatsApp, Telegram, Signal, Discord, Slack, iMessage, and Microsoft Teams.
OpenAI’s connection to OpenClaw runs through a hire. Peter Steinberger, the Austrian developer who built OpenClaw in a Madrid cafe in November 2025 and previously sold a software company for $100 million, joined OpenAI in February to “drive the next generation of personal agents.” OpenClaw was moved to an independent foundation with OpenAI’s continued support and funding. The framework remains open-source and compatible with multiple model providers. But with Anthropic blocking access and OpenAI enabling it, the practical effect is that 3.2 million users now have a frictionless path to becoming ChatGPT subscribers.
Why Anthropic Blocked It
Anthropic’s rationale was economic. Boris Cherny, Head of Claude Code at Anthropic, wrote on X on April 3: “We’ve been working hard to meet the increase in demand for Claude, and our subscriptions weren’t built for the usage patterns of these third-party tools. Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API.”
The technical argument had merit. Anthropic’s first-party tools like Claude Code and Claude Cowork are built to maximize prompt cache hit rates, reusing previously processed text to reduce compute costs. Third-party harnesses like OpenClaw bypass these optimizations. Cherny acknowledged the gap directly: “I did put up a few PRs to improve prompt cache hit rate for OpenClaw in particular, which should help for folks using it with Claude via API/overages.”
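The cache argument can be made concrete with a back-of-the-envelope model. The sketch below is illustrative only: the prompt size, base price, and cache-read discount are assumptions, not Anthropic's actual rates, but they show why a harness that preserves a stable prompt prefix is dramatically cheaper to serve than one that does not.

```python
# Illustrative sketch: effect of prompt cache hit rate on inference cost.
# All prices and token counts are hypothetical, not Anthropic's actual rates.

def cost_per_request(prompt_tokens: int, cache_hit_rate: float,
                     base_price: float, cached_price: float) -> float:
    """Blended input cost (in dollars) when a fraction of prompt tokens
    hit the cache. Prices are per million tokens."""
    cached = prompt_tokens * cache_hit_rate
    fresh = prompt_tokens - cached
    return (cached * cached_price + fresh * base_price) / 1_000_000

# Assume a 50,000-token agent prompt, $3 per million input tokens, and a
# 90% discount for cache reads (all three figures are assumptions).
tuned = cost_per_request(50_000, cache_hit_rate=0.9,
                         base_price=3.00, cached_price=0.30)
naive = cost_per_request(50_000, cache_hit_rate=0.1,
                         base_price=3.00, cached_price=0.30)
print(f"well-cached harness:      ${tuned:.4f}/request")
print(f"cache-unfriendly harness: ${naive:.4f}/request")
```

Under these assumed numbers, the cache-unfriendly harness costs nearly five times as much per request, which is the gap Cherny's PRs were aimed at closing.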
The numbers explain the urgency. Growth marketer Aakash Gupta calculated on X that a single OpenClaw agent running autonomously for one day could burn $1,000 to $5,000 in API costs. “Anthropic was eating that difference on every user who routed through a third-party harness,” Gupta wrote. A $20/month Pro subscriber generating $100+/day in compute is a loss leader without the “leader” part.
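The arithmetic behind that "loss leader" line is stark. Using only the figures cited above (the $20/month Pro rate and Gupta's $1,000 to $5,000 daily estimate for a continuously running agent):

```python
# The subsidy math from the paragraph above, using the article's figures:
# a $20/month Claude Pro plan against Gupta's $1,000-$5,000/day estimate
# for one autonomous agent.

subscription_revenue = 20.00                          # $/month, Pro flat rate
daily_cost_low, daily_cost_high = 1_000, 5_000        # $/day, Gupta's range
days = 30

loss_low = daily_cost_low * days - subscription_revenue
loss_high = daily_cost_high * days - subscription_revenue
print(f"monthly loss per 24/7 agent user: "
      f"${loss_low:,.0f} to ${loss_high:,.0f}")
```

Even a single subscriber running one agent around the clock at the low end of Gupta's range would cost Anthropic roughly 1,500 times that subscriber's monthly payment.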
Third-party tool access remained available through pay-as-you-go “extra usage” billing and the direct API — only the flat-rate subscriber path closed. The company offered a one-time credit equal to one month’s subscription and up to 30% off pre-purchased usage bundles to soften the transition. But the flat-rate all-you-can-eat access that made Claude the default model for OpenClaw’s power users ended on April 4 at noon Pacific.
The Steinberger Factor
The relationship between Anthropic and OpenClaw’s creator deteriorated publicly. Steinberger, who is now an OpenAI employee, posted on X after the subscription ban: “Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source.” He appeared to be referencing Claude Dispatch, a feature that lets users remotely control agents and assign tasks, which rolled out weeks before the pricing change.
Six days later, Anthropic temporarily suspended Steinberger’s account entirely. He posted a screenshot of a message citing “suspicious activity.” The ban was reversed within hours after the post went viral, with an Anthropic engineer publicly offering to help. Steinberger explained he was only using Claude for testing to ensure OpenClaw updates would not break functionality for Claude users. When one commenter suggested he should have joined Anthropic instead of OpenAI, Steinberger replied: “One welcomed me, one sent legal threats.”
The exchange illuminated a dynamic that corporate press releases tend to obscure. Anthropic had filed a trademark complaint against the original “Clawdbot” name (a play on Claude). Steinberger renamed the project twice before settling on OpenClaw. When OpenAI then hired him, the competitive tension became personal.
The Economics of Opposite Bets
OpenAI is subsidizing agent usage through its subscription tier. The bet is that the lifetime value of a subscriber who enters through OpenClaw exceeds the compute cost of serving that subscriber’s agent workloads. The Next Web compared this to the logic that drove mobile carriers to subsidize smartphones: absorb the hardware cost to lock in the monthly billing relationship.
The financial positions of both companies frame the risk tolerance differently. OpenAI’s annualized revenue run rate exceeded $25 billion as of March 2026, with 50 million ChatGPT subscribers and 900 million weekly active users. Anthropic’s run rate hit $30 billion by April 2026, up from $9 billion at the end of 2025, according to Axios. The Atlantic reported that Anthropic’s run rate more than doubled, from $14 billion to $30 billion, in just two months, driven primarily by Claude Code demand.
Both companies can afford to experiment. But their experiments point in different directions. OpenAI is optimizing for distribution: get as many users as possible into the ChatGPT subscription funnel, even if the per-user economics are temporarily negative. Anthropic is optimizing for unit economics: charge what usage actually costs, even if it means losing users to competitors.
The real-world cost data for OpenClaw users clarifies the stakes. According to ShareUHack’s analysis, post-Anthropic-ban costs for Claude access via API range from $3 to $60 per month for light usage, but heavy autonomous agent workloads can run $1,500 to $5,000+ monthly on premium models. OpenAI’s $23/month flat rate for the same class of workloads is a dramatic price compression, at least for now.
The Distribution Thesis
The strategic logic goes beyond subscriber acquisition. OpenClaw is model-agnostic. It supports Claude, GPT, DeepSeek, Gemini, open-weight models through Ollama, and more. By making ChatGPT the default authentication layer, OpenAI is not just selling subscriptions. It is positioning GPT-5.4 as the path of least resistance for 3.2 million users who previously chose their model provider based on quality, price, or habit.
Before April 4, Claude was the most popular model among OpenClaw power users, according to multiple developer community discussions and Steinberger’s own acknowledgment. Anthropic’s subscription ban created an opening. OpenAI’s integration fills it. The timing is not coincidental. When a commenter asked Steinberger why he was testing Claude at all instead of using OpenAI’s models, he replied: “Working on that.”
The conversion funnel is straightforward. A user who has been running OpenClaw with Claude on a pay-as-you-go API, paying variable costs that could spike unpredictably, now sees a $23/month flat-rate alternative that covers the same workload. The switching cost is a login screen. OpenAI absorbed the complexity of OAuth integration, subscription management, and rate limiting. The user just clicks “Sign in with ChatGPT.”
Security Costs of Scale
The integration carries risk proportional to its scale. OpenClaw’s rapid growth has been accompanied by a documented pattern of security incidents. A critical remote code execution vulnerability (CVE-2026-25253) disclosed in late January allowed any website to silently connect to an agent’s local server through an unvalidated WebSocket. Security researchers found 824 confirmed malicious entries out of 10,700 available skills on ClawHub, OpenClaw’s skills marketplace, with 335 traced to a single coordinated attack campaign. Over 30,000 OpenClaw instances were found exposed on the public internet without authentication, according to The Next Web.
Patches shipped in updates released following the January 2026 disclosure, and current versions are no longer vulnerable. But a significant portion of the installed base runs older software. OpenAI’s decision to route its brand, billing system, and user credentials through an open-source platform with this security history means that a future OpenClaw vulnerability is now also an OpenAI credential security incident.
The Model Provider Landscape After the Split
The divergence between OpenAI and Anthropic on OpenClaw access creates a new competitive dynamic in model distribution. For the first time, a major model provider is using an independent open-source agent framework as a subscription acquisition channel. Another major provider has explicitly rejected that same channel.
The impact will be measurable. OpenClaw’s 3.2 million users will sort themselves: those who prioritize flat-rate predictability will move toward ChatGPT. Those who prefer Claude’s capabilities and are willing to pay variable API costs will stay with Anthropic. A third group, the cost-sensitive users building on open-weight models like DeepSeek, Kimi K2.5, or MiniMax through providers like DeepInfra and Ollama, may ignore both.
Nvidia, Tencent, Meta, and Microsoft have all built their own integrations with OpenClaw. Nvidia’s NemoClaw adds enterprise security hardening. Tencent’s ClawPro targets the Chinese market. Microsoft’s Agent 365, which reached general availability on May 1, provides governance controls for OpenClaw agents running on Windows. The agent framework has become infrastructure that every major technology company must have a strategy for, and “let users authenticate with our subscription” is now one available strategy.
The Subsidy Clock
OpenAI’s flat-rate model only works if the average revenue per OpenClaw subscriber exceeds the average compute cost over time. If autonomous agent workloads are as expensive as Anthropic’s data suggests, and a meaningful fraction of the 3.2 million users convert to paid ChatGPT subscribers who run agents 24/7, OpenAI could be looking at a significant short-term margin hit.
The bet is that this is temporary. Inference costs are declining as hardware improves and optimization techniques mature. OpenAI’s scale gives it better unit economics on inference than smaller providers. And the subscriber relationship, once established, generates revenue across all of OpenAI’s products, not just agent workloads. A user who signs up for ChatGPT through OpenClaw may also use ChatGPT directly, use the API for other projects, and upgrade to higher tiers.
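The "temporary subsidy" argument has a clock attached, and a toy model makes it visible. Every parameter below is an assumption (the current per-user cost and the rate of inference-cost decline are not figures from the article, only the $23/month price is), but the structure of the bet is exactly this calculation:

```python
# Toy model of the "subsidy clock": if serving cost declines at a steady
# monthly rate, how long until the $23/month flat rate covers a heavy
# agent user? The starting cost and decline rate are assumptions.
import math

revenue = 23.00      # $/month, ChatGPT flat rate cited in the article
cost_now = 200.00    # assumed current monthly compute cost, heavy user
decline = 0.10       # assumed 10% monthly decline in inference cost

# Smallest integer t with cost_now * (1 - decline)**t < revenue:
months = math.ceil(math.log(revenue / cost_now) / math.log(1 - decline))
print(f"break-even after ~{months} months")
```

Under these assumed inputs the subsidy runs for about 21 months before a heavy user turns profitable; a slower cost decline or heavier workloads stretch that horizon, which is precisely the risk Anthropic declined to take.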
Anthropic made the opposite calculation. Protecting margins now, even at the cost of losing distribution, preserves the resources needed to fund model development. Anthropic’s $30 billion run rate is impressive, but the company is also spending aggressively on compute infrastructure, chip diversification (it is reportedly in talks with London-based Fractile for inference chips), and the Mythos model program that has drawn White House attention. Every dollar saved on subsidized agent compute is a dollar available for these strategic priorities.
Neither bet is obviously wrong. OpenAI is playing the platform game, where the winner is the company with the most users locked into its billing relationship. Anthropic is playing the technology game, where the winner is the company that builds the best model and charges what it costs. The next twelve months will show which thesis holds.