This is a follow-up to NCT’s April 4 coverage of Anthropic cutting OpenClaw off from Claude subscriptions. The developments below are new: the founder’s account suspension, reinstatement, and VC commentary from the All-In podcast.
Anthropic temporarily suspended OpenClaw founder Peter Steinberger’s personal Claude account on April 11, citing “suspicious signals” and a usage policy violation. The account was reinstated within hours. The suspension came one day after Anthropic launched Claude Managed Agents, a managed platform for deploying Claude-powered agents that directly competes with OpenClaw’s core function.
The Suspension and Reinstatement
Steinberger shared a screenshot of Anthropic’s suspension email on social media. “An internal investigation of suspicious signals associated with your account indicates a violation of our Usage Policy. As a result, we have revoked your access to Claude,” the email read, per The Hans India.
Steinberger responded publicly: “Yeah folks, it’s gonna be harder in the future to ensure OpenClaw still works with Anthropic models,” according to NewsBytes. Hours later, he confirmed the account was restored: “My account got reinstated. Thanks, folks!”
Steinberger told NewsBytes that the ban may have been triggered by his work on a Claude CLI fallback feature. “I’ve been working on getting the claude -p fallback feature working after Boris confirmed that it’s a classifier bug and not intentional. We’re still blocked and it seems that got me banned too,” he said. Boris refers to Anthropic engineer Boris Cherny, who announced the original subscription restrictions on April 3.
Anthropic has not publicly explained the suspension beyond the boilerplate email.
The Timeline
The sequence of events over the past 10 days tells a story that is difficult to read as coincidental:
April 3: Anthropic’s Boris Cherny announces on X that Claude subscriptions will no longer cover third-party tool usage, including OpenClaw, starting April 4. Users must switch to pay-per-token API billing or purchase usage bundles.
April 10: Anthropic launches Claude Managed Agents, a hosted platform for deploying autonomous Claude-powered agents with built-in tool use, persistence, and monitoring. The product directly overlaps with OpenClaw’s value proposition: deploying Claude agents across messaging platforms without managing infrastructure.
April 11: Steinberger’s Claude account is suspended, then reinstated hours later.
April 12: OpenClaw’s documentation notes that “OpenClaw-style Claude CLI usage is allowed again” after “Anthropic staff told us” the practice was sanctioned, per the official docs. The phrasing suggests the situation remains ambiguous.
VCs Call It Suppression
On the All-In podcast (recorded around April 11-12), venture capitalist Jason Calacanis described the situation in explicit terms, saying that killing OpenClaw “is the number one goal” in the LLM space, Benzinga reported. Calacanis added that OpenAI’s acquisition of Steinberger (who now works on next-generation personal AI agents at OpenAI while maintaining his role at the OpenClaw Foundation) was designed “to subvert the open-source project.”
Steinberger has addressed the dual-role question directly. “You need to separate two things,” he told The Hans India. “My work at the OpenClaw Foundation, where we wanna make OpenClaw work great for any model provider, and my job at OpenAI to help them with future product strategy.”
The Platform Control Pattern
This is not the first time an AI provider has tightened access to third-party tools that compete with its own products. The pattern is becoming recognizable: a third-party integration gains traction, the platform launches a competing product, then the platform adjusts policies or pricing in ways that make the third-party tool harder to use.
What makes the Anthropic/OpenClaw case instructive is the speed of the sequence. Less than a week separated the subscription cutoff from the competing product launch. The founder’s account suspension, even if triggered by a legitimate classifier bug, landed in the single worst news cycle possible for Anthropic’s optics.
For developers building on closed AI platforms, the lesson is structural. Any integration that routes significant usage through a provider’s infrastructure, and that the provider could replace with a first-party product, exists at the provider’s discretion. Multi-model support and open-source alternatives are not just technical preferences. They are risk management.
The OpenClaw documentation now lists Anthropic models as functional again. How long that lasts depends on decisions made inside Anthropic, not inside OpenClaw.