The Information’s AI Agenda newsletter reported today that Anthropic’s Claude agents now match many of the features that made OpenClaw a sensation, and that the gap between the two is closing week by week. The assessment arrives ten weeks after Anthropic launched Cowork, the first product in a shipping cadence that has produced four distinct OpenClaw-competitive capabilities in rapid succession.

The timeline tells the story more clearly than any single launch announcement could.

The Ten-Week Timeline

January 12: Cowork launches. Anthropic introduced Claude Cowork as its agentic productivity layer, giving Claude persistent task threads and the foundation for autonomous work.

March 17: Dispatch ships. Anthropic released Dispatch, a mobile feature that lets users send tasks to Claude from their phone while the desktop agent executes them. This directly replicated OpenClaw’s core value proposition: message an AI from your phone, have it do work on your machine. CNBC reported that users can “message Claude a task from a phone, and the AI agent will then complete that task.”

March 20-21: Claude Code Channels. Anthropic launched native Telegram, Discord, iMessage, and Slack connectors inside Claude Code. VentureBeat called it “an OpenClaw killer,” noting that the feature gave Claude “the same basic functionality — the ability for users to message it from popular third-party apps Discord and Telegram, and have it message them back when it finishes a task.”

March 24: Computer use. Claude gained the ability to directly control a user’s Mac — clicking buttons, opening applications, typing into fields, and navigating software autonomously. Anthropic built a priority system where Claude checks for direct connectors first, falls back to browser control, and uses screen-level interaction as a last resort.
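Anthropic has not published the implementation of this priority system, but the order described above can be sketched as a simple tiered dispatch. All names here (`AccessTier`, `choose_tier`) are illustrative, not Anthropic's actual API:

```python
from enum import Enum, auto

class AccessTier(Enum):
    CONNECTOR = auto()   # direct API integration: fastest, most reliable
    BROWSER = auto()     # drive the app through its web interface
    SCREEN = auto()      # click and type on the desktop as a last resort

def choose_tier(app: str, connectors: set[str], browser_apps: set[str]) -> AccessTier:
    """Pick the cheapest access path that can reach the target app,
    in the priority order: connector, then browser, then screen."""
    if app in connectors:
        return AccessTier.CONNECTOR
    if app in browser_apps:
        return AccessTier.BROWSER
    return AccessTier.SCREEN
```

The point of the ordering is the cost gap the documentation quote below describes: an API call takes seconds, while screen-level interaction is slower and more error-prone, so it is reserved for apps with no better path.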

Between these headline features, Anthropic also added plugins, admin controls, and scheduled tasks. As the Towards AI newsletter documented: “A paid Claude Cowork user can now message an agent from their phone, let it work on their machine, connect it to dozens of apps, and hand it the mouse to the full computer when connector or API access isn’t available.”

What Anthropic Built That OpenClaw Doesn’t Have

The Towards AI analysis identified the core strategic difference: Anthropic is wrapping OpenClaw’s primitives in an enterprise permission model. “Open source found the primitive,” the newsletter wrote. “Anthropic wrapped it in the permission model that lets a company actually deploy it.”

Specific advantages Anthropic has shipped that OpenClaw lacks natively:

  • Connector-first architecture. Claude checks for direct API integrations before falling back to screen control. VentureBeat reported that Anthropic’s documentation explains: “pulling messages through your Slack connection takes seconds, but navigating Slack through your screen takes much longer and is more error-prone.”
  • Per-app permission gating. Claude requests access before touching any new application. Users can stop it at any time.
  • Prompt injection scanning. Built into the agent loop at the model level.
  • Admin controls. Enterprise administrators can manage channels and maintain sender allow-lists, per Silicon Republic.

What OpenClaw Still Has That Claude Doesn’t

Three structural advantages that Anthropic’s sprint hasn’t addressed:

Model agnosticism. OpenClaw connects to any model provider. If Anthropic raises prices, changes rate limits, or has downtime, Claude users have no fallback. OpenClaw users can route to Kimi K2.5, MiniMax, or any OpenAI model without changing their setup.
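OpenClaw's actual router isn't quoted in any of the coverage, but the fallback behavior described above, trying a preferred provider and routing around rate limits or downtime, can be sketched generically. The function and provider names are illustrative; the model names in the test come from the article's own examples:

```python
def route(prompt: str, providers: list[str], call):
    """Try providers in preference order; on failure (rate limit,
    outage, price-driven removal from the list), fall through to
    the next one without changing the caller's setup."""
    errors = {}
    for name in providers:
        try:
            return call(name, prompt)
        except Exception as exc:
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {errors}")
```

A single-provider product has no equivalent of this loop, which is the structural point: the fallback list is the user's hedge against any one lab's pricing or uptime.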

Infrastructure flexibility. Dispatch requires the Claude Desktop app running on a physical Mac. OpenClaw runs on VPS instances, Raspberry Pis, cloud servers, or any Linux box — headless, 24/7, no desktop GUI required.

Community extensibility. OpenClaw’s skill registry is open and user-contributed. Anyone can write a plugin. Claude’s connectors are Anthropic-controlled.
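An open, user-contributed skill registry of the kind described is commonly built as a decorator-based plugin table. This is a generic sketch of that pattern, not OpenClaw's actual registry API:

```python
SKILLS: dict[str, callable] = {}

def skill(name: str):
    """Decorator that registers a community-contributed skill under
    a name the agent can look up at runtime. Anyone can add one."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("summarize")
def summarize(text: str) -> str:
    # Toy skill body for illustration only.
    return text[:100]
```

Because registration is just a function call, third parties can ship skills as ordinary packages, which is the extensibility a vendor-controlled connector catalog forgoes.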

The Market Signal

The speed of Anthropic’s response reveals what OpenClaw’s 333,000 GitHub stars actually proved: that demand for always-on, message-based AI agents is large enough to justify a well-funded lab sprinting after it with serious resources.

Towards AI quantified a secondary consequence of this agent push: each new agentic feature dramatically increases token consumption per user. “A single Cowork session running scheduled tasks, clicking through apps, and filling spreadsheets burns far more compute than a conversation,” the newsletter noted. “Every new agentic workflow Anthropic or anyone else ships multiplies the demand per user.”

Reuters reported on March 23 that OpenAI is actively courting private equity firms in what it described as an “enterprise turf war with Anthropic.” The agent layer is now the battlefield where that turf war is being fought.

The ten-week cadence suggests Anthropic views this as a race it can’t afford to lose. Whether OpenClaw’s open-source community can maintain differentiation through model agnosticism and infrastructure flexibility, or whether Anthropic’s enterprise permission model absorbs the mainstream market, is the central question for the second half of 2026.