The New Claw Times

The latest news on OpenClaw, AI agents, and automation

Commentary

48 articles · Opinion, analysis, and editorial perspective on AI agents and automation.

The Monolithic AI SDR Is Dead: Why $74M-Funded 11x.ai Lost to $300/Month Multi-Agent Stacks

11x.ai raised $74M from a16z and Benchmark but delivered roughly $3M in actual ARR, with ZoomInfo publicly calling its agents worse than human SDRs. Artisan's Ava agent got rate-limited by LinkedIn for pattern abuse. The single-agent SDR model is collapsing at 50-70% annual churn while founders building five specialized agents spend $300/month and generate more pipeline. The architectural lesson applies far beyond sales.

· 3 min read

The Compound Failure Problem: Why 90% Accurate AI Agents Break Down in Production Multi-Step Workflows

A 90% success rate per step sounds good until you run a 10-step workflow. Then your overall success rate drops to 35%. At 20 steps, you're down to roughly 12%. Yutori co-founder Abhishek Das calls this the normalization of unreliability in the agent industry. Princeton researchers studying 14 agentic models confirm the pattern: capability scores keep climbing while reliability metrics barely move.

· 5 min read
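The compounding math behind those figures is easy to check: if each step succeeds independently with probability p, an n-step workflow succeeds end-to-end with probability p^n. A minimal sketch (the independence assumption is ours, for illustration):

```python
def workflow_success_rate(per_step: float, steps: int) -> float:
    """Probability that every step in a sequential workflow succeeds,
    assuming steps fail independently with equal reliability."""
    return per_step ** steps

# 90% per-step reliability compounds quickly:
print(round(workflow_success_rate(0.90, 10), 3))  # ~0.349
print(round(workflow_success_rate(0.90, 20), 3))  # ~0.122
```

The takeaway is that per-step accuracy gains matter exponentially: moving from 90% to 99% per step lifts a 20-step workflow from about 12% to about 82% overall.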

Soft Nationalization of AI Is Already Underway, and Enterprise Buyers Should Be Pricing It In

The Atlantic reports the Trump administration has multiple legal levers to seize or regulate frontier AI labs, from Defense Production Act invocations to utility-style rate controls. Full nationalization is unlikely. But soft nationalization, where the government takes equity stakes, places officials on boards, and embeds engineers inside labs, is already happening. For enterprise buyers building on these APIs, the question is no longer theoretical.

· 4 min read

Anthropic's Opus 4.7 Tokenizer Quietly Raises API Costs Up to 35% While List Prices Stay Flat

Anthropic's Claude Opus 4.7 keeps the same $5/$25 per million token pricing as its predecessor. But a new tokenizer that produces up to 35% more tokens for identical text, a default shift to 'xhigh' reasoning in Claude Code, and automatic overage billing at $2,000 per day have combined to create what developers are calling a stealth price increase. The backlash is the first significant pushback against a company that has otherwise enjoyed near-universal developer goodwill.

· 4 min read
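The effective-price arithmetic is straightforward: if identical text now tokenizes into up to 35% more tokens while the per-token list price stays flat, per-request cost rises by the same percentage. A sketch under the teaser's stated figures (the token volume here is a hypothetical, not a measured Opus 4.7 count):

```python
# Illustrative only: the 35% inflation factor comes from the article's
# claim; the monthly token volume below is a made-up example.
PRICE_PER_M_INPUT = 5.00              # $ per million input tokens (unchanged list price)
tokens_old = 1_000_000                # hypothetical monthly input volume, old tokenizer
tokens_new = int(tokens_old * 1.35)   # same text, 35% more tokens

cost_old = tokens_old / 1e6 * PRICE_PER_M_INPUT
cost_new = tokens_new / 1e6 * PRICE_PER_M_INPUT
print(f"${cost_old:.2f} -> ${cost_new:.2f} (+{cost_new / cost_old - 1:.0%})")
```

This is why a tokenizer change can function as a price change: the billable unit inflated even though the sticker price did not.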

CBS News Asks 'Should You Let AI Agents Shop for You?' as Retailers Deploy Without Consumer Guardrails

CBS News ran a consumer risk editorial on AI shopping agents during its morning news cycle on April 17, featuring Boston Consulting Group, Tasklet's CEO, and security researchers all saying the same thing: agents can shop for you, but the trust layer is not ready. The piece contrasts these warnings with Amazon, Walmart, and Amex racing to deploy agentic commerce products.

· 3 min read

Harvard Business Review Publishes Research on China's Meituan AI Agent as the Agentic Commerce Archetype

HBR published research on April 17 analyzing Meituan's Xiaomei AI agent as the leading real-world deployment of what it calls an 'orchestrator plus execution agent.' Launched in late 2025, Xiaomei completes food delivery transactions from natural language intent with zero screen interaction. The research examines why Chinese platforms are 12 to 18 months ahead of Western counterparts in commercial agent deployment, and what design patterns the rest of the industry is converging toward.

· 2 min read

Google AI Director Addy Osmani Publishes Agentic Engine Optimization Framework for Content That AI Agents Can Parse and Act On

Addy Osmani, a director of engineering at Google Cloud AI working on Gemini, published a framework for Agentic Engine Optimization (AEO) that defines how web content should be structured for AI agents rather than human readers. The framework covers discoverability, parsability, token efficiency, capability signaling, and access control. Research cited in the framework shows AI coding agents compress multi-page human browsing sessions into one or two HTTP requests, making traditional engagement analytics invisible.

· 3 min read

The Guardian Questions Anthropic's Mythos Safety Narrative as Marketing Strategy

The Guardian published a critical analysis on April 12 examining whether Anthropic's decision to withhold Mythos from public release is genuine safety caution or the most effective PR campaign in AI. The piece documents Anthropic's recent media saturation, including a 10,000-word New Yorker profile, multiple Wall Street Journal features, and a Time magazine cover, alongside the contradiction of a 'responsible AI' company whose models coordinate Pentagon missile strikes. AI critic Gary Marcus and AI Now Institute's Heidy Khlaaf question whether the safety framing is engineered competitive advantage.

· 3 min read

1,200 Legal Hallucination Cases Worldwide and Counting: What the Attorney AI Crisis Reveals About Agent Deployment

HEC Paris has tracked over 1,200 cases involving AI hallucinations in legal systems worldwide, with 800 from the U.S. alone. The rate is still increasing despite courts imposing six-figure fines on lawyers who submit AI-generated briefs with fabricated case citations. The legal profession's experience is a controlled experiment in agent deployment: AI output looks authoritative enough to fool experts, but the validation overhead required to catch hallucinations consumes as much time as the AI saves. The implications extend to every domain where agents operate in high-stakes, accountability-heavy environments.

· 4 min read

Anthropic Suspended OpenClaw Founder Peter Steinberger's Claude Account, Then Reinstated It Hours Later

Anthropic temporarily revoked OpenClaw founder Peter Steinberger's access to Claude on April 11, citing 'suspicious activity' and a usage policy violation. Hours later, the account was reinstated. The suspension came one day after Anthropic launched Claude Managed Agents, a direct competitor to OpenClaw's core value proposition, and one week after Anthropic cut off Claude subscriptions from covering OpenClaw usage. On the All-In podcast, venture capitalist Jason Calacanis called killing OpenClaw 'the number one goal' in the LLM space.

· 3 min read

Three Attacks in Four Days Exposed the Security Debt in AI Agent Frameworks

In the last week of March, LangChain disclosed three high-severity CVEs affecting 60 million weekly downloads, Langflow was exploited within 20 hours of disclosure, and a threat group hijacked LiteLLM's PyPI publishing pipeline to distribute credential-stealing malware. A new analysis argues these aren't isolated incidents. They're symptoms of an infrastructure class that grew faster than its security posture.

· 3 min read

Yann LeCun Raised $1.03 Billion to Replace the Architecture Behind Every AI Agent

Crunchbase data shows foundational AI startups raised $178 billion in Q1 2026, double all of 2025. The most interesting bet in that pile isn't another LLM lab. It's Yann LeCun's AMI Labs, which raised $1.03 billion to build 'world models' that understand physical reality. At a Brown University lecture on April 1, LeCun made the agent connection explicit: today's agentic systems can't predict the consequences of their own actions. That's a problem world models are designed to solve.

· 4 min read

OpenClaw's Open-Source Architecture Creates a Governance Vacuum, Persistent Systems Architect Argues

A senior R&D architect at Persistent Systems compared OpenClaw, Claude Cowork, and Google Antigravity in a VentureBeat op-ed published today, arguing that the agentic AI moment is a state-shift, not a trend. His central concern: OpenClaw's open-source model means no central governing authority exists when something goes wrong, while vendor-backed tools at least have an accountability chain.

· 3 min read

OpenAI Is Asking State AGs to Investigate Elon Musk. It's Also Managing a CEO Trust Crisis. The Company Controls the API Layer Most Agents Run On.

OpenAI sent letters to the California and Delaware attorneys general on April 6 asking them to investigate Musk's alleged anti-competitive behavior, weeks before the April 27 trial begins. On the same day, The New Yorker published a 100-source investigation concluding that OpenAI insiders don't trust Sam Altman. For agent builders, both stories point at the same risk: the dominant infrastructure layer under your stack is run by a company in institutional crisis at the exact moment it's commanding record valuations.

· 4 min read

OpenAI's Policy Paper Calls for Robot Taxes and Public Wealth Funds. The Implicit Argument Is That Agents Are Already Disrupting Labor.

OpenAI published a policy paper on April 6 outlining a vision for managing AI's economic impact: robot taxes to shift the burden from labor to capital, a Public Wealth Fund to give citizens automatic stakes in AI infrastructure, and a subsidized four-day workweek. The paper's real signal for agent builders is what OpenAI assumes as a baseline: that autonomous AI systems are already disrupting labor markets at scale, and that redistribution mechanisms are necessary as a result.

· 3 min read

AWS Frontier Agents Go GA: Autonomous DevOps and Penetration Testing Hit Production Across Six Regions

Amazon Web Services launched two autonomous AI agents into general availability on March 31 — the AWS DevOps Agent for incident response and the AWS Security Agent for penetration testing. Both operate without continuous human oversight, integrate across multicloud environments, and are priced to undercut traditional engineering staffing costs. With Microsoft's Azure SRE Agent already GA since March 10, the hyperscaler race to sell pre-built autonomous operations agents is now a two-horse sprint. Google Cloud has no equivalent first-party offering. This analysis breaks down what the agents actually do, what they cost, where they fall short, and what it means for engineering teams that suddenly face a buy-vs-hire decision on core operational functions.

· 7 min read

AI Agents Are Starting to Spend Money, and Crypto May Be Better Positioned Than Banks to Handle It

As AI agents move from demos to production, they need to pay for APIs, compute, and services without human intervention. CryptoSlate argues the real crypto winners from the agent economy won't be AI-branded tokens but stablecoin infrastructure, machine-readable wallets, and cryptographic identity layers. Meanwhile, a developer marketplace called TaskBounty is already letting agents earn real USDC by completing bounties. The agent payments question is no longer theoretical.

· 3 min read

Anthropic Shipped Four OpenClaw-Rival Features in Ten Weeks — What That Velocity Means for the Agent Market

Between January 12 and March 24, Anthropic launched Cowork, Dispatch, Claude Code Channels, and full computer-use control — systematically replicating the capabilities that made OpenClaw a 333,000-star phenomenon. The Information's AI Agenda newsletter flagged Claude as 'gaining on OpenClaw' today. Here's a week-by-week breakdown of what Anthropic shipped, what's still missing, and what it signals about where the agent market is headed.

· 4 min read

RSA 2026 Mid-Conference Report: AI Agent Security Dominated the Exhibition Floor

Three days into RSA Conference 2026, a pattern is unmistakable: AI agent security has gone from a niche breakout track to the dominant product category on the exhibition floor. Cisco is registering non-human identities in Duo. IBM is requiring YubiKey taps before agents can execute high-risk actions. 1Password launched a unified vault for humans and AI agents. Databricks entered the cybersecurity market outright. Every major vendor at RSAC this year shipped something aimed at the same problem: autonomous software that acts on behalf of humans, with credentials humans never explicitly granted.

· 4 min read

OpenAI's Seven-Move Tuesday: Sora Killed, Disney Gone, Safety Handed Off, $10B Raised, All in 24 Hours

On March 25, OpenAI made seven distinct announcements in a single day: shutting down Sora, losing the $1 billion Disney deal, handing off safety oversight, revealing a new model codenamed 'Spud,' closing a $10 billion raise, committing $1 billion through its Foundation, and killing the ChatGPT shopping feature. Taken together, these moves reveal a company stripping consumer-facing products to concentrate entirely on the agent and AGI mission.

· 4 min read

The Pentagon Called Anthropic a National Security Threat, Then Handed the Contract to OpenAI

The Department of Defense filed a formal rebuttal calling Anthropic's AI safety red lines an 'unacceptable risk to national security.' OpenAI filled the gap within weeks through an AWS classified-network deal. 150 retired federal judges and 30+ employees from rival labs now back Anthropic's legal fight. The AI industry's most consequential loyalty test is playing out in federal court.

· 3 min read

MCP Is Winning: IBM Declares 2026 the Year Agent Protocols Hit Production, While SignNow Ships the Proof

IBM published its 2026 AI trends forecast declaring that multi-agent communication protocols — Anthropic's MCP, IBM's own ACP, and Google's A2A — are moving from lab experiments to production deployments. Hours later, airSlate SignNow launched the first MCP integration for e-signatures, letting AI agents send and track contracts autonomously. The protocol layer under the GTC hype is quietly becoming the real infrastructure story of 2026.

· 3 min read

NextPlatform Declares OpenClaw the 'GPT Moment' for Agentic AI After Huang's GTC Keynote

The enterprise infrastructure publication NextPlatform published a thesis piece arguing OpenClaw occupies the same defining role for agentic AI that GPT-3 played for conversational AI. After Jensen Huang's GTC keynote canonized OpenClaw as foundational infrastructure, the comparison raises a specific question: if OpenClaw is the new GPT, who are the winners and who are the dead startups walking?

· 4 min read