The Guardian published a probing analysis on April 12 that reframes the Mythos story through a lens most AI coverage has avoided: marketing.

The piece, titled “‘Too powerful for the public’: inside Anthropic’s bid to win the AI publicity war,” does not claim Anthropic’s safety concerns about Mythos are insincere. Instead, it documents how the withholding of Mythos functions simultaneously as safety responsibility and competitive positioning, and asks whether the framing has been deliberately crafted to serve both purposes.

The PR Infrastructure

The Guardian’s core observation is that Anthropic’s “responsible AI” reputation did not emerge organically. In recent months, the company secured a 10,000-word New Yorker profile, two Wall Street Journal features, a Time magazine cover with CEO Dario Amodei’s face above the Pentagon, and appearances on two separate New York Times podcasts. In those appearances, Amodei and co-founder Jack Clark discussed whether Claude might be conscious and whether it could “rip through the economy.”

Anthropic’s media lead, Danielle Ghiglieri, documented the wins publicly on LinkedIn. Of the Time cover, she wrote it was “one of those pinch-me moments.” Of the New Yorker profile: “I would be lying if I said I wasn’t nervous for our first meeting in person.”

One unnamed tech PR professional told The Guardian: “They are clearly having a moment right now but companies building technology that will change the world deserve equal scrutiny. They accidentally leaked their own source code last week, then this week they claim stewardship over cyber threats with a new powerful model that only they control. Any other big tech firm would be ridiculed.”

The Contradiction

The piece foregrounds a tension that has received comparatively little coverage. Anthropic positions itself as the responsible alternative to OpenAI’s aggressive release strategy. Yet its models are also being used by the Pentagon to coordinate missile strikes on Iran and have been deployed by Wall Street banks at the federal government’s urging. The company managed to emerge from the Pentagon dispute looking better than OpenAI, which offered similar military capabilities with fewer guardrails.

Gary Marcus, the AI researcher and frequent industry critic, is quoted directly: “Dario has far more technical chops than Sam [Altman], but seems to have graduated from the same school of hype and exaggeration.”

Independent Assessment

Dr. Heidy Khlaaf, chief AI scientist at the AI Now Institute, told The Guardian that Anthropic’s Mythos capabilities were not “substantiated.” She described the release as “a marketing post with purposely vague language that obscures evidence,” suggesting the company may be trying to “garner further investment without scrutiny.”

Khlaaf drew a direct parallel to OpenAI’s trajectory: “We may be seeing the very same bait and switch playbook that was used by OpenAI, where safety is a PR tool to gain public trust before profits are prioritized. Anthropic publicity has managed to better obscure this switch than its rivals.”

Jamieson O’Reilly, an offensive cybersecurity expert, provided a more measured view. Mythos “is a real development and Anthropic was right to treat it seriously,” he told The Guardian. But some claims, such as the discovery of thousands of zero-day vulnerabilities, were less significant than they appeared. In over a decade of authorized penetration testing across banks, governments, and critical infrastructure, O’Reilly said “the number of times we needed a zero-day vulnerability to achieve our objective was vanishingly small.”

The Infrastructure Constraint

The Guardian raises an additional factor: Anthropic may not have the compute capacity to release Mythos even if it wanted to. The company has introduced usage caps on Claude, and recently required users to purchase extra capacity to run third-party tools like OpenClaw. Releasing a heavily hyped frontier model to millions of users requires infrastructure Anthropic may not yet have, which makes the safety framing conveniently more flattering than admitting to a capacity constraint.

The Agent Trust Question

This piece matters for the agent ecosystem because deployment decisions increasingly hinge on “safety” reputation. Teams choosing Claude over GPT-4 or Gemini often cite Anthropic’s safety positioning as a differentiator. If that positioning is partly engineered, the signal becomes harder to read. The Guardian’s analysis does not resolve the question, but it is the first mainstream piece to critically interrogate the Mythos narrative rather than accept the company’s framing at face value.