Anthropic co-founder Jack Clark told journalist Derek Thompson in an extended interview published this week that AI-driven mass unemployment is “a choice that we can make.” The statement directly contradicts CEO Dario Amodei’s repeated prediction that AI could wipe out half of all entry-level white-collar jobs and push unemployment to 20% within five years.

The interview, published on Thompson’s “Abundance” newsletter, covers agents, nuclear weapons analogies, and what Clark calls the technology industry’s obligation to stop sugarcoating what AI systems can do. For anyone building or deploying autonomous agents, the conversation offers the clearest window yet into how Anthropic’s leadership thinks about the economic consequences of the tools they’re shipping.

The Disagreement on Unemployment

When Thompson pressed Clark on Amodei’s 20% unemployment forecast, Clark was explicit: “I don’t agree with this, because I think it’s a choice that we can make.” He argued that major employment shifts historically take longer to filter through economies than people expect, even with technology as powerful as what Anthropic is building.

Clark’s framing carries weight because it comes from someone who co-founded the company and has spent five years running its public policy team. His counterpoint to Amodei centers on what happens to the revenue AI generates. If agents and AI systems produce massive economic growth, governments can choose to redirect that growth, through cross-sector wage subsidies, into labor-intensive sectors like teaching and nursing. The displacement, in Clark’s view, is real but not inevitable.

“If you end up in a situation where employment is negatively affected by AI in one part of the economy, but loads of money being generated by AI in another part of the economy, you could choose to create jobs,” Clark told Thompson.

Anthropic’s $20 Billion Revenue and the Bubble Question

Thompson’s interview opens with a data point that contextualizes everything else: Anthropic’s annual recurring revenue more than doubled between December 2025 and March 2026, from $9 billion to over $20 billion. Thompson writes that “according to several analysts, there is no record of any company growing this fast at this scale … ever.”

That growth rate matters for the agent ecosystem because much of it is driven by Claude’s integration into autonomous workflows. As Business Insider reported this week, OpenClaw and similar agent frameworks are a primary driver of Claude’s compute consumption, with automated multi-step loops burning through tokens at rates that are straining Anthropic’s infrastructure.
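Why do agent loops burn tokens so much faster than ordinary chat? A rough sketch of the mechanics helps, with the caveat that the numbers below are illustrative assumptions, not reported figures: each step of a loop typically appends its tool output to the conversation context and re-sends the entire context on the next model call, so cumulative input tokens grow roughly quadratically with the number of steps.

    # Toy model of token consumption in a multi-step agent loop.
    # All figures are hypothetical; the point is the growth curve.
    def agent_loop_tokens(steps: int, base_context: int = 2_000,
                          tokens_per_step: int = 800) -> int:
        """Cumulative input tokens when the full, growing context
        is re-sent on every call in the loop."""
        context = base_context
        total = 0
        for _ in range(steps):
            total += context            # whole context sent as input this turn
            context += tokens_per_step  # tool output appended for the next turn
        return total

    for steps in (1, 10, 50):
        print(f"{steps:>2} steps -> {agent_loop_tokens(steps):,} input tokens")

At 50 steps, this toy loop has already consumed over a million input tokens for a single task, which gives a sense of why agent-driven demand strains infrastructure in a way per-message chat never did.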

The Anthropic Institute

Clark used the interview to explain his new role leading the Anthropic Institute, a 30-person internal think tank launched earlier this month. The Institute combines three existing Anthropic teams: the Frontier Red Team, the Societal Impacts team, and the Economic Research team.

The Verge reported that founding members include Matt Botvinick (formerly of Google DeepMind), Anton Korinek (University of Virginia economics professor), and Zoe Hitzig, who left OpenAI after its decision to introduce ads in ChatGPT. Clark told The Verge he expects staff to double every year.

The Institute’s stated goal is studying how AI reshapes labor markets, legal systems, and social dynamics. CIO reported that Anthropic said the Institute “will engage with workers and industries facing displacement, and with the people and communities who feel the future bearing down on them but are unsure how to respond.”

Clark told The Verge that the think tank had been planned since November, but the timing coincides with Anthropic’s ongoing legal battle with the Pentagon over its supply chain risk designation. He described safety research as “not a cost center but a profit center,” arguing that trust-building through transparency generates commercial value.

The Nuclear Weapons Analogy and Agent Governance

Thompson pushed hard on Anthropic’s habit of comparing AI to nuclear weapons, asking why the analogy justifies export controls and government oversight but stops short of the conclusion that private companies shouldn’t build the technology at all.

Clark’s response is relevant to anyone deploying autonomous agents: AI is “like a factory that produces cars, micro scooters, animals, and nuclear weapons all at the same time.” The governance challenge is deciding which outputs get released and which stay restricted. He pointed to Anthropic’s work with the National Nuclear Security Administration, which tests how well AI models understand nuclear technology, as an example of how public-private collaboration should work.

For agent builders, the implication is clear. The same technology that automates inbox cleanup or code review could, in Clark’s framing, produce outputs that require government-level oversight. The boundary between a productivity tool and a capability that needs restriction is determined by what the agent is instructed to do, not by the underlying model.

Why This Matters for Agent Builders

Clark’s interview arrives during a week when Anthropic is at the center of multiple converging storylines: a federal judge blocked the Pentagon’s blacklist, the company’s compute caps are tightening under agent-driven demand, and a leaked Claude Mythos model suggests the next generation of capabilities is already in testing.

The core tension Clark articulates applies directly to anyone running autonomous agents in production. Agent frameworks give AI systems the ability to take actions in the real world. The question of whether those actions cause mass displacement or generate new economic opportunity depends on policy choices made outside the AI labs. Clark is making a bet that being honest about these tradeoffs is commercially smarter than pretending they don’t exist.

Whether that bet pays off may depend on how quickly the agent economy generates enough revenue to fund the safety infrastructure Clark is building. At $20 billion ARR and doubling, Anthropic has time. The workers whose jobs agents are automating may not.