Confidential financial documents shared with investors before this year’s funding rounds show OpenAI and Anthropic face the same fundamental problem: the cost of training new AI models is growing faster than revenue, and there is no sign the race is slowing down. The Wall Street Journal reported on the documents, with follow-on analysis from Seeking Alpha and multiple financial outlets.
The numbers are stark. OpenAI expects to spend $121 billion on computing power for AI research by 2028, up from roughly $30 billion in 2026, according to Digital Today’s reporting on the WSJ documents. At that spend level, the company projects a negative cash flow of $85 billion in 2028 even if revenue nearly doubles by 2027. OpenAI is not expected to break even until after 2030 when training costs are included. Excluding research computing costs, it could reach a small operating profit this year.
Anthropic’s trajectory is similar. The company has set aggressive revenue targets: $18 billion in 2026, $55 billion in 2027, and $148 billion by 2029, according to Infor Capital’s analysis. Anthropic projects breaking even sooner than OpenAI when training costs are included, but it faces the same structural pressure: inference costs already exceed half of revenue at both companies.
The Train/Serve Cost Gap
Both companies are now presenting profitability in two ways to investors: with and without training costs. The “excluding training costs” metric shows a rosier near-term picture. Including them reveals a business where capital outlays dwarf revenue for years. The Wall Street Journal described this as the “Achilles heel” for both companies, per Digital Today.
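The gap between the two presentations can be made concrete with a toy calculation. The sketch below uses entirely hypothetical figures (only the "inference above half of revenue" relationship is taken from the reporting above); it is not either company's actual P&L, just the shape of the arithmetic.

```python
# Illustrative sketch of the two profitability views described above.
# All dollar figures are hypothetical placeholders, not reported numbers.

def operating_result(revenue, inference_cost, other_opex,
                     training_cost, include_training):
    """Operating result ($B) under the two presentations shown to investors."""
    result = revenue - inference_cost - other_opex
    if include_training:
        result -= training_cost
    return result

# Hypothetical year: revenue 20, inference 11 (just over half of revenue),
# other opex 6, research/training compute 30 -- all in $B.
excl = operating_result(20, 11, 6, 30, include_training=False)
incl = operating_result(20, 11, 6, 30, include_training=True)

print(f"Excluding training costs: ${excl}B")   # small operating profit
print(f"Including training costs: ${incl}B")   # deep loss
```

The same inputs yield a modest profit in one framing and a multi-billion-dollar loss in the other, which is why the WSJ documents report both lines.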
The cost pressure compounds as models get more capable. As Seeking Alpha noted, “as it becomes increasingly difficult to raise AI performance by one level, costs are also higher than for previous models.” Both companies are releasing new model versions at an accelerating pace, which means training spend keeps rising.
Both companies are betting on custom silicon as the long-term cost fix. Anthropic’s 3.5 GW TPU commitment with Google and Broadcom, reported last week, is explicitly aimed at reducing dependence on Nvidia-priced compute by 2027. OpenAI is pursuing similar arrangements.
Why This Shows Up in Your API Bill
For anyone building on OpenAI or Anthropic APIs, these financials explain the decisions you’ve already seen. Anthropic blocking flat-rate OpenClaw access for third-party agent harnesses last week is a direct expression of compute cost pressure. When inference costs exceed 50% of revenue and annual training compute is projected to grow into the hundreds of billions of dollars, every developer-facing pricing and access policy becomes a cost management tool.
The IPO race adds another variable. Both companies need to show improving unit economics before going public. That creates near-term pressure to widen per-token margins and reduce subsidized usage patterns. Agent workloads, which generate large token volumes at low average revenue per user, are among the first targets.
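The agent-workload point is ultimately token arithmetic. Here is a minimal sketch, assuming a hypothetical blended inference cost per million tokens and a hypothetical flat-rate subscription price; neither number comes from the reporting above.

```python
# Why agent workloads are early targets: hedged sketch with made-up numbers.
# Compares a flat-rate subscriber's inference cost against what they pay.

COST_PER_MTOK = 2.50   # hypothetical blended inference cost, $ per million tokens
FLAT_RATE = 20.00      # hypothetical monthly subscription price, $

def monthly_margin(tokens_millions):
    """Provider margin ($) on one flat-rate user at a given token volume."""
    return FLAT_RATE - tokens_millions * COST_PER_MTOK

chat_user = monthly_margin(2)    # light interactive chat use: positive margin
agent_user = monthly_margin(40)  # agent harness looping on tasks: deep negative
```

Under these assumed numbers, a light chat user is profitable while an agent harness consuming twenty times the tokens loses the provider several times the subscription price, which is the "subsidized usage pattern" the IPO-driven unit-economics push is designed to squeeze out.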
The compute cost trajectory of Anthropic and OpenAI is not a background business story. It’s the upstream constraint that shapes API pricing, subscription terms, and access policies for every agent deployment built on their models.