Broadcom disclosed two AI infrastructure deals on April 6. The first: an agreement to produce future versions of Google’s custom AI chips. The second: an expanded arrangement with Anthropic for approximately 3.5 gigawatts of Google TPU capacity, expected to come online starting in 2027. CNBC reported the filing the same evening. Broadcom shares rose 3% in extended trading.
Broadcom already did business with both companies. On an earnings call last month, Broadcom CEO Hock Tan said the company was “off to a very good start in 2026” in delivering 1 gigawatt of TPU compute for Anthropic. The April 6 deal expands that to a projected 3.5 gigawatts for 2027. Mizuho analysts estimated the Anthropic relationship alone will generate $21 billion in revenue for Broadcom in 2026 and $42 billion in 2027.
Anthropic’s Infrastructure Position
Anthropic’s blog post on the partnership disclosed that its annualized revenue has exceeded $30 billion, up from approximately $9 billion at the end of last year. The company now counts over 1,000 business clients spending more than $1 million annually, double the figure from two months ago. 9to5Google reported the same numbers from that post. “We are building the capacity necessary to serve the exponential growth we have seen in our customer base,” Anthropic CFO Krishna Rao said in the announcement.
The vast majority of the new compute will be located in the United States, Anthropic said.
For Agent Builders: The Long-Term Calculation
Custom silicon matters to anyone building on top of these model providers because it directly affects inference economics. TPUs are designed specifically for the tensor math that runs neural networks. Running Anthropic’s models on Google-designed TPUs means faster inference throughput and lower per-token costs than general-purpose GPU alternatives, particularly at the scale Anthropic is projecting.
That matters for agent workloads specifically, where per-call costs accumulate across dozens or hundreds of tool calls per task. An Anthropic running 3.5 gigawatts of custom TPU capacity in 2027 would have a meaningfully different cost structure from today’s.
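To make that accumulation concrete, here is a back-of-the-envelope sketch. The prices and token counts are illustrative assumptions, not Anthropic’s actual rates; the point is that task cost scales linearly with both call count and per-token price, so a drop in inference cost compounds across every tool call.

```python
# Back-of-the-envelope agent task cost.
# All numbers below are illustrative assumptions, not real pricing.

def task_cost(tool_calls: int,
              tokens_per_call: int,
              price_per_million_tokens: float) -> float:
    """Total cost of one agent task, assuming each tool call
    round-trips roughly the same number of tokens."""
    total_tokens = tool_calls * tokens_per_call
    return total_tokens * price_per_million_tokens / 1_000_000

# A 100-call agent task at 3,000 tokens per call, $5 per million tokens:
baseline = task_cost(tool_calls=100, tokens_per_call=3_000,
                     price_per_million_tokens=5.0)
print(f"${baseline:.2f}")  # $1.50 per task

# If custom silicon halved the per-token price, the task cost halves too:
cheaper = task_cost(tool_calls=100, tokens_per_call=3_000,
                    price_per_million_tokens=2.5)
print(f"${cheaper:.2f}")  # $0.75 per task
```

At a few cents per call, a single-digit change in per-token pricing is noise for a chatbot exchange but a first-order budget line for an agent fleet running thousands of multi-call tasks a day.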
The timing creates a contrast worth noting. Anthropic ended flat-rate Claude access for third-party agent harnesses like OpenClaw on April 4, two days before this deal was disclosed. The infrastructure bet suggests the move was pricing strategy rather than capacity constraint: building the hardware to eventually bring per-token costs down while repricing the short-term access model upward.
Broadcom is also working with OpenAI on custom silicon. OpenAI has committed to drawing on six gigawatts of AMD GPUs, with the first gigawatt arriving in the second half of this year. The custom silicon race among US AI labs is now a competitive front alongside model capability and pricing.