Meta has signed a multibillion-dollar, multi-year agreement with AWS to deploy tens of millions of Amazon Graviton5 processor cores for agentic AI workloads, Amazon announced Friday. The deal makes Meta one of the largest Graviton customers in the world.

The notable detail: these are general-purpose ARM CPUs, not AI accelerators. While GPUs handle model training, the rise of agentic AI is creating massive demand for CPU-intensive work: real-time reasoning, code generation, search, and the coordination involved in managing agents through multi-step tasks. Graviton5, built on 3nm process technology with 192 Neoverse V3 cores, is designed specifically for these workloads.

Why CPUs for AI

“As we scale the infrastructure behind Meta’s AI ambitions, diversifying our compute sources is a strategic imperative,” Santosh Janardhan, Meta’s head of infrastructure, said in Amazon’s announcement. “Expanding to Graviton allows us to run the CPU-intensive workloads behind agentic AI with the performance and efficiency we need at our scale.”

The Graviton5 chip’s cache is five times larger than the previous generation’s, reducing inter-core communication delays by up to 33%, according to Amazon. The chips also support Elastic Fabric Adapter for low-latency, high-bandwidth communication between instances, which is essential for distributed agentic workloads where tasks are split across many processors.

The Competitive Context

As TechCrunch noted, AWS timed the announcement to coincide with the close of Google Cloud Next, where Google announced its own eighth-generation TPU chips. The deal also brings more of Meta’s spending back to AWS after Meta signed a six-year, $10 billion deal with Google Cloud last August.

Amazon’s own AI accelerator, Trainium, is largely spoken for: Anthropic agreed earlier this month to spend $100 billion over 10 years running workloads on AWS with a focus on Trainium, while Amazon invested another $5 billion into Anthropic.

The Graviton deal positions Amazon’s homegrown CPUs against Nvidia’s Vera CPU, also ARM-based and designed for agentic workloads. The difference, as TechCrunch observed, is that Nvidia sells chips to anyone while AWS only sells access through its cloud.

The Infrastructure Map

The Graviton deal is one piece of a Meta compute procurement push that now exceeds $200 billion across multiple suppliers, according to The Next Web: approximately $50 billion with Nvidia, $60 billion with AMD, $35 billion with CoreWeave, and $27 billion with Nebius, plus custom MTIA silicon built in-house. No single supplier can meet Meta’s demand.

The takeaway for the infrastructure layer: as agents move from demos to production, the compute bottleneck is shifting. GPUs train the models. CPUs orchestrate the agents. Meta is now buying both at scale.