Morgan Stanley published a research note on Sunday projecting that agentic AI will add $32.5–60 billion to the global data-centre CPU market by 2030, according to Reuters. The note argues that as AI systems transition from generating text to executing autonomous multi-step tasks, the computing bottleneck shifts from graphics processors to general-purpose CPUs and memory.

“As AI transitions from generation to autonomous action, the computing bottleneck is shifting towards CPU and memory, driving a step-change in general-purpose compute intensity,” the firm wrote, per the Economic Times. Morgan Stanley added that GPU demand remains strong, but the next wave of agentic AI will be driven more by coordination than raw computing power.

The CPU Thesis

The core argument: agentic AI workloads look fundamentally different from training or inference. An autonomous agent planning tasks, calling tools, managing state, and coordinating with other agents requires sustained CPU orchestration, not just bursts of parallel matrix math. CPUs increasingly act as the control layer for AI systems that manage multi-step workflows.
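The workload shape the note describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual agent runtime: all names (`AgentState`, `call_model`, `run_tool`) are invented stand-ins. The point is that each loop iteration spends one burst on model inference and the rest on CPU-side planning, tool dispatch, and state bookkeeping.

```python
# Hypothetical agent orchestration loop. Between model calls, the agent
# does planning, tool dispatch, and state upkeep -- all general-purpose
# CPU (and memory) work rather than parallel matrix math.

from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)  # persistent context the agent carries

def call_model(state):
    """Stand-in for a GPU-bound inference burst; returns the next action."""
    if len(state.history) < 3:
        return ("search", state.goal)
    return ("finish", None)

def run_tool(name, arg):
    """Stand-in for a tool invocation (API call, DB query, shell command)."""
    return f"result of {name}({arg})"

def run_agent(goal, max_steps=10):
    state = AgentState(goal=goal)
    for _ in range(max_steps):           # CPU: loop control and planning
        action, arg = call_model(state)  # GPU: one inference burst
        if action == "finish":
            break
        result = run_tool(action, arg)           # CPU/IO: tool orchestration
        state.history.append((action, result))   # CPU/memory: state upkeep
    return state.history

steps = run_agent("find CPU market forecasts")
print(len(steps))  # number of tool-calling steps the agent took
```

In a production system the inference call is the only GPU-bound step; everything else in the loop, including the growing `history` the thesis ties to memory demand, runs on general-purpose CPUs.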

That distinction matters because it widens the addressable market for AI infrastructure. The current data-centre CPU market already exceeds $100 billion. Morgan Stanley’s $32.5–60 billion estimate represents incremental spend on top of that baseline, driven specifically by agentic workloads — roughly a third to more than half of today’s market in new demand.

Who Benefits

Morgan Stanley named specific beneficiaries across the semiconductor supply chain, according to Investing.com:

  • CPUs and accelerators: Nvidia, AMD, Intel, and Arm
  • Memory: Micron, Samsung, and SK hynix
  • Chipmaking and equipment: TSMC and ASML

Memory demand is set to rise sharply as agents maintain persistent state, context windows, and coordination logs. Companies in supply-constrained parts of the ecosystem could gain pricing power, the brokerage noted.

The Infrastructure Math

The note arrives as multiple signals converge on the same conclusion: agent infrastructure costs are broadening. Google’s Jeff Dean confirmed last week that the company is developing inference-specialized TPUs for agent workloads, and GitHub reported 275 million commits per week driven by AI agents in March 2026, per NCT’s earlier coverage. The Silicon Valley Agentic AI Summit earlier this month featured leaders from Google, Amazon, Microsoft, and Meta acknowledging that production agent systems are operationally complex and expensive, NCT reported at the time.

Morgan Stanley’s research puts a dollar figure on what the industry has been building toward. If agentic AI does add $32.5–60 billion in CPU demand by 2030, the semiconductor industry’s AI story is no longer just a GPU story. It becomes a full-stack infrastructure buildout where coordination, memory, and general-purpose compute matter as much as raw model throughput.