Two stories dropped on April 6 that individually look like inside-baseball. Together, they describe a company that controls the largest AI agent infrastructure stack in the world while managing simultaneous crises on opposite ends of its org chart.
The Legal Escalation
On April 6, OpenAI sent letters to the attorneys general of California and Delaware asking them to investigate what it calls “improper and anti-competitive behavior” by Elon Musk, according to CNBC and Mercury News.
The letter, written by OpenAI strategy chief Jason Kwon, alleges that Musk has been “coordinating his efforts” with Meta CEO Mark Zuckerberg to undermine OpenAI. Both California AG Rob Bonta and Delaware AG Kathy Jennings participated in reviews that cleared OpenAI’s for-profit restructuring in October 2025. OpenAI’s argument in the letter: if Musk’s legal campaign succeeds, it would benefit xAI’s Grok platform. The letter also references The New Yorker’s investigation into opposition research that Musk’s team reportedly conducted against Altman, including allegations that Musk’s associates circulated false claims of sexual misconduct against the CEO.
Jury selection for the trial, where Musk is seeking up to $134 billion in damages from OpenAI and Microsoft, is scheduled to begin April 27 in the Northern District of California.
The CEO Trust Investigation
On the same day, Ars Technica published a summary of a New Yorker investigation based on interviews with more than 100 people familiar with how Altman operates. The takeaway is pointed: insiders describe Altman as someone with “a strong desire to please people” combined with “almost a sociopathic lack of concern for the consequences that may come from deceiving someone,” according to one board member’s characterization in the piece.
Former research head Dario Amodei, who left to found Anthropic, wrote in messages reviewed by The New Yorker: “The problem with OpenAI is Sam himself.” Former chief scientist Ilya Sutskever, who left in 2024, reached similar conclusions.
Altman disputed elements of the story and attributed his shifting positions to the changing AI landscape. The same day, OpenAI published a policy paper on ensuring AI benefits humanity, which Ars Technica notes created an awkward juxtaposition: a company calling for transparency on AI risks while insiders describe its own leadership as having credibility problems.
The Infrastructure Question for Agent Builders
Here is why these two stories matter to anyone building on agents: OpenAI is the dominant infrastructure layer under most production agent deployments right now. The Responses API, the Assistants API, GPT-4o-based tool calls, and OpenAI’s function calling conventions have become de facto standards that competing providers have had to accommodate. Anthropic designed Claude’s tool use to mirror OpenAI’s spec. Agent frameworks such as LangChain and OpenClaw treat OpenAI as a first-party integration.
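To make the lock-in concrete, here is a minimal sketch of the OpenAI-style function-calling schema that competing providers have converged on. The tool name, description, and parameters are illustrative, not from any real deployment; the point is the shape: a JSON Schema object describing a callable that the model can request.

```python
import json

# Illustrative tool definition in the OpenAI-style function-calling
# convention: a "function" entry whose "parameters" field is JSON Schema.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Providers that mirror the convention accept schemas shaped like this,
# sometimes behind a thin translation layer.
print(json.dumps(get_weather_tool, indent=2))
```

Because this shape is what frameworks serialize and providers parse, swapping the model behind an agent is often a translation problem rather than a rewrite, which is exactly why the convention functions as infrastructure.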
When the company running that infrastructure is simultaneously managing a $134 billion lawsuit, escalating pre-trial legal pressure through two state attorneys general, and weathering a 100-source investigation into its CEO’s trustworthiness, those are converging signals about institutional stability at a company whose API uptime and roadmap continuity your agent architecture depends on.
Musk’s legal campaign introduces one specific risk: a successful lawsuit could force governance changes or impose financial obligations that reshape OpenAI’s operating model. xAI and Grok are direct competitors to the APIs most agents run on. If Musk wins, he wins in a way that benefits his own competing platform.
The Altman trust story introduces a different risk. Leadership credibility problems tend to surface operationally: in product decisions that shift without clear reasoning, in research priorities that follow external pressures rather than technical merit, in safety commitments that slide when capital requirements increase. OpenAI just closed a $122 billion funding round at an $852 billion valuation. The pressure to justify that number is real.
The Timing
The Musk trial opens April 27. The New Yorker investigation is out now. Both California and Delaware attorneys general are reviewing the letter.
OpenAI is, by most measures, at peak value and peak institutional complexity at the same time. For agent builders, the practical move is not to panic but to document. Know which parts of your stack would break if OpenAI had an extended outage, a pricing shock, or an API deprecation. Know which competitors could absorb your traffic on short notice. You do not need to act on this today. You do need to know the answer.
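The "document, don't panic" advice above can be sketched as a simple dependency audit: a registry mapping each capability your agent needs to a primary provider and the fallbacks that could absorb its traffic. All capability keys and provider names here are illustrative assumptions, not recommendations.

```python
# Hypothetical fallback plan: capability -> providers in failover order.
# Names are placeholders; substitute whatever your stack actually uses.
FALLBACK_PLAN = {
    "chat_completion": ["openai", "anthropic", "google"],
    "tool_calling":    ["openai", "anthropic"],
    "embeddings":      ["openai", "voyage"],
}

def single_points_of_failure(plan):
    """Capabilities with no fallback if the primary provider goes down."""
    return [cap for cap, providers in plan.items() if len(providers) < 2]

def failover_target(plan, capability, failed_provider):
    """First provider in line after the one that failed, or None."""
    remaining = [p for p in plan.get(capability, []) if p != failed_provider]
    return remaining[0] if remaining else None

print(single_points_of_failure(FALLBACK_PLAN))
print(failover_target(FALLBACK_PLAN, "tool_calling", "openai"))
```

Running the audit whenever a provider changes pricing or deprecates an API is the "know the answer" part: an empty single-points-of-failure list means an outage is a degradation, not a rewrite.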