Meta has indefinitely suspended all work with Mercor, the $10 billion AI data contracting startup that supplies training specialists to OpenAI, Anthropic, and Meta itself, WIRED reported, citing two sources familiar with the decision. The pause came after Mercor confirmed on March 31 that it had been hit by a supply-chain attack targeting LiteLLM, an open-source tool used by millions of developers to route requests between AI services.
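For context on why a single library matters so much here: LiteLLM's appeal is that it wraps many model providers behind one OpenAI-style call, which is also why a compromised release would ripple so widely. The sketch below shows that routing pattern in broad strokes; the model identifiers and placeholder keys are illustrative, not details from the Mercor incident.

```python
# Illustrative sketch of the routing pattern LiteLLM provides: one
# completion() call that can be pointed at different providers by model name.
# Model identifiers and credentials below are placeholders, not values tied
# to the reported breach.
import os
from litellm import completion

os.environ["OPENAI_API_KEY"] = "sk-placeholder"
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-placeholder"

messages = [{"role": "user", "content": "Draft a one-line status update."}]

# Same request shape, two different providers; the library translates each
# call into the provider's native API format.
openai_reply = completion(model="gpt-4o-mini", messages=messages)
claude_reply = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

print(openai_reply.choices[0].message.content)
print(claude_reply.choices[0].message.content)
```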
Mercor sits at a critical junction in the AI model pipeline. Founded in 2023 and valued at $10 billion after a $350 million Series C led by Felicis Ventures in October 2025, the company contracts specialists including doctors, scientists, lawyers, and engineers to create training datasets for the labs building the models that power autonomous agents, according to Fortune. A breach at Mercor puts proprietary training methodologies at risk across multiple frontier AI companies simultaneously.
Who Claimed Responsibility
The breach was linked to a hacking group called TeamPCP, known for supply-chain attacks on widely used software libraries, according to Livemint. Separately, the hacking group Lapsus$ also claimed credit and posted samples of what it said was stolen data, including material referencing Slack conversations and ticketing data, plus two videos purportedly showing exchanges between Mercor’s AI systems and contractors.
Mercor spokesperson Heidi Hagburg told TechCrunch, per Livemint, that the company had “moved promptly” to contain the situation and had launched a third-party forensic investigation. She did not confirm whether customer or contractor data had been accessed.
Downstream Impact
OpenAI told WIRED that the breach “in no way affects OpenAI user data” but confirmed it is investigating how its proprietary training data may have been exposed, per The Times of India. Anthropic did not respond to requests for comment. Contractors staffed on Meta projects were not given a direct explanation for the pause and cannot log billable hours until work resumes.
For anyone building agents on top of frontier models, this is a supply-chain security story. The humans who shape training data for the LLMs powering autonomous agents just had their intermediary compromised. If training methodologies or proprietary dataset structures leaked, the integrity of downstream model behavior is an open question until the forensic investigation concludes.
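One practical takeaway for agent builders, sketched below under the assumption that the routing layer is installed as an ordinary Python dependency: treat it like any other supply-chain risk and refuse to start if the installed version is not one your team has reviewed. The pinned version number is hypothetical, and nothing here reflects Mercor's or any lab's actual controls.

```python
# Minimal sketch of a fail-closed dependency check for an agent's startup
# path. The "reviewed" version set is a hypothetical, team-maintained list.
import sys
from importlib.metadata import PackageNotFoundError, version

REVIEWED_VERSIONS = {
    "litellm": {"1.48.0"},  # hypothetical version vetted by a security review
}

def check_reviewed_dependencies() -> None:
    for package, allowed in REVIEWED_VERSIONS.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            sys.exit(f"{package} is not installed; refusing to start the agent")
        if installed not in allowed:
            sys.exit(
                f"{package}=={installed} has not been reviewed "
                f"(expected one of {sorted(allowed)}); refusing to start"
            )

if __name__ == "__main__":
    check_reviewed_dependencies()
    print("dependency check passed; starting agent")
```

Pairing a check like this with hash-pinned installs (pip's --require-hashes mode) narrows the window in which a tampered release can slip into an agent's stack.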