OpenAI, Anthropic, and Google have begun sharing threat intelligence to detect and block Chinese competitors using their API outputs to train imitation models. Bloomberg reported the cooperation on April 6, citing people familiar with the matter. The three companies are coordinating through the Frontier Model Forum, the industry nonprofit they co-founded with Microsoft in 2023. Google, Anthropic, and the Frontier Model Forum declined to comment on the record. OpenAI confirmed participation and pointed to a recent memo it sent to Congress on the practice.
What Distillation Is and Why It Matters
Distillation is a technique where a “student” model is trained on the outputs of a more capable “teacher” model. Some forms are standard practice: AI labs routinely use distillation to create smaller, faster versions of their own systems. The controversy here is adversarial distillation: using a competitor’s API at scale to train a rival model without authorization.
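The mechanics can be made concrete. In its classic form, distillation trains the student to match the teacher's full output distribution (softened with a temperature) rather than hard labels. A minimal sketch of the loss involved, in plain Python with illustrative logits, not any lab's actual training code:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution at a given temperature.
    Higher temperatures soften the distribution, exposing more of the
    teacher's relative preferences between tokens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's. Minimizing this pushes the student's behavior toward
    the teacher's -- the core of distillation."""
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [3.0, 1.0, 0.2]
matched = distillation_loss(teacher, [3.0, 1.0, 0.2])    # ~0.0: student mimics teacher
mismatched = distillation_loss(teacher, [0.2, 1.0, 3.0])  # positive: student diverges
```

Adversarial distillation replaces direct access to the teacher's logits with large volumes of sampled API outputs, but the objective is the same: make the student reproduce the teacher's behavior.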
For the three Western labs, the economic threat is direct. Their models are proprietary and priced accordingly. Chinese competitors largely release open-weight models, which are cheaper to use because the underlying weights are freely downloadable. According to Moneycontrol’s reporting on Bloomberg’s story, US officials have estimated that unauthorized distillation costs Silicon Valley labs billions of dollars in annual profit.
The security threat runs separately: a distilled model may lack the safety tuning of its source, including controls that prevent generation of instructions for weapons or biological agents.
The DeepSeek Trigger
Adversarial distillation became a central concern in January 2025 after DeepSeek’s R1 reasoning model release. OpenAI and Microsoft investigated whether DeepSeek had extracted large volumes of output from US models to build R1. OpenAI subsequently told the House Select Committee on China that DeepSeek was continuing to use “increasingly sophisticated tactics” to extract results from its models. The Japan Times reported that this prior investigation directly prompted the current intelligence-sharing arrangement.
The Frontier Model Forum coordination means the three companies are now pooling detection signals: API usage patterns that resemble distillation harvesting, account behaviors consistent with systematic output extraction, and anomalies that no single lab could spot on its own but that become visible in aggregate.
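The actual detection signals are not public, but the aggregation logic the article describes can be sketched. The field names and thresholds below are hypothetical, chosen only to show why pooling matters: an account that looks modest to any one lab can cross thresholds once its activity is summed across providers.

```python
from dataclasses import dataclass

@dataclass
class UsageSignal:
    """One lab's observation of an account's API behavior.
    All fields are illustrative, not real detection criteria."""
    account_id: str
    requests_per_day: int
    distinct_prompt_topics: int  # prompt breadth, a rough proxy for systematic harvesting

def flag_suspects(signals, volume_threshold=50_000, topic_threshold=100):
    """Pool per-lab signals by account, then flag accounts whose
    aggregate volume and prompt breadth both exceed thresholds."""
    totals = {}
    for s in signals:
        vol, topics = totals.get(s.account_id, (0, 0))
        totals[s.account_id] = (vol + s.requests_per_day,
                                max(topics, s.distinct_prompt_topics))
    return {acct for acct, (vol, topics) in totals.items()
            if vol >= volume_threshold and topics >= topic_threshold}

# Each lab individually sees sub-threshold activity for acct-1;
# only the pooled view crosses the line.
lab_a = [UsageSignal("acct-1", 30_000, 120)]
lab_b = [UsageSignal("acct-1", 25_000, 90), UsageSignal("acct-2", 1_000, 5)]
suspects = flag_suspects(lab_a + lab_b)  # {"acct-1"}
```

This mirrors the cybersecurity analogy drawn later in the piece: individually weak indicators become actionable once shared.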
For Agent Builders
The models at the center of this cooperation are the same foundation every agent harness depends on. OpenClaw agents, Claude Code pipelines, and Gemini-powered workflows all sit on top of model inference from these three providers. The argument for using proprietary models over open-weight alternatives rests on capability and safety tuning that distillation can erode.
If Chinese competitors successfully replicate GPT-4 or Claude-class capability at open-weight cost, the competitive calculus for agent infrastructure shifts. A distilled model with no rate limits, no per-token pricing, and no safety restrictions changes what builders in unregulated markets can deploy, and at what cost.
The information-sharing effort echoes practices from the cybersecurity industry, where companies routinely share threat indicators about malware campaigns regardless of competitive position. The AI equivalent is now operational.