AI Daily
Geopolitics • April 14, 2026

OpenAI, Anthropic, and Google Are Now Sharing Secrets to Stop China

By AI Daily Editorial • April 14, 2026

Three of the fiercest competitors in AI announced this week that they are coordinating through the Frontier Model Forum to combat what they describe as systematic extraction of their models' capabilities by Chinese AI laboratories. The practice at issue, broadly called model distillation, involves training a smaller model on the outputs of a larger proprietary one, allowing a competitor to close much of the capability gap without making the underlying research investment. For US frontier labs, this is an asymmetric threat: years of compute spending and engineering work can be partially replicated by a competitor running queries against an API.
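The mechanics are simple enough to sketch. The Python fragment below is illustrative only; the endpoint, key, and request format are hypothetical stand-ins, not any lab's actual API. But the pattern is the one at issue: harvest a frontier model's completions at scale, then use them as ordinary supervised training data for a smaller student model.

```python
import json
import requests

# Hypothetical endpoint and key -- stand-ins, not a real service.
TEACHER_API = "https://api.frontier-lab.example/v1/complete"
API_KEY = "sk-REDACTED"

def harvest(prompts, out_path="distill_data.jsonl"):
    """Query the teacher model and save its outputs as training targets."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            resp = requests.post(
                TEACHER_API,
                headers={"Authorization": f"Bearer {API_KEY}"},
                json={"prompt": prompt, "max_tokens": 512},
                timeout=60,
            )
            completion = resp.json()["completion"]
            # Each (prompt, completion) pair becomes one supervised
            # example for the student model.
            f.write(json.dumps({"prompt": prompt, "target": completion}) + "\n")

# The student is then fine-tuned on these pairs with a standard
# next-token prediction loss: the teacher's behavior, not its weights,
# is what gets copied.
```

The asymmetry the labs complain about is visible in the economics: the marginal cost of the attack is API tokens, while the capability being copied took years of compute spending and engineering to build.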

The coordination is notable precisely because of who is involved. OpenAI, Anthropic, and Google compete aggressively on model benchmarks, pricing, and enterprise contracts, and each has at various points accused the others, directly or indirectly, of unfair practices. The decision to share intelligence about adversarial extraction represents a maturation of the industry's relationship with geopolitics: AI capabilities are now being treated, by the companies building them, as strategic assets that require collective defense rather than merely competitive protection.

Anthropic had been building toward this in public. In February 2026, the company accused Chinese AI laboratories of systematically mining Claude's outputs to improve their own models, a complaint that generated significant coverage but relatively little coordinated industry response. The Frontier Model Forum announcement escalates from complaint to organized countermeasure: these companies are now treating the problem as a shared threat rather than as Anthropic's particular grievance.

The irony in the situation is worth noting. These are companies that have spent years defending their right to train models on text, images, and code produced by humans, often without explicit consent and over the sustained objections of the people who made that content. The legal and moral terrain of those arguments is still contested. They are now asserting a much stronger form of protection over the outputs their models generate, on the grounds that those outputs represent proprietary intellectual property. The positions are not necessarily contradictory, but the asymmetry is striking: content produced by people flows freely into training data; content produced by AI is a protectable asset.

Enforcement is harder than the announcement makes it sound. Chinese labs can access US models through API calls, through third-party intermediaries, or through openly available fine-tuning datasets that themselves contain model outputs. Detecting systematic distillation requires technical forensics: identifying statistical signatures in a competitor's model that suggest it was trained on another model's outputs. These detection techniques exist and are improving, but so are the methods for obscuring the signal. The Frontier Model Forum can share analysis and coordinate detection methods; it cannot block access or compel enforcement outside US jurisdiction.
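What that forensic work can look like is worth a concrete, if simplified, illustration. The sketch below is not the Forum's or any lab's published method; it shows one generic signal: a student model distilled on a teacher's outputs tends to assign those outputs anomalously high likelihood compared to matched human-written text. The `model_score` callable is an assumed interface, not a real library function.

```python
# Illustrative only: a model distilled on a teacher's outputs tends to
# "recognize" them, assigning higher per-token likelihood to the teacher's
# generations than to topically matched human text.

def mean_log_likelihood(model_score, texts):
    """Average per-token log-likelihood over a corpus.

    `model_score(text)` is an assumed interface returning
    (total_log_prob, num_tokens) for the suspect model, e.g. from a
    local forward pass or a logprobs-style API.
    """
    total_lp, total_tokens = 0.0, 0
    for text in texts:
        lp, n = model_score(text)
        total_lp += lp
        total_tokens += n
    return total_lp / max(total_tokens, 1)

def distillation_gap(model_score, teacher_outputs, control_texts):
    """Positive gap: the suspect model fits the teacher's outputs better
    than matched human text -- consistent with, though not proof of,
    training on those outputs."""
    return (mean_log_likelihood(model_score, teacher_outputs)
            - mean_log_likelihood(model_score, control_texts))

# Any threshold on the gap would need calibrating against models known to
# be clean, since genre and style effects also shift likelihoods.
```

The sketch also shows the limits described above: paraphrasing harvested outputs, or diluting them into larger training corpora, weakens exactly the signal this kind of statistic measures.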

What this announcement does clearly is establish a frame. AI development is increasingly being understood, by the companies doing it, as a geopolitical contest with frontier capabilities as the prize. That framing has consequences that go beyond the specific question of distillation. It accelerates the logic of export controls and access restrictions. It makes international scientific cooperation harder to sustain. It deepens the bifurcation of the global AI ecosystem into blocs that share less and less. The Frontier Model Forum coordination might slow the specific problem it targets. The broader effect will be to further militarize the frame through which AI development is understood, which brings its own set of risks that are harder to measure and easier to overlook.
