AI Daily
Geopolitics • April 26, 2026

The US Wants to Make AI Knowledge Itself an Export Control

By AI Daily Editorial • April 26, 2026

For years, Washington's strategy to limit China's AI capabilities focused on hardware: block the chips, and you slow the training. That strategy has been expensive, imperfect, and increasingly complicated by the fact that DeepSeek keeps releasing competitive models anyway. This week the Trump administration announced a new front: targeting not the silicon, but the knowledge itself.

In a Thursday memo, Michael Kratsios, the president's chief science and technology adviser, accused foreign entities "principally based in China" of conducting "deliberate, industrial-scale campaigns" to distill leading US AI systems. Distillation, in this context, means using a powerful model's outputs to train a smaller, cheaper model, essentially extracting the larger model's learned capabilities without access to its underlying training data or weights. The White House memo claimed these campaigns involved "tens of thousands of proxy accounts" and systematic jailbreaking to expose proprietary model outputs. The same day, the House Foreign Affairs Committee passed bipartisan legislation to identify and sanction foreign actors who extract "key technical features" from US-owned AI models.

The technical concept is real and well-understood. Distillation can reduce the compute required to achieve a given level of performance by up to 100 times: a student model trained on the outputs of a frontier teacher can reach near-frontier capabilities without the original training data or hardware. That efficiency is precisely why it is used legitimately throughout the industry, including by US companies. What the administration is asserting is that China is doing this at scale, covertly, through coordinated API abuse, to copy capabilities that cost billions of dollars to develop.
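The objective behind distillation is simple enough to show concretely. Below is a minimal sketch of the classic temperature-softened loss: the student is trained to match the teacher's output distribution, not its weights or data. All logits here are invented toy values, and the function names are illustrative, not from any lab's codebase.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened outputs to the student's.

    The student only ever sees the teacher's *outputs* -- exactly the
    signal an API exposes -- never its weights or training data.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy example: a student whose outputs track the teacher's incurs a
# much lower loss than an untrained one, so minimizing this loss
# pulls the student toward the teacher's behavior.
teacher  = [4.0, 1.0, 0.2]
aligned  = [3.8, 1.1, 0.3]   # student already close to the teacher
untrained = [0.1, 2.5, 2.4]  # student far from the teacher

print(distillation_loss(teacher, aligned) < distillation_loss(teacher, untrained))
```

Minimizing this loss over a large corpus of teacher responses is what lets a smaller model inherit a frontier model's behavior, and it is why API outputs, harvested at sufficient scale, function as a substitute for the training run itself.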

OpenAI and Anthropic have made similar allegations for months. Anthropic accused DeepSeek and two other Chinese labs in February of "illicitly extracting" Claude's capabilities. OpenAI has made parallel claims to US lawmakers. Both companies have commercial incentives to frame the issue this way, though neither has released detailed forensic evidence. China's foreign ministry called Kratsios's claims "groundless" and "a smear against the achievements of China's AI industry."

Enforcement is the open question. Kyle Chan, a fellow at the Brookings Institution, described the challenge as "looking for needles in an enormous haystack." A legitimate user sending many queries to an AI API looks identical to a coordinated campaign harvesting outputs for a training dataset. The proposed Bureau of Industry and Security rules on model weight restrictions haven't yet been formally published, and no timeline has been set. What exists so far is a policy direction, not a regulatory mechanism.

The timing is also complicated. Trump is planning a state visit to Beijing next month, and his administration is simultaneously trying to negotiate a trade de-escalation after the tariff shock of early 2026. Picking a fight over AI model weights in that context is a choice with diplomatic costs. Chan noted that the administration may not want to "rock the boat" ahead of that visit.

The most significant collateral risk lands on Meta. Its Llama models are openly released: the weights are publicly available for download and modification. If weight-restriction rules are eventually codified, Meta faces a binary choice between its open-weights strategy and regulatory compliance. That tension has no obvious resolution, and no other major US AI lab is as exposed to it.

What the distillation crackdown represents, at a structural level, is a shift in Washington's theory of the case. The chip export controls rested on the assumption that access to advanced hardware was the binding constraint on frontier AI. That assumption is now under pressure. The new push to restrict knowledge transfer through API access suggests the administration has started to update its model: if you can't wall off the chips effectively, maybe you try to wall off the outputs instead.
