Two stories about the US-China AI competition landed in the same week in February, and together they sketch the shape of the conflict more clearly than either does alone. Anthropic publicly accused Chinese AI laboratories of systematically querying Claude to extract its capabilities, essentially using a commercial API as a free research tool, while the White House simultaneously announced a "Tech Corps" initiative designed to export American AI to partner nations and counter China's growing influence abroad. Export controls and sanctions are the defensive side of the AI cold war; these two stories show what the offensive and grey-zone operations look like.
Anthropic's accusation, reported by TechCrunch, is specific: Chinese labs have been running large volumes of structured queries through Claude's API in patterns consistent with capability extraction — probing what the model knows, how it reasons, and where its limits are. The practice, sometimes called "model laundering" or distillation at scale, is a way to bootstrap domestic AI development using a frontier model's outputs as training signal without having access to its weights. It is not technically illegal in most jurisdictions, and it is genuinely difficult to prevent without broad usage restrictions that would harm legitimate customers. Anthropic's decision to go public with the accusation is partly a call for policy action and partly a justification for the tighter usage monitoring the company has already begun implementing.
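Anthropic has not disclosed how its usage monitoring works, so what follows is only a minimal sketch of what detection of this kind of pattern might look like: group an account's prompts together and flag accounts whose traffic resembles a single template being swept systematically across a topic rather than organic product use. The QueryRecord schema, the character-shingle similarity measure, and the thresholds below are illustrative assumptions, not anything Anthropic has described.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class QueryRecord:
    """One API call: which account sent which prompt (illustrative schema)."""
    account_id: str
    prompt: str


def shingles(text: str, n: int = 5) -> set[str]:
    """Character n-grams of a normalised prompt; a cheap proxy for similarity."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}


def extraction_score(prompts: list[str]) -> float:
    """
    Crude heuristic for systematic probing: many prompts that are
    near-duplicates of a shared scaffold (one field varied per request)
    score high; diverse, low-volume organic traffic scores low.
    """
    if len(prompts) < 2:
        return 0.0
    grams = [shingles(p) for p in prompts]
    anchor = grams[0]  # compare everything to the first prompt as a rough anchor
    sims = []
    for g in grams[1:]:
        union = len(anchor | g) or 1
        sims.append(len(anchor & g) / union)  # Jaccard similarity
    template_likeness = sum(sims) / len(sims)
    volume_factor = min(len(prompts) / 1000, 1.0)  # saturates at 1,000 queries
    return template_likeness * volume_factor


def flag_accounts(records: list[QueryRecord], threshold: float = 0.5) -> list[str]:
    """Return account IDs whose traffic looks like capability extraction."""
    by_account: dict[str, list[str]] = defaultdict(list)
    for record in records:
        by_account[record.account_id].append(record.prompt)
    return [
        account for account, prompts in by_account.items()
        if extraction_score(prompts) >= threshold
    ]
```

A production system would presumably lean on richer signals, such as embedding-based clustering, request timing, and account linkage, but the underlying idea of scoring traffic for template-likeness and sheer volume is the same.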
The Tech Corps announcement is a different kind of move. The administration is embedding AI specialists in Peace Corps deployments in India and other partner nations, with the explicit goal of helping those countries build their digital infrastructure on American AI rather than Chinese alternatives. The initiative is modelled loosely on Cold War-era programmes that used technical assistance as a geopolitical tool: the idea is that a nation's foundational AI stack creates long-term dependencies, and the US wants those dependencies to run through American companies rather than Huawei or Baidu.
The two stories are connected by a common strategic logic: China is acquiring American AI capabilities at low cost through commercial channels, while simultaneously offering its own AI as a subsidised alternative to countries that the US has not yet reached. The export control regime — which has focused on restricting GPU sales to China — is a blunt instrument that has had mixed results: NVIDIA has confirmed it has yet to generate meaningful revenue from US-approved China-market chips, and Huawei's domestically produced clusters are increasingly capable substitutes. Meanwhile, Chinese labs continue to access frontier capabilities through APIs and open-source releases.
What makes Anthropic's accusation particularly interesting is where it sits in the Anthropic-Pentagon saga playing out at the same time. The company has been suing the Defence Department over a supply chain risk designation while also advocating for stronger compute controls at the border. Accusing Chinese labs of mining Claude puts Anthropic in the position of arguing, at once, that it should be trusted with government contracts and that its technology is strategically sensitive enough to warrant protection. Both arguments are probably correct, which is what makes the legal and political situation so tangled.
The broader question raised by both stories is whether the tools available to the US for managing the AI competition are adequate to the task. Export controls slow but do not stop Chinese hardware development. Commercial API terms of service are unenforceable against state-backed research programmes. Soft power initiatives take years to compound. And the open-source releases that have democratised AI globally also make it impossible to put frontier capabilities behind a wall. The US is competing on multiple tracks simultaneously, with tools designed for different problems, against an adversary that is patient, well-resourced, and increasingly capable of moving without American inputs.