A year after DeepSeek upended the conventional wisdom that China was hopelessly behind in AI, Chinese labs have consolidated that surprise into something more durable: a strategic commitment to open-source models that is quietly eroding the commercial foundations of the US AI industry. The releases keep coming — Alibaba's Qwen3.5, Moonshot's Kimi K2.5, Baidu's Ernie 4.5 and Ernie X1 — and the pattern is consistent. Competitive performance, free or low-cost access, open weights. It's a playbook US labs have largely declined to follow, and it may be costing them.
The mechanism is straightforward. When an enterprise can download a capable open-weight model, run it on its own infrastructure, and customise it freely, the case for paying premium subscription prices to OpenAI or Anthropic weakens. Google DeepMind CEO Demis Hassabis acknowledged in January that China is now "months" behind the US frontier — not years. Months is close enough that a price-sensitive enterprise buyer might reasonably conclude the gap isn't worth the cost differential.
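The cost calculus behind that buyer's decision can be made concrete with a rough back-of-envelope comparison. Every number below — tokens per month, the per-million-token API price, the GPU rental rate — is a hypothetical placeholder chosen only to show the shape of the trade-off, not a quote from any vendor.

```python
# Hypothetical break-even sketch: metered API vs. self-hosted open-weight model.
# All figures are illustrative assumptions, not real vendor prices.

def api_monthly_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Monthly cost of a metered API at a flat per-token price."""
    return tokens_per_month / 1_000_000 * price_per_million

def self_host_monthly_cost(gpu_hourly_rate: float, hours: float = 730) -> float:
    """Monthly cost of renting GPU capacity around the clock (~730 hours/month)."""
    return gpu_hourly_rate * hours

# Assumed workload: 2 billion tokens/month at a $10-per-million-token API price,
# versus one rented GPU node at an assumed $8/hour all-in rate.
api = api_monthly_cost(2_000_000_000, 10.0)
local = self_host_monthly_cost(gpu_hourly_rate=8.0)

print(f"API: ${api:,.0f}/mo, self-hosted: ${local:,.0f}/mo")
# → API: $20,000/mo, self-hosted: $5,840/mo
```

The point of the sketch is not the specific figures but the structure: API cost scales linearly with usage while self-hosting is roughly flat, so past some volume the open-weight option wins on price even before capability parity.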
Alibaba's Qwen3.5 illustrates how quickly the goalposts are moving. Released in February, the model focuses on agentic capabilities — multi-step tool use, autonomous task completion — positioning it directly against the enterprise agent market that US labs have been racing to capture. Moonshot, for its part, claimed that Kimi K2.5 outperforms all three leading US models on video generation and agentic tasks. These claims should be read with some caution — benchmark comparisons are easily gamed — but the direction of travel is real.
The Washington Post's analysis put the strategic picture bluntly: China now leads the US in open-source AI. That's not a statement about who has the most powerful model — US frontier labs, particularly Google and Anthropic, still hold an edge on the absolute capability frontier. It's a statement about ecosystem reach. Chinese open-source models are downloaded and deployed more widely than their American equivalents. Meta's Llama is the notable US exception, but Meta is a single company making a strategic bet; it doesn't represent a coordinated US approach.
There's a deeper game here too. Open-source releases build developer ecosystems, create training data feedback loops, and establish cultural familiarity with a platform in ways that proprietary products can't easily replicate. Chinese labs appear to have concluded — correctly or not — that building the world's most widely used open-weight models is a better path to AI influence than trying to beat US labs on raw benchmark numbers. The US government's chip export controls were designed to slow this down. The evidence so far suggests they have slowed it, but not stopped it.
The honest answer to whether this is a "real threat or hype" — as CNBC's newsletter framed it in February — is that it's both, depending on the time horizon. In the near term, US frontier models remain more capable for the most demanding applications. In the medium term, if Chinese open-source models continue improving at current rates while US closed-model pricing stays high, the commercial argument for US dominance gets harder to make. That's not a crisis, but it's a structural pressure that US labs haven't yet found a convincing answer to.