AI Daily
[Image: Server racks bathed in blue light, seen from below at an angle, suggesting scale and industrial power]
Models • April 26, 2026

DeepSeek V4's Real Message: China's AI Stack Is Going Independent

By AI Daily Editorial • April 26, 2026

When DeepSeek released its V4 model on Friday, the technical benchmarks attracted the usual round of comparisons with OpenAI and Google. What deserved more attention was a single line in Huawei's press release: its Ascend AI processors now offer "full support" for V4. That combination, a frontier-class open-source model running natively on Chinese chips, is a different kind of milestone than any benchmark score.

The model itself is formidable. DeepSeek released two versions: V4-Pro, with 1.6 trillion parameters and a 1-million-token context window, and V4-Flash, a lighter variant at 284 billion parameters. Both use a mixture-of-experts architecture, activating only a fraction of parameters per query to keep inference costs low. The company claims V4-Pro "falls marginally short" of OpenAI's GPT-5.4 and Google's Gemini 3.1 Pro, a gap it pegs at roughly three to six months behind the frontier. On agentic coding benchmarks, it says both V4 models perform comparably to GPT-5.4, and it specifically calls out strong results against Anthropic's Claude.
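The mixture-of-experts idea is what makes a 1.6-trillion-parameter model affordable to serve: a gating function routes each token to only a few "expert" sub-networks, so compute per query is a fraction of the full parameter count. The following is a minimal illustrative sketch of top-k routing in general, not DeepSeek's actual implementation; the expert count, gating function, and sizes are invented for the example.

```python
# Minimal sketch of mixture-of-experts (MoE) routing.
# Everything here is illustrative, not DeepSeek's architecture.

def top_k_experts(gate_scores, k=2):
    """Return the indices of the k highest-scoring experts for one token."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return ranked[:k]

def moe_layer(token, experts, gate, k=2):
    """Route a token to k experts and blend their outputs by gate weight."""
    scores = gate(token)
    chosen = top_k_experts(scores, k)
    total = sum(scores[i] for i in chosen)
    # Only the chosen experts execute, so per-token compute scales with k,
    # not with the total number of experts held in memory.
    return sum(scores[i] / total * experts[i](token) for i in chosen)

# Toy example: 8 "experts" that each just scale their input differently,
# and a gate that happens to prefer experts 2 and 5.
experts = [lambda x, m=m: m * x for m in range(1, 9)]
gate = lambda x: [1.0 if i in (2, 5) else 0.0 for i in range(8)]

print(moe_layer(10.0, experts, gate))  # blends 2 of 8 experts -> 45.0
```

The point of the sketch is the cost structure: memory holds all experts, but each query pays only for the k it activates, which is how sparse models keep inference cheap relative to their headline parameter counts.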

The pricing is harder to dismiss. DeepSeek is charging $3.48 per million output tokens for V4-Pro. OpenAI and Anthropic charge $30 and $25 respectively for comparable top-tier models. Even other Chinese labs charge more: Moonshot AI's Kimi costs $4 per million tokens. The smaller V4-Flash costs just $0.28. While the broader AI sector has been raising prices and imposing rate limits to manage demand, DeepSeek is moving in the opposite direction, and expects prices to fall further as Huawei scales up production of its new Ascend 950 chips.
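The gap is easy to quantify from the figures above. A quick sketch using only the per-million-token output prices quoted in this article (a simplification: it ignores input-token pricing, caching discounts, and rate limits):

```python
# Output-token prices per million tokens, as quoted in the article.
# Simplified comparison: input-token pricing and caching are ignored.
PRICE_PER_M = {
    "DeepSeek V4-Pro": 3.48,
    "DeepSeek V4-Flash": 0.28,
    "OpenAI (top tier)": 30.00,
    "Anthropic (top tier)": 25.00,
    "Moonshot Kimi": 4.00,
}

def cost(provider: str, output_tokens: int) -> float:
    """Dollar cost of generating output_tokens at a provider's quoted rate."""
    return PRICE_PER_M[provider] / 1_000_000 * output_tokens

# Generating 10 million output tokens at each quoted rate:
for name, _ in PRICE_PER_M.items():
    print(f"{name}: ${cost(name, 10_000_000):.2f}")

# OpenAI's quoted top-tier rate relative to V4-Pro:
print(round(PRICE_PER_M["OpenAI (top tier)"] / PRICE_PER_M["DeepSeek V4-Pro"], 1))  # 8.6
```

On these list prices, V4-Pro undercuts the quoted OpenAI rate by roughly 8.6x and the Anthropic rate by about 7.2x, while V4-Flash is more than a hundred times cheaper than either.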

Market reaction was telling. Shares in Chinese chip manufacturer SMIC jumped 10% in Hong Kong trading, as did Hua Hong Semiconductor. The Ascend processors Huawei uses in its AI clusters are manufactured on SMIC's process nodes, so a major customer just validated their production path. Meanwhile, shares in MiniMax and Knowledge Atlas, two of DeepSeek's domestic rivals, fell 8-9%. The new model lands in a very different competitive environment than R1 did: not against a handful of US labs, but against a crowded field of Chinese open-source alternatives.

The Huawei connection is the story that matters most strategically. DeepSeek's previous models almost certainly relied on Nvidia hardware for training, likely including chips sourced through channels that US export controls were designed to block. If V4 was genuinely trained at scale on Ascend processors, that represents a meaningful step toward AI sovereignty for China. Wei Sun, principal AI analyst at Counterpoint Research, said V4's ability to run on local chips "could have massive implications, helping Beijing achieve more AI sovereignty and further reduce reliance on Nvidia." The caveat is that it remains unclear how extensively Huawei chips were used in training versus inference: the company has announced compatibility, not exclusivity.

Nvidia's CEO Jensen Huang put it plainly on a podcast last week: "The day that DeepSeek comes out on Huawei first, that is a horrible outcome for the US." It is not clear that day has fully arrived, but Friday's release is the clearest signal yet that it is approaching. The US has spent years trying to constrain China's AI development through chip export controls. The working assumption was that cutting off Nvidia hardware would slow things down. DeepSeek keeps finding ways to make that assumption look optimistic.

One year after R1 shook global markets, the follow-up has arrived without the same shock value, because the world has already updated its expectations. But the underlying trajectory is unchanged: Chinese AI is getting better, cheaper, and less dependent on American infrastructure, all at the same time.
