On April 24, DeepSeek released its V4 model, the most significant update to its flagship architecture since January 2025's R1 release, which briefly rattled global stock markets and forced a reassessment of the gap between American and Chinese AI development. V4 is again open source, again claims to rival leading closed-source systems from OpenAI, Anthropic, and Google, and again provoked a more muted reaction than its predecessor. But buried in the release notes was an announcement that deserves attention beyond the benchmark comparisons: V4 was built to run on Huawei's Ascend processors, using Huawei's own CANN software stack. It is the first time DeepSeek has officially declared a deep hardware-software collaboration with the Chinese telecom giant, and it marks a genuinely new moment in the story of China's technological self-reliance.
The significance is worth unpacking carefully. DeepSeek's previous models ran primarily on NVIDIA GPUs, using CUDA, the American software ecosystem that dominates AI training globally. That arrangement always contained a vulnerability: if US export controls tightened further, or if access to NVIDIA hardware became more restricted, the entire pipeline was at risk. The V4 release demonstrates that DeepSeek can now produce a competitive frontier model end to end on Chinese silicon. Huawei's Ascend line is not yet equivalent to NVIDIA's most advanced chips, but that is a different question from whether it is capable enough to support frontier model development. Evidently, it is.
The Council on Foreign Relations' assessment captures the tension well. V4 is a significant model: new architecture, open source, competitive benchmarks, and trained on a hardware stack that two years ago was considered inadequate for the task. At the same time, the gap with American frontier models has not closed. The Straits Times reported that independent benchmarking suggests China's leading models remain a matter of months behind their US counterparts, and that V4 does not appear to have fundamentally changed this trajectory. OpenAI released GPT-5.5 just hours before DeepSeek announced V4, timing that the geopolitically aware will note was probably not coincidental.
The White House has added a new layer to the rivalry. Its Office of Science and Technology Policy issued a statement accusing foreign entities, widely understood to mean Chinese actors, of conducting large-scale efforts to extract knowledge from US frontier models. The accusation reflects a genuine concern: open-source AI models, including DeepSeek's own previous releases, create an environment in which the outputs of expensive closed-source training runs can be used to distil cheaper competing models. Congressional hearings have aired similar concerns, with the House Select Committee on the CCP arguing that China is both buying what it legally can under existing export control regimes and stealing what it cannot. The witnesses called to testify at those hearings, however, pushed back on framing DeepSeek as a theft story. They pointed out that the fundamental mathematical and scientific basis for modern AI is public, and that DeepSeek's actual innovations are real and well-documented.
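The distillation mechanism the statement alludes to is simple in principle: a smaller "student" model is trained to match the output distribution of a larger "teacher" model, and for that the teacher's outputs are enough; its weights are never needed, which is why mere API access to a closed model raises these concerns. A minimal sketch of the standard distillation loss (the function names and temperature value here are illustrative, not a description of any lab's actual pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution, optionally softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    Only the teacher's outputs appear here -- the student learns to imitate
    what the teacher says, not how the teacher computes it.
    """
    p = softmax(teacher_logits, temperature)  # teacher's "soft labels"
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero when the student reproduces the teacher's distribution exactly and grows as the two diverge; a training loop would minimize it over many prompts, which is why large-scale querying of a frontier model is the pattern the accusation describes.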
What makes the Huawei collaboration more interesting than the benchmark numbers is what it signals about the structure of China's AI ecosystem. Building a competitive AI model on domestic chips, using domestic software infrastructure, sourced from a company that has been under severe US sanctions since 2019, is precisely the kind of vertically integrated capability that China's government has been trying to develop across multiple strategic technology domains. It is not a story of catching up. It is a story of building a parallel system that no longer depends on the one it is competing with.
There are important caveats. Full AI self-reliance, as the Straits Times analysis noted, may still be some years away. Training the very largest models at the frontier remains harder on Huawei's hardware than on NVIDIA's. The software ecosystem around CANN is less mature than the CUDA ecosystem built up over nearly two decades. And the gap at the absolute frontier still matters: months behind in AI development, in a domain where each new generation of models is noticeably more capable than the last, is a real disadvantage. DeepSeek's efficiency innovations, which allow competitive performance at lower compute cost, partly compensate for this. But "partly" is the operative word.
The story that emerges from V4, viewed alongside the Manus reversal and the ongoing trade confrontation, is one of two AI ecosystems accelerating their separation. Each incident makes the next incident slightly easier to understand: once decoupling begins, every subsequent decision reinforces the direction. The Huawei-DeepSeek stack is not yet a replacement for the NVIDIA-CUDA stack. But it is no longer a thought experiment either.