AI Daily
Models • April 25, 2026

DeepSeek V4 Arrives With a Price Tag That Changes the Conversation

By AI Daily Editorial • April 25, 2026

DeepSeek released V4 on April 24, and by the end of the day shares in SMIC, the Chinese chipmaker that manufactures Huawei's Ascend AI processors, had jumped 10 percent. Competing Chinese AI startups MiniMax and Knowledge Atlas dropped over 9 percent each. The market read the announcement correctly: this is not just another model release. It is a statement about where Chinese AI is headed, and at what cost.

V4 comes in two variants. V4-Pro packs 1.6 trillion parameters and supports a one-million-token context window, making it one of the largest publicly available models by parameter count. V4-Flash is a smaller, faster, cheaper sibling aimed at high-volume applications. DeepSeek claims V4-Pro outperforms every open-source competitor on agentic coding and reasoning, and that it comes close to, but does not match, frontier closed-source models. DeepSeek's own technical documentation is refreshingly candid on this point, describing a developmental trajectory that "trails state-of-the-art frontier models by approximately three to six months." That is more honest positioning than most AI companies offer.

The pricing is where V4 becomes genuinely disruptive. V4-Pro costs $3.48 per million output tokens; OpenAI charges around $30 per million, and Anthropic charges $25. V4-Flash sits at $0.28 per million, a figure that puts it in competition with the smallest, fastest models in the Western field rather than the large reasoning ones. DeepSeek says those prices will fall further as Huawei scales production of its Ascend 950 processors. The model is also released as an open-source preview, meaning developers can download, modify, and run it locally with no per-token fees.
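To make the gap concrete, here is a back-of-the-envelope comparison using the per-million-output-token prices quoted above. The 500-million-token monthly volume is a hypothetical workload, and the model names are illustrative labels, not real API identifiers:

```python
# Per-million-output-token prices as reported in this article.
# Labels are illustrative, not actual API model identifiers.
PRICES_PER_MILLION_OUTPUT = {
    "deepseek-v4-pro": 3.48,
    "deepseek-v4-flash": 0.28,
    "openai-frontier": 30.00,   # "around $30" per the article
    "anthropic-frontier": 25.00,
}

def monthly_cost(output_tokens: int, price_per_million: float) -> float:
    """USD cost for a given number of output tokens at a given rate."""
    return output_tokens / 1_000_000 * price_per_million

# Hypothetical workload: 500M output tokens per month.
VOLUME = 500_000_000

for model, price in PRICES_PER_MILLION_OUTPUT.items():
    print(f"{model:>20}: ${monthly_cost(VOLUME, price):>10,.2f}")
```

At that volume, the spread runs from $140 a month on V4-Flash to $15,000 on a $30-per-million frontier model, which is the roughly order-of-magnitude difference driving the market reaction.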

The Huawei chip integration is arguably the most significant long-term element of this release. DeepSeek trained V4 on Huawei's Ascend processors rather than Nvidia or AMD hardware, and Huawei announced "full support" for DeepSeek's model family. US export controls have spent two years trying to restrict China's access to cutting-edge AI training hardware. V4 is evidence that those controls have accelerated domestic alternatives rather than halting progress. The Trump administration responded within hours, announcing a crackdown on Chinese companies "exploiting" US AI models, but the structural point stands: a competitive Chinese AI supply chain now exists from chip to model.

For developers outside China, the practical question is whether the price-to-performance ratio justifies the tradeoffs. V4-Pro's three-to-six-month lag behind frontier models matters for applications where state-of-the-art reasoning is the hard requirement. But for a wide range of production workloads, including document processing, code generation, and enterprise knowledge tasks, the capability gap is unlikely to be decisive, and the cost difference is. The open-source release also means developers can download the model's weights and run fine-tuned versions without any ongoing API dependency.

The deeper tension this release surfaces is about who gets to set the price floor for intelligence. For the past two years, that has effectively been OpenAI and Anthropic. DeepSeek is making the same argument it made with V1 and V2: that high-quality AI does not have to cost what Western labs charge, and that the assumption it does is a business decision, not a technical necessity.