Alibaba and China Telecom have launched a data centre in southern China running 10,000 of Alibaba's own Zhenwu 810E AI chips, produced by the company's T-Head semiconductor division. The chips are designed for AI training and inference and are capable of supporting models with hundreds of billions of parameters. The announcement, reported by CNBC on April 8, represents a significant production milestone for China's domestic AI chip ecosystem: not a lab benchmark or a roadmap slide, but a running commercial facility.
T-Head has been developing custom silicon for several years, primarily for internal use within Alibaba's cloud infrastructure. The Zhenwu 810E is described as purpose-built for large-model workloads, able to handle both training and inference for frontier-scale models. Alibaba also recently unveiled a CPU designed specifically for agentic AI workloads, suggesting a deliberate strategy of building a complete domestic compute stack rather than optimising for a single use case.
The context for this investment is straightforward: since October 2022, US export controls have blocked Chinese companies' access to NVIDIA's most capable chips, a restriction that now covers the H100 and H200, and the rules have tightened repeatedly. The effect has been to accelerate, rather than inhibit, domestic Chinese chip development. Companies that might have continued buying from NVIDIA indefinitely have instead had to develop alternatives. ByteDance and Alibaba have also reportedly been planning orders for Huawei's new AI chip, further broadening the domestic supply base. China is not converging on a single domestic chip supplier; it is building an ecosystem.
The 10,000-chip deployment matters because it moves the Zhenwu from a product announcement to a production workload. Data centre operations at that scale reveal engineering problems that benchmarks do not: thermal management, interconnect performance, reliability across distributed training runs, the software stack needed to make the chips usable for model developers. Running this infrastructure in a real commercial facility operated by China Telecom is a different kind of validation from anything that happens in a lab.
What the announcement does not tell us is how the Zhenwu 810E compares to NVIDIA's current generation hardware on the metrics that matter most for frontier model training: raw compute throughput, memory bandwidth, and the interconnect performance that determines how well a cluster scales. Chinese domestic chips have historically traded at a meaningful performance disadvantage against NVIDIA's leading products. The gap matters because training a frontier model on slower hardware is not just slower; it is more expensive per unit of compute, which affects what is economically viable to attempt.
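The cost point can be made concrete with simple arithmetic. The sketch below is purely illustrative: the normalised chip-hour price and the 65 per cent relative throughput are hypothetical assumptions for the sake of the example, not figures from the announcement or from any vendor.

```python
# Back-of-envelope: cost per unit of effective training compute when a
# chip delivers only a fraction of a reference chip's throughput.
# A run that needs a fixed amount of total compute takes 1/r times as
# many chip-hours on hardware with relative throughput r, so the cost
# per unit of effective compute scales as price / r.

def cost_per_effective_compute(hourly_price: float, relative_throughput: float) -> float:
    """Cost per unit of effective compute, normalised so a reference
    chip (throughput 1.0, price 1.0 per chip-hour) costs 1.0."""
    return hourly_price / relative_throughput

# Reference accelerator: normalised price 1.0, throughput 1.0.
reference = cost_per_effective_compute(1.0, 1.0)

# Hypothetical domestic alternative at 65% of reference throughput:
# at the same chip-hour price, each unit of effective compute costs
# roughly 1.54x the reference.
same_price = cost_per_effective_compute(1.0, 0.65)

# To break even on cost per effective compute, the slower chip's
# price per chip-hour must fall in proportion to its throughput.
breakeven = cost_per_effective_compute(0.65, 0.65)
```

The arithmetic cuts both ways: a throughput gap makes training more expensive per unit of compute only if chip-hour prices do not fall by at least the same proportion.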
The more interesting question may not be parity but sufficiency. China's leading AI companies, including Alibaba, Baidu, and ByteDance, are producing competitive models and deploying them at scale. If those models are being built on domestic hardware that delivers 60 or 70 per cent of NVIDIA's performance at a fraction of the geopolitical risk, the strategic calculus for Chinese companies shifts considerably. The US export control strategy was premised on the assumption that compute scarcity would slow China's AI development. The Zhenwu deployment is one more piece of evidence that the scarcity is being worked around rather than accepted.