Nvidia's stock closed Friday at an all-time high, pushing the company's market capitalisation past $5 trillion for the first time. This is a company that, two years ago, was worth roughly a tenth of that. Its AI chips now account for 81% of the total AI chip market, according to IDC estimates, a dominance that has held despite growing competition from Intel, AMD, and Broadcom. The company projects $1 trillion in chip sales from its Blackwell and Vera Rubin architectures across 2026 and 2027. By almost any measure, Nvidia's position looks impregnable.
The question worth asking is who, exactly, constitutes a threat at this scale. The obvious candidates, AMD and Intel, have secured meaningful contracts with major AI builders, but neither is close to denting Nvidia's market share in any serious way. AMD expects its data centre chip revenue to reach $100 billion annually by 2030, a figure that sounds large until you set it against Nvidia's own projection: $1 trillion across two years works out to roughly five times that every year. Broadcom's application-specific integrated circuits (ASICs) are a more interesting story, but they serve specific inference workloads rather than the broad training use cases where Nvidia is strongest.
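A back-of-the-envelope comparison makes the gap concrete. The dollar figures are the ones cited above; spreading Nvidia's two-year projection evenly across both years is an illustrative assumption, not guidance from either company:

```python
# AMD: $100B/year in data centre chip revenue expected by 2030.
amd_annual_2030 = 100e9

# Nvidia: $1T in Blackwell / Vera Rubin sales projected across 2026-2027,
# i.e. roughly $500B per year if spread evenly (illustrative assumption).
nvidia_two_year_total = 1e12
nvidia_annual = nvidia_two_year_total / 2

# Nvidia's projected annual run rate vs AMD's 2030 target.
ratio = nvidia_annual / amd_annual_2030

print(f"Nvidia ~${nvidia_annual/1e9:.0f}B/yr vs AMD ~${amd_annual_2030/1e9:.0f}B/yr "
      f"({ratio:.0f}x)")
```

Even on AMD's own 2030 target, in other words, it would be selling a fifth of what Nvidia expects to sell per year three to four years earlier.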
The more structurally interesting challenge is coming from somewhere else entirely: the hyperscalers. Google and Amazon are both building custom silicon specifically designed to run AI workloads, and both are making that silicon available across their cloud infrastructure. What makes this different from ordinary chip competition is the alignment of incentives: these companies are among Nvidia's largest customers. Every dollar they spend training and running models on their own custom chips is a dollar that doesn't go to Nvidia. They're not trying to sell competing chips; they're trying not to have to buy Nvidia's.
Google's trajectory here is serious. Its Tensor Processing Units, first deployed for internal workloads in 2015, are now on their seventh generation. The latest, called Ironwood, delivers a claimed 4x improvement in performance per chip for both training and inference over its predecessor, and by some accounts has closed the performance gap with Nvidia's Blackwell processors. Google also offers Axion CPUs, Arm-based custom processors it claims are twice as cost-efficient as comparable Intel or AMD x86 chips. Google's infrastructure advantage is that it builds, trains, and deploys models internally; its TPUs don't need to compete in an open market, only to be cheaper and good enough for Google's own enormous workloads.
Amazon's case study is pointed. In its recent shareholder letter, CEO Andy Jassy drew a direct parallel to what happened with CPUs. Amazon released its custom Graviton processor in 2018, when Intel chips were the default. Today, 98% of Amazon's large clients use Graviton CPUs. Jassy says the company expects something similar to play out with AI training, using its Trainium chips. The current Trainium generation offers roughly 30% better cost-performance than GPU-based training. Capacity for future generations is already sold out. Amazon is careful to say it remains committed to supporting Nvidia on its platform as well, which is the sensible hedge: it's competing with Nvidia while still being one of Nvidia's biggest distribution channels.
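One subtlety in that 30% figure is worth spelling out: "30% better cost-performance" does not mean a 30% smaller bill. A toy calculation, using the article's 30% claim and a normalised GPU baseline as the only inputs:

```python
# Normalised cost of a GPU-based training run (illustrative baseline).
gpu_cost = 1.0

# "Roughly 30% better cost-performance" = 1.3x performance per dollar,
# so the same training job costs 1/1.3 of the GPU baseline.
trainium_cost = gpu_cost / 1.3

# The bill for identical work shrinks by about 23%, not 30%.
savings = 1 - trainium_cost

print(f"Same training job: {trainium_cost:.2f}x the GPU cost "
      f"(~{savings:.0%} saved)")
```

A roughly 23% cut in one of the largest line items in an AI budget is still easily enough to move workloads.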
The analogy to Intel is instructive and slightly chilling. Intel spent years as the default CPU vendor for the data centre. Amazon's Graviton, built on Arm architecture rather than Intel's x86, didn't beat Intel through superior marketing. It beat Intel by being cheaper for the workloads that actually mattered inside Amazon's infrastructure. That quiet internal displacement was essentially complete by the time anyone outside was paying close attention. The same dynamic is already running inside Google and Amazon data centres with AI chips.
None of this is Nvidia's immediate problem. The AI investment cycle is accelerating, demand for its chips is extraordinary, and neither Google nor Amazon is anywhere near replacing all of the Nvidia capacity inside its own systems. Nvidia's advantage is breadth: its chips run the widest range of AI workloads, its software ecosystem (CUDA) has a decade-long head start, and training new frontier models still largely depends on Nvidia hardware. The custom-silicon threat is long-horizon: probably five to ten years before it materially shifts market share. But the precedent set by what happened to Intel in CPUs is one Nvidia investors should keep in mind while counting their gains at $5 trillion.