Amazon CEO Andy Jassy published his annual shareholder letter on Thursday, and while the headline number is the $200 billion capital expenditure commitment Amazon is making in 2026, the more interesting read is what the letter says about who Amazon thinks it is competing with. It is not just the other cloud providers. The letter takes direct aim at Nvidia for chips, Intel for processors, and SpaceX's Starlink for connectivity. Jassy is laying out a case that Amazon should build as much of its own stack as possible, and that the stack it has built is already generating returns large enough to justify the investment.
The chip claims are the most specific. Amazon's custom Trainium chip, designed for AI training workloads, is presented not just as a cost-saving measure but as a business in its own right. Jassy writes that if Amazon were selling Trainium to outside customers the way Nvidia sells GPUs, the chip division would be running at roughly $50 billion in annualised revenue. That is a striking claim, not because the number is necessarily precise, but because the framing reveals how Amazon thinks about the asset. It is not an internal cost tool. It is a product that Amazon happens to be consuming exclusively for now.
Graviton, Amazon's custom CPU designed as an alternative to Intel's x86 architecture, is further along in demonstrating external market pull. Jassy writes that Graviton is now used by 98 per cent of AWS's top 1,000 EC2 customers. Two companies, he says, asked to buy Amazon's entire available Graviton instance capacity for 2026. That is an unusual signal: customers competing for access to Amazon's own hardware rather than the dominant industry standard they have built their infrastructure on for decades.
The $200 billion figure is defended through the lens of customer commitments rather than speculative demand. Jassy's key example is the OpenAI deal: as part of the agreement that made AWS OpenAI's primary cloud provider, OpenAI pledged to spend $100 billion on AWS infrastructure. A single customer commitment worth $100 billion, in Jassy's framing, goes a long way toward explaining why $200 billion in total infrastructure investment makes sense. "We're not investing approximately $200 billion in capex in 2026 on a hunch," he writes.
The strategic argument running through the letter is that dependence on external vendors in critical infrastructure is a competitive liability, and that building your own components at sufficient scale transforms that liability into an advantage. Amazon's willingness to invest in custom silicon at a time when Nvidia's market dominance and pricing power are at their peak is a bet that the long-term economics of owning the stack outweigh the short-term costs of building it. Whether that bet pays out depends heavily on whether Trainium can match Nvidia's training performance closely enough to justify the switching cost for Amazon's largest AI workloads.
The letter also serves as a direct counter-narrative to concerns about Amazon's AI position relative to Microsoft and Google. Both competitors have deeper model partnerships at the frontier: Microsoft with OpenAI, Google with its own Gemini family. Amazon's answer is that infrastructure matters more than models at this stage, and that AWS's scale, custom silicon, and breadth of model partnerships (which include Anthropic and many third-party models via Bedrock, not just OpenAI) position it to benefit from AI adoption regardless of which models ultimately dominate. It is an infrastructure-first argument at a moment when much of the industry conversation is focused on model capability. Whether a lead in infrastructure or a lead in models ultimately matters more in AI is one of the genuinely open strategic questions in the industry right now.