AI Daily
Hardware • March 20, 2026

The Memory Chip Shortage Will Last Until 2030. Here's Why That Matters.

By AI Daily Editorial • March 20, 2026

The chairman of SK Hynix, one of the world's three dominant memory chip manufacturers, said this week that the global shortage of high-bandwidth memory is unlikely to be resolved until around 2030. That is a remarkably long forecast horizon for a supply constraint, and it deserves unpacking — because the shortage is not just an AI infrastructure story. It has already begun spreading into smartphones, cars, and consumer electronics, and the price consequences are only starting to show up in products ordinary people buy.

High-bandwidth memory, or HBM, is the specific category of chip that AI accelerators like NVIDIA's GPUs depend on. It is physically stacked in dense three-dimensional configurations — Micron describes the process as building a "cube" of twelve to sixteen memory layers — and producing it is slow, expensive, and incompatible with conventional DRAM production lines. This last point is the crux of the shortage: when a fab makes HBM, it sacrifices the capacity to make conventional memory. Micron has said it can currently meet only about two-thirds of medium-term demand for some customers. Samsung, which tripled its quarterly profits in part because of HBM pricing power, still cannot produce enough. SK Hynix, which has the most advanced HBM process, is sold out through next year.

DRAM prices rose between fifty and fifty-five percent in the first quarter of 2026 compared to the previous quarter, a rate analysts describe as unprecedented outside of acute supply shocks. The difference from previous cycles is structural. Memory has historically been a boom-bust industry: prices spike, manufacturers invest in new capacity, supply catches up, prices collapse, investment falls, and the cycle repeats roughly every three to four years. What several executives are now saying is that this cycle is broken. AI demand is growing faster than any plausible capacity expansion can keep pace with, and the capital investment cycle for new HBM capacity is long: fabs take years to build and qualify.

The knock-on effects are already visible. Tesla and Apple are among the corporations that have signalled production constraints linked to DRAM availability. Smartphone prices are forecast to rise through 2026 as manufacturers compete with data centres for the same underlying memory supply. Cars with advanced driver assistance systems — which require substantial on-board memory — face similar constraints. The memory shortage that began as an AI infrastructure problem is becoming a general consumer electronics problem, and the companies at the top of the supply chain have every incentive to prioritise their highest-margin customers, which are the hyperscalers, not the handset makers.

What makes the 2030 timeline significant is not just the duration — it is what it implies about the pace of AI infrastructure buildout. If the people who make the memory chips say demand will outstrip supply for four more years, they are implicitly forecasting that the AI capital expenditure cycle will remain intense through the end of the decade. That aligns with the announced spending plans of Microsoft, Google, Meta, and Amazon, all of which have committed to multi-year infrastructure programmes. The memory constraint is, in a strange way, a vote of confidence in how seriously the industry takes its own AI roadmaps.

The longer-term question is whether the shortage reshapes AI architecture. Engineers are already experimenting with models that require less memory bandwidth per inference operation, and there is active research into memory technologies that might eventually compete with HBM on performance without the same production constraints. But those alternatives are years from commercial scale. For now, the shortage is a hard physical limit on how fast the AI buildout can proceed — and the people closest to that limit are saying it is not going away soon.
