AI Daily
Opinion

The Encryption Wars Are Back, and We're Going to Lose Again

By Peter Harrison • April 18, 2026

I remember reading about the encryption wars as a young developer. Phil Zimmermann released PGP to the world in 1991 and spent three years under criminal investigation for allegedly exporting munitions without a licence. RSA Security had to ship two versions of its software: a domestic version with real security, and an export version deliberately weakened to 40-bit keys that a modestly resourced attacker could break in days. The US government had classified strong encryption as a munition. The rationale was national security. Keep strong crypto away from adversaries, the theory went, and America maintains an intelligence advantage.
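To put that 40-bit limit in perspective, here is a back-of-envelope sketch. The attacker throughput figure is an illustrative assumption for 1990s-era hardware, not a historical benchmark:

```python
# Worst-case brute-force time for an export-grade 40-bit key versus a
# modern 128-bit key. KEYS_PER_SECOND is an assumed, illustrative rate
# for a modestly resourced 1990s attacker.

KEYS_PER_SECOND = 10**6  # assumption: one million trial decryptions/sec
SECONDS_PER_DAY = 86_400

def brute_force_days(key_bits: int, keys_per_second: int = KEYS_PER_SECOND) -> float:
    """Days to exhaust the entire keyspace of the given bit length."""
    return (2 ** key_bits) / keys_per_second / SECONDS_PER_DAY

print(f"40-bit:  {brute_force_days(40):.1f} days")   # ≈ 12.7 days
print(f"128-bit: {brute_force_days(128):.1e} days")  # ≈ 3.9e27 days
```

Every extra key bit doubles the attacker's work, which is why the gap between "export grade" and "real security" was not a matter of degree but of kind.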

The outcome was something else entirely. Adversaries got strong encryption anyway, because the mathematics is public knowledge. American software companies lost market share to European competitors who faced no such restrictions. The internet got built on weaker security than it deserved, because anyone who needed genuine encryption for legitimate business reasons couldn't reliably get it from US vendors. The controls were eventually unwound, quietly, after roughly a decade, by which point the damage to American software competitiveness could not be undone.

We are doing it again.

The Trump administration's chip export regime runs on the same logic and, I suspect, will produce the same outcome. The theory is that America can win the AI race by controlling access to the hardware that runs AI. Keep the most powerful chips away from China, gate access for everyone else through licensing, and US strategic dominance is secured. On paper, it sounds like exactly the kind of geopolitical leverage a dominant technology producer should be using.

In practice, Bloomberg reported this week that the Commerce Department's Bureau of Industry and Security is drowning in licence applications it cannot process. Staff have left; policy direction from the top has been inconsistent enough that major rules have been proposed, withdrawn, and re-proposed within the same month. Companies waiting for export permits have been given no reliable timelines. Some have stopped waiting and are adjusting their supply chains to route around the bottleneck, in ways that do not always benefit American manufacturers. Nvidia and AMD are watching sales stall in markets that have lost patience and are looking at alternatives.

Meanwhile, what are the mathematical foundations of modern AI? They are in published papers, many of them from American universities, openly accessible to anyone with internet access. You cannot embargo PyTorch. You cannot classify the transformer architecture. The research is public because that is how science works, and the American university system's openness is a large part of why it produces so much of the foundational AI research to begin with. The things you would most want to export-control, you largely cannot. What you can export-control is the physical hardware: the chips themselves and the equipment that manufactures them. Controlling hardware buys delay. That delay is real. It is also bounded.

China's semiconductor programme is years behind TSMC and ASML. But "years behind" is not "permanently excluded." Each year the gap narrows. Export controls can slow that process; they probably cannot stop it, especially when the underlying technical knowledge is freely available in the open literature. What the controls do reliably is raise the cost of doing business for American companies trying to compete today, not for the adversaries the policy is designed to contain.

I want to be fair to the argument for some version of these controls. Not every analogy to the encryption wars holds perfectly. Physical manufacturing capacity is genuinely harder to replicate than a mathematical algorithm, and specific chip generations can matter at specific moments in a geopolitical competition. There is a version of chip export controls that is coherent, targeted, and administratively competent. That version does not currently exist.

What exists is an agency that cannot staff the permitting function, under an administration that keeps changing the rules, producing a regime that is already pushing friendly-nation buyers toward alternatives and providing symbolic political cover rather than strategic delay. The gap between the ambition and the execution is where the damage happens.

The other symptom of the same disease showed up this week in the form of Anthropic and OpenAI both opening major London offices within three days of each other. That is not a coincidence. American political turbulence, including the Pentagon's blacklisting of Anthropic over a disagreement about autonomous weapons policy, is actively redirecting AI investment and talent toward places that have more coherent policies. The chip export bottleneck and the UK office announcements are different expressions of the same failure mode: treating AI as a thing to be controlled rather than a capability to be developed.

The encryption wars took about a decade to unwind. The policy eventually collapsed under its own weight, when the domestic damage became undeniable and the strategic benefit remained theoretical. I expect the same trajectory here. What I am less sure about is whether, when these controls are eventually wound back, the American companies that were supposed to be their beneficiaries will still be the ones setting the pace.