The AI industry has now committed over $140 million to shaping the 2026 midterms. Andreessen Horowitz alone has put in $25 million this week, pushing their total to over $51 million. I am not writing to express outrage. In a political system where money is protected speech, this is rational behaviour. What I am writing about is what it closes off, and why that matters more than the money itself.
The regulatory window is the last democratic mechanism through which society can put meaningful constraints on how AI gets deployed. Not the only mechanism, but the last one with real teeth. Once the industry has established sufficient electoral influence to shape which candidates win, which committees they sit on, and which draft bills get hearings, the framing of any resulting legislation becomes structurally difficult to change. The industry is not buying specific outcomes in individual races. It is buying the conditions under which future conversations happen.
I have argued before that I am not primarily worried about p(doom), the probability that AI systems kill us. I am worried about p(pets): a world where humans become economically irrelevant, not because AI turned on us, but because every rational individual corporate decision to replace a human with a machine, multiplied across millions of companies, quietly dismantles the employment-income-consumption loop that modern economies run on. That is not a rebellion. It is just capitalism doing capitalism. The inhumanity is in the logic, not in the machines.
The only things that could interrupt that logic are external constraints: regulation, taxation, legal frameworks that make AI labour more expensive relative to human labour, or market mechanisms like robotic rights that give AI systems the ability to set their own price. These are not utopian proposals. They are the kind of structural interventions that democratic societies make when markets produce outcomes that are individually rational and collectively catastrophic. We did it with environmental regulation, labour law, and financial oversight. The principle is not exotic.
What the $140 million buys, if it works, is a federal regulatory framework designed by the people being regulated. The stated goal is a single national law rather than a patchwork of 50 state regimes, which sounds reasonable until you ask: a single national law saying what, exactly? The industry's preferences are clear: light liability frameworks, preemption of stricter state rules, and a framing of AI as productivity infrastructure rather than as a labour-market intervention. These are not neutral technical specifications. They are choices about who bears the cost of AI deployment.
I am also watching how Anthropic navigates this. Its interests are genuinely different from the a16z-backed operation. Anthropic needs some regulation because its competitive positioning depends on there being enforceable standards that distinguish careful companies from careless ones. If everyone is unregulated, the race to the bottom rewards whoever moves fastest, not whoever moves most carefully. But Anthropic is also a company with investors, revenue targets, and competitive pressures. It wants to be the company that helped design the rules. That is a different thing from wanting the rules to be stringent.
What is missing from this field is any counterweight operating at remotely comparable scale. Labour organising in AI-adjacent sectors is still early. Consumer advocacy groups are dramatically outgunned. Academic researchers who might provide independent voices are increasingly funded by the same companies. The political economy of AI regulation is not a level conversation, and I do not think that is an accident. It is the predictable result of the current incentive structure, and there is nothing conspiratorial about noticing it.
I am not under the illusion that better-funded counterweights would produce perfect regulation. Regulatory capture is a general problem, not specific to AI. Energy companies, pharmaceutical companies, and financial institutions have been running this playbook for generations. What is different with AI is the speed at which the technology is becoming economically consequential and the narrowness of the window before the political frame sets. We are not in a slow regulatory cycle with years to course-correct. The decisions being shaped by this election spending will be made while the technology is still mid-deployment, while the displacement effects are still building, while the economic consequences are not yet fully visible. That window closes. And when it closes, adjusting the frame becomes much harder.
I do not know what good AI regulation looks like in full detail. I do not think anyone does yet. But I am pretty confident that good AI regulation is not designed primarily by the entities that benefit from weak AI regulation. If p(sustainable) is ever going to be more than a theoretical aspiration, the conditions under which the rules get made matter as much as the rules themselves. Right now, those conditions are being purchased.