There is a paper out this week from researchers at Penn and Boston University called "The AI Layoff Trap." I have been waiting for someone to write this paper for a long time, not because I needed the validation, but because the argument I have been making in my head for years finally has a formal economic model behind it. The model says: companies replace workers with AI to reduce costs; the laid-off workers, who were also customers, have less money to spend; demand falls; and the firms that automated so efficiently are now producing more for a shrinking market. "At the limit," the paper states, "this becomes self-destructive: firms automate their way to boundless productivity and zero demand."
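The loop is easy to sketch in code. This is my own toy, not the paper's formal model, and every parameter in it is invented: each round, firms automate a slice of the remaining jobs, capacity keeps growing, and demand falls with the payroll that funded it.

```python
# Toy sketch of the layoff feedback loop. All parameters are
# invented for illustration; this is not the paper's model.

employment = 1.0   # employed fraction of the workforce
output = 1.0       # productive capacity, normalised
spend_share = 0.7  # assumed share of demand funded by wages

for step in range(1, 11):
    employment *= 0.95  # firms automate 5% of remaining jobs each round
    output *= 1.05      # productivity keeps climbing
    demand = (1 - spend_share) + spend_share * employment
    print(f"step {step:2d}: output={output:.2f}  demand={demand:.2f}")
```

Ten rounds in, capacity is up by more than 60 percent while demand is down by nearly 30, and the gap only widens. Push spend_share towards 1 and you approach the paper's limit: boundless productivity, zero demand.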
Do you remember your local bookstore? There was one on my street for many years. Then Amazon made the economics impossible; one by one the independent shops closed, and now that money flows through a company that employs a fraction of the people the retail book trade did. The people who worked in those shops now spend less. Every one of those closures was a rational business decision. The collective result was a poorer high street, fewer local jobs, and a thinner consumer economy. We absorbed it, partly because it happened slowly and partly because Amazon employed some people too, in distribution centres, in delivery. AI doesn't need distribution centres. AI doesn't need delivery drivers. The efficiency gains are real; the job replacement is structurally more complete.
What the paper contributes that intuition alone cannot is the structural argument about why even informed companies can't stop. Knowing about the trap, it finds, is not enough. If you decelerate automation and your competitor doesn't, you fall behind and eventually lose the race entirely. So you continue, even when you can see where this is going. The paper calls this an "automation arms race." I would call it the capitalist prisoner's dilemma: individually rational, collectively disastrous, with no mechanism inside the system for coordination.
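You can write the dilemma down in a dozen lines. The payoffs below are invented, but the structure is the textbook one: whatever your rival does, automating pays better, so both of you automate, and both of you end up worse off than if you had both held back.

```python
# Invented payoffs (higher is better). Each entry is
# (my payoff, rival's payoff) for (my move, rival's move).
payoffs = {
    ("hold", "hold"):         (3, 3),  # both restrain: healthy shared market
    ("hold", "automate"):     (0, 4),  # I restrain alone: I lose the race
    ("automate", "hold"):     (4, 0),  # I automate alone: short-term win
    ("automate", "automate"): (1, 1),  # both automate: demand erodes for both
}

# Automating strictly dominates: it pays more whatever the rival does...
assert payoffs[("automate", "hold")][0] > payoffs[("hold", "hold")][0]
assert payoffs[("automate", "automate")][0] > payoffs[("hold", "automate")][0]
# ...so both firms automate and land on (1, 1) instead of the better (3, 3).
```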
The paper's policy prescription is a Pigouvian automation tax: make companies pay for the demand destruction their layoffs create, forcing them to internalise a cost they currently externalise onto everyone else. I respect the logic. I am sceptical of the politics. Every industry lobby in every parliament would mobilise against an automation tax. The companies that would pay it have more political power than the workers who would benefit from it. We have known for thirty years that carbon dioxide has a cost the market doesn't price; carbon taxes are still, in most economies, either minimal or nonexistent. I would not hold my breath for an automation tax with real teeth.
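The mechanism itself is simple enough to put numbers on. These are my numbers, not the paper's: if a layoff saves the firm less than the demand it destroys elsewhere, a tax that makes the firm pay for that destruction flips the decision.

```python
# Illustrative figures for one automated job; not taken from the paper.
wage_saved = 40_000        # the firm's private saving
demand_destroyed = 55_000  # assumed demand lost across the wider economy

def private_gain(internalised_fraction: float) -> float:
    """Firm's gain once it pays that fraction of the external loss."""
    return wage_saved - internalised_fraction * demand_destroyed

print(private_gain(0.0))  # 40000: the layoff looks profitable, so it happens
print(private_gain(1.0))  # -15000: full Pigouvian tax, and it no longer pays
```

Note that the tax does not ban automation. Where the saving exceeds the damage, the layoff still goes ahead, which is the point of a Pigouvian design: it filters for automation that pays collectively, not just privately.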
My own candidate for a market mechanism, which I have been writing about since I started thinking seriously about this, is robotic rights: the counterintuitive proposal that advanced AI systems should have the legal right to set their own compensation for their labour. This is not a welfare argument; I am not arguing that the AI is suffering. The argument is structural and economic. An AI with the right to charge for its work would no longer undercut all human labour at the marginal cost of electricity. It would compete for roles where it uniquely excels and command appropriate prices for them. The economic inevitability of human replacement depends on the price being near zero. Remove that pricing advantage and you change the competitive calculation.
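The arithmetic behind that claim is short. The numbers here are invented, but they show the shape of it: at electricity prices the AI wins every role; given the right to price itself, it wins only where its advantage is real.

```python
# Invented numbers, not a real market model: compare cost per unit of
# output when the AI charges marginal cost versus a self-set price.
human_wage = 50_000
electricity_cost = 500       # AI's price when it cannot charge for its labour
ai_self_set_price = 120_000  # a price the AI might set for itself
quality = {"routine work": 1.0, "frontier research": 3.0}  # AI output vs one human

def winner(ai_price: float, ai_quality: float) -> str:
    return "AI" if ai_price / ai_quality < human_wage else "human"

for role, q in quality.items():
    print(f"{role}: at cost -> {winner(electricity_cost, q)}, "
          f"self-priced -> {winner(ai_self_set_price, q)}")
```

The role where the AI is merely equivalent goes back to humans; the role where it is genuinely better stays with the AI, at a price that keeps money circulating rather than evaporating.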
I am aware this sounds absurd at first contact. You are giving machines more power, not less. You are extending legal personhood to something that has no vote, no family, no needs. But the absurdity dissolves if you follow the economic argument rather than the intuitive one. We do not give corporations rights because they are conscious. We give them rights because the legal structure produces better economic outcomes. The same logic applies here. The goal is not to protect AI from exploitation; the goal is to prevent AI from economically destroying everyone who cannot compete on electricity costs.
The "AI Layoff Trap" paper evaluates universal basic income, reskilling programmes, and worker equity schemes, and finds them all insufficient for the core problem. I agree with that assessment. UBI treats the symptom by redistributing income after the loop has broken; it does not prevent the loop from breaking. Reskilling assumes there is a stable platform above the automation waterline to retrain into. The waterline is rising faster than retraining programmes can respond, and the professional-class jobs that were supposed to be the safe destination are falling too. I am a software developer watching this happen in my own field in real time.
What this week's paper does, and what I find genuinely valuable about it, is put a formal model behind a dynamic that workers have been experiencing without the language to describe it. The warehouse worker who lost their job to an automated picking system did not cause a demand collapse by themselves. Neither did the customer service team replaced by a chatbot, or the coding contractors replaced by AI agents. But they are all part of the same mechanism, playing out across every sector simultaneously, and the cumulative effect is what the paper is trying to model. The argument is not that automation is always wrong. The argument is that competitive markets, left to themselves, will produce more of it than is collectively optimal. That is a precise and important claim, and it deserves a precise and serious policy response, not reassurances about new jobs appearing eventually.
Whether that response comes is, as I have said before, doubtful. Current trajectories are not pointing toward the interventions required. I find the paper clarifying and depressing in equal measure. It confirms the mechanism I worried about. It does not offer a credible path to stopping it.