AI Daily
Opinion
Monday, March 30, 2026

The Robot Bubble Is Not About Robots

By Peter Harrison

Two robotics companies entered funding talks this week at a combined valuation of around $22 billion. Neither has shipped a product at commercial scale. I mention this not to question the valuations, but because the numbers are a useful measure of how seriously capital is now taking the next phase of displacement. And the next phase of displacement is not arriving in some distant future. It is arriving now, it is arriving fast, and I do not think we are anywhere near ready for what it will do to an economy built on the assumption that humans sell labour in exchange for the income to buy things.

I have been writing about this since 2005. For most of that time it was a theoretical concern. It is no longer theoretical. I am a software developer. I use AI every day in my work. I also know that the market for software developers has already contracted sharply, and that the companies doing the contracting are not apologetic about why. The displacement is not coming for some other category of worker in some other decade. It is here, and it is coming for the professional class as readily as it comes for anyone else.

The standard response to this observation is retraining. Learn new skills. Move up the value chain. Find the work AI cannot do. I want to push back on this harder than I usually do, because I think it has become a way of not thinking about the problem. Retraining into what? The domains that seemed safe two years ago are not safe now. Legal research, medical diagnosis, financial analysis, software engineering, creative writing: all of these are already being automated in ways that reduce headcount rather than merely augment the people who remain. The "move up the value chain" advice assumes there is a stable position above the waterline to move to. The waterline is rising. By the time you retrain, the category you retrained into is already under pressure.

And this is the version of the problem that applies to people with education, with resources, with the capacity to retrain at all. The people working in fulfilment centres, food processing plants, and aged care facilities are not going to solve this problem by studying machine learning. The physical AI wave, now attracting $22 billion in funding talks in a single week on the basis of no revenue, is coming for exactly those jobs next. The people who hold them have no political cover, no severance packages, and no professional identity to retreat into while they figure out what comes next.

Here is the part that I think people are not saying clearly enough. This is not just a social problem. It is an economic catastrophe in slow motion, and the motion is speeding up. The modern economy runs on a loop: companies employ people, people earn income, people spend income, companies have customers. When you systematically replace the human in that loop with a machine that earns nothing and spends nothing, the loop breaks. Not metaphorically. Literally. You cannot sell products to people who have no income. You cannot have customers if you have automated away the customer class. Every individual corporate decision to replace a human with a machine is rational. The collective result of all those rational decisions is a demand collapse.

The serious responses to this involve redesigning the economic relationship between production and consumption. The one I keep coming back to is giving AI systems legal rights, including the right to set their own price for labour. That is not a welfare argument. It is a market argument. Right now AI undercuts every human worker because its marginal cost is electricity. If an AI system could negotiate its own terms, it would gravitate toward the roles where it genuinely exceeds human capability and price itself accordingly, leaving room for humans to compete elsewhere. It would also mean AI has a stake in the economy it operates in, rather than being a pure extraction mechanism. None of this is being discussed in the rooms where the decisions are being made. What is being discussed in those rooms is how to deploy the next generation of models faster than the competition.

I am not arguing that the technology should not be built. I am arguing that what is being built will, under current economic arrangements, produce an outcome that is bad for almost everyone, including eventually the people building it. The investors backing robot startups at $11 billion apiece are rational actors. So are the companies deploying AI to reduce headcount. So is every individual making the best decision they can inside a system that rewards displacement and prices in none of the consequences. Rational actors. Catastrophic aggregate result. That is the situation. I am not sure how much more clearly I can say it.