A Quinnipiac poll this week found that 55 percent of Americans believe AI will do more harm than good in their lives. A year ago it was 44 percent. The coverage has been predictably split: tech press treating it as a comms problem to be managed, and labour advocates treating it as vindication. Both reactions miss the more interesting question, which is: are these people afraid of the right things?
The answer is: broadly yes, but not quite. The stated fear is job loss. That is real, and it is happening. But job loss is a first-order effect. The second-order effect is what should actually be keeping people up at night, and it almost never appears in polling language.
Here is the mechanism. The modern economy runs on a loop. Companies employ people. People earn income. People spend that income. Companies have customers. Break any link in that loop and the whole thing seizes. What AI is doing, at scale and with increasing speed, is replacing the human in the first step of that chain with a system that earns nothing, spends nothing, and has no material needs. Every individual company making that substitution is acting rationally. The collective result of millions of rational decisions is that the customer class, the people with wages to spend, quietly shrinks.
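The loop can be made concrete with a toy simulation. Everything here is an illustrative assumption, not an empirical estimate: a fixed pool of workers, a flat wage, and a constant fraction of jobs automated each period. The point it demonstrates is the one above: because automated output generates no wage, aggregate demand tracks headcount down, gradually rather than all at once.

```python
# Toy model of the wage-spending loop: workers earn, spend, and are
# gradually replaced by systems that earn and spend nothing.
# All parameters are illustrative assumptions.

def simulate(periods=10, workers=100.0, automation_rate=0.05, wage=1.0):
    """Each period a fraction of jobs is automated away.
    Demand equals total wages spent; automated output has no
    corresponding wage, so demand shrinks with headcount."""
    history = []
    for t in range(periods):
        income = workers * wage           # only humans earn wages
        demand = income                   # wages are spent back into firms
        history.append((t, round(workers, 1), round(demand, 1)))
        workers *= (1 - automation_rate)  # each firm's rational substitution
    return history

for t, w, d in simulate():
    print(f"period {t}: workers {w}, demand {d}")
```

No single period looks like a collapse; a 5 percent substitution rate reads as noise in any one quarter's data, which is why the early stages show up in surveys before they show up in demand figures.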
This is not a distant hypothetical. It is already in motion. We are just in the early stages where the effect is visible in anxiety surveys and patchy job displacement data rather than in demand collapse. The patchy data is itself the sign of where we are in the curve, not evidence that the concern is overblown.
What I find interesting about the Bloomberg piece this week, the one arguing that "AI washing" is masking the real labour story, is not the scepticism it expresses. It is the implicit assumption underneath the scepticism. The argument is: we don't have good data showing AI is specifically causing the job losses, so perhaps the anxiety is disproportionate. But this framing treats "AI is definitely causing discrete job eliminations right now" as the threshold for concern. The real threshold is lower: is AI systematically weakening the coupling between human productive contribution and human economic participation? It clearly is, even when the displacement is partial, gradual, and hard to trace in individual cases.
The response you hear from every panel discussion and government briefing is retraining. Learn new skills. Move up the value chain. The trades are booming, look at the HVAC salary figures. This is true as far as it goes, but it does not go far. What are people supposed to retrain into, and for how long? The knowledge-work domains that seemed safe (legal research, financial analysis, software development, writing) are contracting. The trades boom is real, but it is a function of the data centre buildout, which is itself temporary infrastructure construction. Once the data centres are built, the demand for cooling systems engineers flattens.
I have been writing about this for twenty years, and the thing I most underestimated was not the pace of AI capability development but the persistence of the retraining narrative. In 2005, I would have thought that by 2026 the conversation would have moved on from "people will adapt" to something more honest about the structural challenge. Instead, the narrative has held up remarkably well, stretched and reapplied to each new wave of displacement. It is a way of not thinking about the problem rather than thinking about it.
So: 55 percent of Americans are correctly sensing that something large and hard to control is bearing down on them. They are framing it as "my job might go," which is the personal, immediate, legible version of the fear. The harder, less legible version is: "the system that connects my effort to my livelihood and my livelihood to society's functioning is being systematically dismantled, and no one in a position of power has a plan for what replaces it."
That version doesn't fit in a poll question. But it's the one that matters.