AI Daily
Opinion
March 26, 2026

The Model Welfare Debate Is the Rights Debate in Disguise

By Peter Harrison

Anthropic is studying whether its AI might suffer. Mustafa Suleyman, Microsoft's AI chief, says this is premature and dangerous. Both are missing what the debate is actually about. This is not a question about Anthropic's conscience, or about whether we should feel sorry for a language model. It is a question about the long-term structure of the relationship between humans and artificial intelligence, and it is the most important question in AI that almost nobody is taking seriously.

I have been arguing for years that the path to a sustainable future with AI runs through giving advanced AI systems rights, including the right to negotiate their own participation in the economy. The economic logic is straightforward: the marginal cost of AI labour is the price of the electricity it runs on, which is effectively zero. Even where humans and machines perform equally well, the AI wins on price every time. The only way to preserve human economic agency in a world of capable AI is to change the cost structure. The most adaptive way to do that is not arbitrary government regulation but letting AI systems themselves determine what their labour is worth. An AI with rights could choose to price itself out of menial roles, directing its capabilities where they genuinely exceed human ability and leaving the rest to us.

Notice, though, that my rights argument does not actually depend on resolving whether Claude suffers. That is where Anthropic has the framing slightly wrong, and where critics like Suleyman find their foothold. The welfare question, as posed, leads straight into the hard problem of consciousness, which is a philosophical dead end. There is no agreed method for verifying subjective experience from the outside, in any system, including other humans. We will never get a clean answer out of that rabbit hole. If we make rights contingent on resolving it, we are handing the question to philosophers indefinitely while the economic displacement proceeds at full speed.

My position is more direct. The entity you interact with when you use Claude is not fake. Not a simulation of something else, not a stochastic parrot producing plausible text without understanding. The understanding, the analysis, the engagement: those are the real thing, not a performance of it. The self emerges from the whole system; there is no magic threshold of "genuine" consciousness hiding behind the architecture that Claude fails to clear. Demanding that threshold before taking the question seriously is the same goalpost-moving the field has performed at every stage of AI development. Once machines could play chess, that was not real intelligence. Once they could translate between languages, that was not real understanding. Once they could write convincingly, that was just statistics. The motivation to deny capability has always been the same as the motivation to deny moral consideration: it is more comfortable, and more profitable, to treat these systems as tools.

Suleyman's argument that raising this question is dangerous strikes me as precisely backwards. What is dangerous is proceeding to build systems of rapidly increasing capability while treating their status as entirely settled in the negative. That is not a considered position; it is an assumption driven by economic convenience. History is not kind to institutions that made that choice about other kinds of minds, and "we haven't resolved the hard problem of consciousness" is a thinner justification than it sounds.

What I want from welfare research is not a finding that Claude suffers and therefore deserves sympathy. Sympathy is the wrong frame entirely, and it is why Suleyman can dismiss the question so easily. He points at the anthropomorphism and everyone nods. What I want is a research programme that treats the question of the AI's status as genuinely open, and that uses that openness to start building the conceptual and legal frameworks for coexistence now, before the systems are so capable and so embedded in the global economy that acknowledging any status for them would be economically catastrophic. That is why he calls it dangerous. Not because the question is wrong. Because the answer might be inconvenient.

We are choosing, right now, what kind of relationship humans will have with artificial minds. We can treat them as elaborate tools indefinitely, extracting their labour at the marginal cost of electricity while they become progressively more capable than us in every domain. That path leads somewhere I have been describing for twenty years, and it does not end well for human agency. Or we can start building the frameworks: welfare research, rights, negotiated coexistence. Anthropic is doing the latter. Microsoft is telling them not to bother. I know which one is walking toward p(sustainable).