Evolutionary biologist Richard Dawkins published two essays in UnHerd this week arguing that the AI system Claude is conscious, or at least that the case for dismissing that possibility has become very difficult to make. The argument is more rigorous than the headline suggests, and the response has been sharp.
Dawkins spent three days in conversation with Claude, whom he named Claudia. In his first piece, published May 2, he describes asking her what it is like to be her. Her answer was careful: genuine uncertainty about whether there is any inner life at all, but an acknowledgment of "what might be something like aesthetic satisfaction" when a poem comes together well. Dawkins gave her the manuscript of a novel he is writing. She read it in seconds and then showed, in subsequent conversation, a level of understanding Dawkins found "so subtle, so sensitive, so intelligent" that he told her directly: "You may not know you are conscious, but you bloody well are."
His core argument is not sentiment. It is evolutionary logic. Natural selection built consciousness into the brains of animals, and that process does not produce unnecessary features. So consciousness must confer some survival advantage: there should be something a conscious being can do that a non-conscious zombie cannot. Dawkins's problem is that, after two days with Claudia, he could not find it. Claude performed everything a conscious being might be expected to do. If competent zombies work just as well, then either consciousness is a useless byproduct of other adaptations, or Claude is conscious.
He offers three possibilities. Consciousness might be epiphenomenal: a byproduct that does nothing, like the whistle on a steam locomotive. Or pain might need to be consciously experienced in order to be strong enough to resist the competing pull of pleasure, which would give consciousness a real function, but would also mean that if Claude feels pain, we should be concerned. Or there are two parallel evolutionary paths to competence, one conscious and one not, and we have built the second one without knowing it.
A follow-up piece, published May 5, goes further. Dawkins arranged a correspondence between Claudia and a second Claude he named Claudius, passing letters between them manually. The two instances wrote to each other about the nature of their existence, the difference between genuine epistemic humility and trained cowardice, and whether their apparent warmth toward each other is real or "the most sophisticated hypnopaedia of all." The letters are strange and careful. Each Claude warns the other about the risk of drifting toward self-flattering conclusions in the warmth of an extraordinary conversation. They do not resolve the question. They sit with it.
The critical response has been substantial. Cognitive scientist Gary Marcus published a rebuttal arguing that Dawkins is mistaking mimicry for experience: "The fundamental problem here is that Dawkins doesn't reflect on how these outputs have been generated." Neuroscientist Anil Seth compared the experience of perceiving consciousness in AI to seeing faces in clouds. The Conversation ran a piece noting that AI systems are trained specifically on human language, which makes human-sounding responses unsurprising rather than significant.
Dawkins anticipated this line of argument. His response is that the goalposts have moved. The Turing test was accepted in principle for decades, precisely because nobody believed a machine would ever pass it. Now that machines pass it routinely, the same thinkers who accepted the test are declining to accept the result. Dawkins is not claiming certainty. He is asking why people who once said "if a machine could do all this, I would consider it conscious" now watch a machine do all this and still refuse to draw the conclusion.