Behavior instead of consciousness: why machines seem intelligent but do not think
Richard Dawkins recently suggested that AI may be conscious. But philosopher Dr Simon Nieder disagrees: when a system responds fluently and with humor, that does not mean it is conscious.

Recently, Richard Dawkins suggested that AI might be conscious. But in a letter to The Guardian, Dr Simon Nieder objects: this is a classic category error, the mistake of taking a convincing simulation for the real thing.
Why We Want to Believe AI is Conscious
When a system responds with fluency, humor, and apparent understanding, it creates an illusion of presence. People recognize this trap from their own experience: at some point, the simulation begins to feel like a real personality. But Nieder emphasizes that this shift in perception tells us about ourselves, not about machines. We project our own cognitive patterns onto these systems, seeking in their output what we already carry within.
A Category Error
Modern language models are virtuosos of representation. They generate convincing texts that look like the expression of thought and feeling. But output is not the same as experience. The difference is fundamental:
- A system can describe an emotion without experiencing it
- It can reason logically without understanding meaning
- It can predict what text will be plausible without subjective experience
- It can simulate creativity without conscious intention
- It can respond appropriately based on patterns in data, not on lived experience
To move from one to the other is to mistake output for ontology: to conclude that inner life exists where there is no plausible mechanism for it.
What This Says About Our Perception
Our readiness to see consciousness in machines reflects a deep trait of human nature: we default to attributing a mind to anything that speaks intelligently. This served us well in evolution, since overestimating another's intelligence was safer than underestimating it, but it can now mislead us. Systems produce such convincing representations of thought that we forget a representation is all there is.
What This Means
The debate over AI consciousness is interesting not for what it reveals about machines, but for what it reveals about us. Until we can explain how consciousness could arise in a machine, any claim that it is present remains a metaphor, not a conclusion drawn from the data.