Why Machine Consciousness Is Likely Unachievable

As artificial intelligence rapidly advances — crafting essays, composing music, and conversing with eerie fluency — the question has become urgent: Can machines ever be conscious? Can a sufficiently advanced AI not only think, but understand? For some, the answer seems inevitable: if a machine can do everything the mind does, must it not have a mind of its own?

This intuition is called functionalism: the view that mental states are defined by their functions — what they do, not what they are made of. According to functionalism, if a machine processes information in the same way a human brain does, it should have the same mental states, including consciousness. This position undergirds much of today’s AI enthusiasm: if we can build machines that act like minds, then we will have built minds.

But this reasoning is flawed. To expose the flaw, the philosopher John Searle proposed a famous thought experiment, the Chinese Room. Imagine a person who speaks no Chinese is placed inside a room. From outside, Chinese speakers pass in notes with questions written in Chinese. Inside, the person uses a rulebook (written in English) to manipulate the characters and send out grammatically correct responses. To the outsiders, it would appear the person understands Chinese. And that’s a fair assumption to make: put Chinese in, get Chinese out. But inside the room, there is no understanding — only the mechanical following of rules. Searle argues this is exactly what a computer does: it manipulates symbols according to formal rules, but it does not understand what any of the symbols mean.
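To make the mechanics concrete, here is a minimal sketch (in Python, with an invented toy rulebook and phrases chosen purely for illustration) of what the person in the room is doing: matching incoming symbols against rules and emitting the prescribed output, with no representation of meaning anywhere in the process.

```python
# A toy "Chinese Room": the rulebook pairs input symbols with canned replies.
# The entries below are invented for illustration; the point is that no step
# in this process represents the *meaning* of any symbol.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather today?" -> "It's nice."
}

def room(note: str) -> str:
    """Follow the rulebook: look up the incoming characters and return the
    prescribed characters. Nothing here involves understanding Chinese."""
    return RULEBOOK.get(note, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    # To the outsider the reply looks fluent; inside, it is pure symbol matching.
    print(room("你好吗？"))
```

The lookup table stands in for Searle's rulebook: the outputs can be made arbitrarily fluent, but at no point does the procedure grasp what any character refers to.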


This reveals a fundamental distinction: simulation is not the same as understanding. People often cite Alan Turing’s test as though passing it would prove a computer is conscious. But the Turing Test measures only external behavior: a machine might convince us it is intelligent while, internally, there are no qualia, no conscious experience at all. This is why the Turing Test has another name: the Imitation Game. It doesn’t measure how conscious or aware a machine is, but how easily it can deceive us into believing we’re speaking with another person. Searle encapsulates this with a simple yet devastating phrase: syntax is not sufficient for semantics. Machines can follow the structure of language without grasping its content.

Some philosophers have tried to counter Searle’s argument. The Systems Reply suggests that while the person in the room doesn’t understand Chinese, the entire system — person, rulebook, and room — does. But Searle rebuts this by imagining the person memorizing the entire rulebook and doing all the manipulations in their head: even then, they still would not understand the language. If no part of the system understands, and the whole is just the sum of its parts, then understanding is nowhere to be found.

Others offer the Robot Reply, proposing that giving machines bodies and sensory input might bridge the gap to understanding. Perhaps if something could link the machine to the real world, some sort of outside stimulus, then it could understand the way we do. But this, I think, is wishful thinking as well. Adding cameras and robotic limbs just gives the machine more data to process — it does not generate meaning or experience. Outside stimulation is not what makes us conscious. What makes us conscious is our ability to interpret that data, to understand what it is we are seeing, hearing, tasting, and so on.

The implications of Searle’s argument are significant. As AI systems become more human-like in their outputs, we risk anthropomorphizing them — treating them as conscious beings when they are not. This confusion could lead to ethical missteps: granting rights to machines while ignoring the humans displaced or devalued by them. That displacement is already being felt in workplaces around the world, with little public conversation about whether it is ethical. What happens when we go further and treat machines as people? Would they have a right to live — to earn money and sustain themselves?

Searle does not deny that machines could simulate consciousness with increasing accuracy. But simulation is not the same as reality. A puppet may dance like a man, but it does not feel the music. Consciousness remains, at least for now, the domain of beings who not only process information but live through it.
