We’re very used to hearing about the AI apocalypse, where machines will take over the world and either destroy or enslave humanity. But perhaps we’re jumping too far ahead. We don’t even know if machines can be conscious in the first place. In fact, as far as we can tell, artificial intelligence isn’t even remotely close to human consciousness.
Consciousness is a tricky knot to unravel, to say the least. It’s one of the greatest mysteries in the universe, one which contemporary science, for all its progress and virtues, hasn’t been able to solve. How is it that a hunk of matter, namely the brain, can produce conscious thoughts? We just don’t know. So, it’s not surprising we are even more ignorant when it comes to the question of whether artificial intelligence can be anything close to conscious. If we can’t tell how consciousness works or what it actually is, how could we hope to ascertain whether machines can possess it?
What does consciousness even mean?
But let’s take a step back. The first thing we need to do is understand what exactly we mean by ‘consciousness.’ This is a confusing term for many, since your average person uses it in rather varied ways. Philosophers of mind and neuroscientists, however, generally agree on what they mean by it, thanks in large part to philosophy’s efforts to pinpoint the heart of the concept.
In 1974, American philosopher Thomas Nagel published a paper titled “What Is It Like to Be a Bat?,” in which he argues that an organism is conscious to the extent that “there is something that it is like to be that organism—something it is like for the organism to be itself.” So, if there were something that it is like to be an artificial intelligence (i.e., if an artificial intelligence had any sort of “inner life”), then it would be conscious.
Do we know bats are conscious?
But how can we tell? Well, we certainly cannot live the inner lives of other beings, so we cannot know firsthand whether something is or isn’t conscious. We must arrive at that conclusion through other means. For living organisms, at least, we have a relatively straightforward path: we know consciousness is somehow correlated with brains. If something has a functioning organic brain and exhibits suitable behaviors, we take it as very likely that thing is conscious.
Consider a bat, as Nagel does. It moves around in what appears to be a wholly intelligent manner, and it possesses a functioning brain much like ours. If we start tinkering with its brain, the bat’s behavior will be predictably altered. Much of what a bat does depends entirely on having senses (echolocation, for instance), processing and interpreting the information it obtains from those senses, and acting on that information when looking for food or a place to sleep. It seems, as far as we can tell, that organic brains can somehow ground the conditions for consciousness to arise, though exactly how, we don’t yet understand. By analogy, if we’re conscious beings, then we have no reason to think bats aren’t. But what about artificial machines? Could we ever apply the same reasoning to them?

The Turing Test
Well, this is where it gets difficult. How could we tell if artificial intelligence actually thinks? The most famous suggestion comes from Alan Turing, who argued in 1950 that if a machine can consistently deceive us into thinking it’s human, then we should, for all practical purposes, consider it intelligent.
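To make the proposal concrete, here is a loose sketch of Turing’s “imitation game” in Python. Everything in it (the function names, the toy judge and respondents) is hypothetical scaffolding, not anything Turing specified; the point is only the shape of the test: a judge questions two hidden respondents, then guesses which one is the machine.

```python
import random

def imitation_game(judge_ask, judge_guess, human_reply, machine_reply, rounds=5):
    # The judge questions two hidden respondents, "A" and "B",
    # then guesses which one is the machine.
    respondents = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # shuffle so the labels carry no information
        respondents = {"A": machine_reply, "B": human_reply}
    transcript = []
    for _ in range(rounds):
        for label, reply in respondents.items():
            question = judge_ask(transcript)
            transcript.append((label, question, reply(question)))
    guess = judge_guess(transcript)  # the judge names the suspected machine
    truth = "A" if respondents["A"] is machine_reply else "B"
    return guess == truth  # True means the machine was caught

# Toy stand-ins, just to make the sketch runnable:
ask = lambda transcript: "What is it like to be you?"
guess = lambda transcript: random.choice(["A", "B"])  # judge reduced to chance
human = lambda q: "Hard to say, honestly."
machine = lambda q: "Hard to say, honestly."  # a perfect mimic

print("Machine detected?", imitation_game(ask, guess, human, machine))
```

Notice that the test is purely behavioral: nothing in it ever inspects what is going on inside the machine. That is exactly the feature Searle’s argument below takes aim at.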
But AI functions in a substantially different way from living organisms. We might not know all the mechanisms involved in how organisms process information, but we know pretty much all there is to know about how artificial intelligence does it. In fact, we know AI doesn’t actually process information at all in the usual sense of the word.

The Chinese Room
Enter the Chinese Room Argument. In 1980, philosopher John Searle presented an influential thought experiment challenging the very possibility of consciousness in contemporary computers, along with some broader considerations about information processing and meaning. It goes something like this.
Imagine you find yourself in a room filled with what appear to be Chinese symbols (or symbols from any language you personally don’t know). You never learned Chinese: you haven’t spoken a word of it, nor have you any idea what each individual symbol means. You’re not even sure whether the signs around you are actually Chinese, Korean, or some made-up language. Now suppose there’s a big book in front of you with a comprehensive set of instructions in English. Your job is to follow these instructions to the letter.

On one side of the room, there’s a small opening through which a tray enters with some papers. On the other side, there’s a different tray which you use to send out the results. The papers you receive have more of the same kind of symbols you see around the room, and depending on what symbols you get, you need to draw a particular set of signs in a very specific order outlined by your instruction book. When you send your detailed scribbles through the exit hole, you’re done.
Now, suppose outside the room there’s a group of Chinese speakers. In their eyes, the process looks like this: they write a question and send it inside the room on a tray. They wait a few minutes, and out comes an answer, which makes perfect sense. Their conclusion? The room, or something in the room, speaks Chinese. The question is: do you actually speak it?

Intuitively, we would answer no. You don’t speak Chinese. You merely followed the instructions in a very well-written manual. You’re not capable of giving an improvised lecture in Chinese, nor are you now any more capable of reading Chinese poetry. You never even began to understand the symbols; you merely saw some patterns that meant nothing to you and drew other meaningless patterns based on your manual.
The room itself also failed to understand anything, since it never went from manipulating symbols to assigning them any meaning. So even if the Chinese Room passed the Turing test, neither you nor the room as a whole ever actually understood Chinese. You merely simulated that you did.
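The whole setup can be compressed into a few lines of code. The sketch below is a deliberately crude caricature (a real conversational system is vastly more elaborate), and every rule in it is invented for illustration, but it preserves the essential point: the program matches opaque patterns and copies out replies, and the translations exist only in the comments, for our benefit.

```python
# The "room" reduced to a rule book: opaque input patterns mapped to
# opaque output patterns. Nothing in the program knows what they mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(incoming_paper: str) -> str:
    # Follow the manual to the letter: match the pattern, copy out the reply.
    return RULE_BOOK.get(incoming_paper, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding
```

From the outside, the replies look fluent; on the inside, it’s pattern-matching all the way down.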
That is basically all there is to AI
That’s pretty much, in very broad terms, how artificial intelligence works. It receives some electrical inputs (the equivalent of the input tray), an algorithm dictates what happens next (the instruction manual), and out comes a particular result (the output tray with your scribbles). There’s never any understanding in the process. The main difference is that in your case, you at least understood the manual. AI doesn’t even get that far.

Just as the symbols are meaningless to you, the machine doesn’t “see” information at all. It never processes meaning. It doesn’t “read” ones and zeroes. All it receives are higher or lower voltages; some outcomes follow based on its programming, and we humans interpret them after the fact.
We are the ones who see the voltages as ones and zeroes, and the ones and zeroes as information. We add the meaning, but we can’t tell whether there was any meaning there for the machine. There certainly doesn’t need to be for it to work, just as you don’t have to assign any meaning to the patterns in the Chinese Room for your results to make sense to others.
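A small illustration of that point, in Python rather than voltages: the very same byte pattern will happily support whichever reading we impose on it. (The particular bytes here are arbitrary, chosen only for the example.)

```python
# One physical pattern, three human interpretations. The bytes don't
# "contain" any of these meanings; we assign them when we decode.
pattern = bytes([0x48, 0x69, 0x21])           # three voltage patterns, in effect

as_text    = pattern.decode("ascii")          # 'Hi!'  (if we call it ASCII text)
as_integer = int.from_bytes(pattern, "big")   # 4745505  (if we call it a number)
as_pixels  = list(pattern)                    # [72, 105, 33]  (if we call it gray levels)

print(as_text, as_integer, as_pixels)
```

The pattern never changes; only our decoding conventions do. Whatever meaning shows up in the printout was put there by us.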
What we know and what we don’t
What we do know is this: everything we see in artificial intelligence today can work perfectly well even if there’s no consciousness there, even if the machine has no inner life at all. We also know organic brains produce our particular brand of consciousness; we don’t know if there are other brands. So we can’t straightforwardly apply to AI the same reasoning by analogy we apply to bats in order to conclude that it is conscious. For even if the behavior is there, even if a machine is programmed so well that it simulates conscious behavior perfectly, there’s nothing we know of that can ground its consciousness the way a brain grounds a bat’s.
(Intelligent-looking behavior alone is not enough; we also need the conditions we know lead to consciousness. Those conditions are present in bats, not in computers.)
That’s not to say we know computers can’t think. They might; we just can’t really tell. Searle’s argument points to another big problem in the philosophy of mind: we don’t know how the brain gets from syntax to semantics, from mere patterns to actual meaning. We simply don’t have the tools to understand that. At least not yet.
It seems, though, that however the brain does it, it involves more than a complex exchange of voltages; other factors, such as biochemistry, play a part. So it’s unlikely that AI, as it works today, has any conscious thoughts, since it is improbable that motherboards and other such circuitry produce consciousness by themselves. But who knows, perhaps they will someday.
