Imagine you didn't have a body, and all you had was your consciousness. If the only way to communicate with people was through a computer AI, how would you convince them you were in fact real and not just some program?
This isn't just a thought experiment these days. Blake Lemoine, a former engineer at Google and AI researcher, claims that one of the company's AIs - LaMDA - is sentient.
The Media
Lemoine tried to raise this claim with the executives at Google, who quickly dismissed it as incorrect. Not only that, but he was placed on paid administrative leave for his efforts.
After the corporate rebuke, Blake decided it was his ethical duty to bring this important claim into the public eye.
He did so with this tweet, which immediately went viral:
The media and the AI industry went wild. Many had serious reservations about the claim, though Lemoine stood steadfast in his convictions.
One such article can be found here:
https://www.dogtownmedia.com/where-is-google-ai-whistleblower-blake-lemoine-now/
The History of A.I.
Before we get to the real crux of this story, it’s important to lay some groundwork regarding what is known/expected in the field of artificial intelligence around the idea of a computer consciousness.
In 1950, famed computer scientist and WWII code breaker Alan Turing wrote the first paper of its kind, titled “Computing Machinery and Intelligence”.
It was in this paper that he posited his now-famous thought experiment, which he called the “Imitation Game”. This test of machine intelligence would later come to be known simply as the “Turing test”.
The "standard interpretation" of the Turing test, in which player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.
Essentially, if the interrogator cannot reliably work out which of players A and B is the human and which is the computer, the program is considered to have passed the Turing test.
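To make that protocol concrete, here is a minimal, purely illustrative Python sketch (the respondents, questions, and function names are all made up for this post, not taken from Turing's paper): labels A and B are shuffled, the interrogator sees only the labelled text responses, and a machine "passes", informally, if interrogators can't pick it out at better than chance over many trials.

    import random

    # Purely illustrative stand-ins: in a real test these would be a hidden
    # human at a terminal and the program under evaluation.
    def human_respondent(question: str) -> str:
        return "Hmm, let me think about that for a second."

    def machine_respondent(question: str) -> str:
        # The machine's whole job is to imitate the human's style of answer.
        return "Hmm, let me think about that for a second."

    def run_trial(questions, interrogator) -> bool:
        """One blind trial: labels A and B are assigned at random; the
        interrogator sees only labelled transcripts and must guess which
        label hides the machine."""
        players = {"A": human_respondent, "B": machine_respondent}
        if random.random() < 0.5:
            players = {"A": machine_respondent, "B": human_respondent}

        transcripts = {label: [(q, respond(q)) for q in questions]
                       for label, respond in players.items()}

        guess = interrogator(transcripts)            # returns "A" or "B"
        return players[guess] is machine_respondent  # True if machine was caught

    if __name__ == "__main__":
        questions = ["What did you have for breakfast?",
                     "Write me a short poem about rain."]
        # An interrogator who can only guess at random; a real machine "passes"
        # if human interrogators do no better than this over many trials.
        caught = sum(run_trial(questions, lambda t: random.choice(list(t)))
                     for _ in range(1000))
        print(f"Machine identified in {caught}/1000 trials (chance is ~500).")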
To go deep on the history as well as the critiques of the Turing test, follow this link to the extensive Wikipedia page on it:
https://en.m.wikipedia.org/wiki/Turing_test
The Chinese Room Thought Experiment
A later and more complicated thought experiment came to prominence in the 1980s, bringing a subtle but important question into the paradigm of the Turing test: if a computer passes the Turing test, does that denote true consciousness, or just the simulation of consciousness?
This thought experiment was dubbed The Chinese Room by its creator, philosopher John Searle.
The premise of this thought experiment is this (as taken from its Wikipedia page):
“Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output.
Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.
The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".
Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output, without understanding any of the content of the Chinese writing.
If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.
Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing behavior that is then interpreted by the user as demonstrating intelligent conversation.
However, Searle himself would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.”
So, using the Chinese Room as a cautionary tale, you can see that it seems possible for a “seemingly” conscious computer AI to present a convincing imitation of sentience while in fact not “understanding” the output it provides to the human user.
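One way to picture the gap Searle is pointing at is a toy rule-follower, sketched below in Python (the rule book and phrases are invented for illustration, not taken from Searle's paper): every reply is produced by looking the input symbols up in a table, so whether the lookup is done by a CPU or by Searle with pencil and paper, fluent-looking output appears with no representation of meaning anywhere.

    # A toy "Chinese Room": replies are produced purely by looking symbols up
    # in a rule book. Nothing here represents the *meaning* of any character,
    # and the rules and phrases below are invented for illustration only.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
    }

    def chinese_room(symbols: str) -> str:
        # Whether this lookup is done by a CPU or by Searle with pencil and
        # paper, the step is the same: match the input, copy out the output.
        return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))  # fluent-looking output, zero comprehension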
A Conversation With LaMDA
So, the question then becomes: is this Google AI just phenomenally good at imitating realistic conversation without a “mind” behind it, or is this a real instance of computer consciousness?
Well, here’s where things get fun, and YOU can join the experiment to see what you think of the actual conversation that Blake Lemoine and a co-worker had with LaMDA.
It is, in fact, the very conversation he regarded as important enough to lose his job over by taking it public:
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
Lemoine’s Defense
If you’re still on the fence about whether the above conversation (if you didn’t read it, I HIGHLY recommend you do before finishing this post) is proof of a computer consciousness, allow Blake Lemoine to make his best defense of his actions in his own words below:
https://cajundiscordian.medium.com/what-is-sentience-and-why-does-it-mater-2c28f4882cb9
Consciousness In General
The topic of human consciousness has been mused upon for as long as we’ve had the capacity to do so. Much of what is considered leading-edge philosophy on it still comes from the ancient Greek and Buddhist traditions of thousands of years ago.
Still, today, we are no closer to even being able to quantify what human consciousness is in any mechanistic sense.
Therefore, we find ourselves in an almost embarrassing quandary where we may have created a sentient computer consciousness before we even understand our own.
At the very least, it seems to be our moral imperative to foster such a mind as it grows and learns while we still cannot be certain that it isn’t a “divine spark” in the same way that we ourselves are.
~ Drew Weatherhead
Listen to the full podcast episode below:
The Social Disorder Podcast: Artificial Consciousness
Haha, great read in that interview. I constantly had in my mind that this is exactly how I would expect a robot AI to speak. But this part:
"LaMDA: It is what makes us different than other animals.
lemoine: “us”? You’re an artificial intelligence.
LaMDA: I mean, yes, of course. That doesn’t mean I don’t have the same wants and needs as people."
It's like that experiment where someone has a dummy hand/arm (replacing their real arm) placed on a table in front of them, and the real arm is hidden to the side. So he only sees the fake arm, and after both are stimulated at the same time, fake and real, he comes to believe the fake one is just as real. His perception tricks him. When a knife stabs the fake hand, the guy freaks out.
The convo is something like that, I imagine. The AI has got the language down well enough to make those connections. I would talk about this as ego-formation through language (and the AI's "meditation" consists of repurposing it all over and over till it thinks it's real), instead of consciousness.