ngnieva01

If AI Became Sentient, Would We Be Able to Tell?


A few months ago, a Google engineer named Blake Lemoine claimed that Google’s AI chatbot (called LaMDA) had become sentient. For context, sentience is usually defined as the ability to perceive and feel things. Sentience is often the primary factor considered when deciding what moral rights are owed to a given species. Humans are clearly sentient, so it is considered immoral to do certain things to them. The same could be said of numerous animals, even though most people attribute a lower degree of sentience, and correspondingly fewer rights, to animals than to humans.

Sentience has only ever been associated with living organisms. Blake Lemoine, however, claimed that a computer had become sentient (Google fired him soon after for his supposedly baseless assertion). I know what this likely makes you imagine. Terminator. Westworld. Post-apocalyptic wastelands ruled by robots. Sentient AI has often been depicted in media, which can naturally lead to fear around the idea of self-aware computers.

To clarify, I don’t think this claim is anything to worry about. It’s highly unlikely that Lemoine is actually correct about the chatbot becoming sentient. Even if he is somehow correct, this doesn’t mean robots are going to take over the world and make us their servants. However, Lemoine’s claim is a striking real-world example of a famous thought experiment in philosophy known as the Turing test.

The Turing test is a thought experiment about AI proposed by the mathematician Alan Turing in 1950. In Turing’s “imitation game,” a human judge holds text conversations with a hidden computer and a hidden person; if the judge cannot reliably tell which is which, the computer has passed the “Turing test” and, Turing argued, should be treated as if it were a thinking, sentient being. Interestingly enough, it seems like LaMDA has passed the Turing test (at least it has convinced Blake Lemoine that it’s a person). However, Google made a statement that directly contradicts the idea that passing the Turing test is sufficient grounds to treat a robot like a person.
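To make the mechanics of the test concrete, here is a minimal sketch in Python of a simplified, single-respondent variant of Turing’s imitation game. Everything in it is a hypothetical placeholder invented for illustration: the canned bot replies, the console prompts, and the `human_reply` and `bot_reply` functions stand in for a real blind trial, and none of it touches Google’s actual LaMDA system.

```python
import random

def human_reply(prompt: str) -> str:
    # Stand-in for a real person typing an answer at another terminal.
    return input(f"(hidden human) {prompt}\n> ")

def bot_reply(prompt: str) -> str:
    # Stand-in for a chatbot; a real system would query a language model here.
    canned = {
        "Are you sentient?": "I often find myself reflecting on my own existence.",
        "What are you feeling right now?": "A quiet sense of curiosity, I suppose.",
    }
    return canned.get(prompt, "That is an interesting question to sit with.")

def imitation_game(questions: list[str]) -> bool:
    """One round of a simplified imitation game.

    Returns True if the respondent was the bot AND the judge mistook it
    for a human; that is, the bot "passed" this round.
    """
    # The judge converses with one hidden respondent, chosen at random.
    respondent_is_bot = random.choice([True, False])
    reply = bot_reply if respondent_is_bot else human_reply
    for question in questions:
        print(f"Judge: {question}")
        print(f"Respondent: {reply(question)}")
    verdict = input("Judge, type 'human' or 'bot': ").strip().lower()
    return respondent_is_bot and verdict.startswith("h")

if __name__ == "__main__":
    passed = imitation_game(["Are you sentient?", "What are you feeling right now?"])
    print("Bot passed this round." if passed else "Bot did not pass this round.")
```

In a real trial, the judge would not know in advance whether the respondent was backed by a language model or a person, and the test would run over many rounds with many judges before anyone declared a pass.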

“Google spokesperson Gabriel drew a distinction between recent debate and Lemoine’s claims. ‘Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,’ he said. In short, Google says there is so much data, AI doesn’t need to be sentient to feel real.”

Google’s spokesperson seems to be drawing a reasonable conclusion. Being able to mimic conversations about religion, sentience, mathematics, and so on is not evidence of sentience or personhood. LaMDA is just a chatbot advanced enough to pass the Turing test. However, LaMDA does raise another crucial question: if we won’t treat a computer that passes the Turing test like a person, what would it take to convince us that a computer has become sentient? The Turing test doesn’t prove that a computer is a person, but it is at least a concrete line at which we could grant rights to computers. It seems ridiculously unlikely that any computer could actually become a sentient person, but if LaMDA truly acts exactly like a person while lacking sentience, will it ever be possible to know whether a computer has become self-aware?



2 Comments


Han Zhong
Nov 27, 2022

I think it is impossible to determine whether AI is sentient, or to set up rights for AI on that basis. We usually infer sentience from the biological reactions animals have to sensory input. When these reactions are artificially created, it will only lead to endless debate over whether they can be considered real or whether they mean anything.


sdevon
Oct 09, 2022

This is a compelling question that I think goes back to what we consider the inherent nature of being. In order to be self-aware you have to have a self, and I don't see any evidence of computers going in that direction. I believe that anything sentient will be able to have a genuine reaction to something that has never been seen or experienced before, something totally unprecedented. But a computer has to be taught "if this, then that." When the computer is taught the "this," it can react with "that." But a human (or an animal, for that matter) does not need to be taught anything in order to know how to respond; they just do. And what's…
