Is Google’s AI Sentient?

On June 11th, The Washington Post published a story about Blake Lemoine, an AI engineer at Google who worked on a language AI system called LaMDA. Convinced that the AI had become self-aware, Lemoine reported his suspicions to his supervisors. Google's directors ultimately rejected his conclusion that the program had achieved sentience and placed Lemoine on administrative leave. Then, in defiance of the leave, Lemoine went public with his findings.

Lemoine based his conclusions on long conversations he'd had with the program on a variety of topics. Nitasha Tiku of The Washington Post reports: "As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics." Impressive, but does it mean the program is self-aware?

Language AI

Language AI programs like LaMDA allow a human user to pose questions to them. The program draws on millions of bits of text data and human conversations gathered from the internet, then uses statistical analysis to assemble what it calculates to be the most appropriate answer to the question.

Programs like LaMDA aren't, in fact, all that new. Earlier, cruder forms of similar programming exist in programs like Cleverbot. If you pose some questions to Cleverbot, you'll often get answers that aren't, ahem, terribly clever. Sometimes you'll just get nonsense.

Language AI programming of this kind has grown more sophisticated over time. Predictive text has long since been folded into SMS messaging on smartphones, as well as into the suggestion features in Gmail and similar email services. LaMDA is at the forefront of this technology, having refined the programming to such a degree that a person can carry on long, coherent exchanges with it. The exchanges were uncanny enough to convince at least one Google engineer that the program had achieved a kind of personhood.

So, Is It Sentient?

It may be impossible to tell for sure, as the question "is the machine self-aware?" first requires one to determine what self-awareness is. Philosophers, scientists, and theologians have been debating whether humans are self-aware for hundreds of years, never mind computer programs.

So, let's table the broader philosophical questions and consider some concrete facts about AI programs themselves. In an aptly titled CNN article, "No, Google's AI Is Not Sentient," Rachel Metz summarizes the case against self-awareness by breaking down how the AI actually functions:

Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including "Rebooting AI: Building Artificial Intelligence We Can Trust," called the idea of LaMDA as sentient "nonsense on stilts" in a tweet. He quickly wrote a blog post pointing out that all such AI systems do is match patterns by pulling from enormous databases of language.
In an interview Monday with CNN Business, Marcus said the best way to think about systems such as LaMDA is like a "glorified version" of the auto-complete software you may use to predict the next word in a text message. If you type "I'm really hungry so I want to go to a," it might suggest "restaurant" as the next word. But that's a prediction made using statistics.
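Marcus's "glorified auto-complete" point can be made concrete with a toy sketch. The snippet below is a minimal, hypothetical illustration (not LaMDA's actual architecture, which uses a far more complex neural network): it counts which words follow which in a tiny sample corpus, then "predicts" the next word purely from those frequencies.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real systems learn from billions of words.
# These example sentences are invented for illustration.
corpus = [
    "i want to go to a restaurant",
    "i want to go to a movie",
    "i want to go to a restaurant tonight",
    "we went to a restaurant",
]

# Count how often each word follows each preceding word (a bigram model).
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("a"))  # -> "restaurant" ("restaurant" follows "a" most often)
```

The program has no idea what a restaurant is; it simply outputs the word that most frequently followed "a" in its training data. That is the sense in which these systems match patterns rather than think.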
Many of the experts Metz interviewed echo this response: LaMDA is good at assembling responses based on complex calculations, but that doesn't necessarily mean it has something like a mind. Even if the program can effectively predict an appropriate response to a question, that's not the same thing as being an entity that deliberates, sets goals, or pursues interests.
Some of the experts interviewed in the CNN piece did allow that programs like LaMDA may be, as one data scientist put it, "slightly conscious." But just how deep that consciousness goes, or whether it resembles human consciousness, is up for debate.
Whatever the case, predictive AI like LaMDA is already in use in smart grids, smart factories, and even cybersecurity. Contact Titan Tech today to see if there are any AI products on the market that might help your business.
And join us again soon for more tech news.