Google’s AI Part II: What’s at Stake?

On June 17th we published a post about the controversy surrounding one of Google's AI projects, LaMDA. To summarize, an engineer who had been working on the AI became convinced that LaMDA had achieved a kind of personhood. After Google placed him on administrative leave, he made his point of view public.

In our post, we described, in as plain terms as we could, the mechanics behind the AI and how it uses language to interact with human beings. In short, the program scours the internet for conversations and then analyzes them to fashion appropriate responses to questions from humans. Think of LaMDA as a turbo-charged version of the autocomplete function on your word processor or smartphone. Today, we'll explore the social implications this kind of technology could have if it were brought into broad use.
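To make that autocomplete analogy concrete, here is a toy sketch in Python. It is nothing like LaMDA's actual code (which uses vast neural networks rather than simple word counts), but it illustrates the same basic idea: learn from existing text which words tend to follow which, then suggest the most likely next word. The sample text and function names are invented for illustration.

```python
# Toy "autocomplete" sketch: count which words follow which in some sample
# text, then suggest the most frequent follower. This is an illustration of
# the general idea behind language modeling, not how LaMDA actually works.
from collections import Counter, defaultdict

sample_conversations = (
    "how are you today "
    "how are you feeling "
    "how are things going "
    "are you coming to dinner tonight"
)

# For each word, count how often each other word appears right after it.
followers = defaultdict(Counter)
words = sample_conversations.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def suggest_next(word):
    """Return the word most often seen after `word`, like autocomplete."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(suggest_next("how"))  # -> "are"
print(suggest_next("are"))  # -> "you"
```

A real language model does the same kind of pattern-learning at enormous scale, over billions of sentences, which is why its responses can feel convincingly human.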

Ghost in the Machine

Supremely intelligent computers that, often inexplicably, become self-aware before either enslaving or wiping out humanity have appeared in schlocky sci-fi movies and novels for decades. But what if self-awareness or the prospect of a robot apocalypse aren't really what's at issue?

A June 14th article in The Verge explores what the kind of language modeling used by LaMDA and similar programs may mean for human life. The author interviews a variety of AI researchers and ethicists to explore the conundrum.

For many of the experts interviewed, the issue isn't so much the possibility of a superintelligent AI exploiting humans but rather humans using AI to exploit other humans. One researcher, for example, fears that advanced language modeling "will lead to scams." Imagine an AI program crafting electronic communications that resemble those of a family member or loved one after having analyzed personal communications stolen by a cybercriminal: "Hey, sweetie, remind me what your mother's maiden name and date of birth are..."

AI could also be used to deepen existing socioeconomic inequality. According to another ethicist, the hand-wringing about conscious AI "prevents people from questioning real, existing harms like AI colonialism, false arrests, or an economic model that pays those who label data little while tech executives get rich." Like any tool, AI's potential benefit or harm depends on how it's used.

What are the Right Questions?

Many of the broad questions surrounding artificial intelligence are difficult to answer. Can a machine have feelings? Can it truly think, deliberate or achieve a kind of consciousness? Could it have desires, a will or even an agenda? Yet even when similar questions are applied to humans and animals, there aren't always clear answers.

One way to respond to these questions is to dismiss them entirely. According to the article, many AI experts no longer even consider such questions because they believe that much of the hype around AI draws attention away from more salient issues. Even the article's writer dismissed such preoccupations: "asserting that an AI model can gain consciousness is like saying a doll created to cry is actually sad."

Perhaps a more helpful line of questioning is one that considers how language modeling and similar technology could be employed by different kinds of human agents. How do we ensure that it's used in a way that's transparent and fair? What sorts of laws and policies should be put in place to prevent its abuse? Who gets access to the technology, and for what purposes?

As the technology becomes easier to produce, it's likely to start appearing more frequently in everyday life. As that happens, you'll need a guide to help you understand the technology, its benefits and its pitfalls. Contact Titan Tech today to learn more.

If you're interested in hearing the other side of the argument, you can read the Washington Post story that started the controversy here.