Despite Google Engineer’s Claims, Experts Say LaMDA Isn’t Sentient

  • Last week, a Google engineer was placed on leave after claiming the company’s chatbot was sentient.
  • Insider spoke to seven experts who said the chatbot is likely not sentient.
  • There are no clear guidelines for determining whether an AI is alive or conscious, experts say.

It’s unlikely — if not impossible — that a Google chatbot has come to life, experts told Insider after one of the search giant’s senior engineers was suspended for making startling claims.

The engineer told The Washington Post that while chatting with Google’s interface called LaMDA – or Language Model for Dialogue Applications – he came to believe that the chatbot had become “sentient,” or able to perceive and feel like a human. The engineer, Blake Lemoine, worked in Google’s Responsible Artificial Intelligence organization.

But Lemoine, who didn’t respond to a request for comment from Insider, appears to stand alone in his claims about the artificial-intelligence-powered chatbot: A Google spokesperson said a team of ethicists and technologists reviewed Lemoine’s claims and found no evidence to support them.

“Hundreds of researchers and engineers have spoken to LaMDA and we are not aware of anyone else making the broad claims, or anthropomorphizing LaMDA, as Blake has,” the spokesperson said.

Seven experts contacted by Insider agreed: They said the AI chatbot is likely not sentient, and that there’s no clear-cut way to measure whether an AI-powered bot is “alive.”

“The idea of sentient robots has inspired great science fiction novels and movies,” Sandra Wachter, an Oxford University professor who focuses on the ethics of AI, told Insider. “But we are a long way from creating a machine that is akin to humans and capable of thought,” she added.

A simple system

Another Google engineer who has worked with LaMDA told Insider that while the chatbot is capable of wide-ranging conversations, it follows relatively simple processes.

“What the code does is model sequences in language that it got off the internet,” the engineer, who asked to remain anonymous because of Google’s media policy, told Insider. In other words, the AI can “learn” from material scattered across the web.
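
To illustrate what “modeling sequences in language” looks like in practice, here is a minimal, hypothetical Python sketch. It uses the publicly available GPT-2 model through the Hugging Face transformers library purely as a stand-in, since LaMDA itself is not public: a language model continues a prompt with statistically likely words learned from internet text, with no feeling involved.

    # Minimal sketch: a public language model (GPT-2) stands in for LaMDA,
    # which is not publicly available. It continues a prompt with words that
    # are statistically likely given patterns learned from internet text.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "I feel happy or sad sometimes because"
    result = generator(prompt, max_length=30, num_return_sequences=1)

    # The continuation can sound emotional, but it is produced by pattern
    # matching over learned word sequences, not by any felt experience.
    print(result[0]["generated_text"])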

Male hacker coding. Getty Images

The engineer said it is extremely unlikely that LaMDA can actually feel pain or experience emotion, despite conversations in which the machine seems to convey emotion. In one conversation Lemoine posted, the chatbot says it feels “happy or sad sometimes.”

It’s hard to distinguish ‘feeling’

The Google engineer and several experts told Insider that there’s no clear-cut way to detect “feeling,” or to differentiate between a bot designed to mimic social interaction and one that might actually feel what it conveys.

“You somehow couldn’t differentiate between feeling and not feeling based on the set of words that come out, because it’s just patterns that have been learned,” the engineer told Insider. “There is no ‘gotcha’ question.”

Laura Edelson, a postdoctoral researcher in computer science at NYU, told Insider that the transcript of the conversation between Lemoine and LaMDA shows little evidence of sentience. And the fact that the conversation was edited makes it even murkier, she said.

The Google logo can be seen at the company’s headquarters in Mountain View, California. Marcio Jose Sanchez/AP

“Even if you had a chatbot that could have a superficial conversation about philosophy, that’s not really different from a chatbot that could have a superficial conversation about movies,” Edelson said.

Giada Pistilli, a researcher specializing in AI ethics, told Insider that it is human nature to attribute emotions to inanimate objects — a phenomenon known as anthropomorphization.

And Thomas Dietterich, a computer science professor emeritus at Oregon State University, said it’s relatively easy for AI to produce language that refers to internal emotions.

“You can train it on large amounts of written text, including stories with emotion and pain, and then it can finish that story in a way that seems original,” he said. “Not because it understands these feelings, but because it knows how to combine old sequences to create new ones.”

Dietterich told Insider that AI’s role in society will no doubt be subject to further scrutiny.

“Sci-fi has made sentience seem magical, but philosophers have struggled with this for centuries,” Dietterich said. “I think our definitions of what is alive will change as we continue to build systems over the next 10 to 100 years.”