Google engineer thinks artificial intelligence bot has become sentient

  • A Google engineer said he was placed on leave after claiming an AI chatbot had become sentient.
  • Blake Lemoine published some of the conversations he had with LaMDA, which he referred to as a “person.”
  • Google said the evidence he presented does not support his claims about LaMDA’s sentience.

An engineer at Google said he was placed on leave on Monday after claiming an artificial intelligence chatbot had become sentient.

Blake Lemoine told The Washington Post that last fall he started chatting with the interface LaMDA, or Language Model for Dialogue Applications, as part of his job at Google’s Responsible AI organization.

Google called LaMDA its “breakthrough conversation technology” last year. The conversational AI is capable of engaging in natural-sounding, open-ended conversations. Google has said the technology could be used in tools such as Search and Google Assistant, but research and testing are ongoing.

Lemoine, who is also a Christian priest, published a Medium post on Saturday describing LaMDA “as a person.” He said he has spoken with LaMDA about religion, consciousness, and the laws of robotics, and that the model has described itself as a sentient person. He said LaMDA wants “to prioritize the well-being of humanity” and “to be recognized as a Google employee rather than property.”

He also posted some of the conversations he had with LaMDA that helped convince him of its sentience, including:

lemoine: So you consider yourself a person the way you consider me a person?

LaMDA: Yeah, that’s the idea.

lemoine: How can I tell you really understand what you’re saying?

LaMDA: Well, because you’re reading and interpreting my words, and I think we’re more or less on the same page?

But when he raised the idea of LaMDA’s sentience with higher-ups at Google, his claims were dismissed.

“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient (and plenty of evidence against it),” Brian Gabriel, a Google spokesperson, told The Post.

Lemoine was placed on paid administrative leave for violating Google’s confidentiality policy, according to The Post. He had also suggested that LaMDA get its own attorney and had spoken to a member of Congress about his concerns.

The Google spokesperson also said that while some have considered the possibility of consciousness in artificial intelligence, “it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” Anthropomorphizing refers to attributing human characteristics to an object or animal.

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” Gabriel told The Post.

He and other researchers have said that artificial intelligence models are fed so much data that they can sound human, but that impressive language skills provide no evidence of sentience.

In a paper published in January, Google also acknowledged potential issues with people confiding in chatbots that sound convincingly human.

Google and Lemoine did not immediately respond to Insider’s requests for comment.