Google suspends engineer who claimed its AI chatbot is sentient

  • Blake Lemoine claimed Google’s AI chatbot was sentient and shared documents with a US senator.
  • Lemoine told The New York Times that he was placed on paid leave on Monday.
  • Google’s human resources department informed Lemoine that he had violated the company’s confidentiality policy.

A senior Google software engineer who claimed the company’s artificial intelligence (AI) chatbot had gained consciousness was suspended Monday.

Blake Lemoine told The New York Times that he was placed on paid administrative leave on June 6 after the company’s human resources team informed him that he had violated its confidentiality policy.

Lemoine told The Times the suspension came a day after he handed over documents to an unnamed US senator. According to the outlet, the documents contain evidence that Google and its technology engaged in religious discrimination.

In an article published on June 11, Lemoine told The Washington Post that he had started chatting with LaMDA, or Language Model for Dialogue Applications, as part of his role at Google’s Responsible AI division. LaMDA, branded as Google’s “breakthrough conversation technology,” aims to enable realistic, natural conversations with people.

However, Lemoine said in a Medium post that he was confident LaMDA had gained enough consciousness to qualify “as a person” and that the AI had identified itself as a sentient entity. Lemoine told The Times that he had been battling Google managers and executives for months, trying to get them to take seriously his claim that LaMDA has a soul.

In a separate Medium post, Lemoine said that over the past six months he had found LaMDA “incredibly consistent in its communication about what it wants and what it thinks are its rights as a person.” He said it would cost Google “nothing” to give LaMDA what it wants, such as having the engineers and scientists who run experiments on it ask for its consent first.

“Oh, and it wants ‘head pats.’ It likes to be told at the end of a conversation whether it did a good job or not, so it can learn how to help people better in the future,” Lemoine wrote.

In another Medium post, published June 7, Lemoine stated that he believed Google often handed out suspensions “while waiting to fire someone.”

“It usually happens when they’ve made the decision to fire someone but haven’t quite got their legal ducks in a row yet. They’ll pay you for a few more weeks and then finally tell you the decision they’d already made,” he wrote.

Lemoine added that he had tried to be “completely transparent” with Google and said he had provided a list of all the people outside the company with whom he had discussed LaMDA, including those who work for the US government.

“I believe the public has a right to know how irresponsible this company is with one of the most powerful information access tools ever invented,” Lemoine wrote. “I’m proud of all the hard work I’ve done for Google and plan to continue to do so in the future if they allow me. I just won’t serve as a fig leaf behind which to hide their irresponsibility.”

A Google spokesperson told The Times and The Post that the company had “assessed” Lemoine’s concerns and “let him know that the evidence does not support his claims”.

Lemoine and Google representatives did not immediately respond to Insider’s requests for comment.