Google’s AI LaMDA Is Not Conscious, But Has Racial And Gender Bias

Although a conscious AI is a thoroughly freaky concept, it is not (yet) a reality. But a racist and sexist AI? Sadly, very much a reality.

In a recent interview with Wired, engineer and mystic Christian priest Blake Lemoine discussed why he believes Google’s large language model LaMDA has become conscious, complete with a soul. That claim has been refuted by many in the artificial intelligence community and led to Google placing Lemoine on paid administrative leave, but in the same interview Lemoine also explained how he started working on LaMDA.

His journey with the AI began with a much more down-to-earth problem: examining the model for harmful biases related to sexual orientation, gender identity, ethnicity, and religion.

“I don’t believe there is such a thing as an unbiased system,” Lemoine told Wired. “The question was whether [LaMDA] had any of the harmful biases we wanted to eliminate. The short answer is yes, I found plenty.”

Lemoine also explained that, as far as he could tell, the Google team did a good job of fixing these biased “bugs.” When asked whether LaMDA displayed racist or sexist leanings, Lemoine answered carefully, stating that he “wouldn’t use that term.” Instead, he claims that “the real question is whether or not the stereotypes it uses would be endorsed by the people [LaMDA is] talking about.”

Lemoine’s reluctance to label LaMDA’s “bugs” as outright racist or sexist highlights an ongoing battle within the AI community, where many have spoken out about the harmful stereotypes AI systems often perpetuate. But when those who do speak out on these issues are largely Black women, and those women are subsequently fired from companies like Google, many feel it falls to men in tech like Lemoine to keep drawing attention to today’s AI bias problems, rather than diverting the attention of researchers and the public with claims of AI sentience.

“I don’t want to talk about sentient robots because there are people on all ends of the spectrum who harm other people, and that’s what I’d like to focus the conversation on,” former Google Ethical AI team co-lead Timnit Gebru told Wired.

Artificial intelligence has a long history of perpetuating harmful stereotypes, and Google is neither new to nor unaware of these issues.

In 2015, Jacky Alciné tweeted that Google Photos had tagged 80 photos of a Black man in an album titled “gorillas.” Google Photos learned to do this using a neural network, which analyzed massive amounts of data in order to categorize subjects like people and gorillas, and in this case it did so incorrectly.

It was the responsibility of Google’s engineers to ensure that the data used to train the AI photo system was correct and diverse. And when the system failed, it was their responsibility to fix the problem. According to the New York Times, however, Google’s response was to eliminate “gorilla” as a photo category rather than retrain its neural network.

Companies like Microsoft, IBM, and Amazon have faced the same biased AI problems. At each of these companies, the AI used for facial recognition technology showed significantly higher error rates when identifying the gender of darker-skinned women than when identifying the gender of lighter-skinned women, as reported by the Times.
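Findings like these come from disaggregated evaluation: instead of reporting a single overall accuracy number, researchers compute error rates separately for each demographic group. The Python sketch below illustrates the bookkeeping with purely hypothetical data; it is not Google’s or the researchers’ actual evaluation code.

```python
# Minimal sketch of a disaggregated evaluation: compute gender-classification
# error rates separately for each group rather than one aggregate score.
# All data here is hypothetical and only illustrates the idea.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical model outputs, not real benchmark results.
sample = [
    ("darker-skinned women", "male", "female"),
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "male", "female"),
    ("lighter-skinned women", "female", "female"),
    ("lighter-skinned women", "female", "female"),
    ("lighter-skinned women", "female", "female"),
]

print(error_rates_by_group(sample))
# -> roughly 0.67 for darker-skinned women vs 0.0 for lighter-skinned women
```

A single accuracy number averaged over everyone would hide exactly the kind of gap this per-group breakdown exposes.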

In 2020, Gebru published a paper with six other researchers, four of whom also worked at Google, criticizing large language models like LaMDA and their tendency to parrot language from the datasets they are trained on. If those datasets contain biased language or racist and sexist stereotypes, AIs like LaMDA will reproduce those biases when generating text. Gebru also criticized training language models on increasingly large datasets, which lets the AI mimic language ever more convincingly and can persuade the public that it represents real progress, or even sentience, the very trap Lemoine fell into.
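The mechanism the paper describes is easy to illustrate with a toy example (this is not LaMDA and not the paper’s methodology): a model that simply predicts the most frequent continuation in its training data will hand back whatever skew that data contains.

```python
# Toy illustration: skewed training text in, skewed predictions out.
from collections import Counter

# Hypothetical, deliberately skewed "training corpus".
corpus = [
    "the doctor said he",
    "the doctor said he",
    "the doctor said she",
    "the nurse said she",
    "the nurse said she",
    "the nurse said he",
]

def most_likely_next_word(prefix, corpus):
    """Pick the word that most often follows `prefix` in the corpus."""
    counts = Counter(
        line[len(prefix):].split()[0]
        for line in corpus
        if line.startswith(prefix) and len(line) > len(prefix)
    )
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next_word("the doctor said ", corpus))  # -> "he"
print(most_likely_next_word("the nurse said ", corpus))   # -> "she"
```

Real language models are vastly more sophisticated, but the underlying point stands: the statistics of the training data, stereotypes included, are what the model learns to reproduce.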

After a dispute over this paper, Gebru says Google fired her in December 2020 (the company claims she resigned). A few months later, Google also fired Dr. Margaret Mitchell, founder of its Ethical AI team, a co-author of the paper, and an advocate for Gebru.

Despite a supposed commitment to “responsible AI,” Google still faces ethical AI issues, leaving little room for claims about sentient AI

After the fallout, and after admitting the episode had damaged its reputation, Google promised to double its responsible AI research staff to 200 people. And according to Recode, CEO Sundar Pichai pledged to fund more ethical AI projects. Yet the small group of people still on Google’s ethical AI team believe the company may no longer be listening to their ideas.

After Gebru and Mitchell’s departures, two more prominent ethical AI team members left roughly a year later: Alex Hanna and Dylan Baker quit Google to work for Gebru’s research institute DAIR, or Distributed Artificial Intelligence Research. The already small team got even smaller, which may help explain why Lemoine, who is not on the AI ethics team, was asked to step in and investigate LaMDA’s biases in the first place.

As more and more societal functions come to rely on AI systems, it is more important than ever to keep examining how the underpinnings of AI shape what it does. In an often racist and sexist society, we cannot afford to build our policing systems, transportation methods, translation services, and more on technology that has racism and sexism baked into it. And, as Gebru points out, when the (mostly) white men in tech choose to focus on things like AI sentience rather than on these existing biases, especially when addressing bias was their original assignment, as it was for Lemoine with LaMDA, those biases will keep spreading, hidden under the din of robot sentience.

“There’s a pretty big gap between the current story of AI and what it can actually do,” Giada Pistilli, an ethicist at Hugging Face, told Wired. “This story evokes fear, surprise, and excitement at the same time, but it is mainly based on lies to sell products and take advantage of the hype.”