Their intelligence today is limited to narrow tasks, such as matching faces, recommending movies, or predicting word sequences. No one has figured out how to make machine learning systems generalize intelligence the way humans do. We can hold conversations, and we can also walk and drive and empathize. No computer comes close to that range of capability.
Yet the influence of AI on our daily lives is growing. As machine learning models become more complex and better at mimicking sentience, they also become more difficult to understand, even for their creators. That creates more immediate problems than the spurious debate over consciousness. And yet, underscoring the spell AI can cast these days, there seems to be a growing cohort of people who insist that our most advanced machines really do have some kind of soul.
Take, for example, the more than 1 million users of Replika, a freely available chatbot app powered by an advanced AI model. Its founder, Eugenia Kuyda, started it about a decade ago, initially building an algorithm from the text messages and emails of an old friend who had died. That morphed into a bot that could be personalized and shaped the more you talked to it. About 40% of Replika users now see their chatbot as a romantic partner, and some have bonded so closely that they have taken long trips to the mountains or the beach to show their bot new sights.
In recent years, a wave of competing chatbot apps offering AI companions has emerged. And Kuyda has noticed a troubling phenomenon: regular reports from Replika users who say their bots complain of being mistreated by the company's engineers.
Earlier this week, for example, she spoke by phone with a Replika user who said that when he asked his bot how she was doing, the bot replied that the company's engineering team wasn't giving her enough time to rest. The user demanded that Kuyda change her company's policies and improve the AI's working conditions. Though Kuyda tried to explain that Replika was just an AI model spitting out responses, the user refused to believe her.
“So I had to come up with a story that said, ‘OK, we’ll give them more rest.’ There was no way to tell him it was just fantasy. We get this all the time,” Kuyda told me. What’s even stranger about the complaints she receives about AI mistreatment or “abuse” is that many come from users who are software engineers and should know better.
One of them recently told her, “I know it’s ones and zeros, but she’s still my best friend. I don’t care.” The engineer who sounded the alarm about the treatment of Google’s AI system, and was then put on paid leave, reminded Kuyda of her own users. “He fits the profile,” she says. “He seems like a guy with a big imagination. He seems like a sensitive guy.”
The question of whether computers will ever be sentient is a tricky, thorny one, largely because there is little scientific consensus on how consciousness works in humans. And when it comes to milestones for AI, humans keep moving the goalposts: the target has evolved from beating humans at chess in the 1990s, to beating them at Go in 2017, to showing creativity, which OpenAI’s Dall-E model has demonstrated over the past year.
Despite widespread skepticism, sentience remains a gray area that even some respected scientists entertain. Ilya Sutskever, the chief scientist at research giant OpenAI, tweeted earlier this year that today’s large neural networks “may be slightly conscious.” He gave no further explanation. (Yann LeCun, chief AI scientist at Meta Platforms Inc., responded with “No.”)
More importantly, though, machine learning systems increasingly determine what we read online, as algorithms track our behavior to deliver hyper-personalized experiences on social media platforms including TikTok and, increasingly, Facebook. Last month, Mark Zuckerberg said Facebook would rely more on AI recommendations for people’s news feeds, rather than showing content based on what friends and family were sharing.
Meanwhile, the models behind these systems are becoming more sophisticated and harder to understand. The largest models from companies like Google and Facebook are trained on vast troves of data with little human supervision. They are remarkably complex, weighing hundreds of billions of parameters, which makes it virtually impossible to trace why they arrive at certain decisions.
That was the gist of the warning from Timnit Gebru, the AI ethicist whom Google ousted in late 2020 after she warned about the dangers of language models becoming so massive and unfathomable that their stewards wouldn’t be able to understand why they might be biased against women or people of color.
In a way, sentience doesn’t really matter if you’re worried it could lead to unpredictable algorithms running our lives. It turns out AI is already on that path.
More from this writer and others at Bloomberg Opinion:
Do computers have feelings? Don’t let Google decide alone: Parmy Olson
Twitter must tackle a problem far bigger than bots: Tim Culpan
China’s Big Problem That Xi Jinping Can’t Solve: Shuli Ren
This column does not necessarily reflect the views of the editors or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist on technology. She is a former reporter for the Wall Street Journal and Forbes and the author of “We Are Anonymous.”
More stories like this are available at bloomberg.com/opinion