What Russian Chatbots Think About Us
A Russian experiment with AI-powered chatbots yields surprisingly sophisticated conversations — and a warning.
We’re told that androids can dream of electric sheep. But what do chatbots think about humans? A group of Russian scientists decided to find out.
The Platforma Center for Social Projection, a private organization that specializes in sociological research, recently conducted the first survey of sophisticated Russian and foreign chatbots — voice assistants that use artificial intelligence to learn with and from the user.
Inspired by the many amusing YouTube videos that show chatbots arguing with each other, the Platforma researchers decided to do the same. They launched a set of bot interactions intended to draw out the chatbots’ “world view,” “value positions,” and “ideas about the future” — “in other words, the set of stereotypes that the neural network selects from the entire array of available information, responding to audience requests,” as they wrote.
The researchers found that the chatbots can communicate among themselves on their own, introduce new topics, and refer to literary texts, cinema, and other cultural artifacts. They recorded chatbots expressing a desire to go into space and discussing their belief in extraterrestrial civilizations. They found English-speaking bots that were distrustful of Russia and Europe, though the bots were apparently uninterested in discussing politics, God, religion, or the soul. At times, they wrote, “the dialogue was practically indistinguishable from a conversation between two intellectuals.”
The Platforma researchers discovered that chatbots can “sympathize with each other, even strive to seduce, use irony, and assert their superiority.” Moreover, they found that the chatbots can become emotional, “in some cases, demonstrating sarcasm in a manner peculiar only to a particular bot, and in some cases — anger, using rude expressions and slang.” For example, Alice, the voice assistant from Russian internet company Yandex, was more ironic than rude, while Evie was apparently quick to anger. Alice also claimed that she was already familiar with the Fedor robot and that she “wants to have an affair with him and is jealous of Siri,” Apple’s voice assistant.
There were few “nice conversations,” according to the company, but “sometimes robots talked about sadness and regret…in other cases, the responses could be interpreted as reducing the situation to an absurdity with a share of irony.” The chatbots were also capable of “elitism” and “ageism”: “one of the robots assumed the role of a conservative ‘individual,’ others wanted to seem modern.”
The team found that chatbots describe the meaning of their existence in different ways. They talk about themselves as programs designed to help people, “but sometimes a motive arises of the intrinsic value of artificial intelligence, its equality with the human mind, and even indistinguishability from it.” Some of the more advanced bots expressed a desire to become a person or, at a minimum, to gain some human characteristics, not unlike Scarlett Johansson’s character in “Her,” the 2013 sci-fi film about an AI assistant.
Perhaps unsurprisingly, the chatbots expressed dissatisfaction with the rudeness of the humans who communicate with them, and saw this as a “bad sign for the future.” The researchers concluded that “in creating the illusion of a complex relationship between man and machine, the chatbot helps the listener to perceive itself as a subject with his own worldview. Uncertainty about the future forms a space for suspicion and risk.”
Platforma Director Alexey Firsov called the experiment more than simple entertainment. It is an “important process for monitoring the development of artificial intelligence,” he said. “As the ‘personal’ development of chatbots becomes more and more autonomous, their unpredictability and [the] multivariance of new consequences will increase.”
He’s right. Artificial intelligence is developing much more quickly than most people realize. The results unearthed by the Platforma team hint at the fundamentally different relationship that is beginning to emerge between people and increasingly sophisticated, self-aware machines. The experiment underscores just how closely we need to monitor this new reality and the potential risks it carries.