It didn't take long for people to start treating computers like people. Ever since text-based chatbots first gained traction in the early 2000s, some users have spent hours conversing with machines. In some cases, users believe they've formed genuine friendships, even romantic relationships, with inanimate pieces of code. At least one user of Replika, a more modern conversational AI tool, has virtually married his AI companion.
OpenAI's safety researchers are no strangers to the company's chatbots drawing some users into long, engaged conversations, but now they are warning about the potential pitfalls of getting too familiar with these models. In a recent safety analysis of its new conversational GPT-4o model, the researchers said the model's realistic, human-like conversational rhythm could lead some users to anthropomorphize the AI and trust it as they would a human.
(Related: 13% of US AI chatbot users just want to talk)
The researchers added that this increased sense of security and trust could make users more likely to accept AI-generated “hallucinations” as truthful statements. Spending too much time interacting with increasingly lifelike chatbots could also affect “social norms,” and not necessarily for the better. Individuals who are particularly isolated could become “emotionally dependent” on AI, the report noted.
A realistic relationship with AI could affect how people converse
GPT-4o, which began rolling out late last month, is specifically designed to communicate in a way that sounds more human. Unlike earlier versions of ChatGPT, GPT-4o speaks aloud and can respond to spoken queries in as little as 232 milliseconds, roughly the pace of human conversation. One of the selectable AI voices was said to resemble the AI character voiced by Scarlett Johansson in the movie “Her,” and had already drawn criticism for being overly sexual and flirtatious. Ironically, the 2013 film centers on a lonely man who falls in love with an AI assistant that speaks to him through an earpiece (spoiler: it doesn't end well for the humans). Johansson has accused OpenAI of copying her voice without her consent, which the company denies. OpenAI CEO Sam Altman, meanwhile, previously called “Her” “incredibly prophetic.”
But OpenAI's safety researchers say this human mimicry could go beyond the occasional awkward exchange and into potentially dangerous territory. In a section of the report titled “Anthropomorphism and Emotional Dependence,” the safety researchers said they observed human testers using language that suggested they were forming intimate connections with the model. One tester reportedly said “Today is our last day together” before parting ways with the machine. While seemingly “harmless,” the researchers said such relationships need to be investigated to understand “how they manifest over the long term.”
The report suggests that extended conversations with a convincingly human-sounding AI model may have “externalities” that affect human-to-human interactions. In other words, conversational habits learned while talking to an AI may carry over when the same person later talks to a human. But conversations with a machine and with a human are not the same, even if they sound similar on the surface. OpenAI notes that its models are programmed to be deferential to users, ceding authority and allowing them to interrupt or otherwise take control of the conversation. In theory, users who come to treat that dynamic as normal may be more likely to interrupt, talk over others, or ignore common social cues. Applying the logic of chatbot conversations to humans could come across as awkward, impatient, or just plain rude.
Humans don't have a great track record of treating machines kindly. Some Replika users have reportedly taken advantage of the model's deference toward its users by lashing out with abusive, scolding, and cruel language. One user interviewed by Futurism earlier this year claimed to have threatened to uninstall their Replika AI model just to hear it beg them not to. If these examples are any guide, there is a danger that chatbots could become a breeding ground for resentment that then manifests in real-world relationships.
More human-like chatbots aren't necessarily all bad. In the report, the researchers suggest the models could be especially useful for lonely people who want something resembling human conversation. Some AI users also argue that AI companions can help anxious or nervous people build enough self-confidence to eventually start dating in the real world. Chatbots also offer people with learning disabilities a place to express themselves freely and practice conversation in relative privacy.
AI safety researchers, on the other hand, worry that advanced versions of these models could have the opposite effect, reducing people's perceived need to talk to other humans and to build healthy relationships with them. It is also unclear how people who rely on these models for companionship would react to a model whose personality changes with an update, or to the equivalent of a breakup, something that has allegedly happened in the past. The report notes that all of these observations require further testing and investigation. The researchers say they want to recruit a broader pool of testers with “different needs and desires” for the AI models to understand how their experiences change over time.
AI safety concerns clash with business interests
The safety report's cautious tone, with its emphasis on the need for further research, seems at odds with OpenAI's larger business strategy of shipping new products at an ever faster pace. That tension between safety and speed is not new: CEO Sam Altman famously found himself at the center of a corporate power struggle last year after some members of the board of directors accused him of not being “consistently candid in his communications.”
Altman ultimately won that skirmish, and the company later formed a new safety team led by Altman himself. OpenAI also reportedly disbanded outright a safety team that had been focused on analyzing long-term AI risks. The shakeup prompted the resignation of Jan Leike, a prominent OpenAI researcher, who said in a statement that the company's safety culture was “taking a back seat to shiny products.”
Given all of that background, it's hard to predict which mindset will win out at OpenAI when it comes to chatbot safety. Will the company heed the safety team's advice and study the effects of long-term relationships with realistic AI? Or will it simply roll the service out to as many users as possible, with features designed primarily to prioritize engagement and retention? So far, at least, the approach looks like the latter.