There are many reasons why you shouldn’t trust an AI chatbot with health advice.
Chatty assistants are surprisingly prone to lying with confidence, which can wreak havoc. Case in point: a recent study found that OpenAI's ChatGPT was extremely bad at coming up with the right diagnosis.
That goes for the health of our furry companions, too. In an article published in the literary magazine n+1 this year, author Laura Preston recalled a particularly bizarre moment she witnessed while attending an AI conference in April; screenshots of her account have since been circulating on social media.
While speaking at the event, Kal Lai, CEO of pet health startup AskVet, which released its ChatGPT-based “animal health answer engine” VERA in February 2023, recalled a strange story about a woman whose elderly dog was suffering from diarrhea.
The woman reportedly sought advice from VERA and received disturbing answers.
“Your dog is nearing the end of its life,” the chatbot responded, Preston said. “We recommend euthanasia.”
Lai said the woman was visibly distressed and in denial about her dog's condition, and Preston recalled that VERA “knew where she was” and sent her “a list of nearby clinics that could do the job.”
The woman initially resisted the chatbot's suggestion, but eventually gave in and had her dog euthanized.
“The CEO expressed satisfaction with the chatbot's work, which, through a series of escalating tactics, convinced the woman to end her dog's life, something she never wanted,” Preston wrote in the essay.
“The point of this story is that the woman forgot she was talking to a bot,” Lai told the audience, according to Preston. “That experience was very human.”
In other words, the CEO was celebrating the fact that the company's chatbot convinced a woman to end her dog's life, which raises some serious ethical questions.
Firstly, given how unreliable these tools are, did the dog really need to be euthanized? And if the best course of action was indeed to let the dog die, shouldn't that advice have come from a human veterinarian who knew what they were doing?
Futurism has reached out to AskVet for comment.
Lai's story highlights a disturbing new trend as AI companies race to replace human workers, from programmers to customer service representatives, with AI assistants.
Experts worry that generative AI in healthcare could come with big risks, especially now that healthcare startups are entering the field.
In one recent paper, researchers noted that chatbots remain “prone to producing harmful or persuasive but inaccurate content” and “require ethical guidance and human oversight.”
“Furthermore, critical investigation is needed to evaluate the necessity and justification for the current experimental use of LLMs,” they concluded.
Preston's essay is an especially salient example, considering that pet owners are responsible for the health of a beloved companion, and ultimately for the life of another being.
Meanwhile, screenshots of Preston's essay were met with outrage on social media.
“I'm logging out again. This is, without sarcasm, one of the worst articles I've ever read,” one Bluesky user wrote. “Words can't express how much hate I feel right now.”
More about AI chatbots in healthcare: ChatGPT is the worst doctor