Modern medicine has an empathy problem, and artificial intelligence, used properly, could help mitigate it.
Despite the proliferation of communication training programs over the past 10 to 20 years, physicians often fail to express empathy, especially during stressful moments when patients and their families receive bad news and make difficult decisions. Because empathy has been shown to increase patient understanding and trust in the medical team, a lack of empathy leads to poorer quality patient care.
Will AI help? The question may seem ironic: Doctors who struggle to express empathy can already come across as robots. But researchers and medical professionals are increasingly asking it, and not just because we're living through an AI hype cycle.
One reason for the growing interest in AI as a solution to health care's empathy problem is that this aspect of care has proven particularly difficult to improve. That is hardly surprising, given that doctors are under mounting pressure to see large numbers of patients quickly while also being swamped with paperwork and administrative tasks. The result is a shortage of time and, perhaps more importantly, of emotional energy. The American Medical Association reports that 48% of doctors experienced burnout last year.
The magnitude of the empathy problem, and its high clinical and ethical stakes, have prompted a variety of AI applications to be explored. None is likely to be a panacea, and while each is well-intentioned, the entire endeavor carries risks.
A rather extreme option has been proposed by Dr. Arthur Gershon Jr., a member of the National Academy of Medicine and clinical professor of health systems and population health sciences at the University of Houston. He urges us to prepare for a time when some human doctors will be replaced by AI avatars. He believes it is possible, even likely, that AI avatars appearing on computer screens will be programmed to look like “doctors,” hold “in-depth conversations” with “patients and families,” and be customized to “respond very appropriately” to patients' moods and words.
Whether or not AI progresses that far, the prospect raises difficult questions about the ethics of empathy, including the risk that computer programs, which for the foreseeable future cannot actually experience empathy, will have a dehumanizing effect on patients. To be sure, not every human doctor who sounds empathetic is truly feeling empathy in the moment. But while doctors cannot always control their own emotions, they can recognize their patients' emotions and respond appropriately, even in the midst of a difficult situation.
No simulated AI “doctor,” no matter how intelligent it may appear, can truly care for its patients unless it somehow learns to experience human-like empathy. Until that day comes (and it may never come), bot-generated phrases like “Sorry to break the news” seem to belittle the very concept of empathy.
A more moderate vision revolves around using generative AI to support real-time doctor-patient communication. Anecdotal evidence suggests this use of the technology is promising, such as Dr. Joshua Tamayo Thurber's moving account of how ChatGPT saved the day in a California emergency department when he struggled to find the right words to connect with a distraught patient's family. Preliminary academic research, including a widely discussed article in JAMA Internal Medicine, also suggests that generative AI programs built on large language models can effectively simulate empathetic discourse.
But another recent study suggests that while the content of an empathetic message matters, so does the identity of the sender: People rate AI-generated empathetic messages as better, on average, than human-generated ones when they don't know who, or what, wrote them. Once recipients learn the words were generated by a bot, however, the machine's advantage disappears.
In the forthcoming book Move Slow and Upgrade, one of us (ES) suggests one possibility: integrating a version of generative AI into patient portals to increase physician empathy. Patients see the portal as a lifeline, but physicians spend so much time responding to messages in their inboxes that the back-and-forth contributes to burnout. An empathy button that edits draft messages could be a win-win: physicians could improve patient satisfaction while reducing the number of follow-up questions from patients.
This application of AI-generated empathy is promising in many ways, but it also carries many risks, even if the obvious challenges are resolved: the technology performing consistently well, being regularly audited, being configured to comply with HIPAA, neither doctors nor patients being forced to use it, and doctors using it transparently and responsibly. Thorny problems would still remain. How can doctors rapidly use AI and monitor its output without placing too much trust in the technology? What if it creates a split-personality problem, in which doctors sound like saints online but come across as robots in person? And how can we avoid creating new forms of AI dependency that further erode human communication?
Others are exploring AI's potential to improve physicians' communication skills. For example, one of us (TC) is involved in the SOPHIE project, a University of Rochester initiative to create AI avatars trained to play patients and provide personalized feedback. The project could help improve physicians' ability to express empathy appropriately. Preliminary data are promising, but it is too early to draw firm conclusions, and further clinical trials are ongoing.
This approach has the advantages of being reproducible, scalable, and relatively inexpensive. However, it may suffer from many of the same limitations as traditional human-led communication training. At the individual level, for example, communication skills tend to deteriorate over time, necessitating repeated training. Another problem is that the physicians who most need communication training may be the least likely to attend it. And it is unrealistic to expect programs like SOPHIE to overcome the system-level stresses and dysfunctions that are the primary causes of the empathy problem.
Because technology changes quickly, now is the time to have a thoughtful and inclusive conversation about the possibilities we have raised here. Neither of us has all the answers, but we hope the discussion about AI and empathetic communication will rest on the recognition that both the message and the messenger matter. Focusing too much on what AI can do could lead us to overestimate the value of its output and underestimate the relationship of care, which, at least for now and perhaps fundamentally, can only occur between humans. At the same time, prematurely concluding that AI is useless could lead us to needlessly prop up a dysfunctional system that leads many patients to view doctors as robots.
Evan Selinger, PhD, is Professor of Philosophy at the Rochester Institute of Technology and author, with Albert Fox Cahn, of the forthcoming book Move Slow and Upgrade: The Power of Incremental Innovation (Cambridge University Press). Thomas Carroll, MD, PhD, is Associate Professor of Medicine at the University of Rochester Medical Center.