The use of artificial intelligence (AI) in medical diagnostics is on the rise, but new research from the University of Adelaide finds that significant hurdles remain before these systems can match the way clinicians make decisions.
In a paper published in The Lancet Digital Health, Australian Institute for Machine Learning PhD student Lana Tikhomirov, Professor Carolyn Semler and a team from the University of Adelaide drew on external research to explore what they call the “AI chasm”.
The AI chasm has arisen because the development and commercialization of AI decision-making systems have outpaced understanding of their value to clinicians and their impact on human decision-making.
“This can result in automation bias (overlooking AI errors) and misuse,” Tikhomirov said.
“Misconceptions about AI limit our ability to make the most of this new technology and successfully augment humans.
“While technology implementations in other high-risk environments, such as increased automation in airplane cockpits, have been explored in the past to understand and improve their use, evaluating AI implementations for clinicians remains a neglected area.
“AI should be used more like a clinical drug than a device.”
Research has shown that clinicians are situationally motivated and mentally resourceful decision makers, whereas AI models make decisions without understanding the clinical context or how the data relate to the patient.
“The clinical environment is rich with sensory cues that may not be noticeable to a novice observer but are used to make a diagnosis,” Tikhomirov said.
“For example, the brightness of a nodule on a mammogram may indicate the presence of a certain type of tumor, and certain symptoms noted on an imaging requisition can affect the sensitivity with which a radiologist can spot features.”
“With experience, clinicians learn what cues direct their attention to the most clinically relevant information in their environment.
“The ability to use this domain-relevant information is known as cue utilization, and it is a hallmark of expertise. It enables clinicians to rapidly extract the important features from clinical evidence and to guide subsequent processing and analysis while maintaining a high degree of accuracy.
“AI models cannot question their datasets in the way clinicians are encouraged to question the validity of what they've been taught, an approach known in clinical practice as epistemic humility.”