AI is here to stay, so it’s vital that radiologists understand its benefits and limitations.
Merely mentioning artificial intelligence (AI) can strike fear into the hearts and minds of many practicing radiologists. But, the reality of what this technology can and cannot do could be quite different from what many providers believe.
To address lingering fears over AI and to separate fact from fiction, Woojin Kim, MD, chief medical information officer at Nuance Communications, discussed the current playing field for AI at this year’s annual meeting of the American Healthcare Radiology Administrators: The Association for Medical Imaging Management (AHRA).
“It’s important to have this conversation because many see headlines about how AI performs faster and better than radiologists, and they wonder if they’re going to be replaced,” he says. “But, there are many limitations and a number of challenges that are related to AI in medical imaging. We need to have a deep discussion about the reality of AI.”
Overall, he says, AI offers considerable promise in medical imaging, as many tools can accurately identify imaging findings, such as lung nodules, pneumonia, brain bleeds, or breast cancers. In doing so, the technology helps radiologists by reducing missed cases and shortening the time to diagnosis. These tools can also be used to improve other aspects of the imaging value chain, such as image quality, dose reduction, exam protocoling, and patient scheduling. But, AI still has its stumbling blocks.
For example, many AI techniques, deep learning in particular, have a "black box" problem. This arises when a provider cannot determine how an algorithm reached its decision, which is especially troubling when the algorithm makes an erroneous recommendation with a high degree of confidence, such as suggesting a contrast agent for a patient with a contraindication. Such mistakes erode trust and can prevent the clinical deployment of these AI tools.
Additionally, AI algorithms can be brittle, Kim explains. Training a model to detect pneumonia or breast cancer successfully on data from one hospital does not mean it will perform as well at another institution.
“Radiologists and radiology administrators must understand that just because an AI model claims 99% accuracy, that doesn’t mean it will work somewhere else with the same 99% accuracy,” he says. “This brittleness is a problem we’re talking about more and more.”
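To see the brittleness problem concretely, consider a minimal Python sketch; everything below is simulated for illustration and does not represent any real imaging model or data set. It mimics a classifier that learns a site-specific shortcut, such as a burned-in scanner marker, that tracks the diagnosis at the training hospital but carries no signal at a second hospital, so the reported accuracy does not travel.

# Illustrative sketch only: two simulated "hospitals" and a toy classifier
# showing why accuracy measured at one site may not transfer to another.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_site(n, shortcut_strength):
    # Simulated imaging-derived features for one hospital. The last column is a
    # site-specific shortcut that matches the label with probability
    # `shortcut_strength` (0.5 means it is pure noise).
    X = rng.normal(size=(n, 20))
    weights = np.linspace(1.0, 0.1, 20)
    y = (X @ weights + rng.normal(scale=2.0, size=n) > 0).astype(int)
    shortcut = np.where(rng.random(n) < shortcut_strength, y, 1 - y)
    return np.column_stack([X, shortcut]), y

# Hospital A: the shortcut almost perfectly mirrors the label, so the model relies on it.
X_train, y_train = make_site(3000, shortcut_strength=0.95)
# Hospital B: the same feature carries no signal at all.
X_external, y_external = make_site(3000, shortcut_strength=0.50)

model = LogisticRegression(max_iter=1000).fit(X_train[:2000], y_train[:2000])

print("Hospital A (held-out) accuracy:", accuracy_score(y_train[2000:], model.predict(X_train[2000:])))
print("Hospital B (external) accuracy:", accuracy_score(y_external, model.predict(X_external)))

The point is not the specific numbers but the gap between the two lines of output: a model can lean on quirks of its training site, which is exactly why Kim cautions against assuming a quoted 99% accuracy will hold elsewhere.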
With AI, there is also the possibility of an adversarial attack, in which hackers infiltrate a PACS and inject noise into an image, causing the AI model to make an incorrect diagnosis. Radiologists themselves can also be fooled by fake images created by AI. Such problems could present significant cybersecurity issues for healthcare systems in the future.
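The adversarial scenario can be sketched just as briefly. The example below uses a randomly initialized stand-in network and a random image, both assumptions made purely for illustration rather than any real PACS or diagnostic model, to show the basic mechanism: a small perturbation, computed from the model’s own gradients in the style of the fast gradient sign method, is added to the pixels so that the prediction changes while the image is barely altered.

# Illustrative sketch only: a fast-gradient-sign-style perturbation applied to a
# stand-in image and a stand-in classifier (nothing here is a clinical model).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))  # toy two-class classifier
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # toy grayscale "scan"

# Treat the model's own clean prediction as the reference label to attack.
logits = model(image)
label = logits.argmax(dim=1).detach()

# Gradient of the loss with respect to the input pixels.
loss = F.cross_entropy(logits, label)
loss.backward()

# Nudge every pixel a small step in the direction that increases the loss.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("clean prediction:      ", label.item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
    print("max pixel change:      ", (adversarial - image).abs().max().item())

Real attacks on deployed systems are more involved, but the underlying vulnerability, that a model’s output can be steered by pixel changes a human reader would not notice, is what makes the cybersecurity concern more than theoretical.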
Alongside these limitations, there are also several challenges to using AI, he says. First, AI tools must be trained on clinically relevant and useful cases, and images should be labeled by someone with expert radiology training or background. Among the many other challenges, workflow integration stands out: if AI tools aren’t integrated seamlessly into the workflow, radiologists will have several more steps to click through while reading, slowing down the diagnostic process and impeding acceptance.
Legal implications could also be problematic, Kim explains, if the technology misses a finding or makes an incorrect diagnosis.
“We live in a very litigious society, especially in healthcare,” he says. “If AI makes a mistake, who gets sued: the AI company, the radiologist, the hospital, or the person who made the purchasing decision? It remains uncertain.”
And, choosing the right AI tool can also be difficult. To overcome that hurdle, Kim suggests some questions administrators should ask vendors to determine which AI technology solution might be the best fit. Buyers should know how the AI model was trained, what type of data set was used, whether it has FDA clearance, how it’s integrated into the workflow, and whether any institution has piloted it with real-world results.
Ultimately, Kim says, he wants administrators to have a better understanding of what AI can and cannot do so they can better analyze the plethora of articles published on the topic.
“I don’t want people to think that AI is all hype and doesn’t really work; conversely, I want their thought process to go beyond ‘this is cool.’ I want people to understand that AI has tremendous potential and to have the ability to critique AI algorithms before they make decisions,” he says. “I want them to be better prepared to make assessments so they can consume or even develop something that can truly help with their internal processes.”