Radiologists have mixed feelings about AI.
Artificial intelligence (AI) is booming in all areas of business and medicine. But it’s taken a special hold in radiology. RSNA 2017 had its first machine learning showcase, highlighting the companies and technologies already changing how radiology is practiced. About 30 machine learning companies demonstrated their capabilities.
AI brings a mix of feelings to those who practice radiology, including fear, hope, and hype, said Woojin Kim, MD, a chief medical information officer for Nuance’s healthcare division and a musculoskeletal (MSK) radiologist.
Media have bombarded the radiology field with talk of AI.
“Even in mainstream artificial intelligence articles, for some reason, radiology is often used as an example of where machines will replace humans,” he said. What’s inspiring that fear, hope and hype, and is it justified?
Fear
“When I see some of the experts in AI talking about radiology, it is evident that they have a very simplified view of what we do as radiologists,” Kim said.
But even some radiologists and those considering the field are fearful. Andrew Ng, a Stanford adjunct professor and well-known machine learning expert who co-founded Google Brain and recently led AI research at Baidu, posted an email from a radiology resident on Twitter. The third-year resident asked whether to quit and do something else, questioning how close radiologists were to being replaced. “It’s clearly affecting the mindset of those who want to go into radiology or who are currently in training,” Kim said.
Fear of technology is nothing new. With desktop publishing, many typographers may have lost their jobs, but the number of graphic designers increased, he said. Another example is the Luddites, the English textile workers who destroyed textile-making machines out of fear that the technology would put them out of work. Instead, the machines drove down the price of textiles, more people bought them, and the textile industry grew. The mistaken belief that such technology eliminates jobs overall is now known as the “Luddite fallacy.”
Just as with these examples, Kim said that technology will not replace radiologists, but will create new opportunities and new ways to provide value.
Hype
AI algorithms are getting easier to create. “You can get a set of images for an imaging finding you’re interested in, and with several lines of code or by using a deep learning development environment, you can train deep neural networks,” Kim said, so much so that it’s almost a cookie-cutter operation for some. Because of that, many algorithms are popping up. But if they’re created by someone without deep clinical radiology knowledge, the algorithms won’t work well; a benign granuloma, for example, is not the same as a potentially malignant pulmonary nodule. “Coming up with a good medical imaging AI algorithm is a difficult task and still takes a lot of effort,” he said.
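As a rough illustration of how low that barrier has become, the sketch below fine-tunes a pretrained convolutional network on a hypothetical folder of images labeled by finding. The paths, labels, and hyperparameters are assumptions for illustration, and a toy classifier like this is precisely what Kim warns about: nothing in it encodes the clinical knowledge needed to tell a benign granuloma from a suspicious nodule.

```python
# A minimal sketch (not a validated clinical tool) of how little code it takes
# to train a classifier once labeled images exist. Paths, labels, and
# hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: data/train/finding_present/*.png, data/train/finding_absent/*.png
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Fine-tune a pretrained CNN for a two-class imaging finding.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```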
From what Kim is seeing, not all AI developers fully leverage radiologists’ knowledge. “That’s what makes our job complex. We need to know about things that can mimic disease, so I don’t call an artifact or benign finding a disease.” That subtle knowledge makes a huge difference in interpretation.
A study published in September 2017 in the Journal of Digital Imaging used IBM’s Watson to determine whether a patient should receive contrast for a musculoskeletal MRI, based on the study’s free-text clinical indications. The machine performed well, with 83.2% accuracy on the test set when measured against the original protocol assignments, and roughly 90% agreement with a second radiologist reader on the same test set.
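For readers unfamiliar with the task, a hedged sketch of the general idea follows: mapping a free-text indication to a contrast decision is, at its core, a text classification problem. The example indications and labels below are invented, and this is not the Watson pipeline used in the study.

```python
# A toy illustration of the underlying task only; this is not the Watson
# natural language processing pipeline from the study. Indications and
# labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

indications = [
    "rule out osteomyelitis of the left foot",
    "chronic knee pain, evaluate meniscus",
    "soft tissue mass in the thigh, characterize",
    "follow-up of acl reconstruction",
]
labels = ["contrast", "no_contrast", "contrast", "no_contrast"]

# Bag-of-words features plus a linear classifier over the free-text indication.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(indications, labels)

print(clf.predict(["possible soft tissue infection after hardware placement"]))
```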
That’s great news and shows promise. However, if those running the artificial intelligence system don’t understand how the decisions are made in the deep learning algorithms, that can be a problem in medicine, said Kim. This “black box” problem happens when a bad decision is made, but no one understands how the deep learning program arrived at the decision.
In the study, Watson committed 25 classification errors in the 280-case test set. One, however, was potentially fatal: assigning contrast dye to a patient with end-stage renal disease. Watson had a high confidence level for that decision, showing that radiologists can’t rely on AI just because of high confidence levels. If a person makes a mistake, they usually know why, said Kim. Until the black box problem is solved, radiologists must be heavily involved.
Hope
The goal should not be replacing radiologists with deep learning algorithms, but making radiologists better. With artificial intelligence working in the background, diagnostic accuracy can increase.
Radiologists and others have tunnel vision for how artificial intelligence can be used in the field. It’s not just about making a finding and auto-generating a report, Kim said. AI can be used in every step of the medical imaging process, including patient scheduling, imaging protocols, workflow, creating actionable reports, communication, quality assessment, and patient safety and follow-ups.
“If I have to make measurements on a bunch of lymph nodes and compare them to two prior studies, that’s a repetitive task that AI is nicely suited for,” said Kim. “I know the really cool thing right now is having the machines make the findings on images, but there are many ways AI can benefit radiology.” The technology can also be adopted more quickly for such lower-hanging fruit than for diagnostic use, partly because of FDA regulations.
One way it can help is by pulling up applicable clinical history and records. “We know we can do a better job as radiologists if we have good clinical information about the patient,” said Kim. Often, that information is missing, and it’s cumbersome to get the needed data. “I don’t need the entire medical history of the patient for every exam I interpret, but it is extremely valuable to have pertinent information from the patient’s medical record that can help me make a better diagnosis.” To do that now, he has to go through dozens of screens and mouse clicks to see if there’s useful information. “This is where artificial intelligence can help out.” It can comb through a patient’s medical record, looking for lab values, pathology results, and family, medical, and surgical history that might be helpful, and provide a single summary.
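A minimal sketch of that workflow appears below. The record items and the keyword map are assumptions for illustration, and a hand-written keyword filter stands in for the natural language processing a real system would need; the point is only the end result of a short, pertinent summary for the exam at hand.

```python
# A minimal sketch of surfacing pertinent record items for the exam at hand.
# The record items and the keyword map are hypothetical; a real system would
# use natural language processing rather than hand-written keywords.
from dataclasses import dataclass

@dataclass
class RecordItem:
    category: str  # e.g., "lab", "pathology", "history"
    text: str

# Which chart items matter for which exam context (illustrative only).
PERTINENT = {
    "contrast_mri": ["creatinine", "egfr", "renal", "allergy", "gadolinium"],
    "bone_lesion": ["biopsy", "malignancy", "prostate", "breast"],
}

def summarize(exam_context: str, record: list) -> str:
    keywords = PERTINENT.get(exam_context, [])
    hits = [item for item in record
            if any(k in item.text.lower() for k in keywords)]
    lines = [f"[{item.category}] {item.text}" for item in hits]
    return "\n".join(lines) if lines else "No pertinent items found."

record = [
    RecordItem("lab", "eGFR 28 mL/min/1.73 m2"),
    RecordItem("history", "Type 2 diabetes mellitus"),
    RecordItem("pathology", "Prostate biopsy: Gleason 3+4 adenocarcinoma"),
]
print(summarize("contrast_mri", record))
```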
AI can help with repetitive work; anything the radiologist has to do more than once a day is a good candidate. Many academic centers ask doctors to protocol exams in advance to increase efficiency, so the protocol is ready when a patient arrives. “Artificial intelligence can help, depending on prior protocols and clinical indication,” Kim said. AI-suggested protocols can be presented to the radiologist, who can agree or disagree and make any necessary changes, and those corrections let the neural network learn to make better decisions in the future.
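In sketch form, that loop might look like the following, with invented indications and protocol labels and a simple text classifier standing in for a production model: the system proposes a protocol, the radiologist confirms or corrects it, and the confirmed pair is folded back into the training data.

```python
# A sketch of the protocol feedback loop with invented indications and labels;
# a simple text classifier stands in for a production model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

past_indications = ["knee pain, evaluate meniscus", "suspected osteomyelitis of the foot"]
past_protocols = ["knee_noncontrast", "foot_contrast"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(past_indications, past_protocols)

def propose_and_confirm(indication: str, radiologist_override: str = "") -> str:
    suggestion = model.predict([indication])[0]
    final = radiologist_override or suggestion   # the radiologist has the last word
    past_indications.append(indication)          # the confirmed decision becomes
    past_protocols.append(final)                 # a new training example
    model.fit(past_indications, past_protocols)  # refit (periodically, in practice)
    return final

# The radiologist accepts the suggestion here; a nonempty override would correct it.
print(propose_and_confirm("marrow edema on radiograph, rule out infection"))
```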
The technology can also be used to help prioritize patients, as vRad and MetaMind are doing by detecting intracranial hemorrhages as part of the workflow so those studies can be read first.
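A sketch of that kind of triage is below; the study identifiers are hypothetical and a random number stands in for the actual hemorrhage detector’s score. The triage itself is simply a re-sort of the reading queue by model output.

```python
# A sketch of worklist triage; a random score stands in for the actual
# hemorrhage detector's output. Study identifiers are hypothetical.
import random

def hemorrhage_probability(study_id: str) -> float:
    # Placeholder for the deep learning model's per-study score.
    return random.random()

worklist = ["CT_HEAD_001", "CT_HEAD_002", "CT_HEAD_003", "CT_HEAD_004"]

# Studies the model flags as most likely positive float to the top of the queue.
scored = sorted(((hemorrhage_probability(s), s) for s in worklist), reverse=True)
for prob, study in scored:
    print(f"{study}: suspected hemorrhage score {prob:.2f}")
```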
While AI can provoke a healthy dose of fear, it’s a useful technology for radiologists. “We should embrace it, and not put our heads in the sand and ignore it. As a field we have to engage and embrace it, so we can guide the direction of where artificial intelligence should go.” Otherwise, other AI experts will set that direction. By staying involved, radiologists can mitigate any negative impact the technology may have on the field.
How to do that? A hospital administrator may not appreciate the true value of a radiologist, preferring the AI algorithm that works 24/7 at a lower cost. In the future, an algorithm may be able to provide nicely structured and formatted reports with a high degree of accuracy. Radiologists should play an integral role at the hospital, from information technology to hospital management. “If we don’t have close relationships with the hospitals, they’re not going to see our value. We need to be constantly engaged, showing how we provide value and that we’re part of the care management team,” Kim said.
While trained radiologists and those in training may be nervous about AI, they should learn more about it. “That would help alleviate some of the fears,” he said. They can see how AI can play an integral role in their future practice. He recommends that radiology training programs incorporate informatics.
Those wanting to learn more can find information through medical societies like the ACR, RSNA, and SIIM. Kim recommends attending the SIIM Conference on Machine Intelligence in Medical Imaging (C-MIMI). For hands-on experience with deep learning, the NVIDIA DIGITS platform allows users to easily design and train neural networks.
Whether radiology tasks will be streamlined as a result of AI remains to be seen. The number of medical images is increasing, as is demand for medical imaging. Medicine relies more heavily on technology for diagnoses. “It’s not a simple formula,” said Kim. “We have to wait and see.”