Radiology must have a plan for implementing the growing number of AI algorithms.
AI's rapid maturation and accelerating pace of development are transforming healthcare. While augmenting and enhancing existing clinical workflows, AI also brings subtler, less obvious changes. The medical specialties that adopt it early can point the way and provide important lessons as more areas of healthcare – and life in general – take the AI plunge.
The FDA has cleared more AI algorithms for radiology than for any other field, and all trends point toward an increasing number of clearances this year. Researchers and companies have developed algorithms to detect hemorrhages, tumors, embolisms, and many more pathologies, ranging from common, everyday conditions to rare ones.
While this explosion in AI-driven creativity holds great promise, it is not sustainable in its present form. As the saying goes, it’s possible to have too much of a good thing, and this is especially true when it comes to AI algorithms.
The 2025 Workflow
Imagine it’s 2025, and hundreds of AI algorithms for radiology are available. Every radiologist uses AI tools for all the mundane tasks, like detecting, measuring, comparing and collecting data.
When our 2025 radiologist interprets a head CT that’s been analyzed by 50 different algorithms, what do they see? Perhaps a brain bleed marked with a big red star provided by Company X. The bleed is also measured, with the measurement marked in yellow lines and numbers provided by Company Y. Then they see bone fractures, circled in blue, provided by Company Z. Of course, the brain volume is also measured, by Company A, with red segmentation lines.
However, it's not just about the presentation layer, with its Jackson Pollock confusion of colors and shapes. There is also a huge problem with false positives. A good algorithm might have approximately a 5 percent false-positive rate, but run 10 such algorithms on the same exam and those “5 percents” compound: even if the algorithms err independently, the chance that at least one of them raises a false alarm climbs to roughly 40 percent per exam (1 − 0.95^10), an unacceptable rate. Add to this that each algorithm might have different exclusion criteria and, perhaps, even different measurement units.
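To make the arithmetic concrete, here is a minimal Python sketch of the compounding effect. It assumes the algorithms' errors are independent, which algorithms analyzing the same scan may not satisfy in practice:

```python
# A quick check of the compounding false-positive math. Assumes the
# algorithms' errors are independent, which co-located algorithms may
# not satisfy in practice.

def combined_false_positive_rate(per_algorithm_rate: float, n_algorithms: int) -> float:
    """Probability that at least one of n independent algorithms fires a false alarm."""
    return 1 - (1 - per_algorithm_rate) ** n_algorithms

for n in (1, 5, 10, 20):
    rate = combined_false_positive_rate(0.05, n)
    print(f"{n:>2} algorithms at 5% each -> {rate:.0%} chance of a false alarm per exam")
# 1 -> 5%, 5 -> 23%, 10 -> 40%, 20 -> 64%
```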
In short, if many independent algorithms are applied to the same anatomical region, AI could create an intolerable user experience.
This future is worryingly plausible, especially if many AI vendors all build clinically viable algorithms.
One Algorithm to Rule Them All
The solution is consolidation – true consolidation at the algorithm level. In radiology, each body part being scanned will require a single algorithm that combines all of the different bits and pieces and turns them into one unified product.
This “consolidator” will have to be a new type of AI algorithm, producing an output that’s simple for the user and highly accurate. It will probably have to be able to access more than just the final outputs of the other algorithms; it’ll need to penetrate at least one layer deeper, into the raw data from the networks themselves.
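What “one layer deeper” could look like in practice: rather than merging only final yes/no labels, the consolidator might fuse the networks' raw per-voxel probability maps. The sketch below is a hypothetical illustration of that idea, with made-up weights and toy 2D arrays standing in for CT volumes:

```python
# Hypothetical "one layer deeper" fusion: average the raw per-voxel
# probability maps from several vendors' networks instead of their
# final labels. Toy 2D arrays stand in for full CT volumes.
import numpy as np

def fuse_probability_maps(maps: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Weighted average of per-voxel probability maps from several networks."""
    w = np.asarray(weights, dtype=float)
    stacked = np.stack(maps)          # shape: (n_vendors, *volume_shape)
    return np.tensordot(w / w.sum(), stacked, axes=1)

rng = np.random.default_rng(0)
vendor_x = rng.random((4, 4))         # vendor X's bleed-probability map
vendor_y = rng.random((4, 4))         # vendor Y's map of the same slice
fused = fuse_probability_maps([vendor_x, vendor_y], weights=[0.7, 0.3])
print(int((fused > 0.5).sum()), "voxels flagged after fusion")
```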
The system will have two key tasks. First, it will have to aggregate the outputs of each algorithm into a single, cohesive result per imaging exam. Second, it must act as a context-aware filter, presenting information only in the right context.
To aggregate, the AI consolidator will use a smart algorithm to determine the final result per exam and reduce the number of false positives. It will adjudicate between AI algorithms that perform the same task when they give different answers. It will feature an independent means of AI evaluation to determine that, say, algorithm A is better at detecting and evaluating large lung nodules, while algorithm B excels at small ones. This also means that when its constituent AI components are updated or improved, the consolidator will need to be updated too.
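A minimal sketch of how such weighted adjudication could work, assuming hypothetical per-task reliability weights produced by the independent evaluation (the Finding shape, the weights, and the threshold are all illustrative assumptions, not a shipping design):

```python
# A hypothetical aggregation step: merge per-vendor findings into one
# weighted answer per task. The Finding shape, reliability weights and
# threshold are illustrative assumptions, not a shipping design.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Finding:
    vendor: str        # which algorithm produced the finding
    task: str          # e.g. "large_lung_nodule", "small_lung_nodule"
    confidence: float  # the algorithm's own score in [0, 1]

# Weights from a (hypothetical) independent evaluation: algorithm A is
# stronger on large nodules, B on small ones. These would be re-measured
# whenever a constituent algorithm is updated.
RELIABILITY = {
    ("A", "large_lung_nodule"): 0.9,
    ("B", "large_lung_nodule"): 0.6,
    ("A", "small_lung_nodule"): 0.5,
    ("B", "small_lung_nodule"): 0.85,
}

def consolidate(findings: list[Finding], threshold: float = 0.5) -> dict[str, float]:
    """One weighted score per task; tasks below the threshold are suppressed,
    which is how the consolidator trims the pooled false positives."""
    by_task = defaultdict(list)
    for f in findings:
        weight = RELIABILITY.get((f.vendor, f.task), 0.5)  # unknown pair: neutral trust
        by_task[f.task].append((weight, f.confidence))
    result = {}
    for task, pairs in by_task.items():
        pooled = sum(w * c for w, c in pairs) / sum(w for w, _ in pairs)
        if pooled >= threshold:
            result[task] = round(pooled, 2)
    return result

exam = [
    Finding("A", "large_lung_nodule", 0.8),
    Finding("B", "large_lung_nodule", 0.3),  # disagreement, resolved by weights
]
print(consolidate(exam))  # {'large_lung_nodule': 0.6}
```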
Regarding context awareness, consider a head CT that returns results from an ASPECTS algorithm, bleed detection, bleed measurement, mass effect, and LVO detection. You will probably only want to show the ASPECTS score for ischemic stroke patients and, potentially, use it to rule out bleeds in those cases. For post-op scans, on the other hand, the consolidator should not display ASPECTS at all. This goes beyond image recognition: the consolidator would also have to aggregate data from the electronic health record to understand the patient's clinical context, as well as the user's behavior.
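In code, the filtering step might be as simple as a rule table keyed on clinical context. The rules and context labels below are hypothetical placeholders for what would really be derived from the EHR and from observed user behavior:

```python
# A hypothetical context-aware filter for the head CT example above.
# The rule table and context labels are placeholders for what would
# really come from the EHR and from observed user behavior.

DISPLAY_RULES = {
    "ischemic_stroke": {"aspects", "lvo_detection"},
    "suspected_bleed": {"bleed_detection", "bleed_measurement", "mass_effect"},
    "post_op":         {"bleed_detection", "mass_effect"},  # never show ASPECTS post-op
}

def filter_outputs(outputs: dict, clinical_context: str) -> dict:
    """Keep only the algorithm outputs relevant to the patient's context."""
    allowed = DISPLAY_RULES.get(clinical_context, set(outputs))  # unknown context: show all
    return {name: value for name, value in outputs.items() if name in allowed}

head_ct_outputs = {
    "aspects": 7,
    "bleed_detection": False,
    "bleed_measurement": None,
    "mass_effect": False,
    "lvo_detection": True,
}
print(filter_outputs(head_ct_outputs, "ischemic_stroke"))
# -> {'aspects': 7, 'lvo_detection': True}
```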
So, who will be the consolidator with the core competency to solve this issue? I see two options here.
Third-party companies, like PACS and workflow solution providers, seem well-positioned to take on this AI consolidation role. However, the task is so complex that dedicated consolidation solutions will need to emerge. Extensive collaboration will be needed between these consolidators and the AI companies that develop the algorithms, which will have to adjust their products both to the consolidation workflow and to the other companies' algorithms analyzing the same scan. This could add new layers of technical and commercial complexity.
The second option is that the AI companies themselves act as consolidators, offering broad solutions in each anatomical region and folding the remaining third-party algorithms into a workflow fitted to AI's unique requirements. Alternatively, stratification may occur per protocol, with one AI provider focusing on chest CT, another on abdominal CT, and so on. Each would address all the relevant aspects of a given disease (detection, measurement, comparison, subsequent findings, etc.). Either option would prevent situations in which the various AI outputs related to the same disease contradict each other, rendering the workflow unusable.
Consolidation is essential if we want doctors to be able to use broad AI in their routine work. And it has to happen soon, or it will become a bottleneck in the adoption of these many AI solutions across our healthcare system. Finding an effective way to consolidate algorithm outputs into a single, integrated view, at a manageable positive predictive value, is absolutely necessary for the widespread adoption of AI. It will ensure that AI won’t become “too much of a good thing.”