New deep learning tool is designed to help radiologists evaluate chest X-rays regardless of where they work.
A new deep learning model could help radiologists in any facility interpret chest X-rays.
In a new study published in The Lancet Digital Health, investigators from Australia described a tool designed to alleviate heavy workloads, help providers without specialty thoracic training read these scans, and reduce errors.
Chest X-rays are already the most common imaging study worldwide, and volumes continue to grow, said the team from annalise.ai, the company that created the AI model. Developing a tool to help shoulder that workload, they said, will be critical.
“The ability of the AI model to identify findings on chest X-rays is very encouraging,” said Catherine Jones, MBBS, thoracic radiologist, chest lead at annalise.ai, and lead study author. “Radiologists and non-radiology clinicians incorporate clinical factors into decision-making, but ultimately rely on perception of findings to underpin our clinical interpretation.”
For their study, Jones’ team trained their deep learning model on 821,681 chest X-ray images taken from 520,014 studies of 284,649 patients. They then evaluated radiologist performance alone and compared it with how the same radiologists performed with the model’s assistance.
Overall, 20 radiologists assessed 2,568 chest X-rays both with and without the tool. Based on the team’s assessment, the tool significantly helped radiologists improve their classification for 102 of 127 clinical findings (80 percent), and it was statistically non-inferior for 19 findings (15 percent).
Additionally, assisted radiologists had an average area under the curve (AUC) of 0.808 compared with 0.713 for unassisted radiologists. The model alone had an AUC of 0.957, and model classification by itself was more accurate than unassisted radiologists for 117 of 125 clinical findings (94 percent). It was also non-inferior to unassisted radiologists for all clinical findings.
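To put those AUC figures in context, here is a minimal sketch of how per-finding AUCs for unassisted readers, assisted readers, and a standalone model could be computed with scikit-learn's roc_auc_score. The simulated scores, the single-finding setup, and the variable names are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Minimal sketch (not the study's actual analysis): per-finding AUC for
# unassisted readers, assisted readers, and a standalone model.
# All data below is simulated for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical ground truth for one clinical finding: 1 = finding present.
ground_truth = rng.integers(0, 2, size=200)

# Hypothetical confidence scores (0-1) from readers with and without AI
# assistance, plus the model's own output score for the same cases.
unassisted_scores = np.clip(ground_truth * 0.6 + rng.normal(0.20, 0.25, 200), 0, 1)
assisted_scores   = np.clip(ground_truth * 0.7 + rng.normal(0.15, 0.20, 200), 0, 1)
model_scores      = np.clip(ground_truth * 0.85 + rng.normal(0.08, 0.10, 200), 0, 1)

# Compare the three reading conditions on the same cases.
for label, scores in [("unassisted", unassisted_scores),
                      ("assisted", assisted_scores),
                      ("model alone", model_scores)]:
    print(f"{label:12s} AUC = {roc_auc_score(ground_truth, scores):.3f}")
```

In the published comparison, this kind of per-finding AUC was aggregated across 127 findings and 20 readers; the sketch covers only a single finding to show where the 0.713 versus 0.808 style of comparison comes from.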
“Radiologist accuracy improved across a large number of clinical chest X-ray findings when assisted by the deep-learning model,” the team said. “Effective implementation of the model has the potential to augment clinicians and improve clinical practice.”
Further research, they said, is needed to confirm the efficacy and accuracy of their tool in real-world settings.