New deep learning tool is designed to help radiologists evaluate chest X-rays regardless of where they work.
A new deep learning model could help radiologists in any facility interpret chest X-rays.
In a new study published in The Lancet Digital Health, investigators from Australia outlined their new tool, which is designed to alleviate heavy workloads, reduce errors, and make it easier for providers without specialty thoracic training to read these scans.
Chest X-rays are already the most common imaging study worldwide, and volumes continue to grow, said the team from annalise.ai, the company that created the AI model. Developing a tool to help shoulder that workload will be critical.
“The ability of the AI model to identify findings on chest X-rays is very encouraging,” said Catherine Jones, MBBS, thoracic radiologist, chest lead at annalise.ai, and lead study author. “Radiologists and non-radiology clinicians incorporate clinical factors into decision-making, but ultimately rely on perception of findings to underpin our clinical interpretation.”
For their study, Jones’ team trained their deep learning model on 821,681 chest X-ray images taken from 520,014 studies in 284,649 patients. They evaluated radiologist performance alone and compared it to how the same radiologist performed with the model.
Overall, 20 radiologists assessed 2,568 chest X-rays both with and without the tool. Based on the team’s assessment, the tool significantly helped radiologists improve their classification for 102 of 127 clinical findings (80 percent), and it was statistically non-inferior for 19 findings (15 percent).
Additionally, assisted radiologists had an average area under the curve (AUC) of 0.808 compared with 0.713 for unassisted radiologists. The model alone had an AUC of 0.957, and model classification by itself was significantly more accurate than unassisted radiologists for 117 of 125 clinical findings (94 percent). It was also non-inferior to unassisted radiologists for all clinical findings.
“Radiologist accuracy improved across a large number of clinical chest X-ray findings when assisted by the deep-learning model,” the team said. “Effective implementation of the model has the potential to augment clinicians and improve clinical practice.”
Further research, they said, is needed to confirm the efficacy and accuracy of their tool in real-world settings.