New deep learning tool is designed to help radiologists evaluate chest X-rays regardless of where they work.
A new deep learning model could help radiologists in any facility interpret chest X-rays.
In a new study published in The Lancet Digital Health, investigators from Australia outlined their new tool, which is designed to alleviate heavy workloads, reduce errors, and make it easier for providers without specialty thoracic training to read these scans.
Chest X-rays are already the most common imaging study worldwide, and the volume continues to grow, said the team from annalise.ai, the company that created the AI model. Developing a tool to help shoulder that workload will be critical.
“The ability of the AI model to identify findings on chest X-rays is very encouraging,” said Catherine Jones, MBBS, thoracic radiologist, chest lead at annalise.ai, and lead study author. “Radiologists and non-radiology clinicians incorporate clinical factors into decision-making, but ultimately rely on perception of findings to underpin our clinical interpretation.”
For their study, Jones’ team trained their deep learning model on 821,681 chest X-ray images taken from 520,014 studies of 284,649 patients. They then evaluated how radiologists performed on their own and compared that with how the same radiologists performed when assisted by the model.
Overall, 20 radiologists assessed 2,568 chest X-rays both with and without the tool. Based on the team’s assessment, assistance from the tool significantly improved radiologists’ classification for 102 of 127 clinical findings (80 percent), and assisted performance was statistically non-inferior for 19 findings (15 percent).
Additionally, assisted radiologists had an average area under the curve (AUC) of 0.808 compared with 0.713 for unassisted radiologists. The model alone had an AUC of 0.957, and its classification on its own was more accurate than that of unassisted radiologists for 117 of 125 clinical findings (94 percent). It was also non-inferior to unassisted radiologists for all clinical findings.
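For readers unfamiliar with the metric, the area under the receiver operating characteristic curve summarizes how well a set of confidence scores separates cases that contain a given finding from those that do not. The following minimal Python sketch, using entirely hypothetical data and scikit-learn rather than the study's own code or dataset, illustrates how a per-finding AUC comparison between unassisted and model-assisted reads could be computed:

# Illustrative sketch only (not the study's code): per-finding AUC for
# unassisted vs. model-assisted reads, computed with scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical ground truth for one clinical finding:
# 1 = finding present on the reference read, 0 = absent.
ground_truth = rng.integers(0, 2, size=200)

# Hypothetical reader confidence scores (0-1) for the same cases,
# read without and then with AI assistance.
unassisted_scores = np.clip(ground_truth * 0.60 + rng.normal(0.20, 0.25, 200), 0, 1)
assisted_scores = np.clip(ground_truth * 0.75 + rng.normal(0.15, 0.20, 200), 0, 1)

# AUC measures how well each set of scores ranks positive cases above negatives.
print(f"Unassisted AUC: {roc_auc_score(ground_truth, unassisted_scores):.3f}")
print(f"Assisted AUC:   {roc_auc_score(ground_truth, assisted_scores):.3f}")

In the study itself, an analogous comparison would be repeated for each of the 127 clinical findings and then tested for significant improvement or non-inferiority; the synthetic numbers above are only meant to show what the metric captures.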
“Radiologist accuracy improved across a large number of clinical chest X-ray findings when assisted by the deep-learning model,” the team said. “Effective implementation of the model has the potential to augment clinicians and improve clinical practice.”
Further research, they said, is needed to confirm the efficacy and accuracy of their tool in real-world settings.