In multiple mammography datasets with the original radiologist-detected abnormality removed, deep learning detection of breast cancer had an average area under the curve (AUC) of 87 percent and an accuracy rate of 83 percent, according to research presented at the recent Society for Imaging Informatics in Medicine (SIIM) conference.
How effective is deep learning at detecting abnormal features on mammography beyond an immediate region of abnormality identified by a radiologist?
Noting that saliency maps in deep learning models reportedly contribute to incorrect localization of abnormalities, the researchers evaluated deep learning classification in three separate mammography databases, totaling 13,669 mammograms, with the originally detected abnormality removed from each image, according to a study presented at the recent Society for Imaging Informatics in Medicine (SIIM) conference.1,2
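For readers unfamiliar with saliency maps, the brief sketch below illustrates one common gradient-based way of generating them for an image classifier. It is illustrative only; the model and input shape are assumed placeholders and do not reflect the methods used in the cited studies.

```python
import torch

# Illustrative sketch of a basic gradient-based saliency map for an image
# classifier. The "model" and input tensor are hypothetical placeholders,
# not the networks evaluated in the cited studies.
def gradient_saliency(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Return an (H, W) map of per-pixel gradient magnitudes for the top class."""
    model.eval()
    image = image.clone().requires_grad_(True)  # expects shape (1, C, H, W)
    top_score = model(image).max()              # logit of the highest-scoring class
    top_score.backward()                        # gradients of that score w.r.t. pixels
    return image.grad.abs().max(dim=1)[0].squeeze(0)
```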
For the study, the researchers reviewed 2,620 mammography images from the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), 2,006 images from the Categorized Digital Database for Low Energy and Subtracted Contrast-Enhanced Spectral Mammography Images (CDD-CESM), and 9,043 mammography images from a third database. The study authors noted that regions of interest delineating positive BI-RADS classifications (BI-RADS 4 and 5) or negative BI-RADS classifications (BI-RADS 1 and 2) were obscured for the study.
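The poster abstract does not detail how the annotated regions were obscured. The sketch below shows one plausible approach, zeroing out a bounding box around a hypothetical region of interest, purely for illustration; the coordinates, fill value, and function name are assumptions, not the study's actual pipeline.

```python
import numpy as np

# Illustrative sketch: obscure a radiologist-annotated region of interest by
# overwriting its bounding box. The coordinates and fill value are hypothetical;
# the study's actual masking procedure is not described in the abstract.
def mask_roi(image: np.ndarray, x0: int, y0: int, x1: int, y1: int,
             fill: float = 0.0) -> np.ndarray:
    masked = image.copy()
    masked[y0:y1, x0:x1] = fill  # replace the annotated lesion region
    return masked
```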
Across the three databases, the researchers found an average area under the curve (AUC) of 87 percent, an average accuracy of 83 percent, and an average recall of 78 percent for distinguishing malignant from benign cases.2
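For context, AUC, accuracy, and recall are standard metrics for binary classifiers. The short sketch below shows how they are typically computed with scikit-learn; the labels and probabilities are made-up placeholders, not study data.

```python
from sklearn.metrics import roc_auc_score, accuracy_score, recall_score

# Illustrative sketch of the reported metrics for a malignant-vs-benign
# classifier. y_true and y_prob are made-up placeholders, not study data.
y_true = [0, 1, 1, 0, 1]            # 1 = malignant, 0 = benign
y_prob = [0.2, 0.9, 0.6, 0.4, 0.3]  # predicted probability of malignancy
y_pred = [int(p >= 0.5) for p in y_prob]

auc = roc_auc_score(y_true, y_prob)   # area under the ROC curve
acc = accuracy_score(y_true, y_pred)  # proportion of correct predictions
rec = recall_score(y_true, y_pred)    # sensitivity for the malignant class
```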
“Deep learning classifiers perform well at identifying abnormal mammograms even with radiologist-identified abnormalities completely removed, indicating that surrounding tissue architecture contains clues toward patient diagnosis,” wrote study co-author Hari Trivedi, M.D., an assistant professor of radiology and biomedical informatics at Emory University, and colleagues.
References
1. Arun N, Gaw N, Singh P, et al. Assessing the trustworthiness of saliency maps for localizing abnormalities in medical imaging. Radiol Artif Intell. 2021;3(6):e200267. doi: 10.1148/ryai.2021200267. eCollection 2021 Nov.
2. Hwang I, Brown Mulry B, Zhang L, et al. Inherent barriers to breast cancer detection in mammograms using deep learning. Poster presented at the Society for Imaging Informatics in Medicine (SIIM) 2023 Annual Meeting; June 14-16, 2023; Austin, TX. https://siim.org/page/siim23_about_siim23
(Editor’s note: For related content, see “Study Assesses Ability of Mammography AI Algorithms to Predict Breast Cancer Risk,” “Digital Mammography Meta-Analysis Suggests AI Performs as Well as Radiologists” and “What a New Study Reveals About AI, Bias and Mammography Assessment.”)