Built around an interpretable module that measures local bilateral dissimilarity on mammography exams, the deep learning platform AsymMirai may clarify the reasoning behind artificial intelligence (AI) assessments of breast cancer risk up to five years out, with areas under the curve (AUCs) comparable to those of a previously validated black box AI model.
For the retrospective study, recently published in Radiology, researchers compared the prognostic capability of AsymMirai with that of Mirai, another deep learning mammography-based model for assessing breast cancer risk. While both models use standard mediolateral oblique and craniocaudal mammography views as inputs, the study authors noted that Mirai comprises a convolutional neural network and a transformer. The transformer-free AsymMirai maintains spatial correspondence between the input mammography images and the extracted features, and it produces a single bilateral dissimilarity score by averaging the scores computed for each input view, according to the researchers.
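To make the mechanism concrete, here is a minimal sketch, in PyTorch, of how a local bilateral dissimilarity module of this kind could work. The backbone, the mirroring step and the scoring rule are illustrative assumptions for this article, not the published AsymMirai code; the `BilateralDissimilarity` class and its arguments are hypothetical names.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of local bilateral dissimilarity scoring.
# The backbone choice, the mirroring step and the scoring rule are
# illustrative assumptions, not the published AsymMirai implementation.

class BilateralDissimilarity(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # shared CNN front end for both breasts

    def forward(self, left_view: torch.Tensor, right_view: torch.Tensor):
        # Extract spatially aligned feature maps for each breast:
        # shape (batch, channels, h, w). Because the module is purely
        # convolutional, each (h, w) cell maps back to an image region.
        feats_l = self.backbone(left_view)
        feats_r = self.backbone(right_view.flip(-1))  # mirror right to match left

        # Local dissimilarity map: per-location Euclidean distance
        # between the two breasts in latent space -> (batch, h, w).
        diff_map = (feats_l - feats_r).pow(2).sum(dim=1).sqrt()

        # One scalar risk surrogate per exam: the average local
        # dissimilarity across all spatial locations.
        score = diff_map.mean(dim=(1, 2))
        return score, diff_map
```

Because the feature maps retain spatial correspondence with the images, the location of the largest per-cell difference can be traced back to a specific breast region, which is the property that makes the model's reasoning inspectable.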
In the analysis of 210,067 screening mammograms from 81,824 patients, the researchers found that AsymMirai offered AUCs for breast cancer risk assessment comparable to those of Mirai. Specifically, the AsymMirai model, with its emphasis on local bilateral dissimilarity, had one-, three- and five-year AUCs of 79 percent, 68 percent and 66 percent, respectively, compared with 84 percent, 72 percent and 71 percent, respectively, for the Mirai model.
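For readers who want to reproduce this style of evaluation on their own data, the sketch below shows one common way to compute a fixed-horizon AUC with scikit-learn. The function name, variable names and the handling of censored follow-up are assumptions for illustration; the study's exact evaluation protocol may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative evaluation sketch: computing a per-horizon AUC in the
# spirit of the reported 1-, 3- and 5-year figures. The censoring rule
# below is an assumption, not the study's documented protocol.

def auc_at_horizon(scores, event_times, events, horizon_years):
    """AUC for 'cancer within horizon_years' vs. cancer-free follow-up."""
    scores = np.asarray(scores, dtype=float)
    event_times = np.asarray(event_times, dtype=float)
    events = np.asarray(events, dtype=bool)

    # Positives: a confirmed cancer diagnosis within the horizon.
    positives = events & (event_times <= horizon_years)
    # Negatives: cancer-free follow-up extending past the horizon;
    # exams censored before the horizon are excluded entirely.
    negatives = ~positives & (event_times > horizon_years)
    keep = positives | negatives
    return roc_auc_score(positives[keep], scores[keep])

# e.g., auc_at_horizon(risk_scores, years_to_event, had_cancer, 5)
```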
“Using the existing Mirai front-end convolutional neural network for feature extraction, (AsymMirai) calculates differences in the latent space, providing the location of the dissimilarity, which is visually intuitive. This score approximates that of Mirai … with only a slight reduction in 1-5-year risk prediction performance,” wrote co-senior author Cynthia Rudin, Ph.D., a professor of computer science, electrical and computer engineering, statistical science and biostatistics and bioinformatics, and director of the Interpretable Machine Learning Lab at Duke University, and colleagues.
Examining location consistency of the highlighted tissue, with a prediction window shift of 40 percent or less as the threshold, the researchers found superior breast cancer prediction with the AsymMirai model in the 383 patients who met that threshold. In this population, AsymMirai demonstrated a 92 percent AUC at one year and an 88 percent AUC at five years.
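The article does not spell out how the window shift is measured, so the following is a hypothetical sketch under the assumption that each exam yields the center of a highlighted window and that shift is taken relative to the window's size. Only the 40 percent threshold comes from the study; the geometry and function names are illustrative.

```python
import numpy as np

# Hypothetical sketch of the "window shift" idea: how far the most
# dissimilar region moves between a prior and a current exam, relative
# to the window size. The 40% figure is the threshold reported in the
# article; the geometry below is an illustrative assumption.

def window_shift(center_prior, center_current, window_size):
    """Shift of the highlighted window between exams, as a fraction
    of the window dimension (0.0 = identical location)."""
    dy = abs(center_current[0] - center_prior[0])
    dx = abs(center_current[1] - center_prior[1])
    return max(dy, dx) / window_size

def location_consistent(center_prior, center_current, window_size,
                        threshold=0.40):
    # Patients whose highlighted window shifted by <= 40% between exams
    # formed the subgroup with 92% one-year and 88% five-year AUCs.
    return window_shift(center_prior, center_current, window_size) <= threshold

# e.g., location_consistent((120, 85), (128, 90), window_size=64) -> True
```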
Three Key Takeaways
- Interpretable AI for breast cancer risk assessment. AsymMirai, featuring an interpretable module for local bilateral dissimilarity, may enhance the clarity of AI assessments in breast cancer risk prediction. This interpretability aids in understanding the reasoning behind AI-generated predictions, which is crucial for clinical acceptance and trust.
- Comparable AUCs with improved reasoning clarity. In addition to employing an interpretable module, AsymMirai demonstrates performance comparable to that of Mirai, a black box AI model, in terms of AUCs for breast cancer risk assessment. This suggests that improved interpretability may not significantly compromise predictive accuracy and could offer clinicians more transparent insight into AI-generated risk assessments.
- Superior predictive performance in certain patient subgroups. AsymMirai exhibits superior breast cancer prediction in patients with a limited shift in the prediction window (40 percent or less), showing higher AUCs at both one year (92 percent) and five years (88 percent). The study authors indicated the potential of AsymMirai to detect abnormalities in breast tissue earlier, possibly facilitating more timely intervention or risk management strategies. However, the performance difference between AsymMirai and Mirai varies across demographic groups, with Mirai performing better for patients aged 50 to 70 and for African American women, highlighting the importance of considering demographic factors in AI-based risk assessment models.
“We originally expected this to be the case because AsymMirai would find abnormalities in the tissue before the development of the actual lesion,” added Rudin and colleagues. “While this does occur, most patients with location consistency of 40% or less showed little change from prior examinations and thus correspond to a very low-risk group … .”
The research findings showed larger AUC differences between the AsymMirai and Mirai models for patients 50 to 70 years of age (AUCs six to eight percent higher for Mirai) and African American women (six to nine percent higher for Mirai).
(Editor’s note: For related content, see “Mammography-Based Deep Learning Model Facilitates Higher Breast Cancer Detection on Screening MRI,” “Mammography-Based Deep Learning Model May Help Detect Precancerous Changes in High-Risk Women” and “Deep Learning Detection of Mammography Abnormalities: What a New Study Reveals.”)
Beyond the inherent limitations of a retrospective study, the study authors acknowledged that bilateral dissimilarity may not be the sole basis for predictions made with the Mirai platform. Noting that African American patients comprised 3.75 percent of the training data set for the Mirai system, the researchers conceded that the model does not perform equally well across racial groups. The researchers also pointed out, with respect to location consistency on mammography exams, that five-year follow-up data was available for only 10.7 percent of patients who had a 40 percent or lower window shift.