A new study suggests that deep learning algorithms using multimodal ultrasound offer sensitivity and specificity comparable to subjective expert assessment and the O-RADS classification for distinguishing between benign and malignant ovarian tumors.
When it comes to differentiating between benign and malignant ovarian tumors, emerging research shows that deep learning algorithms incorporating multimodal ultrasound (US) have diagnostic accuracy comparable to that of the Ovarian-Adnexal Reporting and Data System (O-RADS) and expert assessment.
In the retrospective study, recently published in Radiology, researchers assessed 422 women with ovarian tumors (304 benign and 118 malignant) and a mean age of 46.4 years. They found that the deep learning decision fusion and deep learning feature fusion algorithms had specificity rates of 80 percent and 85 percent, respectively, with both achieving 92 percent sensitivity. Use of the O-RADS risk stratification system yielded 92 percent sensitivity and 89 percent specificity, whereas expert assessment was associated with 96 percent sensitivity and 87 percent specificity, according to the study.
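For readers less familiar with these metrics, sensitivity is the proportion of malignant tumors correctly flagged and specificity is the proportion of benign tumors correctly cleared. The minimal Python sketch below illustrates the arithmetic; the confusion-matrix counts are hypothetical, chosen only to roughly reproduce the reported O-RADS figures, and are not the study's data.

```python
# Minimal sketch: sensitivity and specificity from a confusion matrix.
# Counts are hypothetical, approximating the reported O-RADS results.

def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Example: 109 of 118 malignant tumors flagged, 270 of 304 benign tumors cleared.
sens, spec = sensitivity_specificity(tp=109, fn=9, tn=270, fp=34)
print(f"Sensitivity: {sens:.1%}, Specificity: {spec:.1%}")  # ~92.4%, ~88.8%
```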
“Our results suggest that targeted DL algorithms could assist practitioners of US, particularly those with less experience, to achieve a performance comparable to experts. Our models could also be further developed to assess lesions found within a screening population,” wrote Wei-Wei Fong, MD, PhD, who is affiliated with the Department of Obstetrics and Gynecology at the Ruijin Hospital and the Shanghai Jiao Tong University School of Medicine in China, and colleagues.
Noting that recently developed deep learning models for detecting malignant ovarian tumors were based on a single type of ultrasound, Fong and colleagues said their deep learning algorithms were multimodal in nature. These algorithms incorporated input from color Doppler US, grayscale US showing the plane of maximal dimension, and grayscale US focused on the maximum size of the solid tumor component, according to the study.
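To make the two fusion strategies concrete, the sketch below shows one common way to structure feature fusion (concatenating per-modality features before a single classifier) and decision fusion (averaging per-modality predictions). This is a minimal illustration assuming a toy CNN encoder per modality; the backbone, dimensions, and class names are placeholders, not the authors' implementation.

```python
# Minimal sketch of multimodal fusion for three US inputs. Architecture
# details are illustrative assumptions, not the study's actual model.
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Toy per-modality encoder standing in for a CNN such as ResNet."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class FeatureFusion(nn.Module):
    """Concatenate per-modality features, then classify once."""
    def __init__(self, n_modalities: int = 3, feat_dim: int = 128):
        super().__init__()
        self.encoders = nn.ModuleList(Backbone(feat_dim) for _ in range(n_modalities))
        self.classifier = nn.Linear(n_modalities * feat_dim, 2)  # benign vs. malignant

    def forward(self, images):  # images: list of (B, 3, H, W) tensors, one per modality
        feats = [enc(x) for enc, x in zip(self.encoders, images)]
        return self.classifier(torch.cat(feats, dim=1))

class DecisionFusion(nn.Module):
    """Classify each modality separately, then average the predictions."""
    def __init__(self, n_modalities: int = 3, feat_dim: int = 128):
        super().__init__()
        self.encoders = nn.ModuleList(Backbone(feat_dim) for _ in range(n_modalities))
        self.heads = nn.ModuleList(nn.Linear(feat_dim, 2) for _ in range(n_modalities))

    def forward(self, images):
        logits = [head(enc(x)) for enc, head, x in zip(self.encoders, self.heads, images)]
        return torch.stack(logits).mean(dim=0)

# Usage: one tensor per modality (color Doppler, grayscale plane of maximal
# dimension, grayscale of the solid component), e.g. a batch of 4 images.
inputs = [torch.randn(4, 3, 224, 224) for _ in range(3)]
print(FeatureFusion()(inputs).shape)   # torch.Size([4, 2])
print(DecisionFusion()(inputs).shape)  # torch.Size([4, 2])
```

The design trade-off the study compares maps directly onto these two classes: feature fusion lets the classifier learn cross-modality interactions, while decision fusion keeps each modality's prediction independent and combines them only at the end.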
Fong and colleagues noted that the multimodal deep learning algorithms in their study are akin to the common clinical use of multiple types of US images to diagnose ovarian cancer.
The study authors acknowledged that the findings from their single-center study need further exploration and validation in future multicenter studies. Fong and colleagues also noted that the data sets in their retrospective study were limited in size, and that reliance on a single US expert to review the images may limit the generalizability of the findings with O-RADS and expert assessment.