A new study suggests that deep learning algorithms using multimodal ultrasound offer sensitivity and specificity comparable to subjective expert assessment and the O-RADS classification in distinguishing benign from malignant ovarian tumors.
When it comes to differentiating between benign and malignant ovarian tumors, emerging research shows deep learning algorithms that incorporate multimodal ultrasound (US) have comparable diagnostic accuracy to use of the Ovarian-Adnexal Reporting and Data System (O-RADS) and expert assessment.
In the retrospective study, recently published in Radiology, researchers assessed 422 women (mean age, 46.4 years) with ovarian tumors, including 304 benign tumors and 118 malignant tumors. They found that the deep learning decision fusion and deep learning feature fusion algorithms had specificity rates of 80 percent and 85 percent, respectively, with both achieving 92 percent sensitivity. Use of the O-RADS risk stratification system had a 92 percent sensitivity and an 89 percent specificity, whereas expert assessment was associated with 96 percent sensitivity and 87 percent specificity, according to the study.
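As a point of reference, those percentages reduce to simple ratios over the study's case mix. The short Python sketch below is illustrative only: the true-positive and true-negative counts are back-calculated from the reported percentages and the 118 malignant and 304 benign cases, not taken from the paper's tables.

```python
# Illustrative arithmetic only; counts below are back-calculated from the
# reported percentages, not drawn from the paper's tables.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of malignant tumors correctly flagged."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of benign tumors correctly cleared."""
    return true_neg / (true_neg + false_pos)

# 118 malignant cases at ~92 percent sensitivity: roughly 109 caught, 9 missed.
print(f"sensitivity: {sensitivity(109, 9):.2f}")   # ~0.92

# 304 benign cases at ~89 percent specificity: roughly 271 cleared, 33 flagged.
print(f"specificity: {specificity(271, 33):.2f}")  # ~0.89
```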
“Our results suggest that targeted DL algorithms could assist practitioners of US, particularly those with less experience, to achieve a performance comparable to experts. Our models could also be further developed to assess lesions found within a screening population,” wrote Wei-Wei Fong, MD, PhD, who is affiliated with the Department of Obstetrics and Gynecology at the Ruijin Hospital and the Shanghai Jiao Tong University School of Medicine in China, and colleagues.
Noting that recently developed deep learning models for detecting malignant ovarian tumors were based on a single type of ultrasound, Fong and colleagues said their deep learning algorithms were multimodal in nature. These algorithms incorporated input from color Doppler US, gray-scale US showing the plane with the maximal tumor dimension, and gray-scale US focused on the maximum size of the solid tumor component, according to the study.
Fong and colleagues noted that the multimodal deep learning algorithms in their study are akin to the common clinical use of multiple types of US images to diagnose ovarian cancer.
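The study's two-algorithm design contrasts decision fusion with feature fusion. The PyTorch sketch below is a minimal, hypothetical illustration of that distinction for three US inputs; the backbone, layer sizes, binary output head, and prediction-averaging rule are all assumptions made for the sake of the example, not the architecture reported in the study.

```python
# Hypothetical sketch of the two fusion strategies; not the study's code.
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Stand-in CNN encoder applied to each US modality independently."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, out_dim),
        )
    def forward(self, x):
        return self.net(x)

class FeatureFusion(nn.Module):
    """Concatenate per-modality embeddings, then classify once."""
    def __init__(self):
        super().__init__()
        self.encoders = nn.ModuleList(Backbone() for _ in range(3))
        self.head = nn.Linear(3 * 128, 2)  # benign vs. malignant
    def forward(self, doppler, gray_max, gray_solid):
        feats = [enc(x) for enc, x in
                 zip(self.encoders, (doppler, gray_max, gray_solid))]
        return self.head(torch.cat(feats, dim=1))

class DecisionFusion(nn.Module):
    """Classify each modality separately, then average the predictions."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(Backbone(), nn.Linear(128, 2)) for _ in range(3)
        )
    def forward(self, doppler, gray_max, gray_solid):
        logits = [b(x) for b, x in
                  zip(self.branches, (doppler, gray_max, gray_solid))]
        return torch.stack(logits).mean(dim=0)  # average the three votes
```

The difference is where the modalities meet: feature fusion concatenates per-modality embeddings before a single classifier, while decision fusion lets each modality produce its own prediction and combines those predictions afterward.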
The study authors acknowledged that the findings from their single-center study need further exploration and validation in future multicenter studies. Fong and colleagues also noted that the data sets in their retrospective study were limited in size, and that reliance on a single US expert to assess the images may limit the generalizability of the study's O-RADS and expert assessment findings.