Several factors other than length of clinical experience influence a provider’s interpretive accuracy.
Breast imaging specialists, general radiologists, providers who read ultrasounds: when it comes to screening mammograms, everyone is looking for the same thing. But the level of interpretive accuracy depends on many factors.
In a new study published June 22 in Radiology, investigators from the Harvey L. Neiman Health Policy Institute and the American College of Radiology’s National Mammography Database Committee showed that screening mammography interpretive performance is strongly associated with geography and breast sub-specialization, as well as with whether a radiologist also performs diagnostic mammography and diagnostic ultrasound.
These findings were surprising, said the team led by Cindy Lee, M.D., FACMQ, FSBI, assistant professor of radiology at NYU Grossman School of Medicine, because these factors outweighed even length of practice. Still, the team stressed, the factors that predict performance aren’t well understood.
“It was interesting that in many cases certain characteristics predicted higher performance in some areas and, at the same time, lower performance in others,” Lee said, noting that academic-practice breast imaging radiologists with many years of practice may have higher recall rates alongside higher cancer detection rates. “This example highlights the importance of assessing performance across measures holistically versus individual metrics in isolation, supporting guidance in the ACR BI-RADS atlas.”
Rather than focusing on a provider’s clinical experience level, Lee’s team pivoted and concentrated on other factors that play into interpretive and diagnostic accuracy. Theirs is the largest study to date based on Medicare claims and screening performance data, including 1,223 radiologists from the National Mammography Database between 2018 and 2019.
The team examined provider demographics, clinical practice patterns, and any sub-specialization, and they used seven metrics to evaluate mammography interpretation performance: recall rate, cancer detection rate, invasive cancer detection rate, percentage of ductal carcinoma in situ (DCIS), and the positive predictive values PPV1, PPV2, and PPV3.
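For readers who want the arithmetic behind a few of those measures, the sketch below shows how the most common screening-audit metrics (recall rate, cancer detection rate, and PPV1) are typically derived from a handful of counts. It reflects standard BI-RADS-style audit definitions rather than the study’s exact methodology, and the function name and sample numbers are hypothetical.

```python
# Illustrative sketch of standard screening-audit metric definitions.
# Counts and function name are hypothetical, not taken from the study.

def screening_audit_metrics(n_screens, n_recalls, n_cancers_detected):
    """Compute common screening mammography audit metrics.

    n_screens          -- total screening exams interpreted
    n_recalls          -- exams given a positive (recall) assessment
    n_cancers_detected -- cancers found among the recalled exams
    """
    recall_rate = 100.0 * n_recalls / n_screens    # percent of screens recalled
    cdr = 1000.0 * n_cancers_detected / n_screens  # cancers detected per 1,000 screens
    ppv1 = 100.0 * n_cancers_detected / n_recalls  # percent of recalls that prove to be cancer
    return recall_rate, cdr, ppv1

# Example with made-up numbers: 10,000 screens, 900 recalls, 45 cancers
recall_rate, cdr, ppv1 = screening_audit_metrics(10_000, 900, 45)
print(f"Recall rate: {recall_rate:.1f}%  CDR: {cdr:.1f}/1,000  PPV1: {ppv1:.1f}%")
```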
Overall, the team found that 31.7 percent of participating radiologists reached an acceptable performance level on all metrics. For individual metrics, though, between 52 percent and 77 percent of radiologists performed within the acceptable range.
When the team dug a little deeper, they determined that geography played a significant role in mammography interpretive performance, with radiologists in the Midwest more likely to achieve acceptable recall rates, PPV1, PPV2, and cancer detection rates (odds ratios of 1.4-2.5). Radiologists in the West were also more likely to achieve acceptable recall rates, PPV2, and PPV3 (odds ratios of 1.7-2.1); however, they more often fell below acceptable rates of invasive cancer detection (odds ratio of 0.6).
As anticipated, providers with a specialization in breast imaging also performed better than general radiologists, the team said. According to their results, breast imagers were more likely to have acceptable levels for PPV1, invasive cancer detection rate, percentage of DCIS, and overall cancer detection rate (odds ratios of 1.4-4.4). In addition, providers who performed diagnostic mammography had higher levels of PPV1, PPV2, PPV3, invasive cancer detection rate, and overall cancer detection rate (odds ratios of 1.9-2.9).
In contrast, performance was lower for those conducting breast ultrasound. These providers were less likely to have acceptable levels of PPV1, PPV2, percentage of DCIS, and cancer detection rate (odds ratios of 0.5-0.7).
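As context for those figures (a general statistical note, not something spelled out in the study), an odds ratio above 1 means a group was more likely than the reference group to fall in the acceptable range, while a value below 1 means it was less likely. The toy calculation below uses invented counts purely to show how the ratio is formed.

```python
# Hypothetical 2x2 example: breast imagers vs. general radiologists
# reaching an acceptable range on some metric (all numbers invented).
imagers_acceptable, imagers_not = 80, 20
generals_acceptable, generals_not = 60, 40

odds_imagers = imagers_acceptable / imagers_not      # 4.0
odds_generals = generals_acceptable / generals_not   # 1.5
odds_ratio = odds_imagers / odds_generals            # ~2.7, i.e. more likely than the reference group
print(f"Odds ratio: {odds_ratio:.1f}")
```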
Although breast imaging specialists have higher interpretive performance, the accuracy of general radiologists – and efforts to improve their performance – will become more important in the coming years, said Andrew Rosenkrantz, M.D., MPA, professor and director of health policy in the NYU Grossman School of Medicine radiology department.
“Most mammograms performed in the United States are interpreted by general radiologists and not by breast subspecialty radiologists, who account for less than 10 percent of all radiologists,” said Rosenkrantz, who was also the lead study author and Neiman Institute senior affiliate research fellow. “As the U.S. population ages and greater numbers of women comply with screening guidelines, the demand for all radiologists to interpret screening mammograms is anticipated to increase. Hence, attention to the interpretive screening performance of all radiologists is critical and strategies to improve interpretive accuracy among generalists would be beneficial.”
Ultimately, the team said, their findings highlight the existing performance variations among all providers who interpret breast images, pointing to the need for more extensive training.
“The findings indicate not only variation in radiologist performance in screening mammography nationally, but also the association of such variation with specific radiologist characteristics in a predictable manner that can be applied to predict those radiologists with better or worse performance,” the team concluded. “Furthermore, the findings support the value of breast sub-specialization in achieving better patient outcomes among radiologists who interpret screening mammography.”