The radiology studies and news you can’t miss this month.
A new study published in the American Journal of Roentgenology investigated the relationship between the amount of time a radiologist takes to read an image and his or her diagnostic accuracy. Researchers from the University of Florida College of Medicine and Skokie Hospital performed a literature review, analyzing existing studies to determine whether a radiologist's speed might negatively affect a scan's interpretation.
This question about potential trade-offs between speed and accuracy has long been debated in the radiology community, and, as the study authors note, previous studies show mixed results. Some studies suggest that factors like familiarity, experience, and subspecialized training lead to both faster reads and more accurate interpretations. Others suggest that the most accurate radiologists tend to work more slowly, paying careful attention to each image before coming to any conclusions. Anecdotal evidence, based on practical considerations and relevant experience, is similarly mixed.
In reviewing the literature, the researchers found that existing studies have small sample sizes and cover only a narrow range of imaging examinations. Because of that, they conclude that it is difficult to generalize any relationship between speed and accuracy, and that there is currently no credible evidence of a causal relationship between the two.
Ruptured brain aneurysms are fatal in nearly 40% of all cases. Unfortunately, detecting potential brain aneurysms is tedious work. They often hide in plain sight, coming in a variety of sizes and shapes that can make them difficult to see even across a succession of radiologic images. Now, researchers from Stanford University have created an artificial intelligence tool called HeadXNet that can help augment radiologists' ability to correctly identify aneurysms on computed tomography angiography (CTA) scans. The study was published in JAMA Network Open on June 7.
The tool, built around a three-dimensional convolutional neural network, was trained on 611 head CTA studies to learn common aneurysm segmentations. The researchers then tested the model on a set of 818 examinations from 662 unique patients; more than 300 of those imaging studies included at least one clinically significant, non-ruptured intracranial aneurysm. The study authors excluded any studies showing hemorrhage, ruptured aneurysm, posttraumatic or infectious pseudoaneurysm, arteriovenous malformation, surgical clips, coils, catheters, or other surgical hardware, in order to focus the model on unruptured aneurysms that may not be obvious at first glance.
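The paper does not reproduce HeadXNet's full architecture here, but the core idea of a 3D convolutional segmentation network can be sketched in a few lines of PyTorch. The layer widths, class name, and input shape below are illustrative assumptions, not the published model: the network takes a CTA volume and emits a per-voxel aneurysm probability.

```python
# A minimal, illustrative 3D convolutional segmentation sketch in PyTorch.
# HeadXNet's actual encoder-decoder is more elaborate; shapes here are toy values.
import torch
import torch.nn as nn

class TinySegNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the volume while widening channels.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, kernel_size=1),  # per-voxel aneurysm logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One CTA volume: batch x channel x depth x height x width (illustrative shape).
volume = torch.randn(1, 1, 32, 128, 128)
logits = TinySegNet3D()(volume)
probs = torch.sigmoid(logits)  # voxel-wise aneurysm probability
print(probs.shape)             # torch.Size([1, 1, 32, 128, 128])
```

A segmentation output like this can be overlaid on the scan, which is how an augmentation tool surfaces candidate aneurysms to the reading radiologist.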
The researchers wanted to see whether the model could augment clinicians' diagnostic accuracy. They recruited eight clinicians, each with between 2 and 8 years of experience, to evaluate a set of 115 brain scans, once using HeadXNet and once without it. When the physicians used the tool, they demonstrated significant improvements in sensitivity, accuracy, and interrater agreement on their reads of the CTA scans. The authors conclude that such a model could be integrated into the clinical workflow to help clinicians identify potential brain aneurysms and make more informed decisions regarding patient care.
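For a concrete sense of the reported metrics, the sketch below shows how exam-level sensitivity and accuracy are computed from a reader's calls against ground truth. The labels and reads are made-up toy values, not data from the study.

```python
# Sensitivity: fraction of truly positive exams the reader flagged.
# Accuracy: fraction of all exams the reader called correctly.
import numpy as np

def sensitivity_and_accuracy(truth, calls):
    truth, calls = np.asarray(truth, bool), np.asarray(calls, bool)
    tp = np.sum(truth & calls)
    sensitivity = tp / np.sum(truth)   # aneurysms caught / aneurysms present
    accuracy = np.mean(truth == calls) # correct calls / all exams
    return sensitivity, accuracy

truth         = [1, 1, 1, 0, 0, 1, 0, 0]  # exam truly contains an aneurysm?
reads_alone   = [1, 0, 1, 0, 0, 0, 0, 1]  # toy unassisted reads
reads_with_ai = [1, 1, 1, 0, 0, 1, 0, 1]  # toy reads with model assistance
print(sensitivity_and_accuracy(truth, reads_alone))    # (0.5, 0.625)
print(sensitivity_and_accuracy(truth, reads_with_ai))  # (1.0, 0.875)
```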
Recent mammography guidelines published by the U.S. Preventive Services Task Force (USPSTF) suggest that women without a history of breast cancer should not seek regular mammograms until after the age of 50. The task force argues that earlier screening can result in more false-positive results, as well as more psychological harm, without affecting breast cancer-related mortality rates. Instead, women aged 40-49 years should be screened only if they have significant breast cancer risk factors. But does using personal risk factors in this age group actually minimize those false-positive results?
It's a question that researchers from the University of Wisconsin-Madison wanted to answer. They have now published a study in Radiology suggesting that screening women based on age detects more cancers than using a patient's risk profile, but, unfortunately, also results in more false positives and benign biopsy results.
The study authors conducted a retrospective, cross-sectional study utilizing a database of more than 20,000 digital mammograms from 10,280 average-risk women in the 40-49 age group. Those scans resulted in 50 screen-detected cancers but 1,787 false positives and 384 benign biopsy procedures.
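To put those raw counts on a common scale, the illustrative arithmetic below converts them to per-1,000-screen rates, using 20,000 as a round denominator for the "more than 20,000" mammograms reported.

```python
# Illustrative arithmetic only: raw study counts to per-1,000-screen rates.
screens = 20_000
for label, count in [("screen-detected cancers", 50),
                     ("false positives", 1_787),
                     ("benign biopsies", 384)]:
    print(f"{label}: {count / screens * 1000:.1f} per 1,000 screens")
# cancers ~2.5, false positives ~89.4, benign biopsies ~19.2 per 1,000
```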
The researchers then applied two hypothetical screening scenarios to this cohort: one that would immediately trigger mammography for all women at age 45, and a second that would initiate screening based on a risk prediction model, which uses variables like family history, race, prior breast biopsy, and breast density to calculate five- and ten-year invasive breast cancer risk scores. A simplified sketch of the two decision rules follows.
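Under loudly labeled assumptions, the two triggers might look like this in code. Here `risk_5yr` stands in for the study's model-based five-year invasive cancer risk, and the 1% threshold is purely illustrative, not the cutoff the authors used.

```python
# Two hypothetical screening triggers: age-based vs. risk-based.
from dataclasses import dataclass

@dataclass
class Woman:
    age: int
    risk_5yr: float  # model-estimated 5-year invasive breast cancer risk

def screen_by_age(w: Woman, trigger_age: int = 45) -> bool:
    # Scenario 1: screen everyone at or above the trigger age.
    return w.age >= trigger_age

def screen_by_risk(w: Woman, threshold: float = 0.01) -> bool:
    # Scenario 2: screen only if the risk model crosses a threshold
    # (the 1% value is an illustrative assumption).
    return w.risk_5yr >= threshold

cohort = [Woman(42, 0.004), Woman(46, 0.006), Woman(44, 0.015)]
print([screen_by_age(w) for w in cohort])   # [False, True, False]
print([screen_by_risk(w) for w in cohort])  # [False, False, True]
```

The two rules select different subsets of the same cohort, which is exactly the trade-off the study quantifies.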
The comparison yielded some interesting findings. First, very few women under the age of 45 would meet the criteria for screening under the risk prediction model. Second, screening women based simply on age detected significantly more cancers, but this approach also brought a higher potential for false-positive screenings and benign biopsies. The risk-based scenario, on the other hand, did not detect as many cancers, but it also did not increase the rate of false positives and benign biopsies.
The authors conclude that there are significant trade-offs for clinicians to consider when recommending breast cancer screening for women between 40 and 49, and that current risk models and thresholds may not identify the younger women who will go on to develop invasive cancers. They suggest that researchers, physicians, and policy makers work to improve risk prediction models to better serve women under the age of 50.
A new study published in the European Journal of Radiology suggests that compressed sensing (CS), a technique that acquires less data through random undersampling of k-space, can help reduce the time required for a thorough ankle MRI scan without sacrificing diagnostic image quality.
Researchers from Germany's Technical University of Munich sought to reduce the time required for a standard-protocol two-dimensional (2D) turbo spin echo (TSE) MR sequence of the ankle joint. Given the complexity of the joint, most scans take significant time and can be quite uncomfortable for patients, and they often yield limited resolution despite the time invested. The research group hypothesized that CS could offer an opportunity to reduce scanning time without reducing image quality.
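CS pairs random undersampling of k-space with a nonlinear, sparsity-promoting reconstruction. The sketch below shows only the undersampling step, on synthetic data, to make the k-space idea concrete; the 80% retention figure is an illustrative stand-in, not the sampling trajectory the Munich group used.

```python
# Conceptual sketch of random k-space undersampling (the sampling half of CS).
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((128, 128))       # stand-in for one MR slice

kspace = np.fft.fftshift(np.fft.fft2(image))  # fully sampled k-space
mask = rng.random(kspace.shape) < 0.8         # randomly keep ~80% of samples
undersampled = np.where(mask, kspace, 0)      # skipped samples save scan time

# A zero-filled inverse transform; a real CS pipeline would instead run an
# iterative, sparsity-regularized reconstruction on the sampled data.
zero_filled = np.fft.ifft2(np.fft.ifftshift(undersampled)).real
print(f"retained {mask.mean():.0%} of k-space samples")
```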
To test the idea, the study authors recruited 20 volunteers, with a mean age of approximately 30 years, and scanned their ankles coronally and sagittally using parallel imaging based on sensitivity encoding (SENSE) as well as a combined SENSE-plus-CS protocol. The latter reduced scanning time by approximately 20%. Two experienced radiologists then graded the image quality of all scans on a standardized 5-point Likert scale and assessed the signal-to-noise and contrast-to-noise ratios across different anatomical structures.
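The study does not spell out its region-of-interest (ROI) definitions, so treat the following as the textbook way SNR and CNR are computed from ROI statistics, with synthetic intensities standing in for real pixel values.

```python
# Textbook ROI-based SNR and CNR, on synthetic intensity samples.
import numpy as np

def snr(tissue_roi, background_roi):
    # Signal-to-noise ratio: mean tissue signal over background noise spread.
    return tissue_roi.mean() / background_roi.std()

def cnr(roi_a, roi_b, background_roi):
    # Contrast-to-noise ratio: signal difference between two tissues,
    # normalized by the same noise estimate.
    return abs(roi_a.mean() - roi_b.mean()) / background_roi.std()

rng = np.random.default_rng(1)
cartilage  = rng.normal(220, 12, 500)  # synthetic pixel intensities
bone       = rng.normal(140, 12, 500)
background = rng.normal(0, 10, 500)
print(f"SNR: {snr(cartilage, background):.1f}  "
      f"CNR: {cnr(cartilage, bone, background):.1f}")
```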
The group found near-perfect agreement between images acquired with the traditional SENSE protocol and those acquired with SENSE plus CS when the radiologists assessed ligaments, subchondral bone, and cartilage. The signal-to-noise ratio was slightly higher for the SENSE/CS sequences, but the difference was not statistically significant, and there were no significant differences in contrast-to-noise ratios between the two protocols. Interrater agreement was also considered "substantial to excellent" for both techniques.
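Interrater agreement of this kind is commonly quantified with Cohen's kappa, which corrects observed agreement for agreement expected by chance; the study may well have used a weighted variant. Below is a small unweighted implementation on toy 5-point Likert ratings.

```python
# Unweighted Cohen's kappa for two raters scoring on a 1..n_levels scale.
import numpy as np

def cohens_kappa(rater1, rater2, n_levels=5):
    r1, r2 = np.asarray(rater1) - 1, np.asarray(rater2) - 1
    confusion = np.zeros((n_levels, n_levels))
    for a, b in zip(r1, r2):
        confusion[a, b] += 1
    confusion /= confusion.sum()
    p_observed = np.trace(confusion)                          # raw agreement
    p_expected = confusion.sum(axis=1) @ confusion.sum(axis=0)  # chance agreement
    return (p_observed - p_expected) / (1 - p_expected)

reader1 = [5, 4, 4, 5, 3, 4, 5, 5]  # toy Likert scores, not study data
reader2 = [5, 4, 3, 5, 3, 4, 5, 4]
print(f"kappa = {cohens_kappa(reader1, reader2):.2f}")  # ~0.61, "substantial"
```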
Based on these findings, the authors conclude that CS can help reduce the long acquisition times of conventional ankle MRI without sacrificing the image quality required for accurate diagnosis.
Advances in artificial intelligence (AI) in radiology are happening at a rapid pace. How is the field to keep up?
It starts with roadmaps. Earlier this year, the journal Radiology published a "research roadmap" highlighting research priorities for the field with regard to AI and other machine learning techniques. That paper teased that a second roadmap would soon follow, with a broader focus on the challenges involved in implementing AI in the field. That second roadmap has now been published in the Journal of the American College of Radiology. It highlights a workshop convened by the National Institutes of Health (NIH), where key radiology stakeholders discussed their priorities for translational research at a time when research laboratories around the globe are already demonstrating the value of AI algorithms in interpreting diagnostic images.
The stakeholders at the meeting, which included researchers from the NIH, the American College of Radiology, and several prominent research institutions, recommended a set of priorities for the radiologic ecosystem to pursue together.
The ultimate goal of this second roadmap is to foster "robust" collaboration between clinicians and AI researchers so that AI tools can move more easily from bench to bedside.