When 22 radiologists are told to look for different numbers of lesions on the same chest X-ray, would you expect them to hit that number, identify more, or identify fewer? That was the question posed in a study just published in the March print issue of Radiology.
Lead author Warren Reed, BSc, of the University of Sydney, and colleagues found that giving the radiologists a specific number to look for did not affect the accuracy of their readings. The only difference was in the physicians’ eye movements: when they were told to expect a higher number of lesions, eye tracking showed that their eyes moved significantly more during the reading, even though their findings were the same.
For the study, the 22 radiologists, all with at least six years’ experience after certification from the American Board of Radiology, were asked to interpret 15 abnormal (and identical) posteroanterior chest images twice. Before each reading, they were told how many lesions they could expect to find.
“I didn’t have a strong sense how the study would turn out,” said Geoffrey Rubin, MD, chair of the Duke Department of Radiology. “I could accept the results either way.”
Rubin said the take-away message was positive: a reader’s expectations going into an imaging study do not seem to influence his or her tendency to make correct observations.
He added that it has been a potential criticism of human observers that preconceived notions can affect their findings. “The study supports that readers can be dispassionate to that preparatory information and can be reliable in how they read the data set regardless,” he said.
Rubin said that the study sends a useful message to radiologists. Many radiologists want to read a film cold, without the referring physician providing any information in case it colors the reading. “The data suggests that [providing information] won’t affect that radiologist’s interpretation. It should be the same interpretation with respect to abnormalities.”
The study does raise some questions on whether the results can be generalized, according to Rubin. “They used a fairly specific paradigm, with simulated lesions on chest X-rays. We’re not really told how big they are, how obvious they are,” he said. Plus, he said the data are only germane to expert readers, since the participating clinicians are all board-certified. “It would be interesting to repeat this test with some internists who do chest X-rays in their office, for example,” he said.
Given how many radiographic images are read, he was glad to have investigators focusing on how diagnostic images are interpreted. Reading images “is not a quantitative activity; you don’t have a number you can look at like a blood test. There are more degrees of freedom in that interpretation. It’s important to understand how expert observers interpret that image.”