When 22 radiologists are told to look for different numbers of lesions on the same chest X-ray, would you expect them to hit that number, identify more, or identify fewer? This was the question posed in a study just published in the March print issue of Radiology.
Lead author Warren Reed, BSc, from the University of Sydney, and colleagues found that giving the radiologists a specific number to look for did not affect the accuracy of the reading. The only difference was in the physicians’ eye movements when they were told to expect a higher number of lesions: eye tracking showed that their eyes moved significantly more during the reading, even though the findings were the same.
For the study, the 22 radiologists, all with at least six years’ experience after certification from the American Board of Radiology, were asked to interpret 15 abnormal (and identical) posteroanterior chest images twice. Before each observation, they were told how many lesions they could expect to find.
“I didn’t have a strong sense how the study would turn out,” said Geoffrey Rubin, MD, chair of Duke Department of Radiology. “I could accept the results either way.”
Rubin said the take-away message was positive: a person’s expectation when looking at an imaging study does not seem to influence the person’s tendency to make correct observations.
He added that a potential criticism of human observers has been that preconceived notions can affect findings. “The study supports that readers can be dispassionate to that preparatory information and can be reliable in how they read the data set regardless,” he said.
Rubin said that the study sends a useful message to radiologists. Many radiologists want to read a film cold, without the referring physician providing any information in case it colors the reading. “The data suggests that [providing information] won’t affect that radiologist’s interpretation. It should be the same interpretation with respect to abnormalities.”
The study does raise some questions on whether the results can be generalized, according to Rubin. “They used a fairly specific paradigm, with simulated lesions on chest X-rays. We’re not really told how big they are, how obvious they are,” he said. Plus, he said the data are only germane to expert readers, since the participating clinicians are all board-certified. “It would be interesting to repeat this test with some internists who do chest X-rays in their office, for example,” he said.
Given how many radiographic images are read, he was glad to have investigators focusing on how diagnostic images are interpreted. Reading images “is not a quantitative activity; you don’t have a number you can look at like a blood test. There are more degrees of freedom in that interpretation. It’s important to understand how expert observers interpret that image.”