When 22 radiologists are told to look for different numbers of lesions on the same chest X-ray, would you expect them to hit that number, identify more, or identify fewer? This was the question posed in a study just published in the March print issue of Radiology.
Lead author Warren Reed, BSc, from the University of Sydney, and colleagues found that giving the radiologists a specific number to look for did not affect the accuracy of the reading. The only difference was in the physicians' eye movements when they were told to look for a higher number of lesions: eye tracking showed that their eyes moved significantly more during the reading, even though the findings were the same.
For the study, the 22 radiologists, all with at least six years' experience after certification from the American Board of Radiology, were asked to interpret 15 abnormal (and identical) posteroanterior chest images twice. Before each observation, they were told how many lesions they could expect to find.
“I didn’t have a strong sense of how the study would turn out,” said Geoffrey Rubin, MD, chair of the Department of Radiology at Duke. “I could accept the results either way.”
Rubin said the take-away message was positive: a person’s expectation when looking at an imaging study does not seem to influence the person’s tendency to make correct observations.
He added that a long-standing criticism of human observers is that preconceived notions can affect findings. “The study supports that readers can be dispassionate to that preparatory information and can be reliable in how they read the data set regardless,” he said.
Rubin said the study sends a useful message to radiologists. Many radiologists want to read a film cold, without the referring physician providing any information, in case it colors the reading. “The data suggests that [providing information] won’t affect that radiologist’s interpretation. It should be the same interpretation with respect to abnormalities.”
The study does raise some questions on whether the results can be generalized, according to Rubin. “They used a fairly specific paradigm, with simulated lesions on chest X-rays. We’re not really told how big they are, how obvious they are,” he said. Plus, he said the data are only germane to expert readers, since the participating clinicians are all board-certified. “It would be interesting to repeat this test with some internists who do chest X-rays in their office, for example,” he said.
Given how many radiographic images are read, he was glad to have investigators focusing on how diagnostic images are interpreted. Reading images “is not a quantitative activity; you don’t have a number you can look at like a blood test. There are more degrees of freedom in that interpretation. It’s important to understand how expert observers interpret that image.”