Two radiologists take vastly different paths reviewing a case - one far more thorough than the other. But what’s the payoff for the extra work? Is it enough?
For those seeing the title of this piece and becoming concerned: Rest assured that I am not about to suggest sealing a rad in a box (no matter how similar your reading room might be to one) or subjecting him to a random chance of extermination.
Rather, consider a hypothetical imaging study, just performed and about to be assigned to a radiologist’s work list. Radiologist A or his colleague, B, of equal skill and experience, will get the study for interpretation, based on the vagaries of their shared work lists and PACS. I do not pretend to understand quantum mechanics well enough to further stretch the analogy. If it helps you, pretend this is a Twilight Zone episode.
Radiologist A gets the case: Sizes up the provided clinical history, looks over the study, activates his macro for a “no significant pathology” case, and moves on. Took him a couple of minutes.
Radiologist B gets the case: Sees the same clinical history but feels it is inadequate for diagnostic and/or reimbursement purposes. Manages to track down someone on the clinical service who can supply better background. In the process, notices that the patient had prior studies that were not uploaded to the system, and after getting off the first phone call rings up the technologist to reload the prior. Tries looking over the images while distracted by these calls; notices nothing emergent but sets the case aside until the comparison can occur because of a couple of uncertain findings (possible 1 mm pulmonary nodules, ditzels in the liver, etc.).
B gets the call from the tech that the prior is now loaded, but it’s over half an hour later and his recollection of the case is already fuzzy, so he essentially rereads the study as he compares it with the previous one. Recognizes that the 1 mm incidentalomas are stable, but now sees that the previous report did not mention them. Agonizes a while about how to phrase his dictation so as not to unduly worry the patient or referring clinicians, while also avoiding putting the previous study’s reader in a bad medico-legal spot.
Finally ready to sign the report, Rad B notices that the referring clinician has been flagged as a doctor who wants to be personally called on all cases, positive or negative. Sighs, picks up the phone, and engages in a fruitless series of attempts to contact the referrer by every means he can conceive, without success. Agonizes further about how to document these efforts so as to avoid the appearance of noncompliance with department regulations, yet also avoid documenting the clinician’s lack of responsiveness in the legally discoverable written record.
Radiologist B has spent nearly an hour on various aspects of this case, not counting the time over the next few days during which he will endure a nagging feeling that some aspect of the thing will come back to haunt him. Remember that his colleague, A, spent a couple of minutes, and if you asked him about the case a week, a day, or an hour later, he’d have to look it up to know what you were talking about.
Like the cat of the original analogy, a patient whose case might go to radiologist A or B would, knowing how each would handle his diagnostic care, likely have a strong preference in the matter. Physicians referring patients to A and B’s radiological group, the group itself, and the society collectively paying for and depending on healthcare surely would as well.
Maybe Radiologist B has a 0.1 percent lower chance of getting named in a lawsuit as a result of his more thorough, conscientious approach. Maybe his peer-review stats will look a bit nicer. Maybe he hopes to establish a good track record that will give him a slight edge over A for job security, promotion, etc. in the coming decade - assuming A’s greater productivity numbers don’t get the spotlight instead.
Is this the sum total of what we’re doing to encourage B to keep it up, and not become more like A in the future?