Defensive or vague wording in radiology reports may protect me by making it harder to pin me down - but it’s less helpful to the ordering clinician.
It doesn’t take being in the health care field to know about the practice of “defensive medicine.” Mostly, this refers to excessive action taken to give attorneys as little opportunity as possible to target the practitioner. One might also consider other motivations: Perhaps the physician is a worrier by nature, and will lie awake at night if he leaves a 0.001 percent chance that he has failed to properly diagnose a zebra. Or perhaps it’s a desire to avoid bad stats in one’s peer-review program - it’s not much fun to know that every blemish on your internal record is being tallied for potential use against you.
In radiology, we certainly practice our share of defensive medicine. For instance, recommending follow-up studies on stuff we know is going to be unimportant. Having patients give informed consent acknowledging that minor percutaneous procedures pose potential risks “including, but not limited to” everything from minor bruising to death. Noting the date and time when critical results were called in, along with the name, title, and firstborn child of the clinician who got handed the hot potato. As the conventional wisdom goes, “If you don’t document it, you didn’t do it.”
Because our specialty is more attached to the written record than most, there’s an even greater focus on the words and phrases we’re using. I recall an attorney once talking admiringly of another radiologist who was a “master of doublespeak,” and could generate entire reports without actually committing to any meaning that might prove inconvenient in the event of subsequent litigation.
Not all of us have such abilities, and that’s probably a good thing since our value in interpreting these studies is identifying and describing the pathology (or lack thereof) that we see. It would be nice if we could devote 100 percent of our attention to this. Unfortunately, the motivations for such defensive dictation are very real, and I imagine that most of us are less purely focused.
Perhaps a bright-eyed and bushy-tailed young physician, fresh from training, is only 1 percent distracted by an awareness of phrasing himself defensively. A 20-year veteran who has been dragged through a few frivolous lawsuits, or gotten sick of warnings about negative stats during peer review, might be 10 percent distracted or more.
I think a big piece of the problem is that most if not all of the mechanisms out there for measuring radiologist performance are very much focused on the negative. If one flawlessly identifies every single case of acute pathology crossing the path of his ER in a year, nothing much happens. It’s expected. But if the same doc misses, or deems stable, a 1 mm pulmonary nodule that someone else later identifies - zing! Instant feedback.
Defensive or downright vague wording in reports games this system. Suppose I’m on the fence about whether or not there is very mild diverticulitis. If I make my best decision and say that there is, or that there is not, somebody later disagreeing with me potentially results in a negative peer review, a lawsuit, etc. But if I hedge and say something like “cannot rule out very mild diverticulitis,” it’s harder to pin me down. It’s also less useful to the clinician who ordered the study.
I think there is currently too little focus on that latter point. Unless a clinician is overwhelmed with admiration or gratitude for a good pickup, to the point that he reaches out to the interpreting radiologist and/or his department leadership, the value added by the radiologist doesn’t count.
Correcting this does not require a massive new river of paperwork (though I can just see some new federal initiative requiring all clinicians ordering studies to subsequently grade the “helpfulness” of each report). But it would take very little time and effort, say on a monthly or quarterly basis, for major referrers to be shown a list of radiologists most frequently reading their studies, with the simple question: Who on this list has been particularly helpful to you? (With an option for supplying further detail.)
It might be telling to get such feedback, rather than waiting until clinicians call up, in bad moods, to complain about alleged misreads.