Defensive or vague wording in radiology reports may protect me by making it harder to pin me down - but it’s less helpful to the ordering clinician.
It doesn’t take being in the health care field to know about the practice of “defensive medicine.” Mostly, this refers to excessive action in the name of giving attorneys as little opportunity as possible to target the defensive practitioner. One might also consider other motivations: Perhaps the physician is a worrier by nature, and will lie awake at night if he leaves a 0.001 percent chance that he failed to properly diagnose a zebra. Or perhaps he simply wants to avoid bad stats in his peer-review program - it’s not much fun to know that every blemish on your internal record is being tallied for potential use against you.
In radiology, we certainly practice our share of defensive medicine. For instance, recommending follow-up studies on stuff we know is going to be unimportant. Having patients give informed consent that minor percutaneous procedures pose potential risks “including, but not limited to” everything from minor bruising to death. Noting date and time of when critical results were called in, and the name, title, and firstborn child of the clinician who got handed the hot potato. As the conventional wisdom goes, “If you don’t document it, you didn’t do it.”
Because our specialty is more attached to the written record than most, there’s an even greater focus on the words and phrases we’re using. I recall an attorney once talking admiringly of another radiologist who was a “master of doublespeak,” and could generate entire reports without actually committing to any meaning that might prove inconvenient in the event of subsequent litigation.
Not all of us have such abilities, and that’s probably a good thing since our value in interpreting these studies is identifying and describing the pathology (or lack thereof) that we see. It would be nice if we could devote 100 percent of our attention to this. Unfortunately, the motivations for such defensive dictation are very real, and I imagine that most of us are less purely focused.
Perhaps a bright-eyed and bushy-tailed young physician, fresh from training, is only 1 percent distracted by an awareness of defensively phrasing himself. A 20-year veteran who has been dragged through a few frivolous lawsuits, or gotten sick of warnings about negative stats during peer review, might be 10 percent distracted or more.
I think a big piece of the problem is that most, if not all, of the mechanisms out there for measuring radiologist performance are very much focused on the negative. If one identifies every single case of acute pathology crossing the path of his ER in a year without flaw, nothing much happens. It’s expected. But if the same doc misses, or deems stable, a 1 mm pulmonary nodule that someone else later identifies - zing! Instant feedback.
Defensive or downright vague wording in reports games this system. Suppose I’m on the fence about whether or not there is very mild diverticulitis. If I make my best decision and say that there is, or that there is not, somebody later disagreeing with me potentially results in a negative peer review, a lawsuit, etc. But if I hedge and say something like “cannot rule out very mild diverticulitis,” it’s harder to pin me down. It’s also less useful to the clinician who ordered the study.
I think there is currently too little focus on that latter point. Unless a clinician is so overwhelmed with admiration or gratitude for a good pickup that he reaches out to the interpreting radiologist and/or his department leadership, the value added by the radiologist doesn’t count.
Correcting this does not require a massive new river of paperwork (though I can just see some new federal initiative requiring all clinicians ordering studies to subsequently grade the “helpfulness” of each report). But it would take very little time and effort, say on a monthly or quarterly basis, for major referrers to be shown a list of radiologists most frequently reading their studies, with the simple question: Who on this list has been particularly helpful to you? (With an option for supplying further detail.)
Might be telling to get such feedback, rather than wait until clinicians call up, in bad moods, to complain about alleged misreads.