Is there a certain element of self-preservation in radiology reporting of findings and impressions?
Jackie Mason fans might recognize the paraphrasing of the headline. An occasional schtick of his stand-up act was to ask rhetorical questions of his audience at large and then, after half a moment’s pause, jab a finger at someone in the front row: “Answer the question, before I throw you out of here!”
A little over 20 years ago, my residency department imploded. The body imaging section was particularly badly hit, and one of the repercussions was that post-call residents never quite knew who would be “reading out” their overnight body cases until the morning arrived. For a while, it was guaranteed not to be a body imaging rad since we didn’t have any of those left.
The interventional docs stepped up more than others to fill the role, especially the younger ones: Closer to their training, they remembered more, had more energy, and maybe were hungrier for extra money, if the department’s new leadership had attached financial incentives to the task.
One of the older guys initially pitched in, but that didn’t last long. A year-mate of mine related the codger’s discomfort to me. We had been trained to find and comment on every abnormality or anatomical variant the images had to offer. Skip over a ditzel in the liver, however unimportant it was, and you risked being told by a proper body imaging attending about your “miss.”
One morning, when the trainee was reviewing his call cases with the elder interventionalist and dutifully going through his laundry list of abnormalities, the attending interrupted him. “Hold on, hold on, hold on. Look, you’re being naive if you think you’re going to make all the findings. Just answer the clinical question and move on.” (Translation: I don’t know what to do with all of these incidental findings you’re throwing at me, so stop it.)
In a fantasy world, that would be a nice way to do our work. When I first started doing telerad and much of the worklist was “prelim” reads, I imagined that was what we were supposed to be doing. We would answer the clinical question, point out any other glaring acute issues, and leave the full reporting to whoever was reading cases out the next day. After all, we were getting paid 25 percent less per case because they were prelims.
Unfortunately, it was swiftly evident that there was absolutely nothing you could leave out of a prelim report if you cared about your QA stats. Nitpicking and grouchy rads would allege a “miss” if you failed to, say, mention a 1 mm pulmonary nodule that had been stable for a decade. When I was coming on board, I directly asked the leadership of the telerad company (by far the biggest at the time) what they thought was safe to leave out of a prelim, and none of them could give me an answer.
On top of that, even back in my residency, there was no guarantee you’d actually get a “clinical question.” Granted, we got them more frequently than we do now, but I remember thinking, when my year-mate told me of the interventional guy’s advice, “How do you answer a question that nobody asked?”
Over the years, it has occurred to me more than a few times that actually addressing asked (or implied) clinical questions could be downright hazardous to one’s reputation or career. For instance, I not uncommonly receive chest X-rays on patients being worked up for neuro symptoms and the given clinical history is “R/O stroke.” If my CXR impression says anything remotely like “No evidence of stroke within the chest,” I guarantee you I will be taken to task for being snarky. I would also probably get in trouble for “Stroke not ruled out on this chest x-ray.”
(Editor's note: For related content, see "Clinical Histories in Radiology: Could They Get Worse?" and "Seven Takeaways from Best Practice Recommendations for Incidental Radiology Findings in the ER.")
Garbage “reasons for exam” like “r/o path” or even “r/o pain” just beg to have brutally direct responses, like “Path not ruled out” or “Cannot r/o pain on imaging. Recommend asking your patient.”
I consider it near garbage when I receive studies that have ICD codes instead of actual words in their history. If I get a scan for “C34.90,” my choices are to go ahead and read the thing knowing nothing about the patient, expend some time looking up that code, or spend even more time digging into the patient’s record (while an assortment of other referrers and clerks hassle me about dozens of other cases that are in danger of exceeding their “turnaround times”). I admit that I have fantasized about looking up the ICD codes for every single incidental abnormality on those scans and filling my impression with their alphabet soup. Let the referrers see what it feels like.
The fantasy, of course, is that one or more referrers would receive such garbage-in, garbage-out reports and experience a moment of clarity: “Good heavens, have I been giving my consultants (and patients) short shrift? That ends now! Henceforth, I shall provide proper clinical histories, and make sure that all who work with me know to do the same!” (Like I said, it’s a fantasy.)
I have confirmed via social media that I’m far from the only rad who’s sorely tempted to give referrers exactly what they ask for but usually pulls back from the brink in the name of self-preservation.
I hasten to add that I don’t think most rads would skimp on pertinent findings just to be snarky with referrers. On the other hand, we generally dictate reports with separate sections for “Findings” and “Impression.” All of the info is there in the former section, even if a lot of referrers don’t bother really reading that part. Most of us try to be nice and guess at what the referrers might want summarized in the impression section, when they haven’t bothered to spell it out in their referral. In a functioning society, however, niceness is generally a reciprocal thing.