A rad's job is pretty straightforward, at least in theory. When your job description meets the uncertainties of the real world, how do you cope?
One might describe the job of a diagnostician this way: Identify, gather, and assimilate evidence to arrive at meaningful conclusions.
This is all very nice when there is a sufficient amount of easily-recognized, relevant evidence at hand (and when the possible conclusions of relevance are readily defined). In other words, you know what you’re looking for, or looking to exclude.
An ideal diagnostic case might therefore begin with a proper clinical history (as opposed to, say, “R/O pathology”). Armed with this information, the radiologist looks through a series of images, with or without relevant prior studies for comparison. Following whatever search pattern she has developed, she makes numerous observations, gradually assembling a body of positive and pertinent-negative points of evidence.
With experience, much of the latter might not even be a fully-conscious affair. One develops a sort of mental “autopilot” for routine negatives.
At some point during this process, the rad begins to form an overall conclusion. The evidence mounts, and she’s increasingly sure what her final opinion of the case is going to be. It’s still possible that some other evidence will come along to change her mind: a surprising finding in the last few dozen images, or a newly discovered bit of clinical history that puts the accumulated evidence in a new light.
Barring such late-stage changes of mental course, I envision a sort of speedometer for one’s readiness to accept a conclusion, commit to it (such as in a dictated report), and move on to something else. The needle starts at zero, and over the course of looking at a case it moves up, whether gradually or by fits and starts. Sometimes it will reach 100%; for instance, if you’re looking at an x-ray and there’s clearly an acute fracture, there’s no room left for doubt.
Less than certain
Most of the time, there isn’t such certainty. One of the biggest adjustments a rad has to make, moving from training to subsequent practice, is getting comfortable with rendering a report when the evidence you’ve seen leaves you at less than 100%. This is especially true when calling a case normal, since it’s much harder to prove a negative. It takes experience to develop comfort (or at least, to minimize discomfort) with rendering reports when you’re 90, 80, 70% certain. Hedging starts to creep in as a defense mechanism.
One thing I’ve noticed, when there doesn’t seem to be enough evidence for personal satisfaction: as you linger over an issue in the hope that some other clues will reveal themselves, or some brilliant insight will strike you, the information you’ve already gathered gradually seems to attain more importance. I think the psych types would refer to this as a type of “anchoring bias.”
Thus, even though you don’t have any more evidence than you did before, you wind up with a subjective sense that the evidence has sufficiently mounted, which conveniently lets you feel better about coming to a conclusion and moving on to something else.
It’s not entirely unreasonable. Suppose, for instance, there is only one salient piece of evidence in a given situation, and you’ve found it. You might not know it, but you’ve got 100% of the info anyone could ever find. Systematically searching and finding nothing elsewhere should eventually bring you to a point where you know that, yes, you’ve looked everywhere, and can therefore be confident that you know all of the relevant facts.
That said, in the real world there is no way to know for sure just how much evidence is out there, whether you’ve looked in all of the right places for it, or, for that matter, whether you have properly recognized evidence (or a lack thereof) when you were looking at it.
Muddying the waters further is the matter of subjectively creating evidence where none really exists, or seeing something that should be considered evidence but deciding that it really isn’t. In interpreting imaging cases, the best example of this I can think of is artifact. Is that a lesion, or motion blur? Are those a few pixels of contrast enhancement, or partial volume averaging?
This isn’t confined to the interpretation of diagnostic imaging, although it is a tidier and more readily observed phenomenon in our professional venue: we’ve got dozens, if not hundreds, of compartmentalized events (cases to read) per day, each of which is a closed system with a finite number of images and a finite number of things on them to look at.
Outside of rad-work, I find it equally important to remain on guard against the artificial sensation of mounting evidence, particularly in my work as a telerad, having spent the past 7.5 years in a home office. A telerad’s contact with colleagues, support staff, and referring clinicians tends to be on an as-needed basis via email, instant message, and phone, compared to a conventional, on-site position, where one has routine in-person interactions.
Take away all of that in-person stuff, and a lot of the “evidence” we use (facial expressions, tones of voice, chatting unrelated to actual work) goes away. This is stuff that, consciously or otherwise, we use to know where we stand with other folks, what’s of interest to them, their personality quirks, etc.
Without this, it’s a lot easier to “fill in the blanks” and come to erroneous conclusions. For instance, suppose someone sends a brief email or instant message. Maybe that’s just their style, or they were pressed for time. But maybe the recipient takes that as terse, angry, even hostile. Look up the difference between “k” and “kk,” for example.
Knowing how readily such misunderstandings can happen, one might make special efforts (some might say bend over backwards) to prevent them. I’ve found such efforts to be very much worthwhile as I’ve gotten longer in the teleradiological tooth.