Does our pattern recognition expertise in imaging extend to our impressions of tells in the work of our colleagues?
Around the beginning of my post-training rad career, quality assurance (QA) was getting more formalized and regimented in a lot of places. Even the two-bit outpatient imaging centers I worked in were getting involved in programs like “RADPEER.” (By the way, why does it get the all caps treatment? If it’s supposed to be an acronym, I’ve never been able to find out what it stands for.)
Part of the appeal was that it more easily fit into the workflow. Any time you picked up a case that had a prior on record, you could QA the previous interpretation. There was no need to have a formalized program that randomly selected cases for review and assigned them to appropriate rads. Of course, it wasn’t exactly a rigorous scientific exercise. There were plenty of opportunities for bias and other systemic flaws.
The instructions explained that, as you went about your usual business of reading the current case and comparing it, you would “form an impression” of the previous rad’s work, and thus be able to pass judgment on it. I couldn’t put my finger on why, but that turn of phrase didn’t sit well with me.
A couple of decades later, I have fleshed that out. If there is a study I can competently read and someone has interpreted it, I am going to either agree or disagree with them. The degree of (dis)agreement falls along a spectrum. I might, for instance, have issues with word choice, precision of measurement, etc., but my take is still going to be yea/nay. “Forming an impression” sounds more namby-pamby. It’s what I might do after looking at abstract art.
On the other hand, I do “form an impression” of the rad who did the previous read. It’s not something I’m particularly proud of, nor am I embarrassed by it. It’s just what happens when we observe others’ behavior at any length, and not just when they are working. We are pattern recognition machines, and our brains like to connect dots and create narratives.
Some of the impressions we form are a lot more reasonable than others, and our mental commitment to them varies as well. You might draw all sorts of conclusions about someone after glancing at them for a second as you pass by on a city street, but those conclusions are fleeting, and you wouldn’t be surprised or disturbed to find out you were dead wrong. On the other hand, you might have strong sentiments about someone in a red sports car who dangerously cuts you off on a highway.
Compared with that, we have an abundance of information about a fellow rad when we review his or her work, especially if we have seen that rad’s work before. If, for instance, you are in a small rad group without a lot of turnover, every case you see with a prior is going to incrementally build your mental profile of one of your few colleagues, even if you all work remotely and never actually interact. The more you see, the more you will refine your impressions of them all.
You might think that would require taking specific note of a rad’s name at the bottom of his or her reports, but I have found that is not really necessary. People’s dictation styles are sufficiently individualized that, even if I don’t know a specific name, I will recognize “this is the rad who says ‘unremarkable’ for everything,” “this is the one who provides measurements to the 0.01 mm level,” or “here’s the ‘everything’s limited’ disclaimer king.”
Some of this stuff impacts the quality of a rad’s work, and some of it doesn’t. The way it shapes my radiologist profiling varies widely, and sometimes it is more adaptive and reasonable than at other times. Most of it doesn’t even register consciously. It probably did at some point earlier in my career, but only so many years and thousands of cases can go by before that sort of thing goes on autopilot while I focus on more important things (like reading the current case decently).
I can recognize some “tells” in my profiling, things that I know shouldn’t have a bearing on my impression of the rad who read the prior comparison case. Spotting them helps keep me honest. For instance, if a rad’s pattern of speech (or other reporting style) reminds me of another rad I once knew, I am on my guard against letting the old acquaintance color my attitude toward the new one.
Suppose an absolute charlatan from my first job tended to use a certain turn of phrase, and I came to associate that wording with him and his no-good radiological ways. Many years later, I am now reading a case and pull up a comparison from last month. The rad who read that comparison, no relation to the charlatan, turns out to use the same verbiage. Part of my brain is going to want to unfairly tar him or her with the same brush as the charlatan. I should resist that, right?
On the other hand, suppose that turn of phrase happens to be a handy mechanism for hedging, or otherwise reporting in a way that is intended to mask one’s ineptitude. It might be a lot more reasonable for me to be less trusting of such an obfuscator’s work.
It is not always so easy to decide. I am reminded of a social media post from some months ago, discussing one or more rads who routinely filled the “comparison” section of reports with verbiage like “The most recent prior studies available,” as opposed to “CT of 10/6/24.” Some rads expressed sympathy, noting that this could be a useful time-saver: one more field in your template or macros that requires no conscious thought or action.
Meanwhile, other rads (myself included) see that verbiage and can’t help but lose a bit of faith that a rad using such a shortcut is doing diligent, high-quality work. It raises the question: Is that rad really retrieving and reviewing relevant priors, or is he or she trying to gloss over a decision to cut some corners?
I can sometimes feel myself regarding such scraps of “could be this, could be that” evidence as droplets in the ocean of my profile for one rad or another. Individually, the tidbits amount to nothing, but as I gradually encounter dozens and then hundreds of prior reports from the same rad, the evidence mounts. Is this someone who has routinely missed things I know most rads would not? Am I constantly having to phrase my reports in ways that don’t point a finger at him or her?
Alternatively, if a rad I am inadvertently profiling consistently makes good findings (especially catching things I know I would have missed!), offers strong differential diagnoses, and generally dots the i’s and crosses the t’s, I am not about to let softer “evidence” against him or her prevail in my mind.