…ask yourself: What am I trying to accomplish here?
Yes, QA is an unwelcome headache that most of us, “in the trenches” of radiology, would gladly see vanish forever. It takes time out of far-too-busy days, it puts one in the uncomfortable position of rendering judgment upon colleagues (and being judged by them), and it has the potential for unpleasant future repercussions in one’s career.
That is, as I’ve seen QA conceived, mandated, and executed thus far. I like to imagine it conducted a different way, but I’ll get back to that later.
Typically, a radiologist is required to review some number or percentage of cases, decide whether the other rad erred, and possibly declare how bad the flub was. The reviewed rad may or may not have the opportunity to learn what it’s alleged he did wrong, or failed to do right. If he believes he performed properly, or that his error wasn’t as egregious as all that, he might have the chance to offer a rebuttal. Someone, who may or may not be any more capable than the other two rads, decides who was right.
In other words, the ball only gets rolling on the “say-so” of a radiologist going through cases he’s required to review. There’s a lot of discretionary power there…and I don’t think that discretion is exercised as much or as often as it should be.
Individual attitudes and styles varying as they do, I’m sure there are more than a few rads out there who consider the whole system a waste of their time, and who minimize that waste by rapid-fire clicking of “I agree” on every single interpretation they’ve been given to over-read.
I’m also sure there are plenty of rads out there who use this as an opportunity to take colleagues down a peg or three, nitpicking and alleging error on every single case they see. Some might consider it defensive maneuvering; their own accuracy rate might look better if they ensure that everyone else’s is routinely diminished.
Most of us are somewhere in the middle, and probably influenced at any given QA moment by our current circumstances, including things like mood and busyness, which should have no impact on whether a given QA case is found to be right or wrong.
Since such subjective influences can be subtle and not immediately apparent to the QA reviewer, I’ve made it a part of my routine, before clicking a “disagree” button of any sort (our QA system is all computerized; no Scantron-type forms for us!), to ask myself: Is anything constructive being accomplished if this becomes a QA case?
Some reasons why the answer might be a resounding NO:
• Patient care is not being impacted, nor would it likely be if the same rad read similar cases the same way for the next year.
• The “error” being alleged boils down to a difference in personal reporting styles between the original rad and the QA reviewer.
• The QA reviewer is conveying an academic point (for instance, a neurorad casting aspersions on a general rad who’s daring to read spinal MR in a way not up to the neuro guy’s standards).
• The system of QA being utilized is less than anonymous, and the relationship between the two rads is coloring the process.
• The alleged “error” is that the original interpretation was either less definitive or insufficiently hedge-y in the reviewer’s opinion…without a substantial difference in the overall meaning of the report.
I could come up with more, but you get the idea. The upshot is, before I “pull the trigger” on the QA process and give another radiologist a crisis of confidence, a bout of anxiety, an ulcer, or a “ding” on his professional record, I feel I’d better have a pretty good reason for it.
I’ve had occasion to circumvent the QA process (should I be admitting that in public? I might get dragged into a black van and spirited away for reeducation). Once in a while, I’ve noted a case where a colleague significantly erred, and had the ability to reach out to said colleague to alert him. While such conversations can be awkward, they have almost always turned out well, as:
• The erroneous rad appreciates not being “turned in” to the QA machine.
• If he feels that no error was made, it can be discussed in an informal, off-the-record way. He might be right!
• If he agrees that there was an error, the ability to rectify it (phone clinicians, addend the report, etc.) is in his hands…and he can fix it without the delay that a QA process would entail.
• Either way, one or both rads learns something from the experience.
• The QA adjudicator(s) have one less item on their overflowing plate.
Which gets back to my earlier statement: I believe that such “off-the-record” handling of discrepant interpretations should be added as an option to the current QA system.
Easy enough to do if a given QA system isn’t bilaterally anonymous (the reviewing rad can just note the name of the original reader and pick up the phone). Otherwise, it’s also easy to include another menu option for case reviewers: Along with “agree” and “disagree,” there could be a “might disagree/ask for discussion” button.
Choosing the third would shoot the original reader a message that a QA entry might be coming, and ask whether he’d like to discuss it with the reviewer. Declining would shunt the case into the traditional QA discrepancy flow. Accepting would, as likely as not, resolve the matter without need for further formal procedure (unless the reviewer felt that the discussion didn’t sufficiently settle things).
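For the software-minded, here’s a minimal sketch of how that third-button routing might work. Everything in it (the ReviewChoice names, the flags, the returned labels) is hypothetical illustration under my assumptions about the flow, not the interface of any actual QA product:

```python
from enum import Enum, auto

class ReviewChoice(Enum):
    AGREE = auto()
    DISAGREE = auto()
    DISCUSS = auto()  # the proposed "might disagree/ask for discussion" option

def route_qa_case(choice: ReviewChoice,
                  reader_accepts_discussion: bool = False,
                  discussion_resolved: bool = False) -> str:
    """Return the next step for a reviewed case under the proposed flow.

    Hypothetical helper: parameter names and return labels are
    illustrative only, not taken from any real QA system.
    """
    if choice is ReviewChoice.AGREE:
        return "close case"
    if choice is ReviewChoice.DISAGREE:
        return "formal QA discrepancy review"
    # DISCUSS: notify the original reader and offer an off-the-record talk
    if not reader_accepts_discussion:
        # Declining the chat shunts the case into the traditional flow
        return "formal QA discrepancy review"
    if discussion_resolved:
        # The two rads talked it out; no further formal procedure needed
        return "close case (resolved informally)"
    # Reviewer felt the discussion didn't sufficiently settle things
    return "formal QA discrepancy review"
```

Calling route_qa_case(ReviewChoice.DISCUSS, reader_accepts_discussion=True, discussion_resolved=True) would close the case informally; declining the chat, or leaving it unresolved, falls through to the formal discrepancy review, just as described above.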
Yes, yes, I know that this would take a measure of power out of the hands of the bean counters and administrators, and they might squawk about it. Which wouldn’t be such a bad thing (the restoration of some autonomy to us professionals, I mean…although such squawking might be satisfying to hear, too).