Quality assurance works best as a learning opportunity rather than a mere chance to point out mistakes.
I’ve written more than once about Quality Assurance (QA) in our field, specifically about how it’s often poorly executed. I could easily fill this week’s entry by rehashing various procedural QA missteps perpetrated by former employers…but I’ll resist the temptation.
I once proposed a more collegial system, focused less on tracking people’s stats and more on examining perceived errors, with an emphasis on correcting those errors. Perish the thought, such a system might even leave those involved feeling as if they’d learned something from each episode, instead of accused and/or shamed.
In a nutshell: If team member A thought that B had erred, A reached out to B to discuss. Only if the two of them failed to come to an agreement might things get punted to a QA committee. Usually, however, either A or B would realize that the other was right, and thus learn something. If B had indeed erred, B might addend the report and/or call a clinician to address the error.
This, of course, requires a certain egalitarian culture in a group’s practice. If A and B have differing levels of clout (academically or power-wise within the group), it becomes tougher to have an intellectually honest exchange over such things. One of many examples: if A had started working with the group just a few weeks earlier and B was a senior partner, that initial “Hey, I think you made a mistake” call might never happen, for fear that the newbie would make himself persona non grata.
So it was that, just this past week, I got a jingle from one of the established members of my group (I myself am a mere 19 months in), regarding an abdominal MR I’d read not long before. I’d seen an enhancing renal lesion without reassuring features, and came down pretty strongly on the notion that this was a malignancy.
The more-senior member of the group believed there were signs of fat in the lesion, making it an angiomyolipoma (a benign entity, for ye not in the know). Thus, to avert needless surgery and the like, I might want to review the images and addend my report.
He called when I was in the last few minutes of reading through a pan-scan following another patient’s cancer, which I wanted to finish while everything was fresh in my mind. Still, I didn’t want the other rad to have to sit around waiting for me, so I noted the case’s info and said I’d have a look, hoping it didn’t sound like I was blowing him off.
When I did get a chance to open the case, for the life of me I couldn’t see what he was talking about. I went back and forth between the imaging sequences on the MR and couldn’t find a hint of fat in the lesion. Yet, from what I knew of him as a remote telerad in the group (my interactions with him could be counted on one hand), the other rad was a capable guy, and he’d stuck his neck out to tell me he thought I’d made a mistake. I was more inclined to believe him than my own lying eyes.
Conflicted on what to do next, I reached out to another one of the rads who’d struck me as more than a little competent during the preceding months. I asked him to take a look without saying anything to bias him. He concurred with me.
As ego-gratifying as that was, I almost wished he hadn’t. If he’d pointed out something that showed me I was wrong, I could have sheepishly made my addendum and gone on with my day. Now, though, how should I proceed? Call back the original senior rad and tell him I was standing my ground, perhaps coming across as arrogant or defiant? Let sleeping dogs lie and hope the senior wouldn’t circle back later to find that I’d ignored his sage advice?
I needn’t have worried. A couple of hours later, the senior guy shot me a message to say that he’d run the case by another rad he trusted, who had agreed with me that the lesion was, indeed, likely malignant. That reassured me that nobody was going to have issues with my reading, but, more importantly, it filled me with an awful lot of gratitude and respect for the senior rad.
He could easily have said nothing more on the matter, leaving me to whatever uncertainty the exchange had created. A lot of folks would have done exactly that: it would be the path of least resistance and energy expenditure. Plus, having stuck his neck out in the first place to advise me that he thought I was wrong, who would blame him for not wanting to revisit the subject? That, right there, was intellectual honesty of a sort one rarely gets to see.
So, let’s review what this informal little episode of QA achieved: A perceived error was raised directly, colleague to colleague, rather than logged against anybody’s stats. The reading in question got second and third opinions and was ultimately upheld, sparing the patient from having a likely malignancy relabeled as benign. And everybody involved came away having learned something, with nobody accused or shamed.