Every imaging facility should have a solid peer review system, and peer review data should inform the careful development of quality metrics for radiology performance.
I recently visited Philadelphia, and while there, I came across some of Ben Franklin’s more famous quotations, including this well-known one: “Those who live in glass houses shouldn’t throw stones.”
I’ve relied on that notion many times as a radiologist. Why? Because, to me, ours is the most humbling of specialties. We all make errors, and our errors usually remain in plain evidence in perpetuity. So I’ve been hesitant to be overly critical of others’ mistakes.
But lately I’ve started to rethink that position. Isn’t it also an abdication of responsibility for quality in my profession? Specifically, I’ve come across poor-quality reports, often read by non-radiologists, that have led to over-imaging or incorrect treatment. While I’ve made my share of errors, I know about them because they are pointed out to me, and as a result I can consider what I can do to improve.
But what about when physicians order their own tests and read them, or have a partner read them? Is there oversight for this? If you read your own studies, there is typically no one else to feed back an “over-read” disagreement. Do those readers find out what mistakes they have made? Moreover, are they challenged to improve and held to any standard? I think this is an area where medical imaging is sorely lacking.
So this points to a few things:
First is the importance of peer review for all readers, and peer review with consequences. Every imaging facility needs to provide a peer review system, one that allows errors to be verified by a second or additional peer reviews. To me, there should be grading not only of accuracy but of clinical relevance. For errors that are clinically relevant and agreed upon by reviewers, there should be some form of education and redirection. For repeated errors, there should be mandatory CME or other documentation of competency.
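Purely as a sketch of how such a system might record and escalate findings, here is a minimal Python model. The class names, fields, and thresholds are my assumptions for illustration, not an existing standard or product.

```python
from dataclasses import dataclass, field

# Hypothetical grading record for a single peer review. The two-axis
# grading (accuracy plus clinical relevance) mirrors the scheme described
# above; all names and cut-offs here are illustrative assumptions.
@dataclass
class PeerReview:
    reviewer_id: str
    discrepancy: bool          # reviewer disagrees with the original read
    clinically_relevant: bool  # the disagreement would change management

@dataclass
class ReportCase:
    report_id: str
    reviews: list = field(default_factory=list)

    def confirmed_relevant_error(self, min_reviewers: int = 2) -> bool:
        """An error 'counts' only when flagged by a second (or further)
        reviewer and judged clinically relevant by those who flagged it."""
        flagged = [r for r in self.reviews if r.discrepancy]
        return (len(flagged) >= min_reviewers
                and all(r.clinically_relevant for r in flagged))

def consequence(confirmed_errors: int) -> str:
    """Consequence ladder sketched above; the thresholds are assumptions
    a facility would set for itself."""
    if confirmed_errors == 0:
        return "no action"
    if confirmed_errors == 1:
        return "education and redirection"
    return "mandatory CME / documented competency review"
```

The point of the two-reviewer check is the verification step: a single disagreement remains an opinion until a second reviewer confirms it, which keeps the consequence ladder tied to agreed-upon, clinically relevant errors rather than isolated disputes.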
Second is the use of peer review data in the careful development of quality metrics for radiology performance. Such metrics would cover far more ground than report accuracy alone, encompassing a variety of service errors, omissions, and dysfunction, but they should include elements of accuracy and report completeness. The metric we are all aware of is turnaround time, but metrics can include far more than this, including safety data and the regularity with which documentation requirements are fulfilled. Such metrics should emphatically be directed at items that improve the patient experience or patient safety.
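To make two of those measures concrete, here is a small Python sketch of turnaround time and documentation compliance. The field names and the report schema are assumed for illustration only.

```python
from datetime import datetime

# Illustrative performance metrics beyond report accuracy. The report
# dictionary keys are assumptions for this sketch, not an established schema.
def turnaround_minutes(exam_completed: datetime, report_finalized: datetime) -> float:
    """Turnaround time from exam completion to final report, in minutes."""
    return (report_finalized - exam_completed).total_seconds() / 60.0

def documentation_compliance(reports: list[dict]) -> float:
    """Fraction of reports fulfilling all required documentation elements
    (e.g., critical-result communication noted, comparison studies cited)."""
    if not reports:
        return 0.0
    return sum(1 for r in reports if r.get("documentation_complete")) / len(reports)

# Example: an exam completed at 9:05 with a report finalized at 10:35
# is a 90-minute turnaround.
print(turnaround_minutes(datetime(2025, 4, 15, 9, 5),
                         datetime(2025, 4, 15, 10, 35)))  # 90.0
```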
We owe our patients this much. And we certainly can’t be critical of anyone else’s quality if we do not first have our own house in order.