Every imaging facility should have a solid peer review system, and peer review data should be used in the careful development of quality metrics for radiology performance.
I recently visited Philadelphia, and while there, I came across a number of famous quotations, including this well-known proverb often attributed to Ben Franklin: “Those who live in glass houses shouldn’t throw stones.”
I’ve relied on that notion many times as a radiologist. Why? Because to me, ours is the most humbling of specialties. We all make errors, and our errors are usually plainly evident in perpetuity. So I’ve been hesitant to be overly critical of others’ mistakes.
But lately I’ve started to reconsider that position. Isn’t it also an abdication of my responsibility for quality in my profession? Specifically, I’ve come across poor-quality reports, often read by non-radiologists, that have led to over-imaging or incorrect treatment. While I’ve made my share of errors, I know about them because they are pointed out to me, and as a result, I can consider what I can do to improve.
But what about when someone orders their own test and reads it, or one of their partners reads it? Is there oversight for this? There is typically no one else to feed back an “over-read” disagreement if you read your own studies. Do those readers find out what mistakes they made? Moreover, are they challenged to improve and held to any standards? I think this is an area where health imaging is sorely lacking.
So this points to a few things:
First is the importance of peer review for all readers, and peer review with consequences. Every imaging facility needs to provide a peer review system, one that allows errors to be verified by second (or additional) peer reviews. To me, there should be grading not only of accuracy but of clinical relevance. For errors that are clinically relevant and agreed upon by reviewers, there should be some form of education and redirection. For repeated errors, there should be mandatory CME or other documentation of competency.
Second is the use of peer review data in the careful development of quality metrics for radiology performance. Such metrics would cover far more ground than report accuracy alone, spanning a variety of service errors, omissions, and dysfunctions, but they should include elements of accuracy and report completeness. Familiar metrics include turnaround time, but they can extend well beyond this to safety data and consistency in fulfilling documentation requirements. Such metrics should be emphatically directed at items that improve the patient experience and patient safety.
We owe our patients this much. And we certainly can’t be critical of anyone else’s quality if we do not first have our own house in order.