In decision support for imaging studies, who calls the shots?
Longer-term readers of this column may recall that I have slowly but surely been working my way backwards through stacks of journals which have accumulated on tables, shelves, and various other horizontal surfaces of my home over the years. New issues get priority, as their contents are more likely to be relevant to current practice, but an item from early 2012 just caught my attention.
The piece examined the impact of putting “clinical decision support” into a computerized order-entry system, specifically for the researchers’ ER in the setting of ordering pulmonary CTAs. I won’t get into the details, but they reported an over 20% decrease in usage and almost 70% increase in diagnostic yield of their CTAs over a two-year period.
Sounds impressive, right? Does anybody who actually reads these studies think they aren't over-ordered? In the past 10–15 years, I've seen them go from a very small minority of the daily CT load to one of the most common studies gracing my worklist. It's now a rare exception when I get a chest CT that isn't a CTA.
It’s not hard to imagine why. Thrombosis in the pulmonary arteries is nothing to take lightly, and isn’t exactly a rare bird. It can present in more than a couple of ways, and can even be an incidental finding. It’s not uncommon for health care professionals, let alone med students, to be taught that if you even think of PE in a given situation, you should probably rule it out.
All of which makes it a dicey matter to try spelling out, in one-size-fits-all, cookie-cutter fashion, circumstances under which a physician should not order a CTA. Some other denizens of the health care sphere who have gained the ability to order scans may be comfortable blindly using checklists and guidelines. However, a doc with four years of med school and another three or more years of postgraduate training, plus however many subsequent years of practice, is going to be very mindful of how protean disease (let alone PE) can be, and resistant if not outraged when someone who hasn’t even seen his patients tries telling him how he should be managing them…especially when the liability of a bad outcome is his, not the protocol writer’s.
I, myself, have not seen many of these “decision support” systems in action. I do not know whether they most commonly take the form of gentle suggestions and reminders to clinicians, or more forceful directives. I would not be surprised if in some facilities this “support” winds up meaning that a clinician, in order to get the test he reasonably believes his patient needs, must enter false or misleading information to get the computer-based order entry system to comply. Or there may be more insidious coercion, wherein a physician doing what he feels is right for his patients runs the risk of administrative or financial punishment for not meeting metrics…perhaps even those established at a state or federal level in the name of “quality.”
The term “decision support” implies that the decision-maker is still calling the shots, and that the support mechanism is there to help, rather than hinder or usurp, the decision-making process. Hopefully, if such support systems become more widely adopted and/or required, this won’t ultimately turn out to be another instance of friendly-sounding verbiage covering up an uglier truth.