The government and insurance companies will eventually run out of ways to further complicate the system. Until then, I'd like to suggest some categories of diagnostic codes that we would actually find useful.
Not long ago, I was calling in markedly abnormal results on a scan. The ordering ER doc, a good ol’ boy if his accent was any indication, expressed his view that the patient’s clinical condition correlated with my findings: “Yep, he’s as sick as dirt.”
It got me thinking, not for the first time, that we’re awfully regimented about the reasons for imaging studies and the diagnoses made from them. At least, the insurance companies and government are. While our clinical histories are often no more elaborate than “pain” or “post-op” and many of our impressions are “normal study” and “no acute disease,” the government and insurance companies require us to describe things a little differently if we expect to be paid rather than audited. If you haven’t seen some of the ICD-10 codes, you should take a peek. It’s enough to make you laugh (or cry, depending).
I’m sure some folks out there find real value in having codes that distinguish left elbow pain after being hit by a Nerf football from left elbow pain after being struck by an errant ping-pong serve. I’ve got to imagine that they’re going to run out of ways to further complicate the system sooner or later. Before they finally sit back and proudly declare that there’s nothing more to be done, I’d suggest adding some categories of diagnostic codes that we clinical folks would actually find useful.
The first category, exemplified by my ER buddy above, would be for patients who are visibly not doing well without a clear clinical indication as to why. This is often the case for the demented, the delirious, or the simply uncommunicative. A good example would be the occasionally used “failure to thrive” quip regarding a 102-year-young nursing home resident with multiple medical problems, any one of which could be acting up.
This batch of diagnostic codes could run the spectrum of severities: “Sick as dirt” would be near the deeper end of the pool, perhaps a little worse than merely “sick as a dog” but somewhat better than “looks like death warmed over.” At the mild end of the scale would be a line most interns have heard when summoned from the call-room at 3 a.m., because “nurse doesn’t like the looks of the patient.”
Another category could be for studies ordered without an apparent reason at all, even to the person doing the ordering. For instance, a primary-care clinician gets a pulmonary consult, and the recommendation is for a high-res chest CT (or the primary-care guy knows that pulmonary will want one before even seeing the patient). “Subspecialist recommends study” makes more sense than some of the ICD codes I’ve heard. How about when the study is ordered by a resident, or even a med student, who’s just trying to get his scut work done and didn’t hear why his attending told him to get the MRI? Perhaps the attending didn’t even see fit to explain his reason to his underling. “Superior housestaff officer wants the study.”
Another batch of codes could address situations in which there’s really no medical reason for imaging, but not doing a study could have bad repercussions. For instance, “patient expects imaging.” This would address the patient who came in with a cough (present for a month, but somehow worthy of an ER visit at 2 a.m.), sat in the waiting room for four hours while actual emergencies were managed, and is now ready to blow a gasket if discharged without any expensive testing. I don’t know how many of my imaging brethren know about the joys of Press Ganey scoring, but our colleagues in the ER unfortunately have to pay attention to this sort of thing far too much.
“Defensive medicine” could be another code in this category. That way, the next time champions of the trial lawyers claim that tort reform would lead to no real reduction in medical costs, it wouldn’t take any troublesome research to turn up the stats to prove them wrong; just tally up all the studies done under the CYA code.