As recommendations have become a commonplace expectation on radiology reports, is there a point where we are crossing the line?
I am not much of a radiology journal reader. Every now and then, especially when I’ve got to catch up on CME requirements, I go on a minor binge and see what has happened in the past few months. Otherwise, a headline has to reach out and grab my attention for me to click on it.
Earlier this month in the Journal of the American College of Radiology (JACR), a study examined how often referring clinicians concurred with the “follow-up” recommendations in radiologists’ reports. Overall, the concurrence rate was a bit over 88 percent, which sounds harmonious. The authors dissected things a bit; for example, surgeons were less likely to agree.
I didn’t dig into the details about how the study got done, but it provoked a bit of thought about just how many of our rad reports contain recommendations. To examine this sort of thing in a scholarly fashion, there needs to be a certain “critical mass” of recs. The ghost of my younger radiological self nudged me: “Hey! Remember when most of our reports didn’t contain recommendations? They used to be the exception, not the rule.”
That might not make a lot of sense to more recently minted rads, but I am approaching dinosaur status. I completed residency in 2004, and there wasn’t a whole lot of routine recommending going on. BI-RADS was already a thing, but the Fleischner Society guidelines weren’t out yet. Advice, for follow-ups or otherwise, was something of a Wild West. Everybody had their own ideas about what to suggest on the rare (as compared to now) occasion they suggested anything.
We would, of course, pipe up if someone had ordered the wrong type of study for a given issue, or if some finding could be better characterized by another modality. However, even these instances carried a certain negative connotation. It was in vogue to accuse rads of self-referral, lining their own pockets.
A rad’s suggestion of additional imaging could also be perceived as disrespect for referring clinicians. Couldn’t they make such decisions for themselves? Worse, we might be tying their hands. If we advised something on the record, a medical malpractice lawyer might later crucify them for not doing what we had suggested.
At the time, I was a lot closer to my own pre-radiological clinical experience. It was very much in my mind that one shouldn’t order imaging, or even much bloodwork, without a plan. If testing shows A, I will do this. If it shows B, I will do that. “Send a bunch of bloods and scan the patient’s whole body” as a fishing expedition was not practicing medicine. Expecting the radiologist to tell you what to do next was the thinking of a charlatan.
If that was the case, and referring clinicians had a clear plan of action based on what the imaging showed, who the heck was I to come along and fling unsolicited recommendations at them? I might as well stroll up to a couple of guys playing chess in the park and tell them what moves they should make.
Of course, things changed as the years drifted by. Along with the Fleischner guidelines, we saw BI-RADS’ cousins (Lung, LI, TI, PI…) enter the picture. Scans got quicker and easier to do, and there was less gatekeeping from the actual radiologists. Emergency rooms increasingly embraced the usage of CT as a triaging tool. Referring clinicians got overwhelmed with patient volume, and a lot of management (both related to imaging and otherwise) got offloaded to non-physicians.
In the process, the stigma of routinely recommending things in rad reports faded and got replaced not only with acceptance but even an expectation of it. A lot of rads now have a “Recommendations” section in their dictation templates, whether because they decided it was a good idea or because they got tired of getting addendum requests for not having made any recommendations. I wouldn’t be surprised if a bunch of residents are being trained to do this.
At some point, the scope of our recommendations began to expand. Some of it was still very much in our bailiwick, for instance specifying how a follow-up study should be done to avoid limitations present in the current exam. I have received way too many “f/u nodule” chest CTs, for instance, marred by respiratory motion or obscured by exacerbations of asthma, COPD, etc. I thus have macros essentially saying, “Hey, sorry this follow-up didn’t resolve the issue, but maybe schedule the next one for when the patient can control his breathing.”
A step further beyond our radiological borders, but still reasonable, was when we started talking about clinical stuff. I’m not talking about a vague mention of “correlation,” but a targeted physical exam. For instance, on a current scan, we happen to notice a dermal lesion that wasn’t there before, or an asymmetric density in a breast. Once upon a time, we might have given offense by implying that the referrer hadn’t already done a thorough physical exam to find anything of relevance. Now, it is kind of an expectation that such things may have been overlooked. If we don’t point them out, a melanoma or breast cancer may remain under the radar as it grows.
I still raise an eyebrow when I see rads recommending input from other clinical subspecialties. The most common is: “Surgical consultation recommended.” Are you trying to tell the referrer that you don’t think he or she knows how to manage the patient? (Granted, some don’t.) Suppose the referring doc has no idea why you said that, but obediently gets the surgeon or whoever else involved. Is this third party supposed to magically know what you expected him or her to do?
When I need to go out on a limb like that, I at least try to provide some direction: “A tissue diagnosis will probably be necessary,” or “At this size, (lesion) is often removed on an elective basis.”
Swimming in the radiological waters of the past couple of decades, I can’t help but have been steeped in them a little, and I definitely include more recommendations than I once did. Old habits (and attitudes) die hard, however, and I still pump the brakes whenever I feel like I might be drifting into someone else’s clinical lane.