While it stands to reason that referring physicians would prefer a condensed summary of relevant imaging findings, the unpredictable demands of insurers, patients, and other possible readers of the radiology report may warrant an inefficient, minutiae-cluttered approach.
Reading cases was simpler in residency.
It didn’t feel that way at the time. Learning so much stuff all at once was bewildering. It was like being in the middle of a parade and trying to catch every piece of confetti.
In a way, that was key to the simplicity. At that point in a budding rad career, the mission for interpreting cases is about as clear-cut as can be: Identify everything abnormal or otherwise noteworthy on all images, even if 99 percent of it has no bearing on the pertinent diagnosis or clinical scenario. Miss anything and it becomes a potentially embarrassing teaching point.
Back then, I had a target audience of one. My preliminary reads had to pass muster with the attending radiologist. Period.
Gaining a bit of experience and working with different attendings, I saw that I actually had an audience of one at a time. I might, for instance, learn that attending A expected me to point out every ditzel on a scan while attending B wouldn’t mind if I glossed over such things. Attending C might actually give me a little grief for creating a minutiae-cluttered report.
I also started to learn that certain types of imaging studies demanded tailored reports. A random spinal X-ray from the ER didn’t need the same measurements as a formal scoliosis series. I didn’t have to pull out Greulich and Pyle for every pediatric hand film that came my way.
Things plodded along that way until the later years of residency. Then, covering overnights with more advanced imaging, my audience of one began to morph into a multitude.
(Editor’s note: For related content, see “Current Insights on Multimedia Radiology Reports,” “Enjoying Quiet Moments Amid the Boilerplate Blather in Radiology Reporting” and “Reinventing Radiology Reports in the Age of Value-Based Care.”)
For example, the ER sends a trauma patient for head-to-toe CT workup. I render my prelim—which, again, has to cover everything I see on the images if I don’t want to be embarrassed during the next morning’s readout. All of the patient’s injuries (or lack thereof) are included. The ER and trauma team are satisfied.
Unfortunately, the scan also reveals non-traumatic incidentals. The patient happens to be undergoing a Crohn’s disease exacerbation. There may be a malignancy. (In a patient with known cancer, the trauma scan may inadvertently serve as an unplanned restaging study.)
As the ER processes the patient for admission and calls a bunch of consults, each of the relevant teams comes along to review the imaging. Hopefully, my laundry-list prelim contains all of the details they will want, but maybe it doesn’t. Alternatively, having made my interpretation with a view to the acute trauma setting, I haven’t quite made the gazillion measurements I would have for a known restaging. For whatever reason, the subspecialists might request more details from me. Maybe they come armed with some prior scan from another facility that I didn’t originally have, and now they want a comparison.
Fast forward almost 20 years, and the potential audience for my radiology reports has grown vastly. There is now truly a multitude of folks who might eventually look over anything I dictate. I have no way of knowing who’s going to be involved or what sort of details they will want. Sometimes, through the vagaries of the RIS and a dearth of quality clinical histories, I don’t know who’s referring the patient or why. There isn’t even an audience of one.
It’s no longer just about other docs. Technology has advanced considerably. Imaging gets around that much faster. Patients, their families, and social media are a lot more likely to weigh in with questions and armchair theories. Insurers are ever bolder about denying payment for already-rendered health care unless reports contain certain verbiage. Non-physician regulators feel free to require their own wording.
“Know your audience” is common advice in more than a few venues, but when I render a radiology report, I really have no way of knowing who’s liable to be in my audience. It’s pretty much a given that anybody at any time can turn up and demand an addendum from me. If I think that addendum isn’t warranted, the burden is overwhelmingly on me to explain why. Otherwise, I could get labeled a “disruptive physician” or cast as a villain.
Of course, most rad gigs have no incentives or productivity tracking for any of this. If I read a case and never have to look back at it, my credit is the same as if half a dozen clinicians individually call me to walk them through it, compare it against some prior study they belatedly provide from another facility, etc.
I have known folks from other walks of life who think this is absurd. This includes a couple of graphic designers and writers, for instance, whose standard deals include their original work plus up to two revisions. Any more adjustments carry extra charges, both because they’re taking more of the professional’s time and to serve as pushback against endless demands. Then again, those folks also thought it was nuts that docs would do things like read X-rays for $5 a pop.
As it stands, my best hope of satisfying the multitude is to generate every report as if I were back in residency—measuring everything, commenting on it all as if any tiny detail could be relevant to someone, and taking far longer to complete each case. Meanwhile, I would be generating needlessly lengthy reports that would take far longer for referrers to read, potentially burying what they wanted to know in a wall of words.
That’s exactly the opposite of what any given member of the multitude wants from me: nice, succinct reads focused on their particular interests. I would love to oblige! But how can I do that without potentially dissatisfying the rest of the multitude and creating more work for myself down the line?