While taking call during the first half of my residency, I was approached by one of the ER docs, who wanted to review a chest x-ray. My prelim report was negative, but we dug up the films (yes, films! Remember those?) and hung them. Still looked fine to me, and nothing was glaringly wrong to the ER guy either. Except…
“What’s this?” he asked, pointing at the PA view. It was one of those spots where several structures (mediastinal, pleural, skeletal, etc.) overlapped to create a summation shadow. Nothing looked out of the ordinary, and I said so. “But what is it?” he persisted, his tone not accusatory but genuinely curious.
I knew that, given time and resources, I could tell him each of the anatomic structures which comprised the shadow. Somewhere at the other end of the hospital, there was a monograph our department’s chest-maven had given us that covered all such points of anatomic interest. There was also our residency-library, in which a time-consuming search might eventually yield a relevant review-article or textbook chapter. If I roamed the hospital in search of internet access, I might research online. But this was late at night, and I, the only radiology resident in house, was swamped with other stuff.
As a medical student, one of the most valuable lessons I learned was that “I don’t know,” while an entirely appropriate statement, is grossly underutilized. In medicine, particularly the diagnostic specialties, it can be very difficult to utter. Everyone (patients, nurses, other physicians) expects a consultant to be the Answer Man. For that matter, after four years of medical school, five years of residency, and maybe a year or two of fellowship, we can have similar expectations of ourselves. A study for which we can’t provide a conclusive diagnosis can be unsatisfying, frustrating, or embarrassing. “I don’t know” raises questions: Should I have known? Would someone else have known?
Even when one does manage to utter those difficult words, the ordeal is rarely over. Ensuing questions tempt one to recant one’s lack of knowing: “Well, what could it be?” “What’s your best guess?” “What do we do next?” One might succumb and fudge an answer, so as not to leave referrers empty-handed. No matter how carefully phrased and padded with caveats, such guesses can go awry (Rad: “Gosh, it could be anything…I guess it might be diagnosis XYZ?” Clinician: “Thanks.” Note in the chart: “Radiologist suspects XYZ.”).
Some areas of imaging have addressed this issue. Bone lesions leap to mind: Lacking a definitive diagnosis, one can identify aggressive or nonaggressive features, and place the abnormality into categories of “touch” or “don’t touch.” Thus, the answer to “what is the lesion?” can be “I don’t know, but here’s what I can tell you about it…” A similar solution came to me during that night on call: I managed to nudge my ego aside and tell the ER doc that no, I couldn’t tell him precisely which normal structures overlapped to create the summation-shadow which had caught his eye…but whatever it was, it was comparable to hundreds of chest-films I’d seen before, and I could assure him that it was non-pathological. That satisfied him.