“Could someone please help me?”
“I'm Dr. Trefelner; I'd be happy to.”
“I need a subspecialist to read a skull series for left occipital headache.”
“I have my neuroradiology CAQ, cardiac CT CAQ, and a chest and body fellowship, so I should be able to help.”
“So actually you're just a general radiologist.”
“Excuse me?”
“Obviously you don't focus on one area. How many skull x-rays have you read in the last 10 years for left occipital headache, and what was your accuracy rate?”
“Not a lot…but I can tell you that this study is normal.”
“Sorry, but I need a real expert with a proven track record. I'll find someone online.”
“Let me know when you do; now I have a headache.”
I have to agree with the clinician above that yes, I am a general radiologist. I have a lot of subspecialty training, but maybe that isn't enough anymore. I just returned from the ACR conference “Maximizing Value in Radiology Through Quality and Safety Improvements” in Phoenix. It included a presentation entitled “Professional Outcomes: Proving Our Worth” by Craig Blackmore, M.D., MPH, scientific director of the Center for Healthcare Solutions at Virginia Mason Medical Center in Seattle.
To sum up his talk: toss out your board certification, burn your CAQ certificates, and shred your fellowship diplomas, because they mean nothing without measurable metrics to prove your worth. And sadly, I have to agree with him. Someone who earned board certification 20 or 30 years ago isn't necessarily still an expert. I've worked with radiologists who were still basing their readings on their training with Dr. Roentgen. But are metrics the trend for radiology?
Measurable metrics? What about publicly reported metrics? On Sept. 7, 2010, the Society of Thoracic Surgeons began reporting ratings on more than 90% of the 1,100 cardiac surgery programs in the U.S. One to three stars are assigned based on 11 performance measures. The society has already been praised for its actions and, at the same time, heavily criticized for stopping short of reporting the names and results of individual physicians. Most experts believe this next step will happen soon.
Already those groups with low rankings are feeling the pressure to get their scores up or lose patients and revenue. And if you think this is an aberration, hold onto your seats. This fall the Texas legislature unanimously passed (Unanimously? A group of politicians? It must be some kind of miracle) a law requiring public universities to post online the budget of each academic department, the curriculum vitae of each instructor, full descriptions and reading lists for each course, and student evaluations of each faculty member, according to an Oct. 23 report in the Wall Street Journal.
Such data mining now opens professors to a problematic cost-benefit analysis, one that often ignores other, intangible contributions. Not unexpectedly, professors who teach easy courses are ranked higher by students than those who teach difficult classes like statistics. Consumer healthcare advocates see this sort of web-based reporting as a model for hospitals and medical groups and are currently lobbying for similar legislation. (No more golfing during those CME courses!)
In his paper, “Productivity of Physician Specialization: Evidence from the Medicare Program,” Amitabh Chandra, Ph.D., an economics professor at Harvard and a fellow with the Dartmouth Institute for Health Policy, uses Medicare data to demonstrate “that [geographic] areas with relatively more medical specialists have much higher spending per Medicare beneficiary, but do not produce higher quality of care, higher patient satisfaction, or lower mortality.” Curiously, areas with fewer specialists deliver less costly care with similar outcomes. Why? Specialists order more tests and charge more. That is why, in this era of cost containment, our government is trying to train fewer specialists and more primary care doctors, which leads to the question: is radiology going in the wrong direction with increasing subspecialization?
George Bernard Shaw said, “No man can be a pure specialist without being in the strict sense an idiot.” Zing! There are advantages to being a general radiologist. I can remember during my training when clinicians would come to the reading room to review a complex patient case with multiple imaging studies. The highly trained university professors in charge would refuse to read studies outside their narrow spheres of expertise and tell the clinicians they would need to visit the ultrasound, chest, neuro, body, and nuclear sections for help in pulling the information together.
The truth is that most studies are negative, and those that are positive are usually basic meat-and-potatoes cases. The 10% that need an expert can always be referred. I believe tracking and reporting this sort of performance data will prove our worth and afford us opportunities to improve where we are weak. It will also facilitate the removal of colleagues who are past their prime.
Now, if only we could implement similar policies for members of Congress. As I understand it, Senator Byrd died at age 92 in July, and yet he is still introducing legislation and voting. I suspect there are a lot of the undead in Congress who, short of a stake to the heart or a silver bullet, will be hard to pry loose, even after an election.