Forget the controlled environment of the MOC exam. Here’s my version of an exam to really test real-world radiology aptitude.
I’m back from my first visit to the Maintenance of Certification examination. Hopefully my next such trip will be in about ten years, as opposed to a few months. Now, along with the others who sat for the thing, I get to sit and fret for a couple of weeks before my actual results show up.
Still, what with the massive winter storm around the time of the exam, I suppose I can consider it a triumph that I actually managed to get to and from the test without a hitch. (Note to any ABR honchos who may be tuning in: If you continue to use Chicago as a testing center, maybe winter isn’t the best option? How about a nice, sunny spot for us East Coast rads who don’t want to schlep to AZ? I nominate South Florida.)
As most radiologists with any awareness of the MOC exam can tell you, I would have to have my head examined if I proceeded to describe the test itself, since the ABR reserves the right to do all sorts of horrible things to examinees who share their experiences with one another. Just as well, since most reading this have probably sat for more than a few exams already. You plunk yourself down in front of a computer (or bubbled paper and a No. 2 pencil), plow through it, and, after a few hours of your life have drifted away, you leave.
This makes for a nice, controlled, uniform environment in which examinees are on a level playing field (aside from individual differences in their preparation for the occasion). Were I designing an exam to pronounce who is(n’t) cut out to continue practicing as a radiologist, however, I would not be aiming for a standardized, tidy experience. I’d want to simulate what the “real world” of radiology is, and appears likely to remain for the foreseeable future. Call my exam the Radiology Aptitude Test, since the acronym RAT would serve multiple purposes (including, perhaps, a cartoony little mascot which could then be marketed for additional profits).
As with MOC, examinees would, in signing up for the RAT, specify the subspecialty areas in which they felt most experienced and/or capable. That’s about where the similarity ends, however. On the day of the exam, they find their choices more or less ignored. Thought you were going to be tested on neuro and MSK? Too bad, because you’re getting mammo and nukes; that’s what your simulated employer wants from you. This tests the radiologist’s ability to deal with showing up to a new job and finding out that his interview-day talk of working within his subspecialty was just lip service.
The other initial surprise offered by the RAT is that each section of the exam you are taking contains about twice as many questions as it was supposed to. This simulates the experience of showing up to your radiology workplace and finding a heap of leftover cases from the previous guy that are now your problem (and yes, any cases you leave in turn at the end of your shift will be held against you). Good news, though: You are permitted to stay as late as you like to get through it all - try not to be distracted by any plans you had for later in the day, such as familial/social obligations or your flight home.
Also unique to the RAT are repeated interruptions of the examinee. A virtual phone intermittently materializes to obscure the current case or question. The examinee can choose whether to answer it immediately, return the call in a little bit, or ignore it and hope the caller solves his own problems (or finds another, more willing examinee). If taken, the call might be a simulated colleague-radiologist wanting input on a case, a technologist with a protocol question, a referring clinician who wants somebody else’s CT interpretation reviewed and addended, etc.
The payoff for all this would be exam results more fleshed out than a simple pass/fail. A RAT examinee would get scores not only for diagnostic accuracy, but also for readiness to field distracting phone calls; courtesy, efficiency, and helpfulness in dealing with colleagues and referrers; adaptability to unexpected circumstances; and willingness to put in extra hours to get the job done.
Just imagine having an impartial document from a certifying authority proclaiming you to be in the 99th percentile of its “Team Player” or “Productivity” evaluations - couldn’t hurt at your next job interview or contract (re)negotiation.
Or, we could just continue to count on our routine on-the-job performance and letters of recommendation. But where’s the fun in that?