
Why is That Radiologist Arguing with the Monitor?


Is there a legitimate mechanism for questioning the questions on radiology board exams?

“True or false” questions were a bane of my grade school existence.

It took some years to put my finger on why that was the case. Eventually, a teacher of mine summed the matter up so nicely that I wondered why T/F was considered academically kosher at all. There is an awful lot of stuff that is not so well defined as true or false, right or wrong. The world, as any radiologist can tell you, is mostly shades of gray, not black or white.

One becomes more aware of this with aging and experience. Fewer and fewer things have simple pat answers. Exceptions turn up for every rule. If a question insists you choose between two diametrically opposed options, you are increasingly likely to see that, absent additional details, neither is quite right. You have questions for your questioner.

In grade school, overachievers (like me and many other future physicians) were the most frequent hand-raisers when it came to questioning exam items. Teachers were understandably wary of giving away answers and, very humanly, loath to be interrupted when they had expected a nice, quiet hour while the class was busy with their tests.

A lot of teachers learned to institute harsh rules like “No questions during the exam. Do the test as best you can.” Some would even claim they hadn’t written the test, and thus didn’t have any clarifications to offer.

That didn’t make me any happier when I was confronted with true/false questions that, in my estimation, weren’t answerable as is. As an overachiever, my whole day might be ruined if I missed a question, so I opted to cram handwritten notes in the margin next to the question: “True if [contingency A], False if [B].” I don’t know if my teachers thought it was cute, clever, or simply sufficient effort to merit consideration, but it usually worked.

Fast forward to more advanced levels of education, and true/false mostly faded from the scene. Multiple choice was still abundant, but at least it usually offered one answer that was clearly better than the other options. Technology also crept into the picture. One was either filling in bubbles with a #2 pencil, or entirely sans paper as keyboards and mice took over.

While this eliminated one’s ability to argue on the spot, it didn’t remove the desire to do so. A student could still approach the teacher/prof after exams to talk about question #83. Alternatively, when instructors cared enough to make exams into learning experiences by reviewing them afterward to showcase correct answers, students who had gotten things wrong could plead their cases. Sometimes, a reasonable individual would nullify “bad” questions.

Standardized testing from outside entities pretty much killed all of that. If there was some sort of procedure for challenging this question or that, it was so buried in bureaucracy as to make it nonviable for students who wanted satisfaction (and points) promptly. “Accountability” and “transparency” are nice words, but don’t often translate into practice. The folks behind these exams don’t want to have to refigure them for every smart whippersnapper who comes along.

It is nice to look forward to a day when one has taken one’s last exam and won’t have to put up with such things anymore. For a lot of rads like myself, that was to be the board exam, and we weren’t thrilled when the rules suddenly got changed. We would have to do this “Maintenance of Certification” thing with the understanding that the details hadn’t even been scoped out.

I was one of the relatively few rads who actually did a 10-year recertification exam before the American Board of Radiology (ABR) did away with that bit of nonsense. After flying to Chicago in the middle of the winter for a test that might just derail my career, I was comparatively happy to trade that for the Online Longitudinal Assessment (OLA), a couple of questions to do each week on my home computer.

It is still, of course, an imperfect system, and even with multiple choice, one can still take issue with how the questions or answers are phrased. In radiology, one can also believe that the images displayed aren’t quite as diagnostic as whoever wrote the question thought they were. I have, more than once, found myself sitting and arguing with my screen while the question’s timer slowly counts down.

To be fair, there are some mechanisms that might address this. A rad gets 10 chances to “skip” questions during each cycle. I think that is supposed to be for when a question is outside of the rad’s usual scope of practice, but it could just as easily be used if the rad thinks the question is no good. I have a borderline neurotic tendency to hoard such things, however, so I probably haven’t skipped 10 questions in my entire OLA career.

After each question, a rad also has a chance to appraise the question itself, and theoretically someone goes through those appraisals. With enough bad reviews, might one imagine bad questions get rewritten or removed? I used to fill those things out, but that gets tiresome, especially if I save up questions from a few weeks and do them all at once.

I seem to remember, once or twice, typing some protestation after feeling I had gotten a question “wrong” unjustly. I had no expectation that it could lead to my subsequently getting credit for the question, or even any particular hope that someone would look at my statement at all. An awful lot of websites out there have “contact us” links or forms that go straight to Limbo; nothing you input will ever get a response. Why should the ABR be a special exception?

Still, it felt slightly better than verbally arguing with the monitor, so I typed in my issues and never heard anything back. If I were of a mind to imagine that my input was, in fact, seen, I could go on to fantasize that whoever reviewed it A) saw that I was right, but had no mechanism to credit me for the question, or B) didn’t want to admit that I had a point, yet also had no intellectual ammo to debate the matter with me, which, in effect, cedes the point to me on moral grounds.

My monitor has yet to disagree with me on any of these fronts, so as far as I am concerned, my “19% above passing standard” rating on OLA is an underestimation. Maybe I should put that on my CV.
