I never really liked the expression “pay per click” as used in radiology. It makes perfect sense in other venues, like advertising, where a bit of revenue is generated for every click the ads receive. I don’t know the fine details, but I imagine things are pretty constant: click equals $X.
In our line of work, that’s not the case…or, if it is in some practices, I’ve never seen it. The closest thing that happens is in “eat what you kill” or other such productivity-based models, where a rad’s comp increases each time he clicks to sign a report (or uses a click-equivalent to sign off, such as a voice command).
The issue I have with that? Long before a rad gets to that point, s/he’s clicked more than a few times: To open the case, to manipulate images, to sift through prior reports, to move around in and edit the dictation, and sometimes to communicate with referrers and support staff. It’s really “pay per clicks.”
And that’s a best-case scenario: Far from rarely, a rad will open a case and do more than a little bit of clicking around before realizing that the case can’t be read as-is. Images, priors, or history may be missing; the case may need to be read by someone else; or the rad just gets interrupted and someone else takes the case while s/he’s away. No pay-per-click going on there. Also none for the clicking when a rad has to revisit a signed report for addenda, questions from clinicians, etc.
Then, there’s the little matter of different cases not having the same worth: Even if it were truly pay-per-click, clicks for XR, CT, MR, etc. understandably pay different amounts. And if you really want to get into the weeds, a given study-type will vary in its yield depending on where the patient got imaged, what payor is involved…
That doesn’t stop some radgroups from trying to bean-count and assign case-values (“work units,” for instance) for the reading rads, of course. They might simplify by using the average value of a given case-type that the group sees from all its sources: partly to keep the accounting manageable, and partly so the rads don’t go nuts upon seeing that the chest-CT they got to read is worth less than a chest sitting right next to it on the worklist.
Having been on the receiving end of this for a chunk of my career, I can tell you that the values never seem to perfectly reflect the balance of time and work. For instance, a conscientious rad who winds up with mostly XR on his list, even reading efficiently and taking minimal breaks, really can’t keep work-unit pace with one who got a bunch of MR, even if the MR-rad is far more leisurely.
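To put made-up numbers on that (actual work-unit values and reading times vary from group to group; every figure here is purely illustrative):

```python
# Hypothetical figures, purely for illustration; real work-unit tables
# and reading times vary widely by group and case mix.
XR_UNITS, XR_MINUTES = 0.3, 4    # assumed credit and read time per radiograph
MR_UNITS, MR_MINUTES = 2.5, 20   # assumed credit and read time per MR

def units_per_hour(units_per_case: float, minutes_per_case: float) -> float:
    """Work units a rad accrues per hour reading only this case type."""
    return units_per_case * (60 / minutes_per_case)

print(f"XR-only rad: {units_per_hour(XR_UNITS, XR_MINUTES):.1f} units/hr")  # 4.5
print(f"MR-only rad: {units_per_hour(MR_UNITS, MR_MINUTES):.1f} units/hr")  # 7.5

# Even if the MR rad dawdles to 25 minutes per case,
# 2.5 * (60 / 25) = 6.0 units/hr still beats the diligent XR rad's 4.5.
```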
That’s not all to be blamed on the radgroups in which this happens. Some give everybody access to the same worklists, instituting workflow-rules meant to curb “cherrypicking.” The idea is that, if all rads in the fullness of time get roughly the same case-mix, everyone will be hurt by XR and helped by MR equally.
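For the curious, here is a minimal sketch of what one such rule might look like in code. The “oldest case first, no browsing” policy is my own assumption for illustration, not any particular group’s actual workflow logic:

```python
import heapq

# Sketch of an anti-cherrypicking rule (policy assumed for illustration):
# rads don't browse and choose; each just gets handed the oldest unread
# case, so everyone's mix converges toward the group's overall mix.
worklist: list[tuple[float, str]] = []   # (arrival_time, case_id) min-heap

def case_arrives(arrival_time: float, case_id: str) -> None:
    heapq.heappush(worklist, (arrival_time, case_id))

def next_case_for(rad: str) -> str | None:
    """Hand the rad the oldest pending case, regardless of modality or value."""
    return heapq.heappop(worklist)[1] if worklist else None

case_arrives(1.0, "chest XR")
case_arrives(2.0, "brain MR")
print(next_case_for("Dr. A"))   # chest XR: no skipping ahead to the MR
```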
Still, people find ways to game systems, and docs tend to be a smart bunch. If there’s a trick, they’ll find it sooner or later. Also, if the folks making the rules are reading cases like everyone else, whatever rules they make will tend to favor themselves; that’s just human nature.
Another factor: Not every rad has the same skillset. Some might appropriately beg off higher-value exams like MSK MR simply because they know they wouldn’t do a good job. Some older rads might never have gotten comfortable with any MR. Of course, nobody’s stopping them from learning something new…but not everybody’s going to do that, especially later in their careers. Finally, sometimes subspecialty reads are requested/needed, so certain high-value exams aren’t accessible to everyone.
Another lever some radgroups can adjust is boosting the value of cheaper exams, so rads receiving more of them won’t be at such a disadvantage. Groups can finagle a little bit, robbing Peter-MR to pay Paul-XR…but that gets tricky when you don’t know how many of each type of study your group will be receiving a month in advance, let alone a year. If you wind up with more XR and fewer MR than you expected, you might find yourself in a fiscal hole.
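Here’s a toy model of that finagling, with every volume and dollar figure invented for illustration. It rebalances internal credits to be revenue-neutral against a forecast case mix, then shows the hole that opens when the actual mix drifts toward XR:

```python
# All figures hypothetical. "credits" = internal per-case value paid to
# rads; "revenue" = what the group actually collects per case.
forecast = {"XR": 10_000, "MR": 2_000}   # expected monthly volumes
revenue  = {"XR": 12.0,   "MR": 110.0}   # assumed $ collected per case

# Rebalanced credits: boost XR a bit, trim MR, so the totals match
# revenue under the forecast volumes (revenue-neutral on paper).
credits  = {"XR": 15.0,   "MR": 95.0}

def total(volumes: dict, per_case: dict) -> float:
    return sum(volumes[k] * per_case[k] for k in volumes)

assert total(forecast, credits) == total(forecast, revenue)  # $340,000 both

# The actual mix shifts: more XR, fewer MR than forecast.
actual = {"XR": 11_000, "MR": 1_600}
collected = total(actual, revenue)   # $308,000 taken in
credited  = total(actual, credits)   # $317,000 owed in credits
print(f"Shortfall: ${credited - collected:,.0f}")   # $9,000 fiscal hole
```

The scheme balanced perfectly against the forecast, yet a modest shift in case mix left the group crediting $9,000 more work than it collected.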
Every level of complexity that gets added to per-click schemes introduces new hassles in bean-counting. Ways for the unforeseen to sneak in and mess things up. Wrinkles that will create winners and losers amongst the rads (or seem to; a rad who perceives he’s being done out of something will make just as much noise as one who genuinely has been). Unsurprising, then, that more than a few who’ve worked as per-click rads…or overseen them…have less-than-fond memories of the experience.
Meanwhile, a lot of the rads flocking to per-click groups do so because, philosophically, it appeals to them. A lot have previously experienced salaried jobs where everybody’s supposedly on their honor to do their fair share of the work…but that doesn’t seem to pan out. Compensation fails to be proportional to productivity. “Pay per click,” or “eat what you kill,” goes a long way towards preventing that.
Hybrid plans might be most capable of satisfying all comers. A set salary, or hourly rate, coupled with the notion that someone is keeping an eye on who’s actually getting the work done. That could be in the form of a monthly, quarterly, or even annual incentive-payment…or simply periodic performance-reviews, with outliers at both ends of the spectrum being recognized and perhaps rewarded (or sanctioned).
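As a rough sketch (the thresholds, dollar figures, and median-based yardstick are all invented for illustration), such a hybrid scheme might boil down to something like:

```python
# Hypothetical hybrid comp: a guaranteed base, plus a periodic bonus for
# rads whose output lands well above the group median, and a flag for
# review when it lands well below. Every number here is made up.
BASE_SALARY = 350_000     # assumed annual guarantee
BONUS_SHARE = 0.10        # assumed bonus as a fraction of base

def annual_comp(rad_units: float, group_median_units: float) -> tuple[float, str]:
    ratio = rad_units / group_median_units
    if ratio >= 1.25:     # clear outlier on the high end
        return BASE_SALARY * (1 + BONUS_SHARE), "bonus earned"
    if ratio <= 0.75:     # clear outlier on the low end
        return BASE_SALARY, "flagged for performance review"
    return BASE_SALARY, "no action"

print(annual_comp(13_000, 10_000))  # (385000.0, 'bonus earned')
print(annual_comp(7_000, 10_000))   # (350000.0, 'flagged for performance review')
```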
The nice thing about such a system is that, with less of one’s total comp on the line, keeping tabs on every little click stops being so important. Rads have less of a motivation to cherry-pick, game the system, or complain about it. And there doesn’t have to be quite as much bean-counting aggravation behind the scenes.
Still, one doesn’t want to completely lose track of who’s doing the work. Tune in next time for my “double-curve” solution.