Simple tips for helping your group operate at its full potential.
A past column of mine focused on how a rad might hope to stand out from the crowd. Statistically speaking, it’s tough to do (in a good way, at least) by the metrics we generally watch: Who’s really going to notice if your error rate is 0.1% better than your radgroup’s average, for instance?
Instead, I observed that there were far more easily achieved ways of getting an edge on the competition, ways that double as pitfalls for those who fail to make relatively simple efforts: routinely showing up on time, for instance, or promptly returning calls and emails. “80 percent of success is just showing up,” as the quote goes.
Not just the individual
This is not solely the province of the individual rad wanting to look good to peers, referrers, or employers. There are also pretty simple ways a radgroup can stand out in the eyes of its members, prospective new hires, and various outsiders.
My personal favorite: Get out of the rads’ way and let them do their jobs. I’m now in my fourth post-fellowship gig. Each new job has been an improvement in more than a couple of ways, but the best trend has been a steady reduction in the things that interrupt my workflow over the course of a typical day. Every time a policy or procedure brings a rad up short from reading cases (or whatever else they’re supposed to be doing), it whittles away at job satisfaction, if not overall efficacy.
A close second: Give the rads the tools they need to do their jobs. Crummy hardware and software might be easier on the balance sheet, but they’ll probably wind up costing more in lost productivity and work quality. And even if they don’t, driving your workforce nuts by forcing them to work with error-prone stuff from the bargain bin will not make them eager for a long future of the same. The same goes for hiring adequate ancillary staff, in both number and capability.
Communicate regularly, clearly, and honestly. Rads tend to be smart people, accustomed to gathering information and drawing conclusions, and that goes for their professional future as well as for medical diagnoses. If you don’t let them know what the group is up to (opportunities, obstacles, strategic plans), they may try reading between the lines and filling in the blanks themselves, resenting having to do so in the process. Informing them and getting their input will yield far better results.
Stand up for yourselves, and for one another. I’ve seen more than a couple of groups that appear to live in constant dread of losing referrers and contracts. At every possible juncture, they kowtow to demands from these outsiders and eagerly throw their own rads under the bus. Groveling doesn’t engender respect, however; it just encourages more unreasonable demands. Make no mistake: Without such respect, those referrers will unhesitatingly take their business elsewhere the moment it seems favorable to them.
Be proud of your priorities. It’s difficult to hide what you’re all about; over the long haul, it’s virtually impossible. It comes through in all sorts of ways: choices of words, involuntary reactions, and patterns of behavior. Folks will get a sense of what’s motivating your group (and/or its leadership, since these things should be in sync but often aren’t), so let those motivators be good things: practicing good medicine, for instance, as opposed to gobbling up contracts and crushing or absorbing other radgroups. The best way to make sure your group comes across as a bunch of knights in shining armor, rather than con artists and thieves in the night, is to actually be those knights.