While artificial intelligence (AI) models have been acknowledged for aiding imaging analysis or facilitating workflow enhancements, this author envisions AI as a potential workstation concierge capable of turning common venting and gripes into actionable items for significant improvements.
After residency and fellowship, I realized I would have more than four weeks off per year. Not knowing what to do with it all, I stopped evenly spacing out vacations around the calendar. Some breaks from work were more clustered than others.
However, I discovered that it didn’t seem to matter how long it had been since my last vacation. Whenever one was coming up, I would just then feel like it was in the nick of time. All my frustrations and dissatisfactions with work would surge to a peak. It didn’t matter if my last time off had been two weeks or five months earlier.
I am experiencing a similar phenomenon as my workplace is about to go live with a totally new rig for its telerad division. This will cover hardware, PACS, RIS, and voice recognition, as well as the operating system. I may be setting myself up for disappointment by thinking so, but it seems like everything is about to improve in a big way.
Accordingly, as I work through my remaining handful of shifts with the old setup, it feels like I am just now on my last nerve with everything it gets wrong. If the new apparatus were to arrive in January instead of this month, I suspect I wouldn’t have budged from my disgruntlement baseline at this point.
Working from a home office, I have the fringe benefit of being able to mouth off at my telerad rig when it’s misbehaving. It is the sort of behavior that might make me look like a complete lunatic to other rads, support staff, or referring docs who could be within earshot if I worked onsite.
As I was expressing my frustration to my dumb machine when it tried to prematurely sign off a report for the nth time this past week, the thought suddenly occurred to me: Wouldn’t it be great if the machine weren’t so dumb?
By that, I don’t just mean wishing it did a better job at everything it is supposed to do. We all wish for that, pretty much every time we use imperfect technology for anything. We express the wish to our colleagues, to our leadership (who, we imagine, have the ability to get us better tools if we just ask them the right way), and really anybody who will listen. Heck, we would shower our wishes upon the manufacturers who give us these things in the first place, but I did say “anybody who will listen.” That doesn’t seem to describe them.
Perhaps because of all the chatter about artificial intelligence (AI), I suddenly realized: This could finally be a role for AI that I would welcome if anybody developed it. Forget about AI that tries to help analyze images, or organize reports. I have yet to see that do anything helpful for me. No, let AI be the listening ear when I have gripes/wishes about the imperfect tools I have for doing my job.
Part of what makes us feel like we’re being listened to is an appropriate response. As things stand, if I tell my workstation that it’s pathetically bad for making the same voice recognition mistakes today that it did half a year ago, despite my faithfully training it along the way, the dumb machine just sits there. I might as well talk to the wall or send an email to the voice recognition company’s CEO for all the impact that would have.
Suppose, instead, there were an AI on my system that understood when I was venting at it. Let there be something resembling a consciousness that seems to feel bad when I tell it that its software is doing a lousy job for me. Better yet, let it respond, telling me that my feedback will help it improve things. Even if that won’t happen, let the responses give me the illusion that I’m being heard. Also, give me the ability to tell it “Good job!” when it earns positive feedback.
This might be a fun toy, and a way for rads to relieve some stress. However, just a little bit of extra sophistication could provide a lot more utility.
Long-term readers of this blog might recall an idea I have proposed in the past: a button somewhere on the desktop that a rad could click on and dictate a quick message whenever he or she was experiencing a problem. This could be related to a software bug or some aspect of the rig that he or she wishes had more functionality. Hit the button, dictate your piece, and the message gets sent to the inbox of someone who might be able to do something with the feedback. The rad gets the bee out of the bonnet right away, and no valuable ideas are lost between that moment and the next departmental meeting.
My current idea goes one step further. The AI is the recipient of all such feedback from rads who are using the system. It aggregates the data, and organizes it so stuff about voice recognition is separated from stuff about the PACS. Further, at the moment any feedback is submitted, the AI has access to system diagnostics so it knows whether there are extenuating circumstances that might be at fault. The issue may be caused by a struggling server, for instance, or a workstation that is currently overtaxed because the rad has a dozen other processes running in the background.
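To make that a bit more concrete, here is a rough sketch of what such a feedback pipeline might look like. Everything in it is hypothetical: the category keywords, the diagnostics fields, and the function names are stand-ins for illustration, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical keyword map for routing a dictated gripe to the right bucket.
CATEGORY_KEYWORDS = {
    "voice_recognition": ["dictation", "voice", "misrecognized", "transcrib"],
    "pacs": ["pacs", "image", "hanging protocol", "worklist"],
    "ris": ["ris", "order", "schedule"],
}

@dataclass
class FeedbackItem:
    rad_id: str
    message: str
    timestamp: datetime = field(default_factory=datetime.now)
    category: str = "other"
    diagnostics: dict = field(default_factory=dict)

def categorize(message: str) -> str:
    """Assign the gripe to a rough category based on keywords (placeholder logic)."""
    lowered = message.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return category
    return "other"

def snapshot_diagnostics() -> dict:
    """Stand-in for grabbing system metrics at the moment of the complaint."""
    # A real system would query the server load, workstation memory,
    # and whatever background processes the rad has running.
    return {"server_load": 0.42, "open_processes": 12, "network_latency_ms": 85}

def submit_feedback(rad_id: str, message: str) -> FeedbackItem:
    """Capture the gripe, categorize it, and attach the diagnostics snapshot."""
    item = FeedbackItem(rad_id=rad_id, message=message)
    item.category = categorize(message)
    item.diagnostics = snapshot_diagnostics()
    return item

if __name__ == "__main__":
    gripe = submit_feedback("rad_001", "Dictation misrecognized 'pneumothorax' again.")
    print(gripe.category, gripe.diagnostics)
```

The point of the diagnostics snapshot is that the AI wouldn't have to take every complaint at face value; it could tell the difference between a genuine software flaw and a workstation that was simply overtaxed at that moment.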
In essence, the AI would be a radiology version of Alexa, Siri, or Bixby. It would be our own workstation concierge with the ability to respond intelligently and sometimes provide solutions.
In addition to being able to give the user on-the-spot explanations or fixes, the AI could also identify trends to bring to the attention of the software administrators for the radgroup, or even the manufacturer. Let the biggest, most frequent offenses be highlighted as “action items” for the next software upgrade.
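Continuing the same hypothetical sketch, the trend-spotting piece could be as simple as counting complaints per category and flagging anything that crosses a threshold as an "action item" for the next upgrade cycle. The threshold and names below are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical threshold: how many complaints about the same issue before it is
# flagged for the software administrators or the manufacturer.
ACTION_ITEM_THRESHOLD = 25

def summarize_trends(categories: list[str]) -> tuple[Counter, list[str]]:
    """Count complaints per category and flag the most frequent offenders."""
    counts = Counter(categories)
    action_items = [cat for cat, n in counts.most_common() if n >= ACTION_ITEM_THRESHOLD]
    return counts, action_items

if __name__ == "__main__":
    # Toy month of feedback: mostly voice recognition gripes, a few about PACS.
    sample = ["voice_recognition"] * 30 + ["pacs"] * 5
    counts, action_items = summarize_trends(sample)
    print(counts)        # Counter({'voice_recognition': 30, 'pacs': 5})
    print(action_items)  # ['voice_recognition']
```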
If you really want a powerful AI, give it the ability to make tweaks to the software when enough individual complaints identify the same issues.
In effect, an awful lot of the problems that routinely have rads wondering whether it’s worth their time to contact support staff could be handled by a decent AI. Unlike a finite support staff, the AI could be Johnny-on-the-spot for many users at once, whether it is for momentary venting or feedback with potential for substantial systemic improvement.