I'd like a gift from the voice recognition software industry. It'd be called the "anti-macro" for those phrases that should not be transcribed.
I'd like a gift from you folks. The holidays aren't all that far behind us, but if that isn't a valid reason for you to indulge me, consider my upcoming birthday. Or Valentine's Day.
In years gone by, I would have actually considered this less of a gift from you and more of a helpful product suggestion from me. Now I understand that you really don't care what physicians like me think of your products, since most of us have no say regarding which voice-rec platform is used in our workplaces. The administrative types who make such decisions (who personally don't use your products) are the folks you woo - or, at least, might have been until the much larger non-medical market for your products came along. Users of iPhone's Siri, for instance.
Still, kids write to Santa, so I'm writing to you. Not only am I suggesting a gift you might give me, I'll help you name it: The "anti-macro."
Your industry has gotten a lot of mileage out of the macro, and users have taken it further than you might have intended. Your original plan was probably just to save us breath and a couple of seconds by letting us use brief phrases to get larger blocks of text transcribed. Meanwhile, we've increasingly used it as a sort of error-avoidance mechanism; the less we actually say, the less of an opportunity your product has to horribly misrepresent our words.
Still, not everything is macro-able, and some users might not adopt the tool as quickly and completely as others. Free-form dictation is still widely used, and transcription errors still occur. I, for one, have noticed that the same errors get made over and over, no matter how much I hit the "train" button and try to make use of the "learning" ability you insist your software possesses. Many of these errors take the form of words and phrases I do not use, and am confident I will not use in the future.
For individual words, the solution is easy enough: I go to the "vocabulary" or "dictionary" files in the VR program and delete the offending entry. Since I might refer to a CT lesion as "discrete," for instance, I long ago removed "discreet" from the file. Presto, the latter homophone never darkens my dictations again. I've 86'ed quite a few terms from your software's repertoire in this fashion. With an offending word gone, the software has no choice but to transcribe the next most similar term in its files when I dictate it.
There are, however, words I want to keep around, which appear with irritating frequency in the wrong places.
Hence, my proposed anti-macro: A phrase, specified by the user, which the software will henceforth not transcribe. In creating an anti-macro, I want to be able to tell my software: Never put these words together in this order again, no matter how much you think it sounds like I just said them.
For instance, suppose I were vexed that about 50 percent of the time when I dictated "emphysema," the word "hematochezia" got transcribed instead. I'd rather not lose the ability to dictate the latter term, so I couldn't just delete it from the dictionary. I would also prefer not to have my reports suggest that my patients were experiencing anal bleeding via their lungs, so I'd proceed to make an anti-macro of "pulmonary hematochezia," and voila! My software would never again string together those two words.
Another example, this time non-hypothetical: For years, VR (from more than one vendor) has routinely turned my spoken "soft hyphen tissue" not into "soft-tissue," but "soft tissue tissue" or "soft height and tissue." (VR actually loves mangling my hyphens at many other junctures.) I have retrained the term "hyphen" and even the phrase "height and" till I'm blue in the face, with no improvement. Know what I bet would work? An anti-macro of "tissue tissue" and maybe even one of "height and."
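For the programmers in the audience, the anti-macro could amount to little more than a post-processing filter. Here's a minimal sketch, assuming the recognizer exposes an n-best list of candidate transcriptions; every name below is hypothetical, and no real voice-rec vendor's API is implied.

```python
# Hypothetical "anti-macro" filter: pick the best-ranked candidate
# transcription that contains none of the user's banned phrases.

def apply_anti_macros(candidates, banned_phrases):
    """Return the first candidate free of banned phrases.

    candidates: n-best transcriptions, best first.
    banned_phrases: lowercase phrases the user has forbidden.
    Falls back to the top candidate if every one is blocked.
    """
    for text in candidates:
        lowered = text.lower()
        if not any(phrase in lowered for phrase in banned_phrases):
            return text
    return candidates[0]  # nothing passed the filter; surface the best guess


# The user's anti-macros from the examples above.
banned = {"pulmonary hematochezia", "tissue tissue", "height and"}

# A hypothetical n-best list for one dictated phrase.
n_best = [
    "pulmonary hematochezia with bullous changes",  # blocked by anti-macro
    "pulmonary emphysema with bullous changes",     # allowed
]

print(apply_anti_macros(n_best, banned))
# prints "pulmonary emphysema with bullous changes"
```

The point of working from an n-best list rather than a single output is that the software already has a "next best guess" on hand, so suppressing the banned phrase costs nothing: the runner-up simply wins.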
I'm willing to bet that if you asked other routine voice-rec users, you'd find many eager recipients of an anti-macro gift in your next round of software upgrades. Just think of all the time and effort of holiday shopping you'd be spared! They'd surely reciprocate with plenty of alpha- and beta-testing feedback.