Ambient speech capabilities in emerging voice recognition products and software updates can capture the clinical context of conversational speech and convert it into structured data for radiology reports.
Pixel-based artificial intelligence (AI) has dominated market attention in radiology over the past few years. However, a more familiar and less heralded technology has been evolving for at least two decades, fueled more recently by major advances in cloud computing.
What is the technology? Artificial intelligence-powered voice recognition.
To put it in a more colloquial way, today’s radiology voice recognition solutions are not your parents’ speech recognition technology. In fact, they have far surpassed the ones you may have been using just a few years ago.
Voice recognition is so embedded in clinical workflows that many radiologists and other clinicians take it for granted. Indeed, many may have only a peripheral awareness of how far the technology has advanced. Developments in deep learning and natural language processing, based on massive amounts of voice data, have vastly improved the speed and accuracy of voice recognition engines. The rapid expansion of cloud-hosted AI has further fueled the growth and evolution of speech technology.
The early software required users to train the speech recognition engine by reciting prepared training text. Users also had to be careful to review and correct recognition errors. Accuracy depended on the quality of the input device, background noise, and other factors. Accents and special vocabularies were often problematic. Fortunately, capabilities steadily increased as machine learning technology evolved, and developers continually improved the software based on user feedback.
The widespread deployment of cloud computing over the past five years has accelerated the adoption of neural network and deep learning techniques. Continuously training speech recognition technology with securely anonymized speech data makes the engine “smarter” as more users interact with it. The latest generation of voice recognition technology from Nuance Communications extracts information from thousands of terabytes of voice data while concurrently predicting what the user may say next. The technology anticipates and prepares to render what is spoken based on context, user patterns, and speech characteristics such as accent. The cloud-based radiology reporting system from Nuance Communications is hosted in Microsoft Azure and enables users to benefit immediately from this continuous learning process in ways never before possible.
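The prediction step can be understood as a language-modeling problem: given the words already spoken, estimate which word is most likely to follow. The Python snippet below is a deliberately simplified, hypothetical sketch of that idea using bigram counts over a toy set of dictated phrases; it is not Nuance’s implementation, which relies on deep neural networks trained on far larger volumes of anonymized speech data.

```python
from collections import Counter, defaultdict

# Toy corpus of dictated radiology phrases. A production engine is trained
# on anonymized speech data at a vastly larger scale.
corpus = [
    "no acute intracranial hemorrhage",
    "no acute intracranial abnormality",
    "no evidence of acute fracture",
]

# Count, for every word, which words follow it (a simple bigram model).
bigrams = defaultdict(Counter)
for phrase in corpus:
    words = phrase.split()
    for current_word, next_word in zip(words, words[1:]):
        bigrams[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed word following `word`, if any."""
    candidates = bigrams.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("acute"))  # -> 'intracranial'
```

Even in this toy form, the model illustrates the core idea: the more context and usage data the engine has seen, the better it can anticipate and render what the user is about to say.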
Voice recognition is becoming the new UX for radiologists. In fact, ambient speech is the current state-of-the-art voice technology used in solutions such as Nuance Dragon Ambient eXperience (DAX) and PowerScribe. The ambient capabilities recognize and understand the relevant clinical context of conversational speech and convert it into structured, organized output for radiology reports and other applications.
Advances in natural language understanding automatically turn free-form dictation into structured data. Structured data supports the American College of Radiology’s Common Data Elements initiative, which aims to create a common ontological framework that standardizes meaning from the point of read to the point of care. In PowerScribe One, structured data helps to create organized, consistent reports from spoken narrative and provides real-time clinical decision support and evidence-based follow-up recommendations. It also expands interoperability with other systems, including PACS, viewers, and EHRs, through bidirectional, real-time data exchange.
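As a purely illustrative example of the kind of mapping involved, the short Python sketch below pulls a measurement and laterality out of a dictated sentence and emits a structured record. The field names and regular expressions are hypothetical stand-ins, not an actual ACR Common Data Elements schema, and commercial natural language understanding engines such as PowerScribe One rely on far more sophisticated models than simple pattern matching.

```python
import re

# Hypothetical dictated finding to be converted into structured fields.
dictation = "There is a 1.2 cm nodule in the right upper lobe."

# Extract a size measurement (value plus unit) and laterality, if present.
measurement = re.search(r"(\d+(?:\.\d+)?)\s*(cm|mm)", dictation)
laterality = re.search(r"\b(right|left)\b", dictation, re.IGNORECASE)

# Illustrative field names only; not a standardized data element schema.
structured_finding = {
    "finding": "nodule" if "nodule" in dictation.lower() else "unspecified",
    "size_value": float(measurement.group(1)) if measurement else None,
    "size_unit": measurement.group(2) if measurement else None,
    "laterality": laterality.group(1).lower() if laterality else None,
}

print(structured_finding)
# {'finding': 'nodule', 'size_value': 1.2, 'size_unit': 'cm', 'laterality': 'right'}
```

Once findings are captured in a structured form like this, they can be exchanged with PACS, viewers, and EHRs and used to drive decision support and follow-up tracking.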
While pixel-based AI models and other technologies often capture the headlines, cloud-hosted, AI-driven voice recognition is quietly and effectively powering a new generation of radiology reporting. Today, instead of wondering about voice recognition accuracy, users are seeing improvements in everyday radiology workflows and new ways of applying the technology to enhance efficiency and improve patient outcomes.
Dr. Agarwal is the chief medical information officer for Diagnostic Imaging and AI at Nuance Communications.