Pixel-based artificial intelligence (AI) has dominated market attention in radiology over the past few years. However, a more familiar and less heralded technology has been evolving for at least two decades and has been fueled by major advances with the growth of cloud computing.
What is the technology? Artificial intelligence-powered voice recognition.
To put it in a more colloquial way, today’s radiology voice recognition solutions are not your parents’ speech recognition technology. In fact, they have far surpassed the ones you may have been using just a few years ago.
Voice recognition is so embedded in clinical workflows that many radiologists and other clinicians take it for granted. Indeed, there may only be a peripheral awareness of how much the technology has advanced. Developments in deep learning and natural language processing―based on massive amounts of voice data―have vastly improved the speed and accuracy of voice recognition engines. The rapid expansion of cloud-hosted AI has further fueled the growth and evolution of speech technology.
The early software required users to train the speech recognition engine by reciting prepared training text. Users also had to be careful to review and correct recognition errors. Accuracy depended on the quality of the input device, background noise, and other factors. Accents and specialized vocabularies were often problematic. Fortunately, capabilities steadily increased as machine learning technology evolved, and developers continually improved the software based on user feedback.
The widespread deployment of cloud computing over the past five years has accelerated neural network and deep learning techniques. Continuously training speech recognition technology with securely anonymized speech data makes the engine “smarter” as more users interact with it. The latest generation of voice recognition technology from Nuance Communications extracts information from thousands of terabytes of voice data while concurrently predicting what the user may say next. The technology anticipates and prepares to render what is spoken based on context, user patterns, and speech characteristics such as accent. The cloud-based radiology reporting system from Nuance Communications is hosted in Microsoft Azure and enables users to benefit immediately from this continuous learning process in ways never before possible.
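To make the idea of predicting what a user may say next concrete, here is a deliberately simplified sketch: a toy bigram model that counts which word tends to follow another in past dictations. Production engines like those described above use deep neural networks trained on terabytes of voice data, not a lookup table; the class and sample phrases below are purely illustrative.

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Toy next-word predictor built from word-pair frequencies."""

    def __init__(self):
        # For each word, count the words observed immediately after it.
        self.counts = defaultdict(Counter)

    def train(self, transcript):
        words = transcript.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict_next(self, word):
        """Return the most frequently observed follower of `word`, or None."""
        following = self.counts.get(word.lower())
        if not following:
            return None
        return following.most_common(1)[0][0]

predictor = BigramPredictor()
predictor.train("no acute intracranial hemorrhage")
predictor.train("no acute fracture")
predictor.train("no acute abnormality")
print(predictor.predict_next("no"))  # -> acute
```

The same principle, scaled up with neural language models and per-user adaptation, is what lets an engine prepare likely phrasings before the radiologist finishes speaking.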
Voice recognition is becoming the new UX for radiologists. In fact, ambient speech is the current state-of-the-art voice technology used in solutions such as Nuance Dragon Ambient eXperience (DAX) and PowerScribe. The ambient capabilities recognize and understand the relevant clinical context of conversational speech and convert it into structured, organized output for radiology reports and other applications.
Advances in natural language understanding automatically turn free-form dictation into structured data. Structured data supports the American College of Radiology’s Common Data Elements initiative, aimed at creating a common ontological framework that standardizes meaning from the point of read to the point of care. In PowerScribe One, this capability helps create organized, consistent reports from spoken narrative and provides real-time clinical decision support and evidence-based follow-up recommendations. Structured data also expands interoperability with other systems, including PACS, viewers, and EHRs, with bidirectional, real-time data exchange.
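The step from free-form dictation to structured fields can be sketched in miniature. The fragment below uses hand-written patterns to pull a hypothetical finding, size, and location out of a narrative sentence; real systems such as PowerScribe One rely on trained natural language understanding models rather than regular expressions, and the field names here are assumptions chosen for illustration.

```python
import re

# Hypothetical patterns for one narrow finding type (pulmonary nodules).
PATTERNS = {
    "finding": re.compile(r"(\d+(?:\.\d+)?)\s*(mm|cm)\s+(\w+)\s+nodule", re.I),
    "location": re.compile(r"in the ([\w\s]+? lobe)", re.I),
}

def extract_fields(dictation):
    """Map a narrative sentence onto structured key/value pairs."""
    fields = {}
    m = PATTERNS["finding"].search(dictation)
    if m:
        fields["size"] = f"{m.group(1)} {m.group(2)}"
        fields["morphology"] = m.group(3).lower()
    m = PATTERNS["location"].search(dictation)
    if m:
        fields["location"] = m.group(1).lower()
    return fields

report = extract_fields(
    "There is a 6 mm solid nodule in the right upper lobe."
)
print(report)
# {'size': '6 mm', 'morphology': 'solid', 'location': 'right upper lobe'}
```

Once findings exist as discrete fields rather than prose, they can be validated against common data element definitions and exchanged with PACS and EHR systems programmatically.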
While pixel-based AI models and other technologies often capture the headlines, cloud-hosted and AI-driven voice recognition is quietly and effectively powering a new generation of radiology reporting. Today, instead of wondering about voice recognition accuracy, users are seeing improvements in everyday radiology workflows and new ways of applying the technology to enhance efficiency for improved patient outcomes.
Dr. Agarwal is the chief medical information officer for Diagnostic Imaging and AI at Nuance Communications.