Radiology groups explore how artificial intelligence can improve their workflow.
Radiologists may not think about video game components helping them do their work. But graphics processing units (GPUs), the computing power behind video games, have been in radiology equipment for years.
“You’d be hard-pressed to find diagnostic instruments like CT, MRI, and ultrasound that don’t have GPUs embedded in them for real-time reconstruction,” said Kimberly Powell, senior director of industry business development at the technology company NVIDIA. Now GPU processing technology is being applied to deep learning, the branch of machine learning behind much of what is called artificial intelligence. Computer algorithms able to detect an intracranial hemorrhage? MetaMind has done that. Deep learning technologies that can find malignant lung nodules better than radiologists? Enlitic says its system does that.
Medical groups and institutions like vRad and Massachusetts General Hospital are forming partnerships with technology companies to develop deep learning applications for radiology. They’re not planning to ditch the doctors, as some critics would have you believe. Instead, they’re looking at how this technology can improve patient care and workflow processes.
Here’s what they’re doing.
Massachusetts General Hospital and NVIDIA
Mass General (MGH) recently announced NVIDIA as the founding technology partner of the MGH Clinical Data Science Center, which will use artificial intelligence to help advance health care. They plan to use the technology to improve detection, diagnosis, treatment, and disease management. MGH is starting the collaboration with radiology and pathology, and plans to expand into genomics and medical records research after that.
One reason they’re starting with radiology is that deep learning systems need a vast store of medical images, said Garry Choy, MD, MS, a staff radiologist and assistant chief medical information officer at MGH.
“We have 13 billion images,” he said. He hopes the technology will help them interpret studies with faster turnaround times while maintaining diagnostic accuracy.
While Choy didn’t delineate which algorithms they’ll start with, he said that an MGH collaborator is already using the NVIDIA system to help detect surgical complications, like misplaced surgical sponges. The tool could potentially be used in the operating room before a surgery is completed.
In addition to clinical applications, Choy envisions using artificial intelligence to automate “some of the workflow that’s better left to a machine, and leave the more complex, higher-order thinking to the doctor.” One of the biggest complaints among Mass General doctors, he said, is that they’re burned out from too much paperwork. “Our hope is that the Data Science Center will be an incubator for these types of technologies.”
vRad and MetaMind
Some physicians express concern that artificial intelligence will remove doctors from the reading room and serve as a cost-saving measure, but vRad CIO Shannon Werb doesn’t see that happening. “The reason we’re so interested in being involved is we think it brings a lot of possible benefits to patients,” Werb said. He’s interested in how artificial intelligence can assist in the diagnostic process and be used in the workflow to help patient care. vRad is collaborating with MetaMind, and their first algorithm can detect an intracranial hemorrhage in seconds, as an automatic part of the workflow.
Currently, vRad plans to use artificial intelligence as triage, changing the order in which its radiologists pick up images to read. Trauma cases are automatically put at the top of the list, but some images may not be labeled as such, as the clinical symptoms don’t lead the emergency doctor to think of head trauma, for example. If all incoming cases are run through the algorithm, a case identified by artificial intelligence as having a possible intracranial hemorrhage would move to the top of the reading list rather than waiting the traditional 20-30 minutes in line. The radiologist opening the study won’t know that there’s something of concern, nor that it was shifted in the list. “Our goal is to let doctors be doctors. We’re not trying to replace them,” said Werb.
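To make the triage idea concrete, here is a minimal sketch of how a worklist might silently promote AI-flagged studies. The tier values, the ReadingWorklist class, and the field names are illustrative assumptions, not vRad’s actual implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime
import heapq

# Hypothetical priority tiers: lower number = read sooner.
STAT_TRAUMA = 0   # cases already flagged as trauma by the ordering physician
AI_FLAGGED = 1    # cases the algorithm quietly promotes
ROUTINE = 2

@dataclass(order=True)
class Study:
    priority: int
    arrival: datetime
    accession: str = field(compare=False)

class ReadingWorklist:
    """Toy worklist: AI-flagged studies jump ahead of routine ones."""

    def __init__(self):
        self._heap = []

    def add(self, accession, stat=False, ai_flag=False):
        if stat:
            tier = STAT_TRAUMA
        elif ai_flag:
            tier = AI_FLAGGED  # promoted silently; the radiologist sees no flag
        else:
            tier = ROUTINE
        heapq.heappush(self._heap, Study(tier, datetime.utcnow(), accession))

    def next_study(self):
        # Radiologists simply pick up the next study; nothing reveals
        # whether the AI reordered the queue.
        return heapq.heappop(self._heap).accession
```

In this sketch, a flagged study outranks every routine case already waiting, explicitly marked trauma cases still come first, and next_study reveals nothing about why a case is next, mirroring Werb’s point that the radiologist never knows the list was reordered.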
At this time, Werb doesn’t anticipate their artificial intelligence algorithms crossing the line into making diagnostic recommendations that could influence the doctor’s judgment. From a medical device risk perspective, that would require a different category of FDA approval. Computer-aided detection (CAD), which does make such recommendations, has that FDA approval.
CAD Breast Imaging
If new artificial intelligence systems sound different from what CAD does with breast imaging, it’s because they are. CAD received FDA approval – and gets insurance reimbursement – for highlighting potential lesions in the image set. That’s an added level of FDA scrutiny, said Werb. While vRad isn’t afraid of that scrutiny, the FDA is still trying to understand the implications of artificial intelligence and to make sure development happens at the right pace.
CAD actually doesn’t use artificial intelligence. Instead, it relies on predefined measurements: CAD developers designed a system that looks at changes in the pixel data of mammography images, which is what lets the software analyze any mammogram. The algorithm is specific to the modality.
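As a rough illustration of what “predefined measurements” means, here is a toy rule-based detector in the spirit of CAD. The intensity and size thresholds are invented for illustration; real mammography CAD uses far more elaborate hand-engineered rules:

```python
import numpy as np
from scipy import ndimage

# Invented thresholds for illustration only.
INTENSITY_CUTOFF = 0.8   # normalized pixel value considered suspicious
MIN_REGION_PIXELS = 50   # smallest contiguous cluster worth flagging

def cad_flag_regions(image):
    """Flag bright regions using fixed, hand-coded rules; nothing is learned."""
    mask = image >= INTENSITY_CUTOFF            # rule 1: density cutoff
    labeled, n_regions = ndimage.label(mask)    # group contiguous pixels
    flags = []
    for region_id in range(1, n_regions + 1):
        size = int((labeled == region_id).sum())
        if size >= MIN_REGION_PIXELS:           # rule 2: size cutoff
            flags.append((region_id, size))
    return flags
```

Every decision here was made by the developers in advance, which is why such a system can be applied to any mammogram but never improves with more data.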
Choy calls the previous CAD technology “supervised learning,” while the current artificial intelligence technology is “unsupervised learning.”
“You can give it 10,000 images of colon cancer. It will find the common features across those images automatically,” Choy said. “If there are large data sets, it can teach itself what to look for.” With CAD, you had to tell the machine what to look for, and there were many false positives.
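A minimal sketch of that idea, using scikit-learn’s KMeans on synthetic stand-in arrays rather than real medical images (the group sizes, shared pattern, and noise level are invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins for flattened images: one group shares a common
# pattern, the other doesn't. No labels are provided anywhere.
pattern = rng.normal(size=256)
with_pattern = pattern + 0.3 * rng.normal(size=(500, 256))
without_pattern = 0.3 * rng.normal(size=(500, 256))
images = np.vstack([with_pattern, without_pattern])

# KMeans groups the images by the structure they share, with no hints
# about what to look for.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(images)
print(clusters[:500].mean(), clusters[500:].mean())  # the two groups separate
```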
Machine learning uses a generalized set of technology and isn’t preprogrammed like CAD. It requires a lot of data initially, both positives and negatives, so it can learn to identify the differences. The software can then apply what it has learned to new data.
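That positives-and-negatives setup is classic supervised training: learn from labeled examples, then generalize to unseen cases. A minimal sketch, again with synthetic stand-in data and an off-the-shelf logistic regression model chosen purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-ins: positive cases carry a subtle shared signal that
# the model must learn to tell apart from negatives.
signal = rng.normal(size=128)
positives = 0.2 * signal + rng.normal(size=(1000, 128))
negatives = rng.normal(size=(1000, 128))
X = np.vstack([positives, negatives])
y = np.array([1] * 1000 + [0] * 1000)

# Train on labeled examples, then score on held-out cases.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on studies it has never seen
```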
AI in the Radiology Department
Choy anticipates that MGH and NVIDIA’s artificial intelligence technology will be ready for clinical use in the next year or two, though he didn’t specify an exact target date.
Werb said that vRad and MetaMind’s intracranial hemorrhage algorithm is integrated into their platform in beta mode to enable them to capture the testing data required for FDA approval. He anticipates they’ll be ready to demonstrate positive patient outcomes by the end of the year.
As vRad’s primary business is emergency radiology, said Werb, they’re focusing their algorithms on a specific case list, like detecting pulmonary emboli and aortic tears. “If we don’t identify and treat those patients quickly, they can die. Our list is focused on critical conditions that radiologists typically play a critical role in diagnosing,” Werb said. While other companies may go after a broader list of algorithms, vRad is more focused at this time.
IBM’s Watson system is probably the best known in the field. IBM acquired Merge Healthcare, and gained access to its large body of medical images. Mostly, though, artificial intelligence programs like Watson are helping in fields outside of radiology. Hospitals like Memorial Sloan Kettering Cancer Center, Yale Cancer Center, the Cleveland Clinic, and University of Texas MD Anderson Cancer Center are among the medical centers using Watson to diagnose and develop custom treatment plans.
As for radiology, the start-up Enlitic gained attention when it claimed its software was able to identify malignant lung tumors in X-rays more accurately than four radiologists. Zebra Medical Systems partnered with Dell, with a focus on medical imaging. Google released open-source artificial intelligence software that can be used for medical imaging as well. “They want to be the operating system for machine learning,” said Werb, much as they did for the Android phone.
Follow the Money
While medical groups want to use technology for patient care, there is money to be made in the field. A recent Frost & Sullivan report estimates that by 2021, revenues from artificial intelligence in health care will be about $6 billion, up from $633 million in 2014. This technology has the potential to change a lot in health care. “We went all in on deep learning. We believe this is as big as the internet. We believe it will change the face of computing,” said Powell of NVIDIA.
Many doctors are seeing the potential as well, and they’re excited, said Werb. “It dramatically changes their ability to provide a positive and high quality outcome for the patients,” he said. “I encourage the industry to stay focused on the patient side first. You’ll drive down health care cost later as a result.”