CHICAGO-The overarching theme of RSNA 2016 has been deep learning and machine intelligence. Both are designed to help you with your workflow and your ability to provide optimal patient care. But questions still exist about what these tools are and how you can implement them.

To answer these questions, Vlado Menkovski, a former research scientist with vendor Philips, discussed the differences between these two tools, highlighting how they can be used.

“This technology has provided breakthroughs,” he said. “It’s been exciting to see the potential impacts it’s had on imaging analysis.”

Machine Intelligence

Simply put, conventional programming addresses a known and well-understood problem. For example, scientists understand the process needed to launch a satellite into space, he said, and they can readily write a program to make it happen. Machine learning, by contrast, lets you pick apart your data, learn from it, and use it to make predictions about your findings. For example, he said, you can use machine intelligence to create algorithms that predict cancer prognoses. You can program an algorithm to consider tumor size and other characteristics seen in an image to determine whether a patient has a poor prognosis.

Deep Learning

Overall, deep learning is a method for implementing machine intelligence. Its main component is the artificial neural network, modeled after the human brain. But while the neurons of the human brain can fire and connect to each other in any way, the units of an artificial neural network are connected in specific patterns and discrete layers. Deep learning can rebuild images layer by layer, identifying edges, and as the use of Big Data increases, you’ll be able to train computer models to do even more. Already, these networks can be trained to use coordinates for width and height or to segment pixels to identify different organs.
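The prognosis example described above can be sketched as a toy model. The following is an illustrative sketch only, not anything presented at the session: a minimal logistic-regression classifier, trained by gradient descent, that maps a single invented feature (tumor size) to a probability of poor prognosis. All data values, the learning rate, and the threshold are assumptions for demonstration.

```python
# Illustrative sketch only: a toy "prognosis" model in the spirit of the
# machine-learning example above. All numbers here are invented.
import math

# Synthetic training data: (tumor size in cm, poor prognosis? 1 = yes, 0 = no)
data = [(1.0, 0), (1.5, 0), (2.0, 0), (3.5, 1), (4.0, 1), (5.0, 1)]

w, b = 0.0, 0.0   # model parameters, learned from the data
lr = 0.5          # learning rate

def predict(size):
    """Probability of poor prognosis for a given tumor size (sigmoid of a linear score)."""
    return 1.0 / (1.0 + math.exp(-(w * size + b)))

# Stochastic gradient descent on the log-loss
for _ in range(2000):
    for size, label in data:
        p = predict(size)
        grad = p - label      # derivative of log-loss w.r.t. the linear score
        w -= lr * grad * size
        b -= lr * grad

print(predict(1.2))   # small tumor: probability near 0
print(predict(4.8))   # large tumor: probability near 1
```

The point of the sketch is the workflow the speaker describes: the program is not told a size cutoff; it learns one from labeled examples and then generalizes to new cases.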
What One Company Offers

Some companies are already jumping in to find the best ways to make deep learning and machine intelligence applicable in radiology. One company, Enlitic, has developed a lung nodule detector designed to reach positive predictive values 50 percent higher than those achievable by a radiologist. As the model analyzes images, it learns and, over time, can offer a probability score for malignancy.

The company is also investigating whether these tools can be used to identify wrist fractures. According to company chief medical officer Igor Barani, MD, up to 40 percent of fractures are missed, leading to improper healing and pain. The model is being trained to find fractures on X-ray images, overlaying its findings as a heat map to highlight suspect locations in a conventional PACS viewer. Radiologists are checking for accuracy, and results, so far, are positive.

Eventually, Barani said, Enlitic wants to expand its deep learning and machine intelligence capabilities to CT and MRI scans for a wider variety of medical conditions, incorporating ACR guidelines along the way. The end goal, he said, is to build a neural network that uses genomic, clinical, and imaging data to evaluate the entire human body and detect pathological states and deviations from normal anatomy.

Much work still needs to be done, and the industry needs to determine how these tools can best be used to augment the services you and your colleagues provide. Deep learning and machine intelligence will be best used, Barani said, when radiologists better understand what these technologies can and cannot do.

“Half the battle has to do with expectation management,” he said. “You have to avoid the hype about deep learning and machine intelligence. It’s very important to help people understand the problems it can help solve and which it can’t.”
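The heat-map overlay described above can be illustrated with a short sketch. This is not Enlitic's implementation, which is not public; it is a generic alpha-blending approach, with a synthetic image and an invented probability map standing in for an X-ray and a model's output.

```python
# Illustrative sketch: alpha-blending a per-pixel probability "heat map" onto a
# grayscale image, the way a PACS overlay might highlight suspect locations.
# The image, probability map, and alpha value are all invented for demonstration.
import numpy as np

def overlay_heatmap(image, prob_map, alpha=0.4):
    """Blend a [0, 1] probability map (rendered as red) onto a grayscale image."""
    rgb = np.stack([image] * 3, axis=-1)     # grayscale -> RGB
    red = np.zeros_like(rgb)
    red[..., 0] = 1.0                        # pure red highlight color
    weight = (alpha * prob_map)[..., None]   # per-pixel blend weight
    return (1 - weight) * rgb + weight * red

xray = np.full((4, 4), 0.5)   # flat synthetic "X-ray"
probs = np.zeros((4, 4))
probs[1, 2] = 1.0             # one suspect pixel flagged by the model

out = overlay_heatmap(xray, probs)
print(out[1, 2])   # blended toward red at the flagged pixel
print(out[0, 0])   # background pixels left unchanged
```

Because the blend weight scales with the model's probability, stronger findings appear more saturated, while zero-probability regions show the original pixels untouched.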