CHICAGO - The overarching theme of RSNA 2016 has been deep learning and machine intelligence. Both are designed to help you with your workflow and your ability to provide optimal patient care. But questions still exist about what these tools are and how you can implement them. To answer them, Vlado Menkovski, a former research scientist with vendor Philips, discussed the differences between the two, highlighting how they can be used. “This technology has provided breakthroughs,” he said. “It’s been exciting to see the potential impacts it’s had on imaging analysis.”

Machine Intelligence

Simply put, conventional programming is suited to a known and well-understood problem. For example, scientists understand the process needed to launch a satellite into space, he said, and they can readily write a program to make it happen. Machine learning, by contrast, lets you pick apart your data, learn from it, and use it to make predictions about your findings. For example, he said, you can use machine intelligence to create algorithms that predict cancer prognoses: an algorithm can weigh tumor size and other characteristics seen in an image to determine whether the patient has a poor prognosis.

Deep Learning

Overall, deep learning is a method for implementing machine intelligence. Its main component is the artificial neural network, modeled on the human brain. But while the neurons of the human brain can fire and connect to each other in any way, the units of an artificial neural network are connected in specific patterns and discrete layers. Deep learning can rebuild images layer by layer, identifying edges, and as the use of Big Data increases, you’ll be able to train computer models to do even more. Already, these networks can be trained to use coordinates for width and height or to segment pixels to identify different organs.
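The "discrete layers" idea Menkovski describes can be pictured with a minimal sketch: data passes through a stack of layers, each connected to the next in a fixed pattern, ending in class probabilities. The weights below are random placeholders for illustration, not any vendor's trained model.

```python
import numpy as np

def relu(x):
    # Standard activation: pass positive values, zero out the rest.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a list of (weight, bias) layers."""
    for w, b in layers[:-1]:
        x = relu(w @ x + b)
    w, b = layers[-1]
    logits = w @ x + b
    # Softmax turns the final layer's outputs into class probabilities.
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(8, 4)), np.zeros(8)),  # input layer: 4 features -> 8 units
    (rng.normal(size=(8, 8)), np.zeros(8)),  # hidden layer
    (rng.normal(size=(2, 8)), np.zeros(2)),  # output layer: 2 classes
]
probs = forward(np.array([0.5, -1.0, 0.3, 2.0]), layers)
print(probs)  # two class probabilities summing to 1
```

A real imaging network would use convolutional layers over pixels rather than a handful of features, but the layered structure is the same.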
What One Company Offers

Some companies are already jumping in to find the best ways to make deep learning and machine intelligence applicable in radiology. One company, Enlitic, has developed a lung nodule detector designed to reach positive predictive values 50 percent higher than those achievable by a radiologist. As the model analyzes images, it learns and, over time, can offer a probability score for malignancy.

The company is also investigating whether these tools can be used to identify wrist fractures. According to company chief medical officer Igor Barani, MD, up to 40 percent of fractures are missed, leading to improper healing and pain. The model is being trained to find fractures on X-ray images, overlaying a heat map on each image to highlight suspicious locations in a conventional PACS viewer. Radiologists are checking for accuracy, and results so far are positive.

Eventually, Barani said, Enlitic wants to expand its deep learning and machine intelligence capabilities to CT and MRI scans for a wider variety of medical conditions, incorporating the ACR guidelines along the way. The end goal, he said, is to build a neural network that uses genomic, clinical, and imaging data to evaluate the entire human body and detect pathological states and deviations from normal anatomy.

Much work still needs to be done, and the industry needs to determine how these tools can best be used to augment the services you and your colleagues provide. Deep learning and machine intelligence will be best used, Barani said, when radiologists better understand what these technologies can and cannot do. “Half the battle has to do with expectation management,” he said. “You have to avoid the hype about deep learning and machine intelligence. It’s very important to help people understand which problems it can help solve and which it can’t.”
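The heat-map overlay described above can be understood as per-pixel probability scores thresholded into highlighted regions. The sketch below uses invented scores for a tiny image patch; it illustrates the display concept only, not Enlitic's actual method.

```python
import numpy as np

# Hypothetical per-pixel fracture probabilities for a 4x4 image patch.
scores = np.array([
    [0.05, 0.10, 0.08, 0.02],
    [0.12, 0.85, 0.90, 0.07],
    [0.09, 0.88, 0.93, 0.11],
    [0.03, 0.06, 0.10, 0.04],
])

# Pixels above the threshold would be tinted in the viewer overlay;
# the rest stay transparent, leaving the underlying X-ray visible.
threshold = 0.5
overlay_mask = scores > threshold
print(overlay_mask.sum())  # number of highlighted pixels
```

In practice the threshold and color ramp are display choices layered on top of the model's raw probabilities, which is what lets the overlay sit inside a conventional PACS viewer.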