With a newly developed segmentation method, deep-learning models can be configured for large and varied imaging datasets automatically and with little expertise.
Configuring artificial intelligence algorithms for use with large imaging datasets can now be done without a specialist’s knowledge or a significant amount of computing power.
Scientists from the German Cancer Research Center have developed – and made freely available – a deep-learning segmentation method that can be applied to both CT and MRI data. It can also be used with electron and fluorescence microscopy, said the team co-led by Fabian Isensee, a doctoral student in medical image computing.
Typically, configuring these algorithms correctly is a laborious task that requires a specialized skill set.
“It is not trivial, and it normally involves time-consuming trial-and-error,” Isensee said.
But this method, dubbed nnU-Net and published Dec. 7 in Nature Methods, makes it easier. It is a deep-learning segmentation method that automatically configures itself – including pre-processing, network architecture, training, and post-processing – for any new task in the biomedical domain. By distilling domain knowledge into three parameter groups (fixed, rule-based, and empirical parameters), nnU-Net can adapt to any new dataset.
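The sketch below is a minimal illustration (not nnU-Net's actual code or interface) of how a self-configuring pipeline might split its decisions into these three parameter groups, deriving rule-based settings from a simple dataset "fingerprint"; all names, values, and heuristics here are hypothetical.

```python
# Illustrative sketch only: fixed, rule-based, and empirical parameter groups
# derived from a hypothetical dataset "fingerprint". Not the nnU-Net codebase.
from dataclasses import dataclass
from typing import List


@dataclass
class DatasetFingerprint:
    # Hypothetical summary statistics collected from a new dataset.
    median_shape: List[int]      # median image shape, e.g. [128, 256, 256]
    voxel_spacing: List[float]   # median voxel spacing in mm
    modality: str                # e.g. "CT" or "MRI"
    num_training_cases: int


# Fixed parameters: design choices kept identical across all tasks.
FIXED_PARAMS = {
    "loss": "dice+cross_entropy",
    "optimizer": "sgd_nesterov",
    "initial_learning_rate": 1e-2,
    "epochs": 1000,
}


def rule_based_params(fp: DatasetFingerprint) -> dict:
    """Derive dataset-dependent parameters from the fingerprint via simple heuristics."""
    # Example heuristic: CT intensities are quantitative, so normalize globally;
    # MRI intensities vary per scan, so normalize each image separately.
    normalization = "global_clip_zscore" if fp.modality == "CT" else "per_image_zscore"

    # Choose a patch size that fits the median image shape, capped by a memory budget.
    patch_size = [min(s, 128) for s in fp.median_shape]

    # Shrink the batch size for very large 3D patches so training fits in GPU memory.
    voxels_per_patch = patch_size[0] * patch_size[1] * patch_size[2]
    batch_size = 2 if voxels_per_patch > 2_000_000 else 4

    return {
        "normalization": normalization,
        "patch_size": patch_size,
        "batch_size": batch_size,
        "target_spacing": fp.voxel_spacing,
    }


def empirical_params(cross_validation_results: dict) -> dict:
    """The few remaining choices are made empirically, e.g. by cross-validation."""
    # Pick the best-scoring configuration after the candidate models are trained.
    best_config = max(cross_validation_results, key=cross_validation_results.get)
    return {"selected_configuration": best_config, "use_postprocessing": True}


if __name__ == "__main__":
    fp = DatasetFingerprint(
        median_shape=[128, 256, 256],
        voxel_spacing=[1.0, 0.78, 0.78],
        modality="CT",
        num_training_cases=300,
    )
    plan = {**FIXED_PARAMS, **rule_based_params(fp)}
    plan.update(empirical_params({"2d": 0.81, "3d_fullres": 0.86, "cascade": 0.85}))
    print(plan)
```

In this toy version, the fixed group never changes, the rule-based group is computed instantly from dataset statistics before training, and only the small empirical group requires looking at trained models.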
The automatic configuration runs without manual intervention, apart from a few empirical choices, Isensee said, so even individuals with little expertise in configuring self-learning algorithms can do so successfully. There is also no additional cost beyond standard network training.
To determine how well the method worked, the team applied nnU-Net to 11 international biomedical image segmentation challenges comprising 23 different datasets and 53 segmentation tasks. The datasets, drawn from MRI, CT, electron microscopy, and fluorescence microscopy, included various organs, organ sub-structures, tumors, lesions, and 2D and 3D images of cellular structures. For all tasks, nnU-Net was trained only with the data provided by each challenge.
Based on the team’s assessment, nnU-Net outperformed most existing segmentation solutions that were specifically optimized for their respective tasks. Ultimately, the team said, nnU-Net set a new state of the art for 33 of the 53 target structures and demonstrated performance on par with other leading models for the remainder.
Given these results, said study director Klaus Maier-Hein, there is considerable potential for this model to be used with highly repetitive tasks, such as those performed in large-scale clinical studies.
But even though nnU-Net performed well across the 53 diverse tasks, it is not optimally suited for every segmentation task, Isensee’s team said. For example, tasks evaluated with highly domain-specific target metrics may call for a tailored method design, and some dataset properties may not be covered by nnU-Net’s configuration rules, potentially leading to lower performance.
Still, the team said, using nnU-Net can alleviate much of the time and expertise burden of working with artificial intelligence algorithms.
“We propose to leverage nnU-Net as an out-of-the-box tool for state-of-the-art segmentation, as a standardized and dataset-agnostic baseline for comparison and as a framework for the large-scale evaluation of novel ideas without manual effort,” they said.