Can Deep Learning Models Improve CT Differentiation of Small Solid Pulmonary Nodules?

One deep learning model had a 72.4 percent accuracy rate for differentiating between benign and malignant solid pulmonary nodules on non-contrast CT, while another deep learning model demonstrated an 87.1 percent AUC for differentiating between benign tumors and inflammatory nodules.

Emerging research suggests that deep learning models may help close the gap in differentiating between benign and malignant solid pulmonary nodules (SPNs) < 8 mm on non-contrast computed tomography (CT) scans.

For the retrospective study, recently published in Academic Radiology, researchers assessed the capability of deep learning models for differentiating small SPNs on chest CTs in 719 patients who had surgical resection of the pulmonary nodules. The deep learning models were developed with nodule features as well as features from five different peri-nodular regions via the Multiscale Dual Attention Network (MDANet), according to the study.
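The paper does not publish code, but the peri-nodular concept can be illustrated with a minimal sketch: a binary nodule mask is expanded outward by a fixed physical margin (e.g., 5, 10, 15, or 20 mm) before the corresponding CT patch is cropped and passed to the network. The function below is an assumption for illustration only, not the authors' pipeline, and relies on NumPy and SciPy.

```python
import numpy as np
from scipy import ndimage

def peri_nodular_mask(nodule_mask: np.ndarray,
                      margin_mm: float,
                      spacing_mm: tuple[float, float, float]) -> np.ndarray:
    """Expand a binary nodule mask by margin_mm in every direction.

    nodule_mask : 3D boolean array (z, y, x) marking nodule voxels.
    margin_mm   : peri-nodular margin, e.g. 5, 10, 15, or 20 mm.
    spacing_mm  : voxel spacing along (z, y, x) in millimetres.
    """
    # Convert the physical margin to a per-axis voxel radius.
    radius_vox = [int(round(margin_mm / s)) for s in spacing_mm]

    # Build an ellipsoidal structuring element matching that radius.
    zz, yy, xx = np.ogrid[
        -radius_vox[0]:radius_vox[0] + 1,
        -radius_vox[1]:radius_vox[1] + 1,
        -radius_vox[2]:radius_vox[2] + 1,
    ]
    struct = ((zz / max(radius_vox[0], 1)) ** 2 +
              (yy / max(radius_vox[1], 1)) ** 2 +
              (xx / max(radius_vox[2], 1)) ** 2) <= 1.0

    # Dilate the nodule mask; the result covers nodule plus margin,
    # and can be used to crop the CT patch fed to the classifier.
    return ndimage.binary_dilation(nodule_mask, structure=struct)
```

With a margin of 0 mm the function simply returns the nodule itself, which corresponds to the "no peri-nodular region" configuration described in the study.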

The researchers found that a deep learning model that incorporated features of the nodule and a 15 mm peri-nodular region demonstrated a higher area under the curve (AUC) (73 percent) and accuracy rate (72.4 percent) in external validation testing than other models that incorporated 5 mm, 10 mm, 20 mm, or no peri-nodular region features.
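The AUC and accuracy figures reported for external validation are standard binary classification metrics. As a hedged illustration (not the authors' evaluation code), they can be computed from ground-truth labels and predicted malignancy probabilities with scikit-learn; the labels and probabilities below are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# Hypothetical external-validation outputs: ground-truth labels
# (1 = malignant, 0 = benign) and predicted malignancy probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.81, 0.35, 0.63, 0.72, 0.44, 0.28, 0.55, 0.61])

auc = roc_auc_score(y_true, y_prob)          # area under the ROC curve
acc = accuracy_score(y_true, y_prob >= 0.5)  # accuracy at a 0.5 threshold

print(f"AUC: {auc:.3f}, accuracy: {acc:.3f}")
```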

The axial CT images above reveal an 8 mm lung adenocarcinoma (left) in a 59-year-old man and a 7 mm hamartoma (right) in a 67-year-old woman. Deep learning assessment suggested a malignancy probability of 63 percent for both cases. (Images courtesy of Academic Radiology.)

“(The deep learning) models’ ability to distinguish between benign and malignant SPNs ≤ 8 mm gradually improved as the peri-nodular region increased from 0 to 15 mm. However, its discriminatory capacity decreased when the peri-nodular region was 20 mm in the external validation cohort,” wrote lead study author Yuan Li, M.D., who is affiliated with the Department of Thoracic Surgery at the First Affiliated Hospital of Chongqing Medical University in Chongqing, China, and colleagues.

“These results suggest that using MDANet to derive features from peri-nodular regions can improve model efficiency and generalizability. The peri-nodular region may contain information helpful for distinguishing SPNs. … SPN microenvironmental changes are virtually unidentifiable on CT images; however, DL may detect them.”

The study authors also evaluated deep learning models for differentiating between inflammatory nodules and benign tumors < 8 mm. In comparison to models that incorporated 5 mm, 15 mm, or 20 mm peri-nodular region features, or no peri-nodular region features, the researchers found that the model that emphasized the nodule and a 10 mm peri-nodular region had the highest AUC (87.1 percent) and accuracy rate (93.8 percent) in external validation testing.

“These results indicate that MDANet could accurately classify benign tumors and inflammatory nodules. We discovered that the accuracy of DL models in distinguishing between benign tumors and inflammatory nodules was affected by the size of the peri-nodular region,” added Li and colleagues.

Three Key Takeaways

1. Enhanced discrimination using peri-nodular features. Incorporating a 15 mm peri-nodular region in deep learning models significantly improves their ability to differentiate between benign and malignant solid pulmonary nodules (SPNs) < 8 mm on non-contrast CT scans, achieving an AUC of 73 percent and accuracy of 72.4 percent.

2. Optimal peri-nodular region size. The deep learning models' performance improves with the peri-nodular region size up to 15 mm, beyond which the discriminatory capacity decreases. For distinguishing between inflammatory nodules and benign tumors < 8 mm, a 10 mm peri-nodular region provided the highest AUC (87.1 percent) and accuracy (93.8 percent).

3. Superiority of the MDANet model. The MDANet model, which incorporates both nodule and peri-nodular region features, outperforms other convolutional neural networks like VGG19, ResNet50, ResNeXt50, and DenseNet121 in classifying malignant and benign SPNs < 8 mm, demonstrating the highest AUC (73 percent) and accuracy rate (72.4 percent) in external validation testing.

Another component of the study involved comparison of the MDANet model to other convolutional neural networks (VGG19, ResNet50, ResNeXt50, DenseNet121) for differentiating between malignant and benign SPNs < 8 mm.

Noting that a key benefit of the MDANet model was its incorporation of both nodule and peri-nodular region features, as opposed to the other networks' reliance on nodule features alone, the study authors said external validation testing showed the MDANet model had the highest AUC (73 percent) and accuracy rate (72.4 percent).
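As a rough illustration of how such baseline networks might be set up for a two-class nodule classification task, the sketch below instantiates the named backbones with torchvision and replaces each classification head. This is an assumed configuration for comparison purposes, not the authors' training code.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_baseline(name: str, num_classes: int = 2) -> nn.Module:
    """Instantiate one of the benchmark CNNs with a two-class head."""
    if name == "vgg19":
        net = models.vgg19(weights=None)
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, num_classes)
    elif name == "resnet50":
        net = models.resnet50(weights=None)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif name == "resnext50":
        net = models.resnext50_32x4d(weights=None)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif name == "densenet121":
        net = models.densenet121(weights=None)
        net.classifier = nn.Linear(net.classifier.in_features, num_classes)
    else:
        raise ValueError(f"Unknown baseline: {name}")
    return net

# Example: benign-vs-malignant logits for a single (hypothetical) nodule patch.
model = build_baseline("resnet50")
logits = model(torch.randn(1, 3, 224, 224))
```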

(Editor’s note: For related content, see “AI Adjudication Bolsters Chest CT Assessment of Lung Adenocarcinoma,” “Can Deep Learning Bolster Image Quality with Low-Dose Lung CT?” and “Study Shows Benefits of AI in Detecting Lung Cancer Risk in Non-Smokers.”)

In regard to study limitations, the authors noted potential bias with respect to patient selection in the retrospective research. They also acknowledged the time-consuming manual delineation of regions of interest (ROIs) and a lack of assessment of the deep learning models’ capability to predict changes to SPNs over time.
