
Can CT-Based AI Provide Automated Detection of Colorectal Cancer?


In the assessment of contrast-enhanced abdominopelvic CT exams, an artificial intelligence model demonstrated sensitivity equivalent to or better than that of radiologist readers, and greater than 90 percent specificity for the diagnosis of colorectal cancer.

An emerging artificial intelligence (AI) platform may provide a viable option for automated detection of colorectal cancer (CRC) on contrast-enhanced abdominopelvic computed tomography (CT) exams.

For the retrospective study, recently published in the American Journal of Roentgenology, researchers reviewed data from contrast-enhanced abdominopelvic CT exams for 3,945 patients (mean age of 62) to assess an AI model’s ability to detect CRC. The cohort included a 2,662-patient training set, an 841-patient internal test set and a 442-patient external validation set, according to the study.

In external validation testing, the AI software had an 80.8 percent area under the receiver operating characteristic curve (AUC) and a 90.9 percent specificity rate. The researchers noted the AI software offered an 80.8 percent sensitivity rate, in comparison to sensitivity rates of 73.1 percent and 80.8 percent for two reviewing radiologists, and detected five CRC cases that were missed by one of the reviewing radiologists.
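As a point of reference for the per-exam sensitivity and specificity figures cited above, the minimal sketch below shows how such metrics are commonly computed against a reference standard (e.g., colonoscopy-confirmed CRC). The function name and the toy counts are illustrative assumptions, not the study's data or code.

```python
# Minimal sketch (hypothetical, not from the study): deriving per-exam
# sensitivity and specificity from binary AI predictions and a
# colonoscopy-based reference standard.

def sensitivity_specificity(predictions, reference):
    """predictions/reference: lists of booleans (True = CRC present)."""
    tp = sum(p and r for p, r in zip(predictions, reference))
    tn = sum((not p) and (not r) for p, r in zip(predictions, reference))
    fp = sum(p and (not r) for p, r in zip(predictions, reference))
    fn = sum((not p) and r for p, r in zip(predictions, reference))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

if __name__ == "__main__":
    # Hypothetical toy cohort: 4 CRC-positive and 6 CRC-negative exams.
    reference   = [True, True, True, True, False, False, False, False, False, False]
    ai_positive = [True, True, True, False, False, True, False, False, False, False]
    sens, spec = sensitivity_specificity(ai_positive, reference)
    print(f"Sensitivity: {sens:.1%}, Specificity: {spec:.1%}")
```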


Here one can see the overlap of an AI-predicted bounding box (yellow) and the reference-standard bounding box (red) for a lesion on a contrast-enhanced abdominopelvic CT for a 36-year-old patient with colorectal cancer. (Image courtesy of the American Journal of Roentgenology.)

Whether it is due to growing worklist volumes, a lack of appropriate bowel preparation, or abdominopelvic CT exams performed for reasons other than CRC screening, the study authors noted that CRC can be overlooked, and AI may provide reinforcement for CRC detection.

“Despite the presence of extensive prior work exploring CRCs missed by radiologists on routine examinations, this issue continues to present a challenge in radiology practice and remains a basis of ongoing quality assurance efforts. The present findings suggest a potential role of AI in this setting by providing an automated evaluation that may help reduce the frequency of CRCs missed by radiologists,” wrote lead study author Seung-seob Kim, M.D., who is affiliated with the Department of Radiology and the Research Institute of Radiological Science at the Severance Hospital and the Yonsei University College of Medicine in Seoul, South Korea, and colleagues.

While acknowledging that the AI software missed cases of CRC involving lesions 2 cm or less, the study authors found that the AI model detected 93.8 percent of annular CRCs with circumferential tumor extent exceeding 50 percent of the bowel lumen, and 62.5 percent of annular CRCs in which the circumferential tumor extent did not exceed 50 percent of the bowel lumen.

Three Key Takeaways

1. AI performance in CRC detection. The AI model demonstrated an 80.8 percent sensitivity and 90.9 percent specificity in external validation testing, performing comparably to radiologists. Notably, it identified five cases of CRC that were missed by one of the radiologists.

2. Strengths and weaknesses of AI. The AI software showed high detection rates for larger, circumferential tumors (93.8 percent for those exceeding 50 percent of the bowel lumen) but had reduced sensitivity for smaller lesions (≤2 cm).

3. Clinical implications. AI can serve as a valuable reinforcement tool in CRC detection on contrast-enhanced CT, potentially reducing missed diagnoses, particularly in non-screening exams in which CRC may be overlooked.

“Such observations are consistent with known greater sensitivity of routine CT for large colonic masses and for colonic masses with longer circumferential extent,” added Kim and colleagues.

External validation testing also revealed no false-positive lesion assessments with AI in 91 percent of the cohort, according to the study authors.

“This result is better than that of previous studies of CAD or AI systems for CRC detection on CT colonography, which reported at least two false-positive lesions per patient,” noted Kim and colleagues.

(Editor’s note: For related content, see “Consensus Recommendations on MRI, CT and PET/CT for Ovarian and Colorectal Cancer Peritoneal Metastases,” “Survey Results Reveal Doubling of CT Colonography Use During COVID-19 Pandemic” and “Systematic Review: PET/MRI May be More Advantageous than PET/CT in Cancer Imaging.”)

In regard to study limitations, the authors acknowledged potential patient selection bias, given that the cohort was restricted to patients who had CT and colonoscopy within a two-month period. The researchers said other limitations included a lack of assessment of the impact of false-positive AI results, the use of only axial CT images in the study, and the directing of radiologist reviewers in external validation testing to look specifically for CRC.
