Deep Learning Model CGS-Net Enhances Medical Image Analysis
The Context Guided Segmentation Network (CGS-Net), a deep learning model developed by NCI-supported researchers, improves medical image segmentation by incorporating contextual information. By processing two zoom levels of the same tissue at once, CGS-Net mirrors how pathologists switch between broad and detailed views under a microscope. One part of the model analyzes the zoomed-out view to capture the surrounding context, while another focuses on the zoomed-in view to examine specific areas in detail.
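
For readers who think in code, here is a minimal sketch of that dual-branch idea in PyTorch. The class name DualMagnificationSegNet, the layer sizes, and the fusion step are illustrative assumptions for this article, not the published CGS-Net architecture.

    # Minimal sketch of a dual-magnification segmentation model: one encoder
    # for the zoomed-in (detail) patch, one for the zoomed-out (context) patch,
    # with their features fused before the per-pixel prediction.
    # All layer sizes and the fusion strategy are illustrative assumptions.
    import torch
    import torch.nn as nn


    def conv_block(in_ch, out_ch):
        # Two 3x3 convolutions with ReLU, a common encoder building block.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )


    class DualMagnificationSegNet(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.detail_encoder = conv_block(3, 32)   # high-magnification branch
            self.context_encoder = conv_block(3, 32)  # low-magnification branch
            self.fuse = conv_block(64, 64)            # combine the two views
            self.head = nn.Conv2d(64, num_classes, kernel_size=1)

        def forward(self, detail_patch, context_patch):
            detail_feat = self.detail_encoder(detail_patch)
            context_feat = self.context_encoder(context_patch)
            # Resize context features to the detail patch's resolution before fusing.
            context_feat = nn.functional.interpolate(
                context_feat, size=detail_feat.shape[-2:], mode="bilinear",
                align_corners=False,
            )
            fused = self.fuse(torch.cat([detail_feat, context_feat], dim=1))
            return self.head(fused)  # per-pixel class scores for the detail patch


    if __name__ == "__main__":
        model = DualMagnificationSegNet()
        detail = torch.randn(1, 3, 256, 256)   # zoomed-in patch
        context = torch.randn(1, 3, 256, 256)  # zoomed-out patch covering more tissue
        print(model(detail, context).shape)    # torch.Size([1, 2, 256, 256])

The key design point mirrored here is that both views are encoded separately and then concatenated, so the prediction for each pixel can draw on both fine detail and surrounding tissue context.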
The researchers tested the model on a data set of lymph node tissue and cancer samples. They fed the model a variety of images at both zoom levels and evaluated how accurately it detected cancer. Combining both zoom levels helped the model detect cancer more accurately than using the high-detail view alone.
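
As a rough illustration of that comparison, the sketch below scores any patch-level classifier on a labeled set of paired zoom-level patches. The names eval_loader, dual_model, and detail_model are hypothetical stand-ins, not artifacts from the study.

    # Hypothetical evaluation sketch: compare a dual-view model with a
    # detail-only baseline on the same labeled patches.
    import torch


    @torch.no_grad()
    def patch_accuracy(predict, loader):
        # `predict` maps a batch of (detail, context) patches to per-patch class logits.
        correct, total = 0, 0
        for detail, context, label in loader:
            pred = predict(detail, context).argmax(dim=1)
            correct += (pred == label).sum().item()
            total += label.numel()
        return correct / total


    # Hypothetical usage: the dual-view model sees both zoom levels, while the
    # baseline sees only the high-detail patch; per-pixel scores are averaged
    # into one prediction per patch.
    # acc_dual = patch_accuracy(lambda d, c: dual_model(d, c).mean(dim=(-2, -1)), eval_loader)
    # acc_detail = patch_accuracy(lambda d, c: detail_model(d).mean(dim=(-2, -1)), eval_loader)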
Corresponding author Yifeng Zhu said, “Integrating AI-powered diagnostic tools in pathology has the potential to bridge gaps in cancer diagnosis, providing faster, more accurate assessments, especially in regions where delays in diagnostics impact patient outcomes caused by treatable risk factors.”