Cancer Data Science Pulse
Applying Artificial Intelligence (AI) to Whole-Body Images to Reveal Rare Cancers
What if you could apply AI to a whole-body image and spot cancer you missed before? Research shows that a boost from AI could give you even greater insight into medical images, perhaps enabling you to detect even rare and unsuspected cancers. Dr. Baris Turkbey, Dr. Fahmida Haque, and Dr. Stephanie Harmon of NCI’s Molecular Imaging Branch in the Center for Cancer Research describe how AI-assisted whole-body imaging may someday help not only in detecting cancer, but also in planning, tracking, and managing more precise treatments.
Why Use a Whole-Body Image?
In general, whole-body imaging is reserved for people who are at particular risk for cancer. They may have a genetic predisposition to breast cancer or carry another cancer-related gene that warrants broad visualization. In such cases, an oncologist might prescribe whole-body imaging to better track disease or monitor treatment efficacy.
Oncologists also may use whole-body imaging when managing cancer in children and young adults. One whole-body scan can take the place of multiple, highly targeted scans, limiting exposure to radiation while still allowing oncologists to screen for cancer and track its progression.
The advantage of using a whole-body scan is that it gives you a baseline. This lets you see changes in the cancer and whether it has spread to other body systems or organs. Whole-body imaging is especially useful for tracking cancers like lymphoma, melanoma, and multiple myeloma, where it's essential to see how (and how quickly) the disease is progressing.
This broad imaging perspective also may help detect unexpected, incidental changes, which could be early signs of cancer or other rare diseases.
How Can AI Help Find Hard-to-Identify Cancers in Whole-Body Images?
Researchers are applying AI to biomedical images to better understand the extent of cancer, track its progression, and help in decision making. We can use AI to determine a tumor's boundaries (i.e., segment it) and track cancer's progression. Or, in the case of whole-body imaging, we can segment whole organs to delineate their shape and, ultimately, their function. Using these tools, we can see tumors, organs, and lesions in new ways, often in three or four dimensions.
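To make the segment-and-track idea concrete, here is a minimal, generic sketch (not any specific NCI tool): a segmentation model's output is typically a binary mask aligned with the scan, and comparing masks from two timepoints gives a volume-change estimate for tracking progression. The 2 mm voxel spacing and the toy masks are assumptions for illustration.

```python
import numpy as np

VOXEL_MM3 = 2.0 ** 3  # assumed isotropic 2 mm voxel spacing (hypothetical)

def lesion_volume_ml(mask: np.ndarray) -> float:
    """Volume of a binary segmentation mask, in milliliters."""
    return mask.sum() * VOXEL_MM3 / 1000.0

# Toy 3D masks standing in for an AI-segmented lesion at two timepoints.
baseline = np.zeros((50, 50, 50), dtype=bool)
baseline[20:30, 20:30, 20:30] = True   # 10x10x10-voxel lesion
followup = np.zeros_like(baseline)
followup[18:32, 18:32, 18:32] = True   # lesion has grown

v0, v1 = lesion_volume_ml(baseline), lesion_volume_ml(followup)
growth_pct = 100.0 * (v1 - v0) / v0
print(f"baseline {v0:.1f} mL, follow-up {v1:.1f} mL, change {growth_pct:+.0f}%")
```

In practice the masks would come from a model applied to registered scans, but the downstream arithmetic is this simple: count voxels, scale by voxel size, compare over time.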
This broad visualization is particularly useful for detecting cancers in unexpected locations or for use with organs or body systems that lack well-curated data sets.
The ability to "see" beyond what a human might detect sets these AI tools apart and hinges on a machine learning concept called "zero-shot" learning: the AI can generalize when confronted with unfamiliar data to infer what might be present on a scan, such as a tumor or organ.
The following AI tools can help you segment and track tumors:
- LesionLocator: This application works with different types of imaging, such as positron emission tomography, MRI, or computed tomography. It also offers zero-shot training and four-dimensional results.
- FDG-PET/CT_AI: Another multimodal AI for locating tumors from whole-body PET or CT scans using a radioactive tracer called fluorodeoxyglucose (FDG). That tracer helps show changes in metabolic activity that could be indicative of disease.
- PPGL_AI: This multimodal AI aids in detecting a rare cancer type (i.e., pheochromocytoma and paraganglioma, or PPGL) by tracking somatostatin receptor activity, which is common in PPGL tumors.
AI, when applied to whole-body images, gives an even broader perspective on cancer. These tools are useful for segmenting organs from whole-body imaging:
- VISTA3D: This model works with CT images, giving a three-dimensional look at the whole body, as well as targeted organs.
- MedSAM: A multimodal AI that works with CT, MRI, and endoscopy images, and offers three-dimensional results.
- MedSAM2: Another multimodal AI for use with CT, PET, MRI, ultrasound, and endoscopy images, as well as videos, with three-dimensional results.
- TotalSegmentator: This AI works with CT and MRI to return results on more than 100 anatomical structures. It requires training for new structures.
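Organ segmentators like those above commonly return a single labeled volume in which each anatomical structure carries its own integer label (TotalSegmentator, for instance, covers more than 100 structures). A hedged sketch of what consuming such output looks like; the label-to-name table, voxel spacing, and toy volume are all hypothetical stand-ins:

```python
import numpy as np

# Hypothetical label table: each integer marks one segmented structure.
LABELS = {1: "liver", 2: "spleen", 3: "left kidney"}
VOXEL_MM3 = 1.5 ** 3  # assumed 1.5 mm isotropic spacing

# Toy labeled volume standing in for a segmentation model's output.
seg = np.zeros((40, 40, 40), dtype=np.uint8)
seg[5:25, 5:25, 5:25] = 1     # "liver"
seg[30:38, 30:38, 30:38] = 2  # "spleen"

# Tally per-organ volumes by counting voxels per label.
volumes_ml = {
    name: np.count_nonzero(seg == label) * VOXEL_MM3 / 1000.0
    for label, name in LABELS.items()
}
print(volumes_ml)
```

Per-organ volumes like these become the baseline measurements that later scans are compared against.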
Why Do Biomarkers Matter?
Whole-body imaging works best when combined with known biomarkers. In this way, we can refine our imaging approaches by giving them a firm biological context (e.g., proteins, genes, metabolites, treatment response).
This ensures we reserve whole-body scans for those patients who will benefit the most. This not only reduces unnecessary imaging but also helps us improve accuracy, giving us fewer false positive results. We’re able to improve our chances of accurately predicting cancer and its outcomes.
For example, by combining multiparametric MRI (a type of scan that’s particularly useful for prostate cancer) with data from our clinical model, we were able to create an integrated approach that revolutionizes the way we interpret biopsies and manage prostate cancer. This type of model offers more precise identification and could reduce the need for additional and often unnecessary biopsies. (See our January 2025 news article for more information on this.)
Why Are Radioactive Tracers Critical?
Another way we can use AI to increase the power of imaging is by combining this technology with radiopharmaceuticals. These radioactive drugs help highlight metabolic activity in tissues of interest and work with both PET- and CT-based images.
For example, in a recent study, our team used the radioactive tracer 68Ga-DOTATATE-PET/CT to image PPGL, a rare and hard-to-identify neuroendocrine cancer. Using this imaging-tracer combination, automated with AI, we were able to accurately assess tumor burden. This approach could enable oncologists to begin treatment much earlier and monitor response to therapy more efficiently.
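As a generic illustration of how automated segmentation feeds a burden assessment (not the study's exact metric), you can mask the PET uptake map with the lesion segmentation and report whole-body tumor volume and mean uptake; their product is analogous to total-lesion-uptake measures used in nuclear medicine. The voxel size, SUV values, and toy lesion are assumptions:

```python
import numpy as np

VOXEL_ML = 0.064  # assumed 4 mm isotropic PET voxels (64 mm^3 each)

# Toy SUV map and lesion mask standing in for a DOTATATE PET scan
# and an AI-produced segmentation.
suv = np.full((30, 30, 30), 0.5)       # background tracer uptake
mask = np.zeros(suv.shape, dtype=bool)
mask[10:15, 10:15, 10:15] = True       # one 5x5x5-voxel lesion
suv[mask] = 6.0                        # elevated receptor-driven uptake

tumor_volume_ml = mask.sum() * VOXEL_ML
suv_mean = suv[mask].mean()
burden = tumor_volume_ml * suv_mean    # volume-weighted uptake
print(f"volume {tumor_volume_ml:.1f} mL, SUVmean {suv_mean:.1f}, burden {burden:.1f}")
```

Automating the mask is the hard part the AI solves; once it exists, whole-body burden reduces to this kind of bookkeeping, repeated per scan to monitor response.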
Although we used this approach for PPGL, it could be applied to other cancer types and with other tracers, such as prostate-specific membrane antigen (PSMA), fibroblast activation protein inhibitor (FAPI), fluoroestradiol (FES), fluorothymidine (FLT), and many others.
Linking these radioactive tracers to specific biomarkers has opened up an exciting new area of treatment called theranostics. With AI, we can use theranostics to precisely target therapy. For example, we can map the tumor against receptor activity to find targetable sites for medications. With such targeted treatment, we can minimize the damage to healthy tissues. And we can follow therapy over time to assess whether it's working and, when needed, make the necessary adjustments.
Are There Drawbacks to AI-Based Whole-Body Imaging?
Perhaps the biggest obstacle is cost, which usually isn't (completely) covered by insurance. Whole-body scanning, like any type of scanning, also exposes the body to radiation. We also run the risk of false positive results or incidental findings that could lead to more invasive, and sometimes unnecessary, diagnostic studies.
Whole-body imaging is best when applied judiciously to reduce unnecessary radiation exposure. And there are tools that can help with this. For example, using a Pix-2-Pix GAN model, we can generate images by inference, giving us a good idea of how a cancer will progress. This reduces the need for multiple scans within a short timeframe. We also can reduce exposure with machine learning that relies on ultra-low-dose PET, minimizing the amount of radiation needed to perform the scan.
Sources
Haque, F., Carrasquillo, J.A., Turkbey, E.B., et al. An Automated Pheochromocytoma and Paraganglioma Lesion Segmentation AI-Model at Whole-Body 68Ga-DOTATATE PET/CT. EJNMMI Research, 2024.
Haque F., Chen A., Lay N., et al. Development and Validation of Pan-cancer Lesion Segmentation AI-model for Whole-body 18F-FDG PET/CT in Diverse Clinical Cohorts. Computers in Biology and Medicine, 2025.
Ma, K.C., Mena, E., Lindenberg, L., et al. Deep Learning-Based Whole-Body PSMA PET/CT Attenuation Correction Utilizing Pix-2-Pix GAN. Oncotarget, 2024.
Hasani, N., Farhadi, F., Morris, M.A., et al. Artificial Intelligence in Medical Imaging and its Impact on the Rare Disease Community: Threats, Challenges, and Opportunities. PET Clinics, 2022.
Wang, Y-R., Baratto, L., Hawk, E.K., et al. Artificial Intelligence Enables Whole-Body Positron Emission Tomography Scans with Minimal Radiation Exposure. European Journal of Nuclear Medicine and Molecular Imaging, 2021.
Leung, K., Rowe, S., Sadaghiani, M., Leal, J., Mena, E., Choyke, P., Du, Y., Pomper, M. Fully Automated Whole-Body Tumor Segmentation on PET/CT Using Deep Transfer Learning. Journal of Nuclear Medicine, 2024. Abstract 241979.
Rokuss, M., Kirchhoff, Y., Akbal, S., et al. LesionLocator: Zero-Shot Universal Tumor Segmentation and Tracking in 3D Whole-Body Imaging. 2025. Preprint; code available.
He, Y., Guo, P., Tang, Y., et al. VISTA3D: Versatile Imaging Segmentation and Annotation model for 3D Computed Tomography. 2024. Preprint only.
Ma, J., He, Y., Li, F., et al. Segment Anything in Medical Images. Nature Communications, 2024.
Wasserthal, J., Breit, H-C., Meyer, M.T., et al. TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiology, 2023.