Providing NCI with Nearly 40 Years of Biostatistics: A Conversation with Dr. Eric “Rocky” Feuer
Dr. Eric “Rocky” Feuer retired at the end of 2023 after dedicating 37 years of government service to NCI. However, his commitment to improving cancer research through biostatistics remains unwavering. Dr. Feuer continues his work with NCI as a consultant, passing on his passion for, and dedication to, population science and cancer control to the next generation of researchers.
Are you familiar with the statistic that one in eight women will be diagnosed with breast cancer in their lifetime? If so, then you’re familiar with Dr. Feuer’s work. Throughout his career, he has developed numerous statistical methods and tools for analyzing, presenting, and interpreting population-based cancer statistics, work that has influenced the lives of millions. We spoke to Dr. Feuer to shed light on the remarkable progress he has made in this field.
What inspired you to pursue a career in biostatistics and cancer research?
During my undergraduate studies, I first became interested in statistical methods I learned about in psychology classes, and ultimately majored in math with an emphasis in statistics. I chose biostatistics for graduate school and earned my Ph.D. from the School of Public Health at the University of North Carolina (UNC) at Chapel Hill. At the same time, Michael Jordan was playing basketball for UNC and went on to win a national championship for the Tar Heels. I knew almost nothing about college basketball when I arrived at UNC, but I quickly became a fan, attended many games, and even learned the major of each player. None of them were majoring in biostatistics, though, so my dream of tutoring a player went unfulfilled.
After graduation, I worked as the chief statistician for the Cancer Center of Mt. Sinai Hospital School of Medicine, where I designed and analyzed clinical trials and laboratory experiments. However, after gaining work experience from two government internships, I felt that public service was a better fit for me because of the national and international scope of the work, as well as the idealistic goal of advancing the broad interests of the American public. There was one issue with my search for a first postgraduate position: it occurred during the Reagan Administration, amid a reduction in force, when the government was letting go of more workers than it hired.
I eventually landed a job at NCI in 1987, focusing on population-based cancer registries, especially how advances in prevention, screening, and treatment can impact population cancer statistics. In 1999, shortly after NCI formed its Division of Cancer Control and Population Sciences, I became the chief of the Statistical Research and Applications Branch in the Surveillance Research Program. I held this position until my retirement at the end of 2023.
What do you believe are the biggest challenges facing cancer research today, and how can they be addressed?
I believe one of the biggest challenges today is how to address the formidable and growing gap between the rapid pace of innovation and our ability to efficiently harness it to improve overall population health.
To address this challenge, we need to ask ourselves how the healthcare system can optimally implement innovations in the treatment, early detection, and prevention of cancer. We must evaluate the risks, benefits, and cost-effectiveness of various implementation strategies. For example, artificial intelligence (AI) is making inroads into all phases of medicine. To adopt new technologies based on evidence, researchers synthesize diverse data using modeling while establishing standards for safety and effectiveness. An interesting question for many AI technologies is whether, when, and how clinicians can integrate them into clinical care.
What are some of the most important lessons you hope to convey to cancer researchers/data scientists who are interested in biostatistics?
Monitoring population trends is not a sufficient goal for surveillance. To inform future cancer research, we need to learn from past experiences. This requires feedback loops to help us understand the impact of cancer control interventions so we can optimize the implementation of future research efforts.
In biostatistics, it’s important to focus on the development and application of statistical methods, understand the strengths and limitations of available data sources, and learn as much as possible about the subject matter area you are studying. This is necessary to ensure that the proposed tools and analyses make assumptions that align correctly with the underlying data and background knowledge of the subject matter area.
In population science, it is very rare to have all the data you need to provide an answer to a question. However, it is important to first carefully think through how you would approach the problem if you had perfect data and then, in a systematic way, peel back the layers to the data you have. This allows you to determine how to modify your approach in synthesizing the evidence and assess any limitations of inferences you can make from your analyses.
What are some of the models/tools you have developed in your career at NCI?
Much of my statistical work has been to develop methods for the analysis, presentation, and interpretation of population-based cancer statistics. Those who use and interpret these types of statistics (and the problems they encounter) have been the real motivation behind the methods. Although I have developed many different models/tools, there are three I’d like to specifically highlight, each with its own unique story for why I developed it in the first place.
DevCan—Probability of Being Diagnosed or Dying of Cancer
This software tool can compute the risk of being diagnosed with breast cancer (and many other cancers) between any two ages, including over an entire lifetime.
Early in my career at NCI, I realized that many cancer statistics are difficult for most people to understand and interpret. Cancer incidence rates, for example, are rather abstract quantities. So, I revised some older methodologies and started releasing these risk-of-diagnosis numbers annually.
In 1993, my team and I highlighted that 1-in-8 women will have a breast cancer diagnosis in their lifetime. A few months after that release, I was half asleep on my couch watching the 11 p.m. news broadcast. A view of the U.S. Capitol filled the screen, and women—who were holding signs and advocating for more funding for breast cancer research—were yelling, “We won’t wait, the rate is now 1-in-8!” I almost fell off the couch.
This metric was much easier to understand than standard cancer incidence rates. It surprised people how many lives breast cancer impacts, moving them to advocacy. This moment set a path for the rest of my career to look for ways to develop metrics and methods that would help support evidence-based advocacy and policy to serve the needs of the U.S. population.
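The computation behind a lifetime-risk figure like 1-in-8 can be sketched as a simple life table: step through single years of age, track the probability of remaining alive and undiagnosed, and accumulate the probability of a first diagnosis. The Python sketch below is a minimal illustration of that idea, with invented flat rates in the toy example; DevCan’s actual methodology uses competing-risk life tables built from registry incidence and mortality data.

```python
def risk_between_ages(incidence, mortality, start_age=0, end_age=95):
    """Probability of a first cancer diagnosis between two ages.

    `incidence` and `mortality` are age-specific rates per 100,000
    person-years, one value per single year of age. A simplified
    life-table sketch of the idea behind DevCan, not its implementation.
    """
    alive_and_cancer_free = 1.0
    prob_diagnosed = 0.0
    for age in range(start_age, end_age):
        h_dx = incidence[age] / 100_000    # hazard of diagnosis this year
        h_die = mortality[age] / 100_000   # hazard of death from other causes
        prob_diagnosed += alive_and_cancer_free * h_dx
        alive_and_cancer_free *= 1.0 - h_dx - h_die
    return prob_diagnosed

# Toy example: invented flat rates of 150 diagnoses and 900 other-cause
# deaths per 100,000 person-years at every age.
flat_inc, flat_mort = [150] * 95, [900] * 95
risk = risk_between_ages(flat_inc, flat_mort)
print(f"Lifetime risk: {risk:.3f} (about 1 in {1 / risk:.0f})")
```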
Joinpoint Trend Analysis Software
This statistical software can help you characterize cancer trends using joined linear segments on a log scale. Researchers use it worldwide to characterize population trends in cancer incidence and mortality rates and other health indicators.
In the late 1980s and into the early-to-mid 1990s, U.S. cancer mortality rates remained flat despite advances in treatment, screening, and prevention. NCI directors asked a simple but important question: “Is the trend changing?” Sometimes the simplest questions are the most difficult to answer.
I convened a group to work on this problem, and in 2000, we introduced the Joinpoint methodology and software. This trend model not only answers the question, “Is the trend changing?” but also helps researchers like you interpret which external forces (e.g., changes in risk factors, diagnostic technologies, screening, and treatment) may have caused the change in the trend. Researchers use this methodology to characterize trends in NCI reports, and it’s now the standard cancer registries use throughout the world, with over 5,000 downloads a year.
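The core idea is piecewise log-linear regression: rates are modeled on the log scale as connected straight-line segments, and the analysis asks where, and whether, the slope changes. Below is a minimal, hypothetical Python sketch that grid-searches a single joinpoint by least squares on log rates; the actual Joinpoint software also tests how many joinpoints the data support (via permutation tests) and provides inference on each segment’s annual percent change (APC).

```python
import numpy as np

def fit_one_joinpoint(years, rates):
    """Grid-search fit of a two-segment log-linear trend (one joinpoint).

    A toy sketch of the idea behind Joinpoint, not the NCI software's
    algorithm: try each candidate year as the joinpoint, fit a continuous
    piecewise-linear model to log rates, and keep the best fit.
    """
    years = np.asarray(years, dtype=float)
    y = np.log(np.asarray(rates, dtype=float))
    best = None
    for k in range(2, len(years) - 2):        # candidate joinpoint positions
        x = years - years[k]
        # columns: intercept, baseline slope, extra slope after the joinpoint
        X = np.column_stack([np.ones_like(x), x, np.maximum(x, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, years[k], beta)
    _, join_year, beta = best
    # annual percent change (APC) implied by each segment's log-linear slope
    apc_before = 100.0 * (np.exp(beta[1]) - 1.0)
    apc_after = 100.0 * (np.exp(beta[1] + beta[2]) - 1.0)
    return join_year, apc_before, apc_after

# Synthetic series: rates rising ~1%/year through 1991, then falling ~1.5%/year.
yrs = np.arange(1975, 2001)
rt = [200 * 1.01 ** (y - 1975) if y <= 1991 else
      200 * 1.01 ** 16 * 0.985 ** (y - 1991) for y in yrs]
print(fit_one_joinpoint(yrs, rt))   # joinpoint at 1991, APCs ~ +1.0 and -1.5
```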
Delay-Adjusted Cancer Incidence Rates
The third tool is a model that accounts for the underreporting of the most recently diagnosed cancer cases. Each year, cancer registrars collect and submit not only cases diagnosed in that year, but also cases that were missed or coded incorrectly in previous years. Because underreporting is highest for the most recent diagnosis year submitted and decreases over time, producing unbiased cancer incidence trends requires “delay-adjusted” (inflated) case counts.
In 2002, my colleagues and I first implemented a statistical model using data from NCI’s Surveillance, Epidemiology, and End Results cancer registry program. The model utilizes historical patterns of updated case counts to adjust the current count for anticipated future additions and corrections to the data. As registries throughout North America continued to mature, we later initiated and led an effort in coordination with NCI, the Centers for Disease Control and Prevention, and the North American Association of Central Cancer Registries to develop a unified approach for estimating and reporting delay-adjusted rates across all of North America. Delay-adjusted rates and trends are now the standard for reporting cancer incidence rates and trends in reports by NCI, our partners, and in research papers.
This accomplishment has led to a more accurate estimation of cancer rates in the most recent years of data. It’s a critical component in the identification of emerging trends.
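Mechanically, delay adjustment divides each preliminary count by an estimated completeness factor: the historically observed fraction of a diagnosis year’s eventual cases that have been reported after a given number of submission cycles. The sketch below is a minimal illustration with invented completeness factors; the actual approach estimates these factors with a statistical reporting-delay model fit to historical additions and corrections.

```python
import numpy as np

def delay_adjust(reported, completeness):
    """Inflate preliminary case counts for anticipated late reports.

    `reported[i]` is the current count for a diagnosis year that has been
    through i+1 submission cycles; `completeness[i]` is the historically
    estimated fraction of that year's eventual cases reported so far.
    A simplified sketch of delay adjustment, not the production model.
    """
    return np.asarray(reported, float) / np.asarray(completeness, float)

# Hypothetical example: if only 90% of a year's cases typically appear in
# the first submission, a preliminary count of 900 is adjusted up to 1,000.
print(delay_adjust([900, 980, 1_000], [0.90, 0.98, 1.00]))  # [1000. 1000. 1000.]
```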
You also created the Cancer Intervention and Surveillance Modeling Network (CISNET). What is CISNET, how is it unique, and how do you see it evolving?
Initiated in 2000, CISNET is a consortium of NCI-funded investigators. It uses simulation modeling to extend evidence provided by trial, epidemiologic, and surveillance data to guide public health research and priorities. CISNET fills a unique niche in the NCI portfolio by connecting innovations in cancer research to strategies that most effectively deploy these interventions to maximize their population impact while minimizing harm and burden. Notably, CISNET has supported the U.S. Preventive Services Task Force in revising screening guidelines for colorectal, breast, lung, and cervical cancer.
In the past, simulation modeling often suffered from credibility issues: independent modeling efforts often yielded highly divergent results, and differences were difficult to resolve. CISNET pioneered a systematic approach with multiple modeling groups per cancer site. Collaborative work within each cancer site addressed central questions with a common set of inputs and outputs. Reproducibility added credibility to results, while differences highlighted knowledge gaps and areas for further study.
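As a toy illustration of this comparative-modeling idea, the sketch below runs two structurally different microsimulation “models” of screening benefit on a common set of inputs and reports a common output. Everything here, including the 5% lethal-cancer risk and the sojourn-time assumptions, is invented for illustration and bears no relation to any actual CISNET model.

```python
import random

def deaths_averted(n, start_age, interval, mean_sojourn):
    """Crude microsimulation: a small fraction of simulated people develop a
    lethal cancer with a screen-detectable preclinical window; a screen that
    falls inside that window counts as a death averted (per 100,000)."""
    random.seed(42)  # common random numbers so models differ only structurally
    screen_ages = range(start_age, 80, interval)
    averted = 0
    for _ in range(n):
        if random.random() < 0.05:                   # develops lethal cancer
            onset = random.uniform(40.0, 80.0)       # preclinical onset age
            window = random.expovariate(1.0 / mean_sojourn)
            if any(onset <= age <= onset + window for age in screen_ages):
                averted += 1
    return 100_000 * averted / n

# Common inputs (screening starts at 50, every 2 years), common output,
# two different structural assumptions about mean sojourn time.
for label, sojourn in [("Model A (2-year mean sojourn)", 2.0),
                       ("Model B (4-year mean sojourn)", 4.0)]:
    print(label, deaths_averted(100_000, 50, 2, sojourn))
```

Agreement between such independent models lends credibility to a result, while divergence, as here, points to the structural assumptions (the screen-detectable sojourn time) that need further study.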
To fully achieve CISNET’s goals, a core value has been making the research and policy communities aware of existing modeling capacity and encouraging collaborations. While CISNET has historically focused on single-cancer-site research (initially lung, prostate, breast, colorectal, esophageal, and cervical cancers, recently expanded to include gastric, bladder, and uterine cancers and multiple myeloma), the consortium is excited about the potential to encourage more cross-cancer collaborations.