Addressing Racial and Ethnic Bias in Algorithms that Predict Colorectal Cancer Risk
NCI-funded researchers examined how including or omitting race and ethnicity affects algorithms that help clinicians make more informed decisions about the care of patients with colorectal cancer.
According to the authors, there has been a trend in recent years to omit race and ethnicity from clinical algorithms. The intention is to remove racial bias from medical decisionmaking by offering similar care to every patient, regardless of race and ethnicity. Yet, as corresponding authors Ms. Sara Khor and Dr. Aasthaa Bansal of the University of Washington note, “We need to better understand the implications of removing race and ethnicity from these algorithms. Decision context really matters. There may be circumstances where equal care does not meet the needs of some patient group.”
For example, what about subgroups of individuals who may need more resources because they face a greater risk of disease recurrence? Would removing race and ethnicity from algorithms negatively impact these subgroups?
This was the premise behind a retrospective study examining how race factors into a colorectal cancer recurrence risk prediction model. The researchers looked at a cohort of 4,230 patients with subgroups of Asian/Hawaiian/Pacific Islanders, Black/African Americans, Hispanics, and non-Hispanic Whites. They assessed the algorithm’s performance by evaluating the model’s calibration, discriminative ability, false-positive and false-negative rates, positive predictive value, and negative predictive value.
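To make that evaluation concrete, the sketch below shows one way such subgroup-level performance metrics might be computed in Python. This is an illustrative outline, not the study’s actual code: the column names (group, y, p) and the 0.5 classification threshold are assumptions chosen for the example.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, confusion_matrix

def subgroup_metrics(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Evaluate a recurrence-risk model separately within each
    racial/ethnic subgroup. Column names are illustrative assumptions:
    'group' (subgroup label), 'y' (observed recurrence, 0/1),
    'p' (model-predicted recurrence probability)."""
    rows = []
    for group, g in df.groupby("group"):
        y, p = g["y"].to_numpy(), g["p"].to_numpy()
        pred = (p >= threshold).astype(int)
        tn, fp, fn, tp = confusion_matrix(y, pred, labels=[0, 1]).ravel()
        rows.append({
            "group": group,
            "n": len(g),
            # Calibration-in-the-large: ratio of mean predicted to observed risk
            "calibration": p.mean() / y.mean() if y.mean() > 0 else np.nan,
            # Discriminative ability (AUC); needs both outcome classes present
            "auc": roc_auc_score(y, p) if len(np.unique(y)) > 1 else np.nan,
            "fpr": fp / (fp + tn) if (fp + tn) > 0 else np.nan,
            "fnr": fn / (fn + tp) if (fn + tp) > 0 else np.nan,
            "ppv": tp / (tp + fp) if (tp + fp) > 0 else np.nan,
            "npv": tn / (tn + fn) if (tn + fn) > 0 else np.nan,
        })
    return pd.DataFrame(rows)

# Running this once on predictions from a model that includes race and
# ethnicity, and once on a model that omits them, lets the two metric
# tables be compared subgroup by subgroup to surface fairness gaps.
```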
Excluding race and ethnicity as a predictor led to worse calibration, negative predictive value, and false-negative rates in racial and ethnic minority patients, compared with non-Hispanic Whites. Including race and ethnicity as a predictor improved algorithmic fairness in calibration, discriminative ability, positive and negative predictive values, and false-negative rates.
The researchers believe these findings have important implications for people designing risk calculators, noting, “Simply removing the race and ethnicity variable as a predictor could lead to higher bias in the model’s performance. Predictions that are less accurate in racial and ethnic minority groups can misguide care for these patients, ultimately contributing to health disparities.”
They conclude that there is “no one-size-fits-all solution to designing an equitable algorithm.” Instead, they suggest that, when designing models, developers consider the decision the model is intended to support and carefully take into account any race-specific health or economic consequences that might arise downstream from that decision.
NCI Program Director Dr. Sandra Mitchell, a senior scientist in the Healthcare Delivery Research Program, notes, “This study shows how important it is to examine the potential impacts on clinical decisionmaking of excluding race and ethnicity when developing and validating a predictive algorithm. Dr. Bansal’s team found that omitting racial and ethnic data from the prediction model produced differentially inaccurate predictions. This could result in suboptimal clinical decisions and health outcomes down the line. When testing algorithms to support clinical decisionmaking, researchers need to thoughtfully examine the implications and potential consequences of decisions to include or exclude race, ethnicity, and other social determinants of health.”