
Health Equity Reimagined

Solutions in Action: Spotlight

DUSON Faculty Member and Inaugural AI Health Equity Scholar Aims to Eliminate Algorithmic Bias in Health Care


Michael Cary, PhD, RN, and Elizabeth C. Clipp Term Chair of Nursing

In health care, artificial intelligence, also known as AI, uses data and computational science to act the way a human provider would: it helps detect diseases, suggest diagnoses and offer treatment plans, for starters. AI is in many ways a breakthrough for medicine because it can process huge amounts of data and deliver quick assessments, allowing providers to focus more efficiently on patient care. But it is also fraught with problems that can create worse health outcomes for some patient populations.

“It’s such a complicated problem. It requires an interprofessional approach that draws on the knowledge and expertise of experts across different fields. No one discipline can own it,” Cary said. “Each of us brings a unique perspective and skill set to the table, which helps to provide more comprehensive solutions.” As a member of Duke’s Algorithm-Based Clinical Decision Support (ABCDS) oversight committee, Cary works with a group spanning multiple disciplines, including engineers, statisticians, lawyers and ethicists, in addition to health professionals, to ensure that AI-based algorithms used at Duke Health are safe, equitable and compliant with federal regulations.

“The vast majority of algorithms have inherent flaws in them,” Cary said. “We’re certainly aware of some of those flaws in health care.” For example, health-related data often lacks diversity and may not include clear definitions of race and ethnicity or variables related to gender or sexual orientation. A host of other factors, such as social determinants of health, are often not collected in a structured way. These holes in the raw data used to build algorithms mean the algorithms themselves may generate biased and inaccurate results. Often, those bearing the brunt of these inaccuracies are people of color and other marginalized groups that historically have not been represented in biomedical research and data collection efforts.
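One way such gaps can surface early is through a simple data completeness audit. The sketch below is a hypothetical illustration in Python, with invented column names and records rather than any real patient data; it shows how a few lines can quantify how often race, ethnicity or social-needs fields are missing before a model is ever trained:

```python
# Hypothetical pre-modeling audit: quantify missing demographic and
# social-needs fields in a patient dataset (all values invented).
import numpy as np
import pandas as pd

records = pd.DataFrame({
    "age":          [67, 54, 71, np.nan, 48],
    "race":         ["Black", None, "White", None, None],
    "ethnicity":    [None, None, "Not Hispanic", None, None],
    "housing_need": [None, None, None, None, None],  # never collected
})

# Share of missing values per field; high rates flag variables the
# training data cannot represent, and thus a likely source of bias.
print(records.isna().mean().sort_values(ascending=False))
```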

A simple example of bias in AI involves the pulse oximeter, which uses a light sensor on a patient’s finger to measure blood oxygen levels. Studies have found that readings from this device can be less accurate for patients with darker skin tones or those wearing nail polish than for patients with lighter skin tones or no nail polish. The algorithm used in developing the pulse oximeter was biased because it relied on incomplete raw data that did not adequately include a range of patients across age, gender and skin tone.
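One way researchers surface this kind of device bias is a subgroup error audit: comparing the device’s reading against a gold-standard reference measurement within each skin-tone group. The sketch below is purely illustrative, with invented readings and hypothetical column names; the “occult hypoxemia” flag (a dangerously low reference value despite an acceptable device reading) mirrors the definition used in published pulse-oximetry studies:

```python
# Illustrative subgroup audit of pulse-oximeter accuracy (invented data).
import pandas as pd

readings = pd.DataFrame({
    "skin_tone": ["light", "light", "light", "dark", "dark", "dark"],
    "spo2":      [94.0, 91.5, 88.0, 96.0, 93.0, 95.0],  # device reading (%)
    "sao2":      [93.5, 91.0, 87.5, 92.5, 86.0, 91.0],  # arterial blood gas (%)
})

# Bias of the device relative to the reference measurement.
readings["error"] = readings["spo2"] - readings["sao2"]
# Occult hypoxemia: reference below 88% while the device reads 92% or above.
readings["occult_hypoxemia"] = (readings["sao2"] < 88) & (readings["spo2"] >= 92)

# Mean error and miss rate per skin-tone group.
print(readings.groupby("skin_tone")[["error", "occult_hypoxemia"]].mean())
```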

Another example, one that demonstrates how AI bias can have devastating effects on people of color, comes from a 2019 report on an algorithm widely used to identify patients with high health needs. The bias was introduced when health care cost, rather than health care need, was used as the predictor. Black patients historically use health care less than white patients because of implicit bias and structural barriers, including historical mistrust of the profession and lack of access to care rooted in socioeconomics and geography. The algorithm saw that Black patients were spending less on health care than their white counterparts and interpreted that to mean they needed services less, even when individuals in both groups had similar numbers of chronic conditions and other health markers. Correcting the algorithm to predict health needs rather than spending remedied the disparity: the percentage of Black patients who would have been enrolled in a high-risk care management program rose from 17.7% to 46.5%.
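The mechanism at work here, choosing spending as a proxy label for need, can be demonstrated with a toy simulation. The sketch below uses synthetic data and invented parameters, not the 2019 study’s dataset or model: both groups have identical health needs, but one group’s observed spending is suppressed by access barriers, so ranking patients by the cost label flags far fewer of them:

```python
# Toy demonstration of label-choice bias (synthetic data; not the
# 2019 study's dataset or model).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)                    # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need, identical across groups

# Observed spending tracks need, but group B spends ~30% less at equal
# need (assumed access barriers), plus measurement noise.
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0.0, 0.1, n)

def flagged_rate(score, top_frac=0.03):
    """Fraction of each group flagged when the top `top_frac` scores qualify."""
    flagged = score >= np.quantile(score, 1 - top_frac)
    return flagged[group == 1].mean(), flagged[group == 0].mean()

b, a = flagged_rate(cost)  # proxy label: spending
print(f"Cost label:  group B {b:.2%} flagged vs. group A {a:.2%}")
b, a = flagged_rate(need)  # direct label: health need
print(f"Need label:  group B {b:.2%} flagged vs. group A {a:.2%}")
```

Under the cost label, group B is flagged for the high-needs program far less often despite equal underlying need; relabeling on need itself restores parity.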

Like any cooking recipe, an algorithm outlines well-defined instructions that must be followed for optimum results. But some flaws, or biases, are “baked in” before the recipe is even attempted, as when the ingredients are not standardized or simply not available. In such cases, perfectly following a recipe, or an algorithm, does not guarantee reliable results for all.

These baked-in biases, or blind spots, are what Cary and his research team at Duke, as part of his role on the ABCDS oversight committee, aim to eliminate before they lead to poorer health outcomes for patients. “Now we are bringing awareness to these blind spots and mobilizing resources to address them. This is causing disruption in a positive way that I believe will improve patient care,” Cary said.

Cary was awarded a one-year pilot grant from Duke AI Health focused on detecting and mitigating bias in health care algorithms. His project started in August with a 10-member team of students, postdocs and professionals with backgrounds in nursing, medicine, engineering, biostatistics and health policy. They recently finished the largest, most comprehensive scoping review of algorithmic bias in health care to date, covering 109 articles published from 2011 to 2022.

“We are excited about this scholarly contribution and its impact because the evidence has informed and helped revise our bias assessment tools,” Cary said. “We are building tools to guard against discrimination when using these algorithms to deliver patient care at Duke Health.”
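Cary’s actual assessment tools are not detailed here, but a common building block for such checks is comparing a model’s error rates across demographic groups. A minimal sketch with hypothetical inputs, computing the “equal opportunity” gap (the difference in true-positive rates between groups):

```python
# Minimal fairness check: difference in true-positive rates across
# two groups (hypothetical inputs, not Duke Health's tooling).
import numpy as np

def tpr_gap(y_true, y_pred, group):
    """Return TPR(group 1) - TPR(group 0); values far from 0 signal bias."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)  # truly high-need patients in group g
        tprs.append(y_pred[positives].mean())     # share the model correctly flags
    return tprs[1] - tprs[0]

# Toy example: the model catches 3 of 4 positives in group 0
# but only 1 of 3 in group 1.
y_true = [1, 1, 1, 1, 0, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 1]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(f"Equal-opportunity gap: {tpr_gap(y_true, y_pred, group):+.2f}")  # -> -0.42
```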

In addition to his project, which will develop standards for reducing bias in the development, evaluation and monitoring of algorithms deployed at Duke and eventually other health systems, Cary said any framework for eliminating inequities must be human-centered and developed in the real world. It must include patients and their communities in the design process to avoid bias. It must offer a suite of strategies, or toolbox, of best practices for mitigating biases in clinical algorithms. And it must address recruiting and training the next generation to recognize and practice bias mitigation.

Nurses, in particular, are crucial to the successful mitigation of bias in AI. As members of the world’s most trusted profession, nurses are in the best position to build confidence with patients and their communities, Cary said, which is crucial to getting more diverse participation in research and studies. Nurses also see firsthand the effects of bias in algorithms, and it is important that they be strong advocates and reporters of these observations. Giving nurses and nursing students ways to engage more in the design, evaluation and implementation of algorithms is a cornerstone of successful bias mitigation and imperative for the AI-driven digital health care system of the future.

While algorithm design often relies on those with engineering and computer science expertise, other elements are less technical but just as important if bias is to be avoided. “It underscores the need for a very professionally diverse and adequately trained workforce,” Cary said. While social scientists, engineers and statisticians work on issues upstream, hands-on health care professionals see how biases manifest downstream, in the real world, with Black patients being denied access to needed care management services because of a faulty algorithm, for example.

“We need to revise our training and education for nurses at the entry level because they are the ones engaging with patients most,” Cary said, noting that the school of nursing will be critical in effecting this change through teaching, learning and creating research opportunities to inform practice. “It really speaks to us thinking differently about how we train health care professionals and who must be brought to the table to ensure human-centered AI.”


Michael Cary, PhD, RN, and Elizabeth C. Clipp Term Chair of Nursing at the Duke University School of Nursing (DUSON), was appointed as the inaugural AI Health Equity Scholar in January 2022.
