Can Algorithms Really Be Biased?

One should avoid passing judgment on those who create the algorithms.

I recently read a study in the journal Science that focused on racial bias found in popular algorithms used in population health. I would encourage you to read the entire article for a detailed understanding of the data, methods, analytics, findings, and conclusions, as this is not the forum for such a discussion.

In general, the study authors found that these population health algorithms, which are used to allocate healthcare services to individuals within specific populations, are biased against black people. Their thesis is that the bias shows up as health disparities conditional on the risk score the algorithm calculates. The authors also opined that much of this occurred because the algorithm uses cost as the basis for predicting health needs. For example, the study found that across every level of algorithm-predicted risk, “blacks and whites have (roughly) the same costs the following year.” The issue, then, is that at any given risk score, the data indicated that black patients were, in essence, sicker than the corresponding white patients for whom data were available.

Imagine, then, that you had two populations (population A and population B). Let’s say that population A contains patients who are significantly sicker than the patients in population B. One would expect that the costs for the patients in population A also would be significantly higher than those for population B, but in this study, such was not the case. One can argue whether the algorithm predicts outcomes based on the correct variable, but in my experience, besides avoiding catastrophic health crises (which, by the way, can be very costly), looking to lower the cost of healthcare is not an unreasonable goal. In fact, it is likely pursued by every payer, including Medicare and Medicaid; hence the growing interest in population health programs.
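To make that scenario concrete, here is a minimal sketch in Python, using entirely synthetic data and hypothetical variable names (nothing here is the study’s actual model or data): two populations with different illness burdens but identical cost distributions, and a cost-based risk score that flags both groups at the same rate.

```python
# Toy illustration (synthetic data, hypothetical setup -- not the
# study's actual model): population A is sicker than population B,
# but realized costs are the same, so a risk score built on cost
# flags both groups at the same rate.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illness burden: population A carries more chronic conditions on average.
conditions_a = rng.poisson(lam=4.0, size=n)
conditions_b = rng.poisson(lam=2.5, size=n)

# Observed costs: access barriers hold A's spending down to B's level,
# so both groups draw from the same cost distribution.
costs_a = rng.gamma(shape=2.0, scale=2_500.0, size=n)
costs_b = rng.gamma(shape=2.0, scale=2_500.0, size=n)

# A cost-based score flags the top decile of cost for extra services.
cutoff = np.percentile(np.concatenate([costs_a, costs_b]), 90)
print(f"Mean conditions, A vs. B: {conditions_a.mean():.2f} vs. {conditions_b.mean():.2f}")
print(f"Share flagged, A vs. B: {(costs_a >= cutoff).mean():.1%} vs. {(costs_b >= cutoff).mean():.1%}")
```

Despite population A being roughly 60 percent sicker in this toy setup, the cost-based score flags the two groups at essentially the same rate, which is the disparity the study describes.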

In this study, however, the problem was an inequity between the two populations (blacks and whites), wherein the costs were the same, but the severity of illness was not. The authors suggest that perhaps a more suitable approach would be to predict some measure of health: for example, the number of active chronic health conditions, or a comorbidity score of the kind mostly used in medical research. In this way, the algorithm would predict the need for healthcare services based on the severity of illness rather than on costs, since predicting costs, according to the authors, “necessarily means being racially biased on health.” Why is that? Again, according to the researchers, poor patients face more barriers to healthcare access, and in this case, they are stating that socioeconomic status and race are closely correlated. In other words, the authors opine that black populations tend to be poorer, and therefore are in the group with greater access challenges. These challenges include geography, transportation issues, competing demands from jobs and childcare, or even a lack of basic knowledge that a certain health condition may require medical care.
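Under the same synthetic setup as the sketch above, swapping the prediction target from cost to a health measure – here, a count of active chronic conditions, one of the alternatives the authors suggest – shifts the flagged group toward the sicker population. Again, this is a hypothetical illustration, not the authors’ implementation.

```python
# Same toy setup, but ranking on a health-based label (active
# chronic-condition count) instead of cost.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
conditions_a = rng.poisson(lam=4.0, size=n)   # population A: sicker
conditions_b = rng.poisson(lam=2.5, size=n)   # population B: healthier

# Flag the top decile by chronic-condition count rather than by cost.
cutoff = np.percentile(np.concatenate([conditions_a, conditions_b]), 90)
print(f"Share flagged, A vs. B: {(conditions_a >= cutoff).mean():.1%} "
      f"vs. {(conditions_b >= cutoff).mean():.1%}")
```

Now the sicker population is flagged several times more often than the healthier one, which is what a needs-based allocation should do.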

The authors also opined that some determinants depended upon the racial relationship between patients and their primary care physicians. They referenced another study that found black providers would be more aggressive about preventive care when the patient was black than a white provider would be. In essence, the authors are laying a foundation for bias among people, and then associating that bias with the human intelligence used to create the algorithms.

This type of algorithmic bias, as the authors duly point out, is not isolated to healthcare. Several years ago, I worked with a law enforcement agency to develop an algorithm that would predict when and where crimes (violent crimes, in particular) might occur. The results almost always identified neighborhoods with a lower socioeconomic status, and because socioeconomic status was correlated with race, the algorithm would predict these events in neighborhoods with a higher proportion of black and Hispanic residents. Now, there are many reasons that this type of algorithm might be biased, but the bias is not due to the algorithm per se, but rather to the programming and the data that go into it.

For example, if, historically, the rate of violent crime was significantly higher in specific neighborhoods, then it would make sense that the algorithm would predict future events in those neighborhoods. One reason suggested was that there was a more anemic police presence in those neighborhoods, and as such, less of a disincentive to commit violent crimes. And this may very well be true, but it is an indictment of society and the data, not necessarily of those who create and develop these algorithms (or of the algorithms themselves).
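A small simulation, again with made-up numbers, shows how historical data can lock in such a pattern: even when the true rates in two neighborhoods are identical, a model that predicts risk from past reports keeps concentrating attention where reports were historically higher.

```python
# Toy feedback loop (hypothetical parameters): predictions follow
# historical reports, patrols follow predictions, and new reports
# follow patrols -- so an old imbalance never washes out.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([0.05, 0.05])    # identical true crime rates
reports = np.array([120.0, 80.0])     # neighborhood 0 over-represented historically
total_patrols = 100

for year in range(1, 6):
    predicted_risk = reports / reports.sum()   # model: risk ~ past reports
    patrols = total_patrols * predicted_risk   # patrols follow predictions
    # Observed crime scales with how closely each neighborhood is watched.
    reports += rng.poisson(patrols * true_rate * 10)
    print(f"Year {year}: predicted risk, neighborhood 0 = {predicted_risk[0]:.1%}")
```

Year after year, the model keeps assigning roughly 60 percent of the risk to the historically over-reported neighborhood, even though the underlying rates are equal; the bias lives in the data-generating process, not in the modeler.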

The authors give as examples credit-scoring, hiring, and retail algorithms, all of which they claim are influenced by racial and gender biases. I can very well see how these findings should raise the discussion of how these types of biases could be checked and even eliminated, but I would caution against passing judgment on those who create the algorithms. In this case, for example, a reasonable approach to controlling costs may have resulted in a biased outcome, not because the developers are biased, nor because the data is biased, but because the system that produces the data driving these algorithms is biased.

And that’s the world according to Frank.

Frank Cohen, MPA

Frank Cohen is Senior Director of Analytics and Business Intelligence for VMG Health, LLC. He is a computational statistician with a focus on building risk-based audit models using predictive analytics and machine learning algorithms. He has participated in numerous studies and authored several books, including his latest, “Don’t Do Something, Just Stand There: A Primer for Evidence-based Practice.”
