
Bias by Design

  • Georgia Allaire
  • 8 minutes ago
  • 4 min read

Artificial intelligence (AI) is becoming an essential tool in modern medicine, promising faster diagnoses, lower costs, and greater efficiency. Yet beneath this excitement lies a deeper concern: how data and profit shape whose health is valued most. AI in healthcare is not created in isolation; it is built on publicly funded research, patient information, and private investment. The National Institutes of Health (NIH) has emphasized that data-driven medicine must be developed responsibly, warning that “data is not an objective representation of the world… healthcare AI models that don’t account for bias often perform inadequately” [1]. When innovation becomes inseparable from financial interest, the pursuit of progress risks overshadowing the purpose of care.

AI systems promise to bridge gaps in access, but they can just as easily widen them. Their success depends on the quality and diversity of the data they learn from, data that reflect decades of unequal access to care. This makes bias not only a technical issue but a structural and moral one. When public institutions generate research and datasets that private companies later monetise, the line between scientific advancement and commercial exploitation blurs.


From Public Research to Biased Data

Much of the data used to train medical AI originates from publicly funded research. Hospitals and universities supported by grants from the NIH and other agencies produce large stores of patient data that later become the raw material for commercial algorithms. The Lancet Digital Health warns that “without careful dissection of the ways in which biases can be encoded into artificial intelligence (AI) health technologies, there is a risk of perpetuating existing health inequalities at scale” [2]. The same paper concludes that “one major source of bias is the data that underpins such technologies” [2].

Public data intended to improve care can, once commercialised, reproduce the very inequities it was meant to solve. Public funds build the foundation, but private firms often capture the profit. It is a pattern in which equity is lost between the grant and the bill. This hidden economy of medicine shows how the flow of data, capital, and ownership determines who benefits from innovation.


When Costs Define Care

The risks of biased AI became visible in a landmark Science investigation. Researchers Ziad Obermeyer and colleagues found that “an algorithm widely used in US hospitals to allocate health care to patients” systematically discriminated against Black patients because it used healthcare spending as a proxy for medical need [3]. The team wrote: “The bias arises because the algorithm predicts health-care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients” [3].

By equating cost with illness severity, the algorithm transformed economic inequality into medical inequality. Behind the scenes, financial metrics, not clinical reality, determined who received help. What began as an effort to improve efficiency instead produced injustice, showing that fairness cannot be assumed simply because a system follows a mathematical procedure.
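The mechanism Obermeyer and colleagues identified can be made concrete with a small simulation. The sketch below is purely illustrative, with invented numbers, not the actual algorithm or data from the Science study: two groups have identical illness burdens, but one group's historical spending is lower because of unequal access to care. A triage rule that enrolls patients by predicted cost then systematically passes over the equally sick but lower-spending group.

```python
import random

random.seed(0)

def simulate(group, n=1000):
    """Generate hypothetical patients whose true illness is identical
    across groups, but whose recorded spending reflects unequal access."""
    patients = []
    for _ in range(n):
        illness = random.uniform(0, 10)        # true medical need (same distribution for both groups)
        access = 1.0 if group == "A" else 0.6  # assumed access gap for group B
        cost = illness * access * 100          # recorded spending: the algorithm's proxy for need
        patients.append({"group": group, "illness": illness, "cost": cost})
    return patients

patients = simulate("A") + simulate("B")

# The proxy-based "algorithm": enroll the top 20% of patients by cost.
patients.sort(key=lambda p: p["cost"], reverse=True)
enrolled = patients[: len(patients) // 5]

share_b = sum(p["group"] == "B" for p in enrolled) / len(enrolled)
print(f"Group B share of enrollment: {share_b:.0%}")
# Group B is drastically under-enrolled despite equal illness,
# because the proxy (cost) encodes the access gap, not clinical need.
```

Since both groups are equally ill, a fair rule would enroll them at roughly equal rates; ranking on cost instead reproduces the access gap at scale, which is exactly the failure mode the study describes.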


Global Bias and Invisible Inequality

The effects of algorithmic bias reach far beyond the United States. In Frontiers in Public Health, researchers describe bias as “hidden in code, imperceptible in action” and “an urgent threat to consider” [4]. They note that many AI models “draw from data sets in populations which are unrepresentative of those in the low- and middle-income countries,” resulting in systems that “do not capture the cultural, linguistic, genetic, or environmental variety in underserved populations” [4]. When AI tools trained on Western or high-income populations are deployed globally, they risk reproducing existing inequities under the banner of innovation.


The Moral Cost of Efficiency

AI is often praised for making healthcare more efficient, but efficiency can carry an ethical price. JAMA Health Forum warns that “as artificial intelligence (AI) algorithms become an increasingly integral part of health care … it is vital that rigorous processes to mitigate algorithmic bias are established” [5]. When algorithms are optimised for profit, reimbursement, or speed rather than equity, they risk perpetuating the very disparities they were built to overcome. As The Lancet Digital Health reminds us, the data underpinning such technologies remains a major source of that bias [2]. Efficiency without fairness becomes exploitation, a form of moral debt that medicine cannot afford to ignore.


Conclusion: Seeing What Is Hidden

The rise of AI in medicine reveals a recurring pattern: public funding builds the science, private entities shape the products, and patients ultimately bear the costs. The hidden economy of medicine thrives in this space between discovery and delivery. Fixing it requires transparency about how algorithms are trained, who owns medical data, and how benefits are shared. The goal is not just to make AI smarter, but fairer. If medicine is to be guided by technology, it must ensure that the future it programs serves everyone, not only those it can afford to see.


Reviewed by: Sehar Mahesh

Designed by: Selena Xiao


References:

[1] National Institutes of Health. (2025, July 18). Celi cautions developers, clinicians to beware of bias in healthcare AI models. NIH Record. https://nihrecord.nih.gov/2025/07/18/celi-cautions-developers-clinicians-beware-bias-healthcare-ai-models.


[2] Alderman, J. E., Palmer, J., Laws, E., McCradden, M. D., Ordish, J., Ghassemi, M., … Mackintosh, M. (2024). Tackling algorithmic bias and promoting transparency in health datasets: The STANDING Together consensus recommendations. The Lancet Digital Health. https://doi.org/10.1016/S2589-7500(24)00224-3.


[3] Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342.


[4] Joseph, J. (2025). Algorithmic bias in public health AI: A silent threat to equity in low-resource settings. Frontiers in Public Health, 13. https://doi.org/10.3389/fpubh.2025.1643180.


[5] Ratwani, R. M., Fong, A., & Coiera, E. (2024). Patient safety and artificial intelligence in clinical care. JAMA Health Forum, 5(5), e241523. https://doi.org/10.1001/jamahealthforum.2024.1523.


