Bioethics Principles in Machine Learning-Healthcare Application Design: Achieving Health Justice and Health Equity

By Dr. Roschelle L. Fritz, Dr. Connie Kim Yen Nguyen-Truong,
Dr. Thomas May, Dr. Katherine Wuestney, Dr. Diane J. Cook


Citation

Fritz RL, Nguyen-Truong CKY, May T, Wuestney K, Cook DJ. Bioethics principles in machine learning-healthcare application design: achieving health justice and health equity. HPHR. 2024;79. https://doi.org/10.54111/0001/AAAA1

Corresponding Author: Dr. Roschelle Fritz may be reached at the Betty Irene Moore School of Nursing at UC Davis at [email protected]


Abstract

Health technologies featuring artificial intelligence (AI) are becoming more common. Some healthcare AIs are exhibiting bias toward underrepresented persons and populations. Although many computer scientists and healthcare professionals agree that bias in healthcare AIs must be eliminated or mitigated, little information exists on how to operationalize bioethics principles like autonomy in product design and implementation. This short course is framed with a Social Determinants of Health lens and a health justice and health equity stance to support computer scientists and healthcare professionals in building and deploying ethical healthcare AI. In this short course, we introduce the bioethics principle of autonomy in the context of human-centered design (Module 1) and share options for design thinking models, suggesting four activities to embed ethics principles during design (Module 2). We then discuss the importance of gaining the perspectives of diverse groups to minimize harm and support the fundamental human values of underrepresented persons in support of health equity and health justice ideals (Module 3).

Introduction

The digitization of healthcare data allows for deep penetration of artificial intelligence (AI) in United States (U.S.) healthcare. Many machine learning healthcare applications (ML-HCAs) are already in use in the U.S. healthcare ecosystem. Given the potential for ML-HCAs to impact individuals and populations at scale, it is critical to approach their development and implementation with care and intention. The European Union, U.S., Japan,1-3 and other countries4 provide guidance for the development and use of AI in societies. Some ML models, including those used in healthcare, are exhibiting biased behavior,5-7 and health scientists, bioethicists, and software developers have called for more transparency regarding model performance and functionality. However, questions remain about how to embed bioethics principles like autonomy, the right to self-determination, equity, and dignity in intelligent software to support health justice and health equity aims. Little information exists detailing how to operationalize these bioethics principles within ML-HCAs. Embedding bioethics principles in ML-HCAs is crucial and foundational to addressing biased models used in the delivery of care.

Research findings show that health disparities are largely a result of social, economic, and environmental conditions rather than individual behaviors or genetics.8 The Social Determinants of Health framework and a health justice and health equity lens can help developers and healthcare professionals engaging in human-centered design envision the impact of intelligent health technologies on diverse patient populations. Emphasis should be placed on equitable access, usability, quality, and safety. Accountability will be needed to support equitable health outcomes.


Course Description

This short course aims to support persons from multiple disciplines in developing principle-based ML-HCAs. There are 3 modules:

Module 1) Translating Bioethics Principles into Action in AI-Enabled Healthcare

Module 2) Bioethics for Machine Learning Pipeline

Module 3) Principles to Practice in Bioethical ML-HCA Design

For each module, we outline the learning objectives and activities, suggested readings, question guides, and summaries, and provide links to the accessible PowerPoint videos. We encourage developers and healthcare professionals from various disciplines (nurses, community health practitioners, engineers, computer scientists) to deepen their understanding by reflecting on the questions in the Learning Assignment Guides after viewing the recordings.

Module 1: Translating Bioethics Principles into Action in AI-Enabled Healthcare

In the context of healthcare, autonomy is related to the ideas of human agency, dignity, and freedom. These concepts become important as individuals navigate choices related to assessing and treating illness or injury. Some of these assessments and treatments may include ML-HCAs. ML-HCAs are currently used to monitor cardiac patients’ heart rate and electrical rhythms and to detect the onset of sepsis in hospitalized patients so that hospital-acquired infections can be caught early and poor outcomes mitigated. But how do these algorithms affect individuals? Do they impact all persons equitably? Unfortunately, ML-HCAs can lead to biased treatment if bias is introduced during development or implementation. For example, a predictive model for heart failure aimed at treatment planning that is trained on a primarily White demographic dataset will not account for the fact that Black women are 2.5 times more likely to die from heart failure, and at a younger age, than their White counterparts9 – because this prevalence pattern did not exist in data derived from a White sample. Thus, a Black woman’s automated suggested treatment plan (timing of office visits, cardiac medication dosages, and more) could lead to unfair treatment because she needs to be seen sooner and more frequently. Situations like this can be avoided if attention is given to health equity and justice ideals, and if developers and healthcare professionals take time to envision impacts on patients’ autonomy and right to make informed decisions. Bioethics principles can support operationalizing health justice ideals by providing a convenient and valuable framework to guide the development and implementation of healthcare technology – promoting better fit in practice – or, as we say in nursing, better fit at the bedside.
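One concrete way developers can surface the kind of disparity described above is to evaluate a candidate model’s performance separately for each demographic subgroup rather than only in aggregate. The sketch below is a minimal, hypothetical illustration (not a method prescribed by this course): the fitted scikit-learn-style classifier, the held-out test set, the “race” and “sex” column names, and the decision threshold are all assumptions made for the example.

```python
# Minimal illustrative sketch: auditing a hypothetical heart-failure risk model
# for subgroup disparities before deployment. The model, column names, and the
# 0.5 decision threshold are assumptions for illustration only.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def subgroup_report(model, X_test, y_test, demographics: pd.DataFrame) -> pd.DataFrame:
    """Report sensitivity (recall) and AUROC separately for each demographic subgroup."""
    df = demographics.copy()                         # e.g., "race" and "sex" per patient
    df["y_true"] = pd.Series(y_test).to_numpy()
    df["score"] = model.predict_proba(X_test)[:, 1]  # predicted risk of poor outcome
    df["y_pred"] = (df["score"] >= 0.5).astype(int)

    rows = []
    for group, g in df.groupby(["race", "sex"]):
        rows.append({
            "group": group,
            "n": len(g),
            "sensitivity": recall_score(g["y_true"], g["y_pred"], zero_division=0),
            "auroc": roc_auc_score(g["y_true"], g["score"]) if g["y_true"].nunique() > 1 else float("nan"),
        })
    # Large sensitivity gaps between groups flag the inequity described above:
    # patients the model under-identifies will be scheduled and treated too late.
    return pd.DataFrame(rows).sort_values("sensitivity")
```

An audit like this does not by itself make a model fair, but reporting disaggregated results makes the limitation visible so that clinicians and patients can make informed decisions.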

Learning Objectives

  1. Explore the bioethics principle of AUTONOMY in the context of developing ML-HCAs.
  2. Introduce activities that embody the principle of AUTONOMY when implementing ML-HCAs in hospital and community-based settings.

Learning Assignment Guide

  1. What human rights are at risk with the implementation of ML-HCAs?
  2. What can I (developer or healthcare professional) do to ensure ethical use of ML-HCAs?
  3. What can I (developer or healthcare professional) do to support individuals’ AUTONOMY when ML-HCAs are used in the delivery of care?

Suggested Readings

  1. Amann J, Blasimme A, Vayena E, Frey D, Madai VI; Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making. 2020;20(1):310–310. doi:10.1186/s12911-020-01332-6
  2. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. Journal of the American Medical Informatics Association: JAMIA. 2020;27(3):491-497. doi:10.1093/jamia/ocz192
  3. Tiribelli ST. The AI ethics principle of autonomy in health recommender systems. Argumenta. 2023;1-18. doi:10.14275/2465-2334/20230.TIR
  4. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Executive summary. Geneva. 2021. License: CC BY-NC-SA 3.0 IGO. 

Summary

In Module 1 we introduce the bioethics principle of autonomy as important to human-centered design. Module 1 contains an interview with a bioethicist, Dr. Thomas May, regarding how to operationalize the bioethics principle of autonomy in ML-HCAs.

Module 2: Bioethics for Machine Learning Pipeline

In the context of healthcare and the use of ML-HCAs, intention toward health justice and health equity begins in the design phase and continues through implementation and beyond. Multiple design models exist that focus attention on end-users’ values and rights. No matter your role in the ML pipeline, three key considerations can assist with embedding health equity and health justice ideals.

First, assess the demographics of your design team. Is your team diverse? Are you surrounded by people who look like you, live like you, and think like you? If so, find people to join your team who will challenge your way of thinking and being.

Second, consider the training data and who is represented in the data. Ensure that the training and test samples include demographic proportions that match those of the intended end-user population. For example, if you are developing an automated remote monitoring device for U.S. persons with kidney failure, ensure that about 35% of your sample are Black Americans. Although Black Americans represent about 13% of the U.S. population, social determinants of health cause a disproportionate number to be affected by kidney failure;10 Black Americans account for over 35% of all cases.11
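To act on this consideration during development, a team can compare a dataset’s demographic mix against the demographics of the intended end-user population rather than the general population. The following is a minimal sketch under stated assumptions: the column name, category labels, and all shares other than the 35% figure cited above are illustrative placeholders, not authoritative figures.

```python
# Minimal sketch: compare a cohort's demographic proportions with the intended
# end-user population. The column name and category labels are assumptions.
import pandas as pd

def representation_gap(sample: pd.DataFrame, target_shares: dict,
                       col: str = "race_ethnicity") -> pd.DataFrame:
    """Compare a dataset's demographic mix against the intended end-user population."""
    observed = sample[col].value_counts(normalize=True)
    report = pd.DataFrame({
        "target_share": pd.Series(target_shares),
        "observed_share": observed,
    }).fillna(0.0)
    report["gap"] = report["observed_share"] - report["target_share"]
    return report.sort_values("gap")  # most under-represented groups listed first

# Example benchmark for a kidney-failure monitoring device: the intended end-user
# population (not the general U.S. population) sets the target. Only the 35% share
# for Black Americans comes from the text above; the other shares are placeholders.
target_shares = {"Black": 0.35, "White": 0.45, "Hispanic": 0.13, "Other": 0.07}
# print(representation_gap(training_cohort, target_shares))
```

A gap report like this can be revisited at each stage of the pipeline, since data added during implementation can shift representation just as easily as data chosen during design.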

Third, spend time early in the design process envisioning how, when, and where the ML-HCA will be used in healthcare. Specifically, consider which member of the healthcare or technology team could serve as the ML-HCA expert – a person with whom the patient could speak should questions arise. Access to a person who understands how the ML-HCA works is like a patient having access to their surgeon. A patient having open-heart surgery may not understand the details of the procedure, but having access to the cardiac surgeon and being able to ask the surgeon questions is a basic patient right. Similarly, product designers and healthcare professionals implementing intelligent health technologies should have a plan that provides patients access to an expert.

Learning Objectives

  1. Explore design thinking concepts alongside the ethics principle of AUTONOMY.
  2. Discover HOW the healthcare ethics principle of AUTONOMY can be included in the development pipeline of machine learning healthcare applications (ML-HCAs).

Learning Assignment Guide

  1. Which design thinking model would best fit my style or project and why?
  2. Why is the practice of reflexivity important?
  3. What are 4 things that I can do to actualize the principle of AUTONOMY in my design, development, or implementation of an ML-HCA?

Suggested Readings

  1. Amann J, Blasimme A, Vayena E, Frey D, Madai VI; Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making. 2020;20(1):310–310. doi:10.1186/s12911-020-01332-6
  2. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. Journal of the American Medical Informatics Association: JAMIA. 2020;27(3):491-497. doi:10.1093/jamia/ocz192
  3. Tiribelli ST. The AI ethics principle of autonomy in health recommender systems. Argumenta. 2023;1-18. doi:10.14275/2465-2334/20230.TIR
  4. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Executive summary. Geneva. 2021. License: CC BY-NC-SA 3.0 IGO. 
  5. Stanford Design School. An introduction to design thinking process guide. No date. Accessed 2024. https://web.stanford.edu/~mshanks/MichaelShanks/files/509554.pdf
  6. Umbrello S, Capasso M, Balistreri M, et al. Value sensitive design to achieve the UN SDGs with AI: a case of elderly care robots. Minds & Machines. 2021;31:395-419. doi:10.1007/s11023-021-09561-y
  7. Varkey B. Principles of clinical ethics and their application to practice. Med Princ Pract. 2021;30(1):17-28. doi:10.1159/000509119

Summary

Module 2 introduces principled design thinking to support intentional, ethical design. We share options for design thinking models and suggest four activities to embed ethics principles during design.

Module 3: Principles to Practice in Bioethical ML-HCA Design

Nurses are often the workforce implementing healthcare technologies. Although nurses have followed a professional code of ethics11 since 1950 when caring for patients, new challenges to supporting patients’ rights exist given growing global human migration and immigration patterns that result in healthcare professionals treating diverse persons. Despite the creation of the Patients’ Bill of Rights12 in 1997 to safeguard the rights and responsibilities of all patients receiving care in the U.S., a one-size-fits-all implementation of rights does not exist. Diverse persons having diverse life experiences and ways of being do not all interpret their right to autonomy, self-determination, and dignified care similarly. In this module, we encourage developers and healthcare professionals to intentionally envision a healthcare professional obtaining informed consent for using an ML-HCA. Such intentional forethought, in each phase of design, can facilitate thinking about how diverse persons might receive the technology as part of their care and improve informed use. For example, in our research we found that consenting older Asian immigrants for a study involving a smart health system (a smart home performing health monitoring using ambient sensing and ML models) raised questions about safety and security that, culturally, required approval from the family. The resulting conversations led to adding new features to a smart home prototype that is under development. We posit that differences between Western and Eastern thought, specifically the concepts of individualism and collectivism, will inform how diverse persons understand and interact with ML-HCAs.

Learning Objectives

  1. Explore the fundamental human values of diverse groups in the context of ML-HCAs.
  2. Discuss opportunities for improving well-being through ML-HCAs while minimizing harm.

Learning Assignment Guide

  1. Name 3 fundamental human rights that should be prioritized when developing ML-HCAs.
  2. What activities can developers, and those implementing ML-HCAs, engage in to operationalize principle-based design?
  3. How does envisioning informed consent support robust design?

Suggested Readings

  1. Amann J, Blasimme A, Vayena E, Frey D, Madai VI; Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making. 2020;20(1):310–310. doi:10.1186/s12911-020-01332-6
  2. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. Journal of the American Medical Informatics Association: JAMIA. 2020;27(3):491-497. doi:10.1093/jamia/ocz192
  3. Tiribelli ST. The AI ethics principle of autonomy in health recommender systems. Argumenta. 2023;1-18. doi:10.14275/2465-2334/20230.TIR
  4. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Executive summary. Geneva. 2021. License: CC BY-NC-SA 3.0 IGO. 
  5. Wallerstein N, Duran B, Oetzel J, Minkler M. Community-Based Participatory Research for Health: Advancing Social and Health Equity. 3rd edition. San Francisco, CA: Jossey-Bass; 2018.
  6. Nguyen-Truong CKY, Waters SF, Richardson M, Barrow N, Seia J, Eti DU, Rodela KF. An antiracism community-based participatory research with organizations serving immigrant and marginalized communities, including Asian Americans and Native Hawaiians/Pacific Islanders in the United States Pacific Northwest: qualitative description study with key informants. Asian Pac Isl Nurs J. 2023;7:e43150. doi:10.2196/43150
  7. Ka’ai K, Lee R, Chau V, Xu L. Advancing health equity for Asian Americans, Native Hawaiians, and Pacific Islanders. Health Equity. 2022;6(1):399-401. doi:10.1089/heq.2022.0075
  8. 1Nguyen-Truong CKY, 1Fritz RL, Lee J, Lau C, Le C, Kim J, et al. Interactive co-learning for research engagement and education (I-COREE) curriculum to build capacity between community partners and academic researchers. Asian Pac Isl Nurs J. 2018;3(4):126-138. https://apinj.jmir.org/issues-apinj [1Two First Authors]
  9. 1Fritz RL, 1Nguyen-Truong CKY, Leung J, Lee J, Lau C, Le C, et al. Older Asian immigrants’ perceptions of a health-assistive smart home. Gerontechnology. 2020;19:1-11. doi: 10.4017/gt.2020.19.04.385 [1Two First Authors]
  10. Rice K, Seidman J, Mahoney O. A health equity–oriented research agenda requires comprehensive community engagement. J Particip Med. 2022;14(1):e37657. doi:10.2196/37657
  11. Viklund EWE, Nilsson I, Hägglund S, Nyholm L, Forsman AK. The perks and struggles of participatory approaches: exploring older persons’ experiences of participating in designing and developing an application. Gerontechnology. 2023;22(1):1-12. doi:10.4017/gt.2023.22.1.816.03

Summary

In Module 3, we discuss the importance of gaining the perspectives of diverse groups to minimize harm and support the fundamental human values of underrepresented persons. Module 3 contains an interview with a nurse scientist and underrepresented-communities advocate, Dr. Connie Kim Yen Nguyen-Truong, regarding what designers should know about culturally responsive designing for diverse individuals and groups.

PowerPoint Videos

Acknowledgments

The authors would like to thank study participants from diverse underrepresented communities. Funding was provided by the National Institutes of Health National Institute of Nursing Research R01 Bioethics Supplement #3R01NR016732 and in part by the Gordon and Betty Moore Foundation.

Disclosure Statement

The authors have no relevant financial disclosures or conflicts of interest.

References

  1. Brey P, Dainow B. Ethics by design and ethics of use approaches for artificial intelligence. AI & Ethics. 2021. doi:10.1007/s43681-023-00330-4
  2. U.S. State Department. Artificial intelligence (AI). 2024. Accessed 2024. www.state.gov/artificial-intelligence/
  3. Fukuyama M. Society 5.0: aiming for a new human-centered society. Japan Spotlight. 2018;47-50.
  4. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Executive summary. Geneva. 2021. License: CC BY-NC-SA 3.0 IGO.
  5. Hatherley J, Sparrow R. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges. Journal of the American Medical Informatics Association. 2023;30(2):361-366. doi:10.1093/jamia/ocac218
  6. O’Connor S, Booth RG. Algorithmic bias in health care: opportunities for nurses to improve equality in the age of artificial intelligence. Nursing Outlook. 2022;70(6):780-782. doi:10.1016/j.outlook.2022.09.003
  7. O’Neil C. Weapons of math destruction: how big data increases inequality and threatens democracy. Crown; 2017.
  8. Kneipp SM, Schwartz TA, Drevdahl DJ, Canales MK, Santacroce S, Santos Jr HP, Anderson R. Trends in health disparities, health inequity, and social determinants of health research: a 17-year analysis of NINR, NCI, NHLBI, and NIMHD funding. Nursing Research. 2018;67(3):231-241. doi:10.1097/NNR.0000000000000278
  9. Nayak A, Hicks AJ, Morris AA. Understanding the complexity of heart failure risk and treatment in black patients. Circulation: Heart Failure. 2020;13(8):e007264. doi:10.1161/CIRCHEARTFAILURE.120.007264
  10. Laster M, Shen JI, Norris KC. Kidney disease among African Americans: a population perspective. American Journal of Kidney Diseases. 2018;72(5):S3-S7. doi:10.1053/j.ajkd.2018.06.021
  11. American Nurses Association. Code of Ethics for Nurses with Interpretive Statements. 2024. Accessed 2024. www.nursingworld.org/practice-policy/nursing-excellence/ethics/code-of-ethics-for-nurses/coe-view-only/
  12. U.S. Office of Personnel Management. Patients’ Bill of Rights. 1997. Accessed 2024. www.opm.gov/healthcare-insurance/healthcare/reference-materials/bill-of-rights/

About the Author

Dr. Roschelle L. Fritz, PhD, RN, FAAN

Dr. Roschelle L. Fritz is an Associate Professor at the Betty Irene Moore School of Nursing at UC Davis in Sacramento, CA and Affiliate Faculty at Washington State University, Nursing & Systems Science Department, College of Nursing in Vancouver, WA. She is a Betty Irene Moore Fellow for Nurse Leaders and Innovators, and a repeatedly invited speaker at the National Institutes of Health. She consults with health technology startups and is a former publicly elected hospital commissioner in Washington State. Dr. Fritz has more than 30 years of nursing experience in public health, emergency services, hospital administration, nursing education, and research. Her research focuses on developing smart homes for older adults and the use of AI in the delivery of healthcare.

Dr. Connie Kim Yen Nguyen-Truong, PhD, RN, ANEF, FAAN

Dr. Connie K Y Nguyen-Truong (she/her/they) is an Associate Professor at Washington State University, Department of Nursing and Systems Science, College of Nursing in Vancouver, WA. She is recognized as a Martin Luther King Jr. Community, Equity, and Social Justice Faculty Honoree, a March of Dimes Distinguished Nurse Hero, and by the American Association of Colleges of Nursing for Excellence and Innovation in Teaching. She is a Fellow of the National League for Nursing Academy of Nursing Education, the American Academy of Nursing, and the Coalition of Communities of Color Leaders Bridge – Asian Pacific Islander Community Leadership Institute. Her program of research is cross-sectoral and multidisciplinary/transdisciplinary, conducted in partnership with community and health organizations and leaders, community health workers, student scholars, and scientists. Areas include mentorship; health promotion and health equity; culturally specific disaggregated data; immigrants, refugees, and marginalized communities; community-based participatory/action research and community-engaged research; parent/caregiver leadership, disability leadership justice, and early learning; diversity and inclusion in health-assistive and technology research, including adoption; cancer control and prevention; and anti-racism. Dr. Nguyen-Truong received her PhD in Nursing, including health disparities and education, and completed a Post-Doctoral Fellowship in the Individual and Family Symptom Management Center at Oregon Health & Science University School of Nursing.

Dr. Thomas May, PhD

Dr. Thomas May is currently the Floyd and Judy Rogers Endowed Professor at the Elson S. Floyd College of Medicine, Washington State University, and a Faculty Investigator at the HudsonAlpha Institute for Biotechnology. In addition to serving on the American Philosophical Association’s Committee on Philosophy and Medicine and as two-term Chair of the American Public Health Association’s Ethics Forum, Dr. May has served as an advisor to the National Vaccine Program Office, the State of Washington Health Care Authority, the Florida Department of Health, the Illinois Guardianship and Advocacy Commission, and the Wisconsin Task Force on Emergency Preparedness.

Dr. Katherine Wuestney, PhD, RN

Dr. Katherine Wuestney is a former graduate student of Drs. Fritz and Cook. She is currently employed at Gilead Sciences as a clinical data manager in clinical data science.

Dr. Diane J. Cook, PhD

Dr. Diane Cook is a Regents Professor and a Huie-Rogers Chair Professor at Washington State University, School of Electrical Engineering and Computer Science in Pullman. She founded the Center for Advanced Studies in Adaptive Systems (CASAS) Lab and envisioned an ambient sensor-based smart home to assist older adults with aging in place. She co-directs the NIH Training Program in Gerontechnology. Her areas of research include artificial intelligence, machine learning, data mining, robotics, smart homes, and digital health. She is a Fellow of the IEEE, the FTRA, the National Academy of Inventors, and the American Institute for Medical and Biological Engineering.