Post #9: How Bioethics Can Inform Ethical AI Governance

As the global AI community grapples with the task of crafting practical operating models for AI governance, there is a need for new applied frameworks that adapt learnings from other domains where ethics has been successfully incorporated. Bioethics offers the most compelling example of such a domain.     

In this week’s essay, we explore the core discipline of Bioethics and consider the lessons we can glean from its history, principles, and practical applications within the field. Our aim is to extrapolate the established frameworks of Bioethics to complement AI governance practices. This essay is an important, albeit early, step along this journey.   

What is Bioethics? 

Bioethics is a field within applied ethics that addresses moral issues arising from advances in biology, medicine, and healthcare. It encompasses a wide range of important topics, including the rights and responsibilities of healthcare providers and patients, the ethical implications of medical research and experimentation, the allocation of healthcare resources, and the treatment of vulnerable populations. 

The Rise of Bioethics  

Many modern bioethical practices were crafted relatively recently; the domain originated in the late 1960s. This foundation is often traced to an influential 1966 article by Henry Beecher on ethical problems in clinical research, which, among other things, heavily criticized the failure of medical providers to inform patients of the risks involved in experimental treatments.

In the years since its birth, Bioethics has grown into a fundamental pillar of the medical discipline. This growth was heavily driven by the establishment in 1969 of the Institute of Society, Ethics and the Life Sciences (now called The Hastings Center)—which took the lead in establishing the vision, direction, methods, and intellectual standards of Bioethics—and culminated with the development and adoption of patients’ rights in the medical field.  

The establishment of certain global directives/frameworks also heavily influenced the field. These include: 

  1. The World Medical Association (WMA) Declaration of Helsinki. First adopted in 1964, the WMA Declaration provides ethical guidelines for medical research involving humans, and outlines important principles such as informed consent, the protection of vulnerable populations, and ethical research practices.

  2. The Universal Declaration on Bioethics and Human Rights. Adopted by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 2005, it establishes bioethical guidelines and practices, addressing issues such as human dignity and human rights. 

  3. The Belmont Report. Issued by the U.S. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research in 1979, the Report outlines ethical principles and practices for research involving human subjects.  

These directives have played an essential role in shaping global Bioethics practices and have further concretized the field, while local laws (including case law) and their implementation in medical research continue to reinforce its structural frame. 

The Four Fundamental Principles of Bioethics 

The discipline of Bioethics is predominantly guided by four principles, which serve as a compass for navigating the most complex and intricate challenges of biomedical practice: Autonomy, Beneficence, Nonmaleficence, and Justice. 

  1. Respect for Autonomy: The Principle of Self-Governance.  This principle emphasizes the importance of respecting individuals' rights to make their own decisions regarding their healthcare and medical treatment. It involves providing patients with relevant information about their condition, treatment options, risks, and benefits, allowing them to make informed choices about their healthcare.  

  2. Beneficence: The Principle of Doing Good.  The principle of beneficence requires physicians to act for the benefit of the patient and is underpinned by various moral obligations placed on physicians to protect and defend the rights of patients, prevent harm, and remove conditions that will cause harm. Beneficence encompasses acts of commission, where healthcare providers actively seek to benefit patients, and may include acts of omission, where they avoid causing harm or allowing harm to occur.  

  3. Nonmaleficence: The Principle of Avoiding Harm.  Nonmaleficence is the obligation of a physician to not harm the patient. This principle derives its roots from the Hippocratic Oath and supports moral dictates such as “do not kill,” “do not cause pain or suffering,” “do not incapacitate,” and “do not deprive others of the goods of life.” Physicians must avoid causing harm to their patients, whether intentionally, negligently, or unintentionally. 

  4. Justice: The Principle of Fairness.  The principle of justice pertains to the fair distribution of resources, rights, and opportunities in healthcare. It involves treating individuals fairly and equitably, regardless of factors such as race, gender, socioeconomic status, or medical condition. The principle embodies a moral charge to address healthcare disparities, advocate for universal access to healthcare, and ensure that scarce health resources are allocated fairly and efficiently across populations.  

Bioethics and AI Ethics Alignment 

AI Ethics draws heavily on the same foundational principles of moral philosophy as Bioethics. It is therefore not surprising that there is strong symmetry and alignment between the fundamental principles of Bioethics and evolving ethics-based frameworks for AI governance. Said another way, the principles of Bioethics can be extrapolated to inform ethical AI governance.  

  1. Autonomy.  Just as individuals have the right to make autonomous decisions about their healthcare, human users should have sufficient control over their interactions with AI systems. This includes providing information through AI/algorithmic transparency, obtaining informed consent for personal data collection and usage, and enabling people to opt out of such data collection or adjust preferences within AI systems, including the important right to be forgotten.  

  2. Beneficence.  AI systems should be designed and deployed to maximize human benefits (flourishing) and minimize harm. This involves ensuring that AI technologies are used to enhance human well-being, promote fundamental human rights such as personal safety and security, and address societal challenges such as bias and discrimination, social disparities, environmental sustainability, and economic inequality. 

  3. Nonmaleficence.  Both AI developers and users alike have a responsibility to prevent harm and mitigate risks associated with AI systems. This includes addressing issues such as bias and discrimination in AI algorithms, ensuring the safety of AI-driven technologies, and implementing safeguards to protect against unintended consequences or malicious use of AI by bad actors.  

  4. Justice.  Ethical considerations of fairness and equity are paramount in AI ethics. This involves ensuring that AI technologies are developed and deployed in a way that promotes equal access, opportunity, and treatment for all individuals, regardless of factors such as race, gender, socioeconomic status, or geographic location/nationality. Additionally, efforts should be made to address disparities in the accessibility of AI systems, including those resulting from the digital divide. 

We already see signs of such synchronization occurring organically in practice. For example, McKinsey’s Responsible AI (RAI) Principles include Accuracy & Reliability, which address concerns similar to those targeted by the principle of Nonmaleficence, as well as Fairness and Human-Centricity, which map squarely to the Bioethics principle of Justice. Other responsible AI or ethical AI principle sets compare similarly. 

The table below maps the four Bioethics principles to AI Ethical Principles from several frameworks:

| Bioethics | OECD | World Economic Forum | EY | June 2023 HBR Article [Spisak, Rosenberg, & Beilby] |
|---|---|---|---|---|
| Autonomy | Transparency, Explainability | Human Agency, Privacy | Data Protection, Transparency | Informed Consent, Opt-In & Easy Exits |
| Nonmaleficence | Robustness, Security & Safety, Accountability | Accountability, Lawfulness, Safety | Accountability, Reliability, Security, Explainability, Compliance | Aligned Interests, Conversational Transparency, AI Training & Development |
| Beneficence | Inclusive Growth, Sustainable Development & Well-being | Beneficial AI | Sustainability | Health & Well-Being |
| Justice | Human-centered Values & Fairness | Fairness | Fairness | De-biased and Explainable AI |

Building on the Bioethics Framework 

By extending principles from Bioethics to AI ethics, we can navigate the ethical complexities of AI technologies in a way that promotes human flourishing, societal well-being, and environmental sustainability. This interdisciplinary approach allows us to draw upon established frameworks and lessons learned over many years in Bioethics to address the unique challenges and opportunities presented by AI. 

Central to this learning exercise is determining the major drivers behind the success of Bioethics as a critical discipline within the field of biomedicine, and together with novel research, adapting these Bioethics influencers in the AI Ethics discourse. Some of these bioethical drivers have already been incorporated into the AI ethics framework; other influencers should be considered going forward. 

Driver 1: Institutions and Institutional Support.  The establishment and operation of Bioethics centers, research institutes, ethics boards and academic programs at universities and healthcare institutions provide support for the field of Bioethics. As noted above, The Hastings Center and similar institutions provide direction, methods, and intellectual standards that continue to inform the development and impact of Bioethics.  

Driver 2: Public Awareness and Advocacy.  Norms are an often ignored but highly effective modality of both constraint and regulation. Developing norms (and associated standards) and shaping public opinion through awareness and advocacy helped position Bioethics as a regulatory alternative, by socializing expectations of certain standards of behavior from physicians. We believe the same can be done for AI Ethics. 

Driver 3: Educational Initiatives.  The incorporation of Bioethics into medical and healthcare pedagogy played a significant role in its acceptance and adoption. Many medical schools, nursing programs, and other healthcare training programs include lessons on Bioethics, ensuring that healthcare professionals are equipped with the knowledge, skills, and ethical consciousness necessary to navigate complex ethical dilemmas. This educational approach can also be helpful with AI, especially as the more complex applications of AI and associated ethical dilemmas will be produced by future generations.  

Driver 4: Interdisciplinary Collaboration & Multistakeholderism.  Bioethics as a discipline benefited greatly from drawing on insights from a wide range of fields, including philosophy, medicine, law, sociology, anthropology, and public health. Given the similarly interdisciplinary nature of AI ethics, collaboration and dialogue among diverse stakeholders, fostering a more comprehensive and nuanced understanding of ethical issues and strategies, is not only advisable but necessary. 

Driver 5: Standard Setting and Guidelines.  The development of ethical guidelines, codes of conduct, and regulatory frameworks for healthcare and biomedical research has helped promote the acceptance and adoption of Bioethics. There is a similar need for AI standards and guidelines. 

*** 

Do you believe that the AI ethics community can learn from the field of Bioethics? We’d love to hear from you! As always, we welcome your input and collaboration. Until next week. 

The Business AI Ethics research team is part of the Edmond & Lily Safra Center’s ongoing effort to promote the application of ethics in practice. Their research assists business leaders in examining the promise and challenges of AI technologies through an ethical lens. Views expressed in these posts are those of the author(s) and do not imply endorsement by ELSCE.