Disclosure as Absolution in Medicine

Disentangling Autonomy from Beneficence and Justice in Artificial Intelligence

Authors

Kayte Spector-Bagdady, JD, MBe, & Alex John London, PhD

This editorial appears in the March 2025 issue of the American Journal of Bioethics.

Introduction

The rush to deploy artificial intelligence (AI) and machine learning (ML) systems in medicine highlights the need for bioethics to deepen its normative engagement in disentangling autonomy from beneficence and justice in responsible medical practice. One of the reasons that informed consent is such a unique tool is its morally transformative nature: actions that would otherwise be illegal or unethical are rendered permissible by the provision of free and informed consent. But consent is not a panacea that absolves all risks and burdens. The proliferation of AI/ML systems highlights that every additional call for disclosure warrants careful scrutiny of its goals and of the values those goals reflect.

For example, while informed consent might be appropriate when there is a choice whether to use an AI tool in clinical care, we cannot let deference to autonomy substitute for rigorous standards—based in beneficence and justice—that ensure the safe, effective, and equitable deployment of AI in medicine. Shortcomings in AI technologies that do not meet those standards cannot otherwise be absolved through the informed consent process. The assumption that patients are empowered to assess or alleviate such deficiencies is misguided. While much has been written about the inability of informed consent to bear its increasing transformative burden, further exploration of the appropriate division of moral labor between ethical values in the use of AI in clinical practice is warranted.

Autonomy and Informed Consent

The original intent of informed consent was to ensure that a patient understood the options available for medical care, as well as enough information about material risks and potential benefits to make a choice aligned with their considered values. The understanding of “material” is complex in scope and theory, but Faden and Beauchamp offered the definition of information that would allow a patient “on the basis of his or her personal values, desires, and beliefs, to act with substantial autonomy”. Or, as case law has summarized, “information which the physician knows or should know would be regarded as significant by a reasonable person in the patient’s position when deciding to accept or reject a recommended medical procedure”. Patients also sometimes express valid interests in choosing the environment of a clinic generally, if they have options (e.g., an academic medical center, gender-concordant care, or a religiously affiliated institution).

An important differentiation exists, however, between choices that are individually preference-sensitive and those that apply to all players within a context. As Schuck argues: “The more private the choice—that is, the more it concerns the integrity of the individual’s own projects and self-conception and the less it directly affects others—the more robust this right [to autonomy] should be”. But many decisions in medicine lie below this metaphorical waterline. Their moral and legal status is justified not by respect for patient autonomy but by the way stakeholders balance a range of legitimate values related to beneficence and justice.

Beneficence and Justice and Informed Consent

As the provision of medical care has grown more complex, more is being disclosed during informed consent about the practice and institutionalization of medicine generally. Billing permissions, medical benefits, security expectations, and state-mandated disclosures, among others, are standard boilerplate before a patient even considers a prescribed intervention. There are many problems with this approach. Because the forms have become so lengthy and complex, patients rarely read or understand them. In addition, these pro-forma disclosures are generally offered as “front-door consent”: a contract of adhesion where patients’ only option is to accept—or not receive their care at the clinic. This type of consent suffers from several “pathologies,” including that it is both coerced and often unwitting.

Of course, informed consent will always be bounded and understood within the features of a specific context. The provision of medical care is a division of labor that involves many stakeholders with myriad legal and ethical obligations. Clinical care occurs within an institution that must be responsive both to the needs of individual patients and to its patient community writ large. Yet things such as standards for purchasing equipment, sterile practices for the operating room, and responses to patient risk reports are not appropriately normatively grounded in individual autonomy. Such features should not be blind to patient preferences, but neither can they be arranged solely around those preferences, given their importance to the baseline functioning of the system.

Thus, as calls to add disclosures to consent increase, so does the urgency of sharpening the line between those risks and benefits of the medical machinery whose disclosure does, and does not, otherwise enable autonomy. A health system’s ability to provide safe, efficient, and equitable care is a matter of beneficence and justice. If health services are deficient in these attributes, informed consent cannot—and should not—make their provision morally permissible.

AI and Informed Consent

Many patients report concerns about AI in healthcare. Some of these concerns relate to uses that are preference-sensitive and involve free choice, such as whether a patient communicates with a mental health chatbot as a depression screener before making a therapy appointment. Others are different, such as automated AI warnings built into the electronic medical record that identify patients at high risk of becoming septic. The latter application of AI is a feature of the clinical care environment from which neither the patient nor the clinician can typically opt out. Disclosing such tools’ use to patients does not lessen the standards for their accurate, reliable, and effective deployment.

Cohen has observed that, legally, one of the situations in which an obligation to disclose the use of AI is likely at its highest is “where the physician lacks a good epistemic warrant to believe that the AI/ML recommendation is correct”. This makes sense under a patient-based standard of informed consent, where a physician is legally compelled to disclose information that would be considered material by a reasonable patient—ostensibly to enhance autonomy. Surely, a reasonable patient would deem the use of a diagnostic tool without evidence of accuracy or reliability relevant to their decision about whether to depend on it. Our point is that, ethically, requesting consent to a use of AI that the physician has credible reason to believe is faulty conflates the role of autonomy with that of beneficence and justice and does not otherwise morally transform its use.

In the case of either the mental health chatbot or the sepsis alert, whether a health system is justified in incorporating or offering AI tools in the clinical environment hinges fundamentally on whether the supporting evidence is sufficient. Setting best practices for disclosure of either use aside, we argue that only once safety, efficacy, and equity have been established is it appropriate to differentiate between choices to which patients should consent and those that are embedded within the clinical environment and affect the functioning of the health system for many patients. The first is an opportunity for a patient—with necessary information about material risks, benefits, and alternatives—to make a free choice that reflects their values. The second must adhere to rigorous standards to enhance the health of the patient community overall. It is ethically critical not to conflate the two: inserting consent into the appropriate role of beneficence and justice threatens the legitimate division of moral labor in medicine. Informed consent cannot legitimize lax evidentiary standards and the hasty deployment of AI systems of questionable clinical value.

Conclusion

The transformative nature of informed consent lies in its capacity to align clinical choices with patient values and a foundational respect for individual personhood. AI tools are being used across a broad swath of applications in medicine—some of which align with the capabilities of informed consent and others of which depend on different values. As we assess, integrate, challenge, and enable new technologies to enhance patient autonomy, we must do so with an awareness of autonomy’s appropriate scope. There remains a foundational and continuing role for beneficence and justice in the provision of clinical care, and we must ensure that these principles remain symbiotic rather than antagonistic.
