Medical AI: A Space For Bioethicists and Policymakers to Collaborate

Authors

Claire Bortolotto, MA, and Ian Stevens, MA, MSc

Topic(s): Artificial Intelligence, Health Regulation & Law


With the implementation of Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence, President Trump declared America’s commitment to leading the global charge in the advancement of artificial intelligence (AI). In doing so, the new Trump administration has scrapped Biden-era policies meant to prevent harmful biases in favor of a deregulatory approach aimed at accelerating innovation and enhancing private sector autonomy at the cost of critically important ethical oversight.

The Executive Order was only the first in a series of dominos: the National Institute of Standards and Technology instructed scientists not to focus on “AI safety,” “responsible AI,” or “AI fairness,” and the Office of Management & Budget issued two memoranda on the use and procurement of AI by federal agencies that further prioritize national economic gains over competing considerations. More recently, efforts by Senate Republicans to prevent state-level regulation of AI reflect a desire to centralize control at the federal level, effectively giving the Trump Administration and its supporters the liberty to pursue their own AI agenda. This increasingly myopic focus on innovation creates space for bioethicists to step in and aid policymakers and their administrative teams who want to understand the technology’s practical and ethical nuances as it becomes increasingly pervasive.

The rapid expansion of AI in clinical settings – sometimes referred to as medical AI – is especially concerning. While some believe medical AI holds promise for “promoting human flourishing,” it also raises serious concerns that demand thoughtful scrutiny. A path forward, we suggest, may be found in historical examples of collaboration between policymakers and bioethicists, a partnership, demonstrated for over half a century, in which bioethical expertise has informed public policy.

Federally appointed commissions on bioethical topics, such as the merits of universal healthcare, expanded steadily from the 1970s through the 2010s, as evidenced by a succession of advisory bodies. Importantly, these bodies garnered bipartisan support throughout this period, from former President Richard Nixon to, most recently, former President Barack Obama. Although neither President Trump, in either term, nor former President Biden formally engaged with bioethics advisory bodies, the discipline’s mention in the Mandate for Leadership: The Conservative Promise indicates the continued value of bioethical expertise across party lines despite today’s polarized context.

The increasing use of AI in healthcare raises questions about its appropriate uses. Consequently, bioethicists have devoted growing attention to medical AI and can provide a range of recommendations to assist concerned policymakers facing a complex landscape marked by rapid technological advances, strained healthcare resources, and executive calls for the incorporation of AI seemingly anywhere and everywhere – an environment ripe for medical mishaps.

First, bioethicists have expertise on developments in medical AI that is critical for the policymaking toolbox. Policymakers need updates on the industry’s ever-changing scope, including current and emerging technologies, their uses, and the associated ethical, legal, and social implications, if they are to offset potential risks with proportionate legislative responses. For instance, California (SB1120), Colorado (SB21-169), and Illinois (HB2472 and H5395) have passed legislation on the use of AI in assessing health insurance claims. Human biases can seep into AI algorithms trained on human-generated datasets, and an algorithm that learns from biased data may then approve or deny health insurance claims on discriminatory grounds. Several class action suits against insurance companies illustrate the danger, such as the 2022 case against State Farm alleging that its automated insurance system frequently denied the claims of Black policyholders. Thus, when AI regulation strives for “innovation” and economic growth above all else, medical AI deployed with blinders on to other factors is bound to exacerbate existing social inequities.

Second, a focus on patient welfare, arguably the pinnacle of “ethical” medical AI, can be achieved by leaning into bioethicists’ ability to advocate for patients. As technological mishaps make headlines and Big Tech’s ever-increasing consolidation allows accountability to be dodged, one thing remains clear: patients will bear the greatest burden of medical AI’s mistakes.

Consider the investigation into Epic Systems’ sepsis prediction AI, a case demonstrating that one of the biggest threats to patient welfare is the generation of incorrect information about the health status of a critically ill individual. Opaque review of medical AI, often a consequence of its proprietary status, risks compounding computing errors with overreliance on the technology’s capabilities by tired and overworked healthcare providers. Acknowledging that patients frequently lack the means to self-advocate against these risks, particularly owing to a lack of communication about the use of AI in their care, bioethicists are working to highlight this critical disconnect.

Lastly, given the layoffs, rehirings, and now the use of AI tools to improve “efficiency” that have unfolded at the Food & Drug Administration, there is a growing onus on policymakers in positions of power to find other ways to pick up the resulting slack in regulating medical AI and to maintain societal trust in both the technology and the public sector. As the public has learned, relying on the private sector to self-regulate is ill-advised given its recent backpedaling and, as expressed by TJ Leonard, the CEO of Storyblocks, “the technology industry’s track record shows that promises of ethical conduct often succumb to competitive pressures and the relentless drive for innovation, sometimes at the cost of public interest.” And so, beyond enhancing policymakers’ toolkits for identifying the ethical implications of these technologies, we suggest engaging and leveraging the expertise of bioethicists to reduce the workload that comes with critically analyzing medical AI.

It’s unclear what promises medical AI will ultimately deliver to the healthcare space, and for whom these promises will actually promote “human flourishing.” Indeed, despite the well-demonstrated concerns raised by the widespread application of AI technologies, the federal government continues to expand AI’s use in various roles across the medical arena without meaningful engagement with the bioethicists and non-profit organizations that are well-equipped to help.

Given our country’s current entanglement with Big Tech companies, accepting the Silicon Valley model of “move fast and break things” for medical AI appears problematic. Although another federally sanctioned bioethics commission weighing in on this matter would be the ideal scenario, engagement with bioethicists in any capacity will benefit policymakers, patients, and the public alike. Regardless of the path taken, an interdisciplinary effort that includes policymakers and bioethicists is required to facilitate the effective, trustworthy, and safeguarded application of medical AI in this high-tech, high-stakes era.


Acknowledgements

We’d like to thank the ‘Hastings on the Hill’ team for supporting our construction of this article, including Vardit Ravitsky, Jean-Christophe Bélisle-Pipon, and Erin Williams.


Claire Bortolotto, MA, is a Research Associate at Simon Fraser University and an Ethicist at Vancouver Coastal Health 

Ian Stevens, MA, MSc, is a Research Assistant at The Hastings Center and an Affiliate at the Harvard Medical School Center for Bioethics
