Recent headlines have reignited the debate on assisted dying in light of technological advancements. Last month, the controversial ‘Sarco suicide pod’ made news in Switzerland after being used for the first time, by a 64-year-old American woman. Designed to offer a self-administered, controlled process, Sarco operates without direct medical assistance, relying instead on the individual’s own actions to initiate the procedure. The capsule fills with nitrogen, causing rapid, painless unconsciousness and, ultimately, death. However, the event quickly drew the attention of Swiss authorities, who arrested several people involved, including onlookers, and a criminal case is now in preparation. Although assisted suicide is legal in Switzerland, the authorities’ reaction underscores the ethical and legal complexities introduced by technology that enables end-of-life decisions.
As this technology emerges in such a controversial area, we have to ask: Will AI-driven systems redefine how society navigates end-of-life choices? And are legal and ethical frameworks ready for the challenge?
While the Sarco capsule currently operates without AI, the demand for technology in end-of-life choices signals that AI may be the next step. Globally, AI has already revolutionized healthcare, supporting doctors in diagnostics, treatment planning, and prognostic predictions. In countries where assisted dying is permitted in some form, like Canada, the Netherlands, Belgium, and select U.S. states, AI could soon become a significant tool—not just for managing information but also for aiding critical decisions. However, integrating AI into this area raises profound ethical and legal questions. How can AI technology provide support in complex end-of-life decisions while ensuring these choices remain humane and ethically sound?
Three Scenarios for AI in End-of-Life Decisions
AI could be integrated into end-of-life care in at least three ways: supporting evaluations, assisting in procedures, and aiding post-procedural review. Each use case poses distinct ethical and legal challenges:
1. Evaluation of Euthanasia Requests: In jurisdictions that allow euthanasia, patients must often meet specific criteria for suffering, hopelessness, and incurability. AI could theoretically assist doctors by analyzing patient data, uncovering patterns, and offering insights on alternatives—potentially lightening the load on healthcare systems. However, reliance on AI to make life-and-death evaluations invites the question: Can AI make such complex ethical judgments, and who holds responsibility for the outcome?
2. Assistance in Carrying Out the Procedure: Although this notion may seem futuristic, Sarco’s developer, Philip Nitschke, has expressed interest in using AI to verify a user’s identity and mental state before activating the capsule. AI could also monitor factors such as oxygen levels within the capsule in real time to ensure the process unfolds as intended. Using AI in such a direct role, however, raises concerns about accountability: if the technology fails, who bears the responsibility? In traditional euthanasia settings, a doctor is accountable, but in AI-driven scenarios accountability could become diffuse, spread among manufacturers, healthcare providers, and even the patient.
3. Post-Procedure Assessment: Retrospective reviews of euthanasia cases are vital to ensuring legal and ethical compliance. In theory, AI could identify patterns across large numbers of cases, improving the consistency of these assessments. However, reliance on algorithmic analysis risks reducing complex human experiences to data points, obscuring the unique circumstances of each case and eroding its dignity. AI must remain a tool in human hands, supporting rather than replacing ethical and humane review processes.
A Global Call for Clear Guidelines
As AI’s role in end-of-life care expands, it will be essential to establish international guidelines to address the ethical and regulatory risks. Although the EU’s AI Act categorizes medical AI as high-risk, it does not yet address AI applications specifically designed for euthanasia or assisted suicide. As technology advances to support patients’ desire for greater control over end-of-life decisions, the absence of clear standards risks blurring the line between personal autonomy and accountability in AI-driven decisions about life and death.
AI’s use in euthanasia must be rigorously assessed not only for technical reliability but also for ethical validity. While AI could streamline logistical aspects, it risks undermining ethical oversight and diminishing the right to a thorough, individualized assessment for each patient. Therefore, bioethicists, healthcare professionals, and policymakers must collaborate to set clear, protective boundaries. A human-centered approach can ensure that AI respects cultural diversity and enhances patient autonomy rather than determining end-of-life choices.
The rapid advancement of AI makes it imperative to establish these guidelines now, rather than waiting until AI becomes commonplace in end-of-life care. Delaying action would mean trying to regulate a technology after it has already transformed lives. Whether the goal is to limit AI’s influence in this area or to set standards for its safe use, the time to act is now. By fostering a global dialogue, we can create frameworks that prioritize patient autonomy, protect healthcare providers, and hold any technology-driven end-of-life care to ethical, humane standards.
Hannah van Kolfschooten (@hvkolfschooten) is a lecturer-researcher at the University of Amsterdam.