by Sally J. Scholz, PhD
This editorial can be found in the latest issue of the American Journal of Bioethics.
In “Artificial Intelligence, Social Media and Depression,” Laacke and colleagues (2021) consider the ethical implications of artificial intelligence depression detector (AIDD) tools to assist practitioners in diagnosing depression or posttraumatic stress. Although their revised account of health-related digital autonomy (HRDA) offers important correctives in the era of digital data, I argue that additional considerations ought to operate in institutional contexts where autonomy is already compromised, such as the military. Complementing the health-related account with insights from military ethics and the ethics of war, specifically the jus in bello internal obligations to military personnel, demonstrates the importance of considering alternative ethical frameworks and institutional contexts of compromised autonomy prior to implementing any AIDD.
HRDA provides some checks to ensure that the use of AIDD preserves the autonomy of the participant. But what about situations wherein autonomy is already compromised? Does the context of voluntary involvement in an institution of compromised autonomy, like the military, affect an ethical analysis of artificial intelligence instruments in health care contexts? On one hand, diagnostic technology may provide a lifesaving intervention to help soldiers or veterans who are suffering from depression or posttraumatic stress disorder (PTSD). This could facilitate greater attention to mental health concerns in the field and help alleviate some of the burden on the veterans’ health system, easing triage and allowing resources to be better allocated. Further, given the documented stigma surrounding mental health care, artificial intelligence (AI) could make attention to mental health a standard element of overall well-being in institutionalized contexts like the military.
On the other hand, diagnostic technology contributes to what might be called “control creep” by institutions and is particularly troubling in institutions that already compromise autonomy. The military is a large institution with responsibility for the health and well-being of its members. Although members join voluntarily, their lives are circumscribed by the decisions of others. Further, diagnostic technology risks expanding the divide between soldiers and citizens, making it even easier for citizens to renounce collective responsibility for war, abandon the obligation to care for military personnel, and fail to engage responsibly with the burdens of war (Scholz 2020). As a society, we ought to resist quick fixes that reduce the moral burden of war to an algorithm, including the costs for the mental health of military personnel.
The dominant framework in the ethics of war, just war theory, articulates principles to guide decisions to go to war, behavior within war, and peaceful resolution of war. The principles emerge from a long tradition aimed at preserving the rights of the innocent and avoiding unnecessary harm. The just war tradition considers maintaining the moral rights and status of those fighting as instruments of a political community to be part of the moral burdens of war (Walzer 1977).
A human rights-based approach to just war theory includes a defense of the basic rights of soldiers while they serve their government in times of peace as well as in times of war. Brian Orend describes the “big five” rights of security, subsistence, liberty, equality, and recognition, which guide the jus in bello internal obligations to military personnel. Although military involvement transforms certain basic rights of military personnel, they still possess rights, albeit in context-delimited form. The right to liberty includes the right to refuse certain enhancements (Orend 2013, 146), and the right to equality ensures access to adequate health care.
Digital diagnostic tools, whether through social media or wearables, often place efficiency above other considerations, as Laacke and colleagues note. Within voluntary institutions like the military, efficient provision of health care quickly gives way to considerations of the costs imposed on the nation. But what other costs come with relying on machines to diagnose mental health concerns? Consider, for instance, the cost to the individual, the cost to society, and the cost to collective understanding of mental health within institutional contexts of compromised autonomy.
Control creep, like its relative, surveillance creep, may influence the obedience demanded of soldiers. In discussions of surveillance creep, AI ethicists often point to the erosion of the expectation that one will not be surveilled, which sacrifices privacy and subtly affects decisions (Frischmann and Selinger 2018, 26). Users of platforms become accustomed to their information being part of “big data,” allowing marketing intrusions. Control creep acts in a similar way. The expectation of monitoring diminishes both personal responsibility for mental well-being and the agency to seek help when one is at risk. It also relieves societies of the burden of providing extensive in-person care. Both have troubling ramifications for the ethics of war, which relies on societies being circumspect in their decisions to employ troops, as well as on individuals being equipped with the deliberative and reflective skills to know when an action or order is unjust.
Autonomy is not merely about decisions in the moment; it is a skill that must be practiced and that could be eroded by systems that exercise too much control. Activities that “inspire self-reflection, interpersonal awareness, and judgment … are valuable because they’re linked to the exercise of free will and autonomy” (Frischmann and Selinger 2018, 18). Within institutional contexts where autonomy is already compromised (voluntarily or nonvoluntarily), the use of AI tools risks expanding control, further eroding autonomy as well as the practices that sustain and support it.
Individuals need to be able to reflect on, interpret, and decide what to communicate with others about the trauma they suffer. That isn’t possible when AI moves from personal enhancement to big data (Frischmann and Selinger 2018, 27). Autonomy and agency are practiced in the decisions about how to understand one’s experience and what to communicate about one’s emotional state—to oneself and to others. Further, self-reflection is crucial not just to human dignity but also to determining the just action in war. AI trades away conscientious reasoning within sociocultural contexts, the very type of reflective thinking expected of just warriors, in exchange for efficient diagnoses.
Contemporary challenges to human dignity in the ethics of war include the use of autonomous weapons. Should the decision to kill be left to artificial intelligence? How is society cheapened by the delegation of such decisions to machines (Scharre 2018, 289)? Just as we need to think about the moral burden of killing in war, we also need to think of the moral burden of care for the members of an institution that society relies on for protection and defense. Depending on digital diagnostic tools seems to invite society to further distance itself from responsibility for military personnel.
An additional social dimension is found in the classifications and biases that are trained into AI (Alvarado and Morar 2021; Klugman 2021). Depression and posttraumatic stress are socially described and determined. Within institutions that control much of the day-to-day existence of participant members, the institution has an outsized role in the social determination of illness. Institutional values and social settings inform expected behavior. Determining what counts as PTSD or depression within the context of a health care system controlled by the institution could create perverse incentives to minimize outlays of resources and maximize the utility of personnel.
As international relations and the nature of war have changed, the ethics of war has shifted as well. A more isolationist, security-oriented approach in the post-9/11 era means that the military has a vested interest in ensuring that its personnel do not pose a security risk. If ethical considerations are not incorporated into the design and use of AIDD, then the same tools used for assisting diagnoses and providing care for mental health issues could be employed to surveil soldiers and breed a systemic relation of pernicious oversight, further eroding the trust of military personnel needed for conscientious deployment and refusal of unjust orders.
Similarly, military personnel seeking security clearances are asked to reveal whether they have received treatment for mental health conditions. HRDA for AIDD in the context of the military would require regularly scheduled opt-in agreements (not just opt-out agreements), which themselves pose problems of consent fatigue (Ranisch 2021). The fear of stigma, as well as the potential ramifications for one’s career, means that many military personnel will not accept the risks of AIDD; in an institutional setting seeking to promote efficiency, that likely means that a great many more people who need mental health resources will not receive them.
Finally, within institutions of compromised autonomy, the use of AIDD, even with a revised concept of autonomy, risks entrenching a system of response to mental health problems that emphasizes the individual while systemic problems go unnoticed. Posttraumatic stress disorder was not properly recognized until it was understood as affecting large numbers of veterans. Providing health care in institutional settings ought not to be only about responding to individuals displaying worrying signals picked up by digital devices; it ought to be about creating the conditions in which all members are supported by a system that encourages reflection on emotional states prior to and after traumatic events, where resources rather than merely tools are available, where autonomy is fostered through practices that encourage self-reflection and communication, and where human dignity—rather than algorithmic training—guides judgment and decision making for both individuals and society.