The Promises and Challenges for Ethical Carebots

Author

Shaun Respess, Daniel Blalock, Edgar Lobaton, et al

Topic(s): Artificial Intelligence, Health Care, Research Ethics

Our research team has recently completed a pilot study in which groups of older adults (N=11) and family care partners (N=9) interacted with Sava, our humanoid Pepper robot trained to assist with conversation and emotional support. We studied the potential effects of socially assistive robots, or carebots, for supporting persons with mild cognitive impairment (MCI), a condition in which an individual's cognitive decline exceeds the normal decline anticipated with aging. The condition affects nearly 22% of persons aged 65 and up in the United States. Researchers believe that robots can play an important role in helping these individuals retain their long-term autonomy and independence within the safety of their homes. Yet, for carebots to support the instrumental activities of daily living (IADLs) required by such persons, like household chores, medication reminders, and safety checks, they will need sufficient ethical guidance functions.

At present, carebots offer a promising solution for monitoring sleep patterns, promoting healthy rest routines, and delivering non-pharmacological pain management prompts. During periods of elevated stress, anxiety, irritability, or agitation, carebots can guide patients through calming exercises and grounding techniques. Finally, they could help reduce the isolation and emotional distress often experienced by patients living with MCI or dementia by connecting them with friends, family, or available support groups. Our own initial pilot study (publication forthcoming) found that compatibility and quality of social interaction were of the greatest interest to older adults and care partners. Nearly all interviewees perceived carebots to be valuable companions for managing loneliness in older populations, with one interviewee even describing Sava as an "emotional support robot." Still, concerns about patronizing or infantilizing language were widely shared. Situated task plans with a voice interaction loop can generate real-time, emotionally aware conversations with responses personalized to each user.
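To make the idea of an emotionally aware voice interaction loop concrete, here is a minimal sketch of one conversational turn. All names and the keyword-based affect detector are illustrative assumptions; a deployed carebot would use real speech-to-text, language-model, and text-to-speech components in their place.

```python
# Minimal sketch of one turn of a situated voice interaction loop.
# The keyword-based affect detector below is a toy stand-in (an assumption),
# not how Sava or Pepper actually detects emotion.

NEGATIVE_CUES = {"lonely", "sad", "anxious", "scared"}

def detect_emotion(utterance: str) -> str:
    """Toy affect detector: flag utterances containing distress keywords."""
    words = set(utterance.lower().split())
    return "distress" if words & NEGATIVE_CUES else "neutral"

def generate_response(emotion: str) -> str:
    """Choose a response style based on the detected emotional state."""
    if emotion == "distress":
        return "I'm here with you. Would you like to talk about it?"
    return "That sounds interesting. Tell me more."

def interaction_turn(utterance: str) -> str:
    """One loop iteration: listen, assess affect, then respond."""
    return generate_response(detect_emotion(utterance))
```

The point of the loop structure is that every response passes through an affect-assessment step, which is where personalization and safeguards against patronizing language would be inserted.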

These prospective benefits conflict with notable concerns about deception, since carebots could mislead users by falsely displaying human traits while also separating older persons from human contact. Given that Pepper robots like Sava can produce rich expressions, including body language, utilize multimodal communication, analyze voice tones, and recognize human emotions, these fears could be warranted. Their open platform is designed to support a variety of applications and work with numerous large language models (LLMs), but they can be justified in assistive roles only if the privacy of user data is secured and protected against external actors. Unlike the current industry standard, the LLM must be made private by design: locally hosted, offline, and protected by multiple levels of encryption and authentication.

Moreover, dynamic world models (adaptive simulations of real-world environments and physics built from a combination of textual, visual, and movement data) can enable a system to identify solutions on a case-by-case basis and better predict patient needs, such as when a patient becomes unresponsive during an emergency. A robot like Sava can be conditioned to simulate both high- and low-risk scenarios that require a complex response, though constant evaluation by human partners is needed to ensure data privacy and value alignment with key stakeholders. Given the ongoing issues caused by the black-box problem, where an AI system's logic cannot be understood due to hidden connections between nodes in the network, it is important to implement transparent ethical guidance functions that specify to the system what constitutes a correct course of action. The Agent-Deed-Consequence (ADC) model, for instance, can develop a moral judgment of a situation by weighing considerations according to an actor's character (agent), the quality of their actions (deed), and the possible consequences in a given situation, based on user data in the dynamic world models. The model can then generate a solution that is operationalized in a carebot or other AI system using deontic logic, where these ethical components are represented as operators in the algorithm to determine what is obligatory, permissible, prohibited, and so forth.
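One way to picture how the ADC model could be operationalized with deontic operators is the sketch below: three scored components are combined into a moral judgment, which is then mapped onto deontic statuses. The weights, score ranges, and thresholds are illustrative assumptions, not the published model's parameters.

```python
# Hedged sketch of an Agent-Deed-Consequence (ADC) judgment mapped onto
# deontic statuses. All numeric weights and thresholds are assumptions
# chosen for illustration only.

from dataclasses import dataclass

@dataclass
class Evaluation:
    agent: float        # character of the actor, assumed in [-1, 1]
    deed: float         # intrinsic quality of the action, assumed in [-1, 1]
    consequence: float  # expected outcome value, assumed in [-1, 1]

def adc_judgment(e: Evaluation, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three ADC components into a single weighted score."""
    return (weights[0] * e.agent
            + weights[1] * e.deed
            + weights[2] * e.consequence)

def deontic_status(score: float, permit=0.0, oblige=1.5) -> str:
    """Map the judgment onto deontic operators (thresholds are assumed)."""
    if score >= oblige:
        return "obligatory"
    if score >= permit:
        return "permissible"
    return "prohibited"
```

For example, an action by a well-intentioned actor, performed properly, with good expected outcomes would score highly on all three components and come out as obligatory, while an action scoring negatively across the board would be prohibited. The value of making the function explicit, as opposed to leaving the judgment inside a black-box network, is that each component's contribution can be audited.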

Ethical guidance functions like these would arguably equip carebots like Sava with the parameters to make decisions in a range of cases. For example, while following the orders of human users is essential, so is distinguishing between a potential intruder trying to enter a home and a concerned neighbor who heard a fall. Gauging user intentions can sometimes mean recognizing sarcasm, assessing the probability of harm across multiple outcomes, and evaluating the action in question, such as whether a person intends to self-harm. More importantly, these functions can guide robots during moments of impasse, such as when commands conflict. For instance, it is vital for a carebot to know when it is appropriate to dispense medication even if a patient is not requesting it.
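The impasse case above can be sketched as a simple arbitration rule: when commands conflict, act on the one whose omission carries the greatest estimated harm. The harm scores and command descriptions here are invented for demonstration; a real system would derive them from its world model and ethical guidance function.

```python
# Illustrative sketch of resolving conflicting commands by estimated harm.
# The harm estimates below are invented for demonstration; in practice they
# would come from the robot's world model and ethical guidance function.

def resolve_conflict(commands):
    """commands: list of (description, harm_if_ignored) pairs, with
    harm_if_ignored in [0, 1] estimating the risk of not acting.
    Returns the description whose omission would be most harmful."""
    return max(commands, key=lambda c: c[1])[0]

# Hypothetical impasse: the patient has not asked for medication, but the
# care plan says a scheduled dose is due.
conflict = [
    ("wait for the patient's request", 0.2),
    ("dispense scheduled medication now", 0.8),
]
```

Even a rule this simple makes the trade-off explicit and inspectable, which is the core requirement the black-box problem leaves unmet.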

Introducing social partners like Sava into elder care is a novel way of managing patient needs and relieving caregivers of the burdens associated with daily care in a rapidly aging society, so long as robots are implemented in an ethical and effective manner. They must not be seen as replacements for proven and reputable care systems run by professional medical experts, but they may represent an essential piece of technology to support these systems in the near future. Effective deployment will mean not only avoiding serious cases of harm, but also supporting persons in the IADLs they perform daily. Ethical guidance means ensuring that the carebot has the tools to differentiate between competing commands, can react quickly and proficiently in both high-risk and low-risk situations, and is maintained offline to protect user data. If employed properly, carebots might constitute the next wave of social innovation in elder care. If not, they may provoke a social backlash, and perhaps even a new AI winter.

Shaun Respess, Daniel Blalock, Edgar Lobaton, Christopher B. Mayhorn, Arnav Jhala, Shawn Standefer, Jonathan Young, and Veljko Dubljevic
