Heartstrings and Hardware: Navigating the Ethics of LOVOTs in Elder Care

Authors

Hana Abbasian, MS, and Perisa Ashar, BSE

Topic(s): Aging, Artificial Intelligence

A group of researchers at the University of British Columbia’s IDEA Lab recently invited older adults living with dementia and their family care partners to sit with, hold, and observe an unlikely new inhabitant of long-term care: the soft-bodied, wide-eyed companion robot known as LOVOT. In their recent critical reflection study, published in Digital Health, the authors capture insights from lived experience, a form of knowledge philosophers call experiential epistemology (knowledge produced through direct experience rather than abstract reasoning). Participants described moments of joy, curiosity, hesitation, and cultural questioning. These reflections land at the center of a growing ethical conversation: as we introduce artificially intelligent companions into elder care, how do we ensure that the very technologies designed to soothe do not inadvertently erode the moral, cultural, and relational foundations of caregiving itself?

LOVOT is engineered to elicit affective responses through what designers call affective affordances, features that invite or trigger emotional engagement, such as warm tactile sensors, gaze-following behavior, and small cooing sounds that register as comforting. In theory, these design elements allow LOVOT to operate as a sociotechnical artifact, a technology that both shapes and is shaped by social relationships. Yet this same emotional pull raises complex ethical tensions. When a robot is built to mimic attachment behaviors, does the affection it evokes become a form of algorithmic intimacy (simulated closeness generated by computational processes)? And if so, are we helping older adults feel comfort, or subtly teaching them to bond with machines that cannot reciprocate in any meaningful and ontologically grounded way?
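To make the idea of affective affordances a little more concrete, consider the minimal sketch below. It is purely illustrative: LOVOT’s actual software is proprietary, and every name here (SensorEvent, AFFORDANCES, respond) is invented for this example. The point is structural, namely that an emotionally legible response can be produced by a fixed mapping from sensor events to behaviors.

```python
# Hypothetical illustration of "affective affordances": a simple
# sense-and-respond loop mapping sensor events to emotionally legible
# behaviors. All names are invented for illustration; this does not
# describe LOVOT's actual (proprietary) software.

from dataclasses import dataclass


@dataclass
class SensorEvent:
    kind: str         # e.g. "touch", "face_detected", "spoken_to"
    intensity: float  # normalized 0.0-1.0


# Each affordance pairs a trigger with a response designed to feel warm
# and reciprocal: tactile warmth, gaze-following, cooing sounds.
AFFORDANCES = {
    "touch": "emit_cooing_sound",
    "face_detected": "follow_gaze",
    "spoken_to": "turn_toward_speaker",
}


def respond(event: SensorEvent) -> str:
    """Select a behavior for a sensor event, defaulting to idling.

    The mapping is fixed: the apparent "affection" is a lookup,
    not an inner state.
    """
    return AFFORDANCES.get(event.kind, "idle")


if __name__ == "__main__":
    for e in [SensorEvent("touch", 0.8), SensorEvent("face_detected", 0.5)]:
        print(f"{e.kind} (intensity {e.intensity}) -> {respond(e)}")
```

Even in this toy form, the design choice is visible: the “warmth” a user experiences is downstream of a lookup table, which is precisely what makes the question of algorithmic intimacy ethically live.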

LOVOT may recognize facial expressions and respond to touch, but it is not capable of the deeper moral responsiveness that comes from shared human experience. This limit is tied to a concept known as ontological opacity (the impossibility of understanding the “inner life” of an artificial agent). No matter how responsive a LOVOT appears, it is still an object programmed to mimic care rather than inhabit it. The ethical concern is that care institutions may begin to accept robotic simulation as a morally sufficient substitute for human presence. What happens when simulation becomes normalized as care and empathy?

Perhaps the most ethically charged concern is what some scholars describe as relational vulnerability (susceptibility to harm within relationships that shape emotional wellbeing). Older adults with dementia may experience moments of confusion about the robot’s status, and they may form attachments shaped by its predictable and unconditional responses. This raises pressing questions. Who is accountable for the emotional effects of robotic companionship? The developer? The care facility? The healthcare workers who deploy the robot? Or is responsibility dissolved somewhere within the technological assemblage, leaving no clear moral agent at all?

Despite these ethical tensions, LOVOT is also a bright spot in the story of elder care. Older adults have reported joy, laughter, increased engagement, and moments of connection facilitated by the robot’s presence. The ethical question is whether its deployment respects the moral principles of elder care, including dignity, cultural identity, relational integrity, and justice. The most defensible model, ethically speaking, is one grounded in technological augmentation (technology improving but not replacing human care). Under this model, LOVOT becomes a companion that supports relational scaffolding, sparking conversations, easing anxiety, and encouraging social engagement, while never being presented as a substitute for human warmth, cultural understanding, or caregiver accountability.

Introducing emotionally responsive robots like LOVOT into elder-care settings forces us to confront what forms of connection truly matter in the final decades of life. LOVOT’s capacity to invite touch, spark curiosity, and generate moments of joy shows that socially assistive technologies can meaningfully enhance well-being when thoughtfully deployed. Yet affective design also risks obscuring deeper ethical stakes: whether simulated companionship might gradually stand in for human presence, whether vulnerable individuals are placed in morally ambiguous relationships with machines, and whether care institutions might redefine relational labor as something programmable. The path forward requires resisting both uncritical enthusiasm and categorical rejection. LOVOT should be understood not as a replacement for caregivers, but as a carefully governed tool that supports dignity, cultural identity, and relational integrity. When used as an augmentation to, rather than a substitute for, human care, such technologies can help create elder-care environments that are both emotionally enriching and ethically grounded.

Hana Abbasian, MS is a Research Assistant at Harvard Medical School and Centre for Addiction and Mental Health (CAMH).

Perisa Ashar, BSE is a Master’s student at Duke University.
