As missions venture farther from Earth or require a longer, sustained human presence, astronauts are exposed to a variety of physiological stressors and face extreme challenges to mental health. Extended deep space missions, such as round-trip missions to Mars lasting approximately 2.5 years, will intensify feelings of isolation and depression and restrict contact with anyone beyond the immediate crew to limited intervals. Communication via the Deep Space Network can take anywhere from five to 20 minutes one way, further reducing social opportunities and access to resources. Under such conditions, immediate in-mission psychological support is an important factor when planning for crew health and safety, and AI therapeutic approaches are one means of helping maintain cognitive and emotional stability.
Mental health protections can be supported through predictive modeling built on data sets compiled from mood monitoring, language analysis of voice recordings, logs, and personal accounts, environmental variables, electrodermal responses (such as perspiration), variability in biorhythms (such as heart rate or sleep patterns), and other physiological markers. In space mission scenarios, natural language processing (NLP) and deep learning (DL) methods can parse this information and interpret what it indicates, even given the complexity of biological signals. Machine-learning-based stress detection and alerting can provide real-time analysis and anomaly recognition and can suggest conversational or emotional support as part of a therapeutic intervention.
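As a rough illustration of how such signals might be combined (not a depiction of any agency's actual system), the sketch below standardizes a few physiological readings against an astronaut's personal baseline and merges them with a simple language-based negativity measure into a single stress score; the feature names, weights, and thresholds are hypothetical.

```python
import numpy as np

# Hypothetical illustration: combine physiological markers and a crude
# language signal into a single stress indicator. Feature names, weights,
# and thresholds are assumptions for the sketch, not validated values.

NEGATIVE_WORDS = {"exhausted", "alone", "worried", "cannot", "tired", "stressed"}

def zscore(value, baseline_mean, baseline_std):
    """Standardize a reading against the astronaut's personal baseline."""
    return (value - baseline_mean) / max(baseline_std, 1e-6)

def language_negativity(text):
    """Fraction of words drawn from a small negative-affect lexicon."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words) / len(words)

def stress_score(hr_bpm, sleep_hours, eda_microsiemens, journal_text, baseline):
    """Weighted sum of baseline-normalized deviations; higher means more concern."""
    features = np.array([
        zscore(hr_bpm, *baseline["hr"]),             # elevated resting heart rate
        -zscore(sleep_hours, *baseline["sleep"]),    # reduced sleep counts against
        zscore(eda_microsiemens, *baseline["eda"]),  # elevated electrodermal activity
        10.0 * language_negativity(journal_text),    # scaled language signal
    ])
    weights = np.array([0.3, 0.3, 0.2, 0.2])         # illustrative weights only
    return float(weights @ features)

baseline = {"hr": (62.0, 4.0), "sleep": (7.2, 0.6), "eda": (2.1, 0.4)}
score = stress_score(74, 5.1, 3.0, "Tired and worried about the water recycler.", baseline)
if score > 1.0:  # illustrative alert threshold
    print(f"stress score {score:.2f}: suggest a check-in or guided exercise")
```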
While continuous monitoring enables early detection, it also raises ethical concerns about psychological autonomy and surveillance, as it may be experienced as an extension of control rather than care. In space missions, where opportunities for agency are constrained, AI therapy systems can better support autonomy when their influence is transparent, limited, and aligned with astronauts’ long-term goals for the mission.
Wider use and proper calibration of wearables for each astronaut can further improve the effectiveness of systems using NLP and DL methods. Wearable data can supplement other inputs to provide more personalized recommendations about which therapeutic interventions an astronaut may need, informing more precise treatment decisions rather than a blanket wellness alert.
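One way to realize this kind of per-astronaut calibration, sketched below with hypothetical channel names and cutoffs, is to keep a rolling personal baseline for each wearable channel and map only the channels that deviate from that baseline to channel-specific interventions rather than to one generic alert.

```python
from collections import defaultdict, deque

# Illustrative sketch of per-astronaut wearable calibration: each channel keeps
# a rolling personal baseline, and only channels that drift from that baseline
# trigger a recommendation tied to the channel, rather than a blanket alert.
# Channel names, the window size, and the intervention map are assumptions.

WINDOW = 14  # days of history kept per channel

INTERVENTIONS = {
    "sleep_hours": "offer sleep-focused CBT exercises",
    "resting_hr": "suggest a relaxation or breathing session",
    "voice_negativity": "offer a guided journaling or chat check-in",
}

class PersonalBaseline:
    def __init__(self):
        self.history = defaultdict(lambda: deque(maxlen=WINDOW))

    def update(self, channel, value):
        self.history[channel].append(value)

    def deviation(self, channel, value):
        """How far today's value sits from this astronaut's own recent mean."""
        hist = self.history[channel]
        if len(hist) < 5:  # not enough personal data yet
            return 0.0
        mean = sum(hist) / len(hist)
        spread = (sum((x - mean) ** 2 for x in hist) / len(hist)) ** 0.5
        return abs(value - mean) / max(spread, 1e-6)

def recommend(baseline, today):
    """Return channel-specific suggestions instead of one generic wellness alert."""
    suggestions = []
    for channel, value in today.items():
        if baseline.deviation(channel, value) > 2.0:  # illustrative cutoff
            suggestions.append(INTERVENTIONS.get(channel, "flag for crew physician"))
        baseline.update(channel, value)
    return suggestions
```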
However, this codification of intimate psychological data raises concerns about ownership, privacy, and trust, which may influence an astronaut’s willingness to engage with the system. While astronauts relinquish some autonomy to ensure mission safety and to support the development of more stable habitats for future crews, their contributions to these data sets should not compromise personal confidentiality. Because astronaut selection is highly competitive, psychological disclosure may be perceived as a risk to future mission eligibility. As such, limitations on data access and use are essential so that astronauts can participate confidently without fear of professional consequences.
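If such limits were enforced in software, one minimal approach, sketched below with hypothetical roles, purposes, and data categories, is an explicit allow-list in which psychological data are released only for narrowly defined purposes and never for selection or eligibility decisions.

```python
# Hypothetical policy sketch: psychological data are released only to defined
# roles for defined purposes. Roles, purposes, and categories are assumptions
# used for illustration, not an actual agency policy.

ACCESS_POLICY = {
    # data category: {role: {allowed purposes}}
    "mood_logs": {
        "crew_member_self": {"self_review"},
        "flight_surgeon": {"clinical_care"},
    },
    "aggregate_crew_metrics": {
        "flight_surgeon": {"clinical_care", "mission_safety"},
        "mission_control": {"mission_safety"},
    },
}

PROHIBITED_PURPOSES = {"crew_selection", "future_mission_eligibility"}

def may_access(role: str, category: str, purpose: str) -> bool:
    """Allow access only when the role/purpose pair is explicitly allow-listed."""
    if purpose in PROHIBITED_PURPOSES:
        return False
    return purpose in ACCESS_POLICY.get(category, {}).get(role, set())

assert may_access("flight_surgeon", "mood_logs", "clinical_care")
assert not may_access("mission_control", "mood_logs", "mission_safety")
assert not may_access("flight_surgeon", "mood_logs", "future_mission_eligibility")
```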
Current AI assistant systems have demonstrated their potential, as shown by the Crew Interactive Mobile Companion (CIMON), which uses voice processing and NLP to assist astronauts in daily tasks, increasing productivity while reducing stress. Early deployments, however, revealed problems with linguistic interpretation, inaccurate emotion recognition, and stress-mitigation suggestions that were not always appropriate to an individual’s specific needs. These issues highlight the ethical importance of addressing model bias and personalization, especially before expanding into medically oriented AI tools. Future AI medical applications, such as the Crew Medical Officer Digital Assistant (CMO-DA), will use NLP and machine learning models trained on spaceflight medical literature and crew records to provide autonomous medical assistance. As these systems support and guide medical and psychological decisions, there is a risk that algorithmic authority will sway personal judgment, raising ethical concerns about automation complacency.
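The sketch below is not a description of how CIMON or CMO-DA is built; it only illustrates, with a toy corpus and a generic TF-IDF retriever (scikit-learn is assumed to be available), one design pattern for mitigating automation complacency: the tool surfaces relevant reference passages with explicit similarity scores and leaves the decision with the crew.

```python
# Toy illustration of decision support that surfaces references but defers to
# the human. This is NOT how CMO-DA is implemented; the corpus, threshold, and
# wording are assumptions for the sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CORPUS = [
    "Management of sleep disruption during long-duration spaceflight.",
    "Guidelines for responding to acute anxiety episodes in isolated crews.",
    "Protocols for musculoskeletal pain after extravehicular activity.",
]

vectorizer = TfidfVectorizer().fit(CORPUS)
corpus_vectors = vectorizer.transform(CORPUS)

def suggest_references(query: str, min_similarity: float = 0.15):
    """Return candidate passages with similarity scores; never a directive."""
    scores = cosine_similarity(vectorizer.transform([query]), corpus_vectors)[0]
    ranked = sorted(zip(scores, CORPUS), reverse=True)
    hits = [(round(float(s), 2), doc) for s, doc in ranked if s >= min_similarity]
    if not hits:
        return "No confident match; consult ground support when the link allows."
    return hits  # presented as references for the crew's own judgment

print(suggest_references("crew member reporting sleep disruption and anxiety"))
```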
Technical challenges to implementing these systems remain, including securing communication channels, functioning within the resource-constrained environment of deep space, and handling high-latency or noisy data. Since these tools are intended to help astronauts in real time and reduce reliance on delayed communication with ground stations, missions could host the models on an onboard server so that processing happens locally, making responses faster and more reliable.
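As a sketch of how such an onboard-first design might triage requests, with all names and thresholds hypothetical, the logic below answers urgent requests locally and queues non-urgent items for asynchronous ground review only when the light delay and the next contact window make a timely second opinion feasible.

```python
from dataclasses import dataclass

# Hypothetical triage logic for an onboard-first support system: answer locally
# in real time, and queue non-urgent items for ground review when the one-way
# light delay and contact schedule allow. Thresholds are illustrative only.

@dataclass
class LinkState:
    one_way_delay_min: float   # current Earth-vehicle light time, in minutes
    next_contact_hours: float  # hours until the next scheduled contact window

def route_request(urgent: bool, link: LinkState, max_wait_hours: float = 12.0) -> str:
    if urgent:
        return "onboard"                  # never wait on a multi-minute light delay
    round_trip_hours = 2 * link.one_way_delay_min / 60.0
    if link.next_contact_hours + round_trip_hours <= max_wait_hours:
        return "queue_for_ground_review"  # asynchronous second opinion
    return "onboard"                      # answer locally, sync logs later

link = LinkState(one_way_delay_min=18.0, next_contact_hours=6.0)
print(route_request(urgent=False, link=link))  # -> queue_for_ground_review
print(route_request(urgent=True, link=link))   # -> onboard
```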
Together, these operational and technical tensions raise ethical considerations when adapting diagnostic and predictive models, especially when they handle sensitive information for health monitoring and intervention. The concerns become most significant in therapeutic contexts, where the potential harms of psychotherapy are ethically pressing. Risks such as therapeutic misconception may lead users to overestimate an AI system’s capabilities for therapeutic support, and AI conversational agents lack emotional depth and empathy and cannot form or facilitate the human connection that therapeutic processes often need.
Clearly defining the role, purpose, and limitations of AI therapy can help minimize the ethical risks associated with its use. Beyond the terrestrial environment, opportunities for connection with human therapists are limited, and astronauts therefore represent a unique and vulnerable population.
Astronauts are selected based on how well they perform and think in demanding conditions. They are healthy, rigorously trained for missions into deep space (excluding spaceflight participants and tourists), and possess select-in qualities, such as adaptability to high levels of stress or strong social communication skills.
Since the ethical concerns of psychotherapy depend on the users in question, astronauts are well positioned to provide informed consent, to interact with AI systems with greater caution than an average user, and to discern the limits of AI. In space missions, when NLP and DL models like those behind the CIMON and CMO-DA systems help predict, diagnose, or deliver structured therapeutic exercises, they are deployed among users who are accustomed to role-playing mission scenarios in training and who can engage with these systems as collaborative partners or use them for guided reflection.
For instance, potential mental health supports could include sleep-focused cognitive behavioral therapy adjustments to help cope with the 90-minute cycle of night and day, prompts to engage with family recordings, virtual or augmented reality via a HoloLens adapted with prerecorded simulations, relaxation techniques and yoga, chat-based conversational support, and religious or spiritual support. While AI therapies or tools can assist in these areas, there is no expectation that they are proper or equal replacements for human-delivered care.
Understanding the system’s non-human nature, and that AI models are meant to support rather than replace human relationships, gives astronauts a transparent view of its therapeutic benefits. Similarly, a responsive and private tool for emotional processing eases the burden that the dual relationships of intercrew psychotherapy place on the onboard physician, supporting overall crew health and harmony. AI therapeutic processes could further strengthen crew dynamics by recognizing crew-wide patterns and moods and by building coping strategies, teamwork, and conflict-resolution scenarios, all essential mission components served by good mental health.
Given the importance of psychological health, informed consent, and crew harmony, NLP and DL can be ethically viable AI methodologies for deep space missions. To protect astronaut autonomy and health while remaining sensitive to ethical responsibility in constrained environments, pre-mission training should communicate clearly what astronauts can expect from these systems, accompanied by ground-based monitoring of system efficacy when communication allows and with astronauts’ ongoing consent.
While AI has often been viewed critically in the academic philosophy of AI ethics, it can be accepted as beneficial within the practical context of human spaceflight if it is integrated with ethical caution and transparency and operated within clearly defined limits.
Olivia Bowers, MS, MBE, is the Managing Editor of Voices in Bioethics (Columbia University).