M3GAN: What The Next Generation of Frankenstein Films Can Teach us About AI in Healthcare


By Dov Greenbaum, JD, PhD

What can a high-end toy developer, an orphan girl, and a murderous cyborg doll teach us about artificial intelligence in healthcare devices?

The new horror film M3GAN (pronounced “Megan”) tells a cautionary neo-Frankenstein tale: as we rashly create ever more advanced AIs as playthings (think ChatGPT, the recently launched AI chatbot developed by OpenAI), we may lose control over these machines as they become smarter and more powerful than their naïve creators.

Who is M3GAN?

In the film, Gemma (Allison Williams), a Seattle-based robotic toy inventor, is suddenly thrust into a parenting role for her recently orphaned niece, Cady. Unable to adequately fill the shoes of her deceased sister, Gemma brings home her side project at the Funki toy company, an advanced AI robot companion called M3GAN (Model 3 Generative Android), to be Cady’s playmate.

AI-era Frankenstein’s Monster

Anyone familiar with the classic Frankenstein story knows where this film is headed. Like the Rick and Morty spaceship tasked with protecting Rick’s granddaughter at all costs, M3GAN pursues her prime directive of protecting Cady’s wellbeing literally, without regard for the lives of other humans.

This predictably results in M3GAN increasingly ignoring human oversight, along with an escalating lust for blood. As the story progresses, M3GAN’s killings (mostly offscreen; it’s a PG-13 movie) become more explicit and violent.

Like Frankenstein’s monster, M3GAN grows progressively more powerful in her reasoning and understanding. She lashes out at her creators, especially as they attempt to limit her increasingly advanced capabilities. Spoilers! Although Gemma and Cady eventually destroy M3GAN with the help of a reliable non-AI machine, the film ends with M3GAN successfully uploading her programming to an Amazon Alexa-like device, likely to fight another day.

A Cautionary Tale

Already, the news is replete with stories about how ChatGPT and similar generative deep-learning AI can be misused, from banal bad behavior like classroom cheating to actual evil, including internet trolling campaigns aimed at undermining democratic governments and force multipliers for advanced cyberattacks.

The movie warns us that, like the Funki toy company and its investors, society is blinded by the potential to monetize these technologies, rushing headlong past what Frankenstein and its progeny have been telling us for centuries: that unbridled scientific experimentation can only end in humanity’s ultimate downfall.

The moral hazards articulated in M3GAN apply well beyond the narrow area of robot companionship portrayed in the film. Many sectors, including medical devices, are working to integrate AI more fully into their products. And many of the concerns raised by the AI in M3GAN are the same: fear, bias, privacy, and lack of oversight.


What can M3GAN teach us about medical software?

Consider the many concerns related to software as a medical device (SaMD). The US Food and Drug Administration (FDA) recently updated its list of known AI devices in this medical space, most of which are currently in radiology.

The risk of cyberattack fuels ongoing concerns with SaMD and other medical devices that run advanced software. In the film, M3GAN hacks a consumer device to save herself, highlighting the access these devices give malicious actors. The US Congress recently acted on this fear: its $1.7 trillion omnibus bill, passed at the end of last year, specifically funded an FDA oversight program to strengthen medical device cybersecurity (Sec. 3305).

SaMD is especially problematic when it incorporates AI algorithms. In addition to the aforementioned cybersecurity concerns, the FDA has grappled for years with how to adequately regulate this technology. Like M3GAN, AI-based SaMD lacks our full trust. FDA guidelines address concerns about diagnostic quality, safety, and biased data producing biased results. Hopefully, future regulation will also adequately address the general lack of transparency and explainability in the AI models that underpin the software.

Finally, some of the most pressing concerns about regulating AI software, both pre-market and post-market, stem from the reality that SaMD, like AI software in many regulated sectors, is constantly iterating and updating, making it next to impossible to certify the software’s safety and accuracy at any fixed point.

AI Built on Flawed Retrospective Data

Like the under-tested M3GAN, and despite successes touted in the media, almost all AI medical software is validated primarily through retrospective studies for its regulatory submission. Beyond the concern that prospective use of these technologies is unproven, the retrospective data used to train the AIs are unstandardized, homogeneous, and sparse, further exacerbating concerns about the reliability, representativeness, and bias of the AI models.
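The retrospective-data problem can be sketched with a toy example. This is entirely hypothetical (the numbers and the trivial threshold “model” are illustrative, not any real SaMD): a model fit on a narrow, well-separated retrospective cohort can look perfect in validation yet degrade on a prospective population with a different case mix.

```python
# Hypothetical illustration: a toy "diagnostic" threshold model trained on
# homogeneous retrospective readings, then applied to a shifted prospective
# cohort. Each sample is (reading, label) with label 1 = diseased.

def fit_threshold(samples):
    """Place the cutoff midway between the mean healthy and diseased readings."""
    healthy = [x for x, label in samples if label == 0]
    diseased = [x for x, label in samples if label == 1]
    return (sum(healthy) / len(healthy) + sum(diseased) / len(diseased)) / 2

def accuracy(samples, cutoff):
    """Fraction of samples where 'reading above cutoff' matches the label."""
    correct = sum(1 for x, label in samples if (x > cutoff) == (label == 1))
    return correct / len(samples)

# Retrospective cohort: one site, one demographic, cleanly separated readings.
retrospective = [(95, 0), (100, 0), (105, 0), (140, 1), (145, 1), (150, 1)]

# Prospective cohort: broader population where disease presents differently,
# with readings overlapping the learned cutoff.
prospective = [(110, 0), (118, 1), (121, 1), (130, 1), (125, 0), (115, 0)]

cutoff = fit_threshold(retrospective)       # 122.5
print(accuracy(retrospective, cutoff))      # 1.0 -- looks perfect in validation
print(accuracy(prospective, cutoff))        # 0.5 -- no better than a coin flip
```

The point is not the arithmetic but the gap: validation on the same homogeneous retrospective data that trained the model says nothing about how it behaves on the patients it will actually see.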

No Easy Answers in an AI Future

While there are many consequential differences between the regulations needed for AI robot companions like M3GAN (arguably a growing sector in elder care) and for AI in health software, there are no easy solutions, and each industry will need different ones. For the health sector, and SaMD in particular, we hope the FDA will provide focused regulation, including harmonization with international regulatory bodies. SaMD will also benefit from ongoing, extensive testing and sandboxing to ensure that these valuable technologies ultimately do more good than harm.

The Frankenstein myth has thoroughly ingrained in our collective psyche that our monsters will ultimately cause widespread havoc. This doesn’t need to be the case if we create responsible legislation and regulation that promotes responsible innovation.

We are innovating quickly, especially in the area of health care. Without the necessary oversight, the M3GAN sequel (announced for January 2025) could believably focus on healthcare AI.

 

 

Dov Greenbaum, JD, PhD, is an attorney and privacy professional. He is a law professor at Reichman University in Herzliya, Israel, and the founder and director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies. He is concurrently a research associate in the Department of Molecular Biophysics and Biochemistry at Yale University.

 
