Author: Keisha Ray

Recently the Editor-in-Chief of JAMA, Dr. Kirsten Bibbins-Domingo, sat in conversation for just over 20 minutes with Dr. Michael Howell, the Chief Clinical Officer at Google, to discuss the evolution, possibilities, and challenges of AI in clinical practice as part of a JAMA series on AI in medicine. As a bioethicist who spent much of my graduate studies focusing on biotechnology, AI, and social media as they relate to health and medicine, this was my World Cup. AI is a necessary topic to tackle in the health sciences, especially for accomplished physicians in positions that shape professionals' perspectives on technology in healthcare. So I took great interest in this conversation, and there could be no better-positioned duo to undertake the discussion.

Although the conversation produced a thoughtful exchange of ideas that expanded how AI can be incorporated into clinical practice, what stood out to me was what was missing: how will AI affect patients' access to care and their experience of it? For many in healthcare, especially those on borrowed hours in a day, finding ways for AI to absorb the burden of time-consuming labor brings gleeful thoughts of reclaimed bandwidth for one's patients, one's research, and who knows what else! The potential is endless when time is returned to you.

Both parties were also delighted at the idea that AI could wield supportive power in diagnostic work, such as assisting with searches through reference material or quickly drafting patient notes, with the ability to monitor and update them. They touched on the challenges of AI, namely its penchant to "hallucinate" (when information the model has learned from or consumed leaks into its results, or when it invents source material that it then cites to support its responses to information-specific requests). Even with these concerning behaviors acknowledged, the overall perspective remained one of looking forward to AI's inclusion in clinicians' work.

One of my greatest critiques as a bioethicist, public health professional, and health equity expert is how siloed much of the health sciences are. Public health is treated as far removed from direct clinical care, specialized practice, social work, palliative and nursing professions, and insurance carriers, and it is this separation of expertise that has created myopia in analyses of AI in health and medicine. The excitement over how our own work will be markedly improved by a new technical tool outshines the reflection that should accompany it.

There is a mythos in martial arts surrounding Okinawa Kobudo, or the weapons of Okinawa: out of a need to fight off Japanese samurai, the farmers and peasants of Okinawa fashioned weapons of war from their farming equipment. When the original tools were made, the intent was to sow the land. It is fair to believe that at no point did these farmers imagine they would need to turn them into deadly weapons, yet this became their fate. With this story in mind as a parable of possibility, any vision of technology as a tool must also see technology as a weapon, and AI has already started to be carved into this fate, too. Recently a lawsuit was filed against UnitedHealth Group over its use of AI algorithms to systematically deny patient claims for extended stays in nursing facilities, forcing patients out of rehabilitation, or into out-of-pocket payments, far earlier than necessary. According to the class-action complaint, the algorithm had a 90% error rate, a figure the plaintiffs allege the insurer knew about, and they argue it was intentional by design.

For most people in the United States, access to health care relies heavily on the approval of insurance claims for coverage. And often, the most unaffordable types of care are the most lifesaving or life-sustaining interventions. Every AI team wants to be in the business of health because it is big business that will always be needed and will always exist. But it would be remiss of physicians and health professionals to confine the conversation about AI to the two places they occupy most: the clinical and the educational. For patients, healthcare frequently begins at the barriers, and one of the most prominent in our pluralistic healthcare system is affordability. This is where the shadowy figure of the third-party decision-maker steps in, particularly in the form of insurers.

While overworked, time-starved specialized staff celebrate the new AI tools on the horizon that will make nurturing medical care more fruitful, these weapons are laying waste to patients just before or just after they have sought care.

In her book Automating Inequality, Virginia Eubanks opens with the harrowing tale of her partner's unforeseen medical emergency, brought on by a mugging just three weeks after they had established insurance coverage. Eubanks and her partner discussed how fortunate the timing was: had this violence been visited upon them just a month prior, they would have been left with few options beyond emergency care, which would have bankrupted them. That was until the claim denials began arriving. As it turned out, the automated algorithms their insurer used to decide which claims were most likely fraudulent had ensnared theirs, specifically flagging the timing of their new coverage and how it overlapped with their particular zip code and neighborhood.

These two factors, along with the size of the claims they were filing (he sustained a series of nonlethal but serious injuries that required an immense number of procedures and therapies), tipped the algorithm into denying their claims under a presumption of fraud. Once these denials began rolling in, they became fodder for follow-up denials, forcing the couple to continually refile or appeal. In the midst of this, Eubanks's partner had to delay or discontinue needed services and was sent astronomical medical bills, creating tension and emotional distress on top of trying to heal. Eubanks spent weeks on the phone trying to untangle this web.

Eubanks's story isn't just a story about AI. It's about programmed bias within a highly automated system that had swapped human decision-making for bias and math. Ultimately, Eubanks was able to reach an empathetic human being. But what AI offers industries looking to maximize profit over patients' needs is the ability to eliminate employees entirely, leaving patients and their advocates to fight a ghost in the insurer's machine.

At one point in the aforementioned conversation, Dr. Howell touts the profound growth AI has undergone in less than a year, citing a test in which an AI model answered typical medical questions and its answers were blindly scored against physicians' answers. The first time, the evaluating physicians preferred the human physicians' answers on most of the criteria; the second time, they preferred the AI model's. This is an amazing feat of technology and should not be diminished in its accomplishment, but then the question must become: who can benefit from a machine that physicians prefer over themselves, and who is on the other end of this tool?

Evan Thornburg, MAUB, is a bioethicist, health sciences and humanities communicator and creator on TikTok, and health equity officer working in the Division of HIV Health at the Philadelphia Department of Public Health.

On TikTok: EVN the (Bio) Ethicist
