In an effort to justify the growing use of AI in healthcare, countless studies and empirical arguments have been put forward, ranging from concerns with the AI systems themselves to patient perspectives on AI in healthcare. However, many such studies focus on how to make the use of AI in healthcare more ethical, rather than challenging whether it should be used at all.
In fact, this reflects the most common justification for AI use that I see parroted by experts and lay people alike: AI is here and it’s here to stay, so it’s up to us to adapt accordingly.
I find this justification strange and lacking any argumentative force. If I broke into your house and announced that I’m here to stay and that you’d better adapt, you probably wouldn’t find the argument compelling enough to let me stay. Similarly, such a justification for the use of AI in healthcare should not be accepted, and the rejection of this fallacious argument should be reflected in the informed consent process whenever AI is used systematically in patient-centered healthcare.
For instance, one recent paper discussing disclosure of AI use in clinical care claims, without justification, that “Ultimately, AI will become a routine part of clinical care…” The paper goes on to argue that informed consent should be required only if AI is making independent decisions or if its use deviates from accepted standard uses. This argument rests on the assertion that if the use of AI is the accepted medical standard in a given context, then patients can be assumed to be consenting implicitly. That assertion misapplies the medical-standard argument, however, because the standard in this case is built, to a significant degree, on unethical uses of AI.
One recent study found that approximately 65% of US hospitals use AI-assisted predictive models, most commonly to predict inpatient health trajectories and to identify high-risk outpatients. One could argue that 65% is too low to count as the accepted medical standard, but that threshold is too ambiguous to ground a strong argument either way. There is a far more troubling number in this study: fewer than half of these hospitals evaluate their AI models for bias.
The accepted-medical-standard argument should lose all force if the standard is found not to serve the patient’s best interest. Further still, according to Paige Nong, PhD, author of the aforementioned study, the unchecked bias of such AI models disproportionately affects poor and rural communities, since it is often under-resourced hospitals that are unable to properly screen AI models for bias. Despite the high percentage of hospitals using these models, it isn’t reasonable to assume that patients are implicitly consenting to the risk of receiving lower-quality healthcare due to unchecked bias. To prevent such a risk, patients ought to be informed of AI use in their healthcare and given the option to opt out, especially in cases of prediction or identification.
However, some might object that such a goal is impractical for several reasons. The first, and likely most common, is that it’s simply too much of a hassle. This objection rests on the idea that most people don’t care that much: requiring patients to undergo the informed consent process for AI use would waste time for both the patient and the healthcare system. The second is that such decisions should fall on the shoulders of healthcare leadership, not patients. On this view, healthcare leadership has a duty to prevent the biases of AI systems from harming patients; it shouldn’t be the duty of patients to attempt to avoid such systems altogether.
Such objections fail to capture the range of perspectives on AI use. Regarding the second reason, it is equally impractical to assume that healthcare leadership will implement AI systems with the patient’s best interests in mind. In more affluent healthcare systems, AI is seen as a long-term investment that will help keep costs down, and as already noted, less affluent healthcare systems simply lack the resources to screen for bias. As for the first reason, that people won’t care enough, this sentiment severely underestimates just how much people dislike AI. One recent poll of registered US voters found that over half of participants believe the risks of AI outweigh its benefits.
And beyond polls, a report from More Perfect Union found that in a single three-month period in 2025, 20 data center projects were blocked or delayed by community organizing or pushback. Many people are indifferent to AI, but just as many adamantly oppose it.
Regardless of the reasons, studies show that many people are exasperated by the constant encroachment of AI into nearly every facet of their lives. In this sense, it would seem absurd not to disclose AI usage in the informed consent process: AI optimists would be happy to hear it’s being used, while AI pessimists would be relieved to avoid many of the ethical concerns that surround it.
Of course, I could be wrong about all of this. AI could truly benefit medicine and safely become standard use. The key term here is “safely,” and at present the ethical concerns are too great to overlook. That more than half of US hospitals are using AI, many despite serious concerns about bias, does not make it right to assume that patients are implicitly consenting.
As parents love to ask their children, “if all of your friends jumped off a cliff, would you do it too?”
Seamus Donahue, MA is the Program Manager for the Indiana University Center for Bioethics.