Artificial Intelligence and the Ethics of Clinical Research

Authors

David Resnik, JD/PhD, Mohammad Hosseini, PhD, and Nichelle Cobb, PhD

Topic(s): Artificial Intelligence, Clinical Ethics

Utilizing artificial intelligence (AI) in clinical research and healthcare management offers many benefits but also raises ethical concerns, including questions related to the reliability and trustworthiness of AI systems; accountability, transparency, and fairness in AI decision-making; protection of privacy and confidentiality; loss of personal connection between patients and clinicians; and deskilling of professionals. In this blog post, we will discuss four ethical issues raised by the use of AI in clinical research that require immediate attention by investigators, institutional review boards (IRBs), and institutions. 

Using AI to Prepare Documents Submitted to the IRB

The first issue is the use of large language models (LLMs), such as OpenAI’s ChatGPT or Microsoft’s Copilot, to assist with writing documents submitted to the IRB. While LLMs can provide valuable assistance with writing documents and reviewing the medical literature, they can also make mistakes of fact, reasoning, and citation. LLM-related errors could be a significant concern when clinical researchers use these tools to conduct background research related to the safety of a drug, biologic, or medical device. Ellen Roche’s tragic death as a result of inhaling hexamethonium as part of a Johns Hopkins University study illustrates the kinds of problems that can arise when clinical researchers do not carefully and thoroughly review the background literature. The use of AI to perform this task could make things worse if researchers rely on the machine to do their work and do not carefully review and validate its output. To minimize the risk of these types of mistakes resulting from the use of LLMs to write or edit documents submitted to the IRB, clinical researchers should disclose their use of LLMs and take full responsibility for their submissions. Additionally, institutions should provide researchers with guidance about the reliability of different LLMs, how to use these tools responsibly, and what to do in case something goes wrong.

Using Data from Human Participant Research to Train AI Models

Second, de-identified human research data are being used by researchers, clinicians, and institutions to train or fine-tune AI systems used in medical research, writing, or decision-making. Although most informed consent documents used in clinical research include provisions in which participants can grant permission for the general use of their data for secondary research purposes, it is likely that few (if any) of these forms specifically mention the use of data to train or fine-tune AI systems. While one could argue that these provisions cover the use of data to train AIs, it is likely that most research participants are unaware of this, and some would be upset if they knew their data were being used to train AI systems and/or benefit private companies. Going forward, IRBs should consider whether consent documents should inform new participants about the secondary use of their data to train AI models (e.g., by for-profit entities) and give participants an opportunity to opt out, to the extent that this is possible, given that commingling of clinical and research data occurs in many health care institutions. IRBs may also need to consider whether already-enrolled participants should be informed that their data may be used to train AI models, or whether to grant researchers a waiver of consent (under 45 CFR 46.116(f)) to do this.

Privacy and Confidentiality Issues Related to Uploading Human Research Data to AI Systems

The third issue is how to protect the privacy and confidentiality of human research data that are uploaded to AI systems. Even if the data are de-identified, it may still be possible for authorized users of the system (or hackers) to re-identify individuals through secondary processing or by linking the data with other available data. To address these issues, some institutions have begun to contract with generative AI companies to allow faculty, students, and staff to use their systems while meeting institutional security requirements, such as data encryption and firewall protection. IRBs may need to consider these issues when reviewing studies that include plans for using AI to analyze or share data, and they may need to consult with institutional information technology experts on data security issues and the risks of using certain AI systems.

Biased Applications of Human Research Data

The fourth issue concerns the potential for harmful applications of AI systems used to analyze clinical research data. For example, an AI system might use clinical data to make racially or ethnically biased recommendations related to medical diagnosis, treatment, or insurance reimbursement. Although bias is undeniably an important issue when considering the use of AI in health care, some might object that the applications of human subjects research fall outside the scope of an IRB’s authority or responsibility, since the Common Rule states that “The IRB should not consider possible long-range effects of applying knowledge gained in the research (e.g., the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility” (45 CFR 46.111(a)(2)). However, it has become increasingly clear to some researchers and bioethicists that this provision in the Common Rule unnecessarily restricts the IRB’s moral responsibilities, since research with human subjects often has significant impacts on communities, populations, or public health that IRBs cannot, in good conscience, ignore.

The use of AI in clinical research is likely to raise many other ethical issues that we have not discussed here. We have highlighted four that require immediate attention, and we encourage additional research and analysis concerning these issues as well as others. We also encourage federal agencies, such as the Office for Human Research Protections (OHRP), to develop guidance for investigators, IRBs, and institutions on these topics.

Acknowledgements

This research was supported, in part, by the Intramural Program of the National Institutes of Health (NIH). The views expressed here do not represent the views of the NIH or the US government.

David Resnik, JD/PhD, is a Bioethicist at the National Institute of Environmental Health Sciences, National Institutes of Health.
Mohammad Hosseini, PhD, is an Assistant Professor of Preventive Medicine at Northwestern University.
Nichelle Cobb, PhD, is a Senior Advisor for Strategic Initiatives at the Association for the Accreditation of Human Research Protection Programs.
