
This editorial appears in the February 2025 issue of the American Journal of Bioethics.
In this issue, Chapman et al. recommend substantial changes to Institutional Review Boards (IRBs) to address group harms in research. We agree with the concerns underlying their recommendations. Researchers have a responsibility to foresee and prevent group harms arising from their work. Communities at risk of research-driven group harms should also have the power to protect their interests in the design and implementation of research. We disagree, however, with the proposed solutions to these problems. Rather than expand the purview of an already overburdened and under-resourced IRB system, and rather than rely so heavily on compliance procedures, we should consider other mechanisms that operate within the research lifecycle to address risks of group harms.
It may seem odd to suggest looking beyond IRBs when the United States is the only country whose research ethics committees are prohibited from considering the downstream consequences of research (including group harms) in their review. Given international norms, expanding the purview of regulatory compliance in the U.S. may seem a natural solution to research-driven group harms. But doing so ignores known flaws within the research regulatory system. IRBs operate with weak operational support, and their members often receive inadequate training and can be unfamiliar with the very regulatory frameworks that govern their work. Research ethics committees are also ill-equipped to anticipate downstream consequences or to account for the industry-academia partnerships common in artificial intelligence research. Introducing new regulatory bodies to address new risks brings additional problems, including poor communication between organizations and inconsistent, uneven application of regulatory rules. And we still lack strong evidence that compliance with IRB requirements results in participant protection.
Looking to the informed consent process to address group harms also brings serious complications. The first is defining the groups that could experience harm. Without careful thought to the identification of these groups, researchers run the risk of using social groups as inappropriate proxies for the groups actually under study—and those ultimately at risk of harm. Blanket calls for community engagement in data-centric research, made without careful consideration of the communities in question, seem likely to reinforce the incorrect use of population descriptors in fields like genomics. Doerr and Meeder highlight several additional complexities in appropriately demarcating groups in data-intensive research, including groups that researchers can analyze into existence. Even if groups are properly identified, we still need to consider the additional burdens that community engagement places on communities and their members, and how those burdens could compound if such engagement were mandatory.
The second complication concerns the nature of group harms and informed consent: prospective research participants cannot reasonably opt out of group harms, even if they are informed about them during the consent process. In other words, if a prospective participant declines to participate in research because of the risk of group harm, opting out still leaves them exposed to that harm so long as other members of the same group consent to the research—what information scientists refer to as the “tyranny of the minority”.
Third, the informed consent process already bears too much weight. Concerns regarding the informational needs of prospective participants and the quality of their understanding during the informed consent process remain unaddressed. The written consent form is often prioritized over the (ideally dynamic) discussion that should accompany it. Relying on the informed consent process as the primary ethical touchpoint in research also shifts the locus of responsibility for evaluating the appropriateness of a study away from the researcher and onto the research participants.
Importantly, the federal regulatory apparatus, community groups, and individual research participants are not, nor should they be, the primary safeguards of the ethical conduct of research. Researchers are. We are the only ones involved at every stage of our work and we bear responsibility for its conduct and impacts. The current focus on IRBs may also encourage investigators to distance themselves from the ethics of their research by outsourcing the responsibility for the ethical soundness of a study to regulatory institutions. Yet, as Grady and Fauci argue, “even when all of these [regulatory] systems work well, ethical research depends on responsible investigators with certain character traits and commitments”.
How can we foster a research culture of investigators who take responsibility for and act to prevent the harms that their research can cause? We need to build an overlapping and reinforcing system of norms that promote such behavior. This entails moving beyond goals of compliance with regulatory requirements toward goals of ethical reflection and action in research. Here we are referring to the processes by which researchers (a) reflect on the ethical consequences of decisions throughout the research process and (b) take action to mitigate potential individual-, group-, and societal-level harms of their research.
Efforts have already begun to engage researchers in such ethical reflection and action. The Ethics & Society Review (ESR) at Stanford University is a coaching process in which peer faculty experts guide researchers through ethical reflection and action in their grant proposals prior to the administration of research funding. This process helps researchers to foresee possible harms of their proposed research and to outline specific actions they will take to mitigate those harms. It functions separately from the IRB and engages researchers on risks outside of the IRB’s purview, including potential group harms, such as the exacerbation of inequitable access to healthcare due to biases in AI models within healthcare systems. The European Research Council employs a similar process, and, for projects with complex ethical issues, the mitigation strategies that researchers commit to become part of their contractual grantmaking agreement. In response to calls for integrating broader research implications into the peer review process, the prominent machine-learning conference NeurIPS began asking researchers to discuss the potential societal consequences of their work, both positive and negative, in their conference submissions and to include these discussions in peer review.
These processes all aim to locate ethical responsibility in researchers themselves, not in regulatory bodies. Yet they all face scalability challenges. Reviewing hundreds, if not thousands, of research proposals for possible harms and proposed mitigations requires substantial peer labor. And guiding researchers through this process can slow down grant review. Ultimately, these efforts face a key challenge: effectively scaling without devolving into a compliance function or a perfunctory “ethics checklist.”
Addressing these challenges requires promoting and rewarding ethical behavior in academia in meaningful ways. This can include incorporating the ethical conduct of research into the tenure and promotion process, or awarding additional grant funding to account for the extra effort required to implement costly mitigation strategies. Ultimately, the virtuous investigator stands as a key protection against group harms. Therefore, we must find ways of inculcating virtue into researchers and ethical reflection into the entire research process.