Some bioethicists link the beginnings of our field to the Nazi medical experiments and the Nuremberg Trial (Annas). Whether this marks the beginning of bioethics is debatable, but without a doubt, research ethics has been a central topic in the field. In fact, the very first federal bioethics commission laid out the principles of research ethics in the Belmont Report. Later, the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research recommended to the President and Congress that a uniform framework and set of regulations should govern human subjects research. This effort reached fruition in the Federal Policy for the Protection of Human Subjects, or the “Common Rule,” issued in 1991. Since then, there have been no major changes to the regulations – until now. After a five-year process and thousands of comments, the new “final rule” was released on January 19, 2017. The July 2017 issue of the American Journal of Bioethics addresses these changes. In addition to our usual open peer commentaries, we are posting a number of blog posts written in response to the AJOB target article.
by Emily Caldes, MA, CIP and Jennifer B McCormick, Ph.D., MPP
As noted by Friesen, Kearns, Redman and Caplan in their review of the Belmont Report, the Belmont Commission tackled the difficult task of distinguishing research from practice. The report defines research activities as those intended to develop or contribute to generalizable knowledge, and it defines practice as activities intended to enhance the well-being of particular individuals or groups of individuals. This distinction, however imperfect, became the regulatory standard used to make determinations about which activities require institutional oversight.
Friesen et al. claim that the line between research and practice offered in the report is no longer sufficient. This is partially due, they maintain, to the fact that research and practice are increasingly intertwined, a fact few would dispute. But the authors appear to be less concerned with the increasing ambiguity involved in distinguishing research from practice and more concerned with the nature of the concepts altogether. They point to the burdens of research oversight that act as a barrier, preventing the conduct of low-risk research that could contribute to generalizable knowledge and in some cases offer meaningful benefits. They propose a more “pragmatic” approach to setting boundaries around what does and does not require oversight. Specifically, they assert that research should be defined such that “knowledge production and benefits to all communities are maximized and harms to participants are minimized.”
We argue that this issue of defining research and practice, and its relationship to research oversight, requires unpacking. There is a difference between the codes of conduct followed by researchers and the institutional or governmental oversight that ensures researcher compliance with those codes. In the U.S., the primary agent for providing oversight is the Institutional or Independent Review Board (IRB). Additional modalities for review may include funding agencies, data safety monitoring boards, or scientific review boards. The purpose of the oversight is to mitigate conflicts of interest and ensure that proposed research meets regulatory and ethical requirements.
The Belmont Report does not attempt to disentangle adherence to ethical standards from the oversight intended to ensure adherence, and neither do Friesen et al. Thus, in their discussion of revisiting the Belmont Report, they appear to conflate concerns about burdensome oversight with concerns about the existing distinctions between research and practice. But once we separate oversight from conduct, we are left with the following questions: (a) Is the current distinction between research and practice sufficient in scope, sensitivity, and specificity? And (b) Should the determination that an activity constitutes research be equated with the need for oversight? Arguably, the principles laid out in the Belmont Report should apply to any research involving human subjects. However, this does not necessarily mean that all research with human subjects requires full or formal “oversight.”
Our concern is not with the authors’ goal of making research oversight more pragmatic—that goal is reasonable and necessary. Our concern instead lies in the perception that doing so requires a change to how research is defined relative to practice. The problem is not that researchers go about their business with too broad a definition of their work, but that researchers carry the undue burdens of inflexible oversight. Reform should therefore make oversight more flexible, not research less accountable. The precedent for this clearly exists in the current regulations related to exempt research; nevertheless, by many standards, even the reduced administrative burden of exempt review acts as a disincentive.
Given the significant changes to the landscape of research and the push to make routine and low-risk research more feasible, it is tempting to focus on what is tangible, namely avoiding research harms while clearing the way to pursuing benefits. However, a focus on what is tangible places the public at risk and obscures the overarching goal of research ethics: non-exploitation. The challenge, then, is to make research oversight more practical without sacrificing its rigor. It becomes easy to lose sight of this in a culture that too often emphasizes the efficiency of research over the value of persons. This misplacement of priorities can be particularly problematic if the physician/investigator, the institution, the study sponsor, or any member of the research team is personally invested in the success of the research.
If revisions to the Belmont Report were to include a significant shift in how research is defined relative to practice, as described by Friesen et al, then some of the activities that we now consider research would no longer be held to the same ethical standards. These ethical principles aim to ensure that participants are valued and treated as ends in and of themselves. Conversely, changes to the oversight required for some research and innovation could be executed in a way that values research and aims to maximize its benefits while also holding non-exploitation as a core and incontrovertible value.
In reviewing the rest of Friesen et al.’s recommendations, it is apparent that the approach described above is consistent with the spirit of the changes they are advocating. The theme throughout is the need to zoom out from the typically narrow focus on just participants and researchers to offer a more panoramic view of the research landscape. The authors clearly recognize this.
None of this is to say that existing best practices should be abandoned; they simply may be improved by taking a more holistic approach to research. For one thing, it is critical that researchers take into consideration the degree to which their procedures facilitate research participants’ understanding and appreciation of their research involvement. Giving more prominence to the process of consent, for example, facilitates research participants’ understanding of both their role in the study (i.e. participant contributing, patient benefiting, or potentially a little of both) and their responsibilities as research participants.
Of particular importance is the issue of transparency. Friesen et al. argue in favor of greater transparency across all stages of the research process, and presumably by all of the relevant stakeholders. As justification, the authors point to the need for more transparency due to the increasing involvement of industry and the rise of big data research and data sharing. However, we offer an even more fundamental rationale for increased transparency in research: social responsibility and professional obligation on the part of individual researchers.
Local institutions are looked to as stewards of research resources. Most obviously, those resources include increasingly limited research funds, but they also include the faithful participation of willing volunteers and, more fundamentally, public trust in research and the institution of medicine. Maintaining the public’s trust requires honesty, integrity, and transparency. Trust is a two-way partnership: someone trusts another who is trustworthy. This raises the question, of course, of what it means to be trustworthy. We offer that transparency and openness are critical elements of trustworthiness. Trustworthiness includes full disclosure of where funding comes from, how it is used, and why; what relationships exist between academic institutions (non-profits) and industry (for-profits) and why; what benefits an institution or an individual investigator gains from industry-academic relationships and how the conflict is managed; and why it is beneficial for the biomedical research enterprise to use information that is viewed by many as personal and potentially socially sensitive.
In conclusion, Friesen et al. are correct to argue that now is the time to revisit and revise the principles laid out in the Belmont Report. The first step to doing so, we propose, is to differentiate between oversight and research determinations, and to better ensure that the burdens of oversight are calibrated to the risks associated with the research. Further, revisiting Belmont offers us the opportunity to consider a more holistic approach to ensuring non-exploitation of research participants.