This editorial can be found in the August 2023 issue of the American Journal of Bioethics.
Pragmatic clinical trials (PCTs) serve an important function in the modern research landscape: studying interventions in an environment that reflects real-world conditions, rather than the relatively stringent atmosphere of traditional explanatory trials. When PCTs are conducted in a reciprocal cycle of knowledge generation and care improvement, they also contribute significantly to fulfilling the goals of a learning health care system. The potential of PCTs to drive health care improvement stems in part from differences in design from explanatory trials, including most notably the ways in which some PCTs are embedded more or less seamlessly into routine clinical care. However, these differences can also raise different ethical challenges from those conventionally addressed in research ethics, which has focused heavily on traditional explanatory trials. In this issue of the journal, both Garland and colleagues and Morain and Largent address two such ethical challenges, presenting thoughtful suggestions for the ethical conduct of embedded PCTs.
One challenge for PCTs is securing support from clinicians for embedding research in their practices. Garland and colleagues take the position that everyday clinicians have an ethical obligation to participate in PCTs. They ground their argument by appealing to two other moral obligations: the duty of clinicians to provide high-quality care and the broader duty of clinicians to contribute to advancing biomedical knowledge. The authors then interrogate their position for possible exceptions to this professional duty and describe several contexts in which clinicians could defensibly decline participation. Complementing this work, the article by Morain and Largent identifies a critical issue in embedded research that is likely to become only more important: what should happen when clinically relevant information is identified in embedded research where informed consent has been justifiably waived and patients are thus likely unaware that their data are being used in research activities such as PCTs? The authors show how morally relevant distinctions between traditional explanatory research and embedded research mean that the strategies advocated for handling incidental findings in conventional RCTs are not sufficient when similar challenges emerge in embedded research, and they offer some helpful suggestions for an ethical path forward.
The Deeper Problem
Both articles offer insightful perspectives toward understanding ethical conduct in embedded research. However, this work will have limited relevance if we do not address a more fundamental ethical issue—namely, that embedded research is still relatively uncommon in American health care. More than 15 years ago, a landmark roundtable on learning health convened by the Institute of Medicine set the goal that by 2020, “90 percent of clinical decisions will be supported by accurate, timely, and up-to-date clinical information, and will reflect the best available evidence”. Since this goal was set, however, little progress has been made to ensure clinical decision making is supported by the kind of evidence that PCTs are expressly designed to generate. Indeed, despite continued investment in expensive technologies and rising administrative costs, the health status of Americans is not only lagging behind that of populations in peer countries, it is measurably declining. Most recently, the relatively poor performance of the United States compared to other countries in managing the COVID pandemic accounts for some of this disparity, but overdose, suicide, gun violence, and the burden of disease from common chronic illnesses have been and continue to be important drivers. For many of these illnesses, significant knowledge gaps around optimal clinical and public health practice persist.
These gaps are unlikely to be solved through traditional explanatory research for many reasons, including financial and logistic obstacles. Simply stated, what we are currently doing does not work, and in the face of declining health status we lack answers to critical questions about what we should be doing in health care and public health practice. Moving forward on this front will require a widespread set of commitments resulting in a systemic shift toward embedded research and learning health care more broadly.
Recent scoping reviews have identified only a handful of institutions in the United States that are conducting embedded research or other activities characteristic of the learning health care system, and these efforts are on a relatively small scale. Furthermore, these findings are supported anecdotally. In discussing the state of health care with systems leaders around the country, it is clear that while there has been some progress in generating real-world evidence in clinical contexts, there remain stunningly few intentional feedback mechanisms to ensure that care is changed once important learning conclusions have been drawn. As a result, the trajectory of embedded research appears perilously close to the state of explanatory clinical trials, where evidence is generated but often fails to be integrated and translated into improved clinical care. A rational transformation to a more effective learning health care system requires that health care systems go beyond data analysis and evidence generation to systematic translation of evidence into improved clinical care.
There are multiple, compounding reasons why the transformation to learning health and the widespread adoption of embedded PCTs have not yet happened. Consider, for example, the following three.
Data System Design
One obstacle to the routine conduct of embedded research relates to the technology involved. Although the capacity for digital recordkeeping of clinical encounters is now highly advanced, these systems have been built around the need for accurate financial accounting and transactional health care encounters, rather than around generating evidence from the data of multiple encounters to support better decisions. Electronic health records (EHRs), which have become prevalent throughout American health care, are shaped by the incentives of billing codes and reimbursement structures. There are no parallel incentives to also design these data systems to facilitate real-time knowledge generation. Indeed, there are arguably disincentives for institutions to do so. Institutions may fear that EHR-enabled embedded research through PCTs and other related methodologies could reveal suboptimal care and near-term quality problems. Disclosures from this work could also raise concerns about biased or misleading analyses by third parties, such as poorly adjusted comparisons between hospitals.
Generating the real-world evidence needed to address the critical challenges facing American patients, clinicians, and other health care stakeholders can rarely be achieved within a single institution, but instead will require sharing data across multiple health care systems, for which there is little incentive. EHR vendors have little reason to ensure their technology works across systems and interfaces with competing programs, creating challenges for individual patients as well as for systems of learning. More broadly, health care systems have little financial motivation to share electronic data with one another; instead, they are often motivated to sell their data to a third-party broker. Systems may also be reluctant to share information on the grounds that transparency in health data could potentially compromise business dealings, such as ongoing payor negotiations. In the same way that EHRs are shaped around billing, health care systems are often shaped around the ideas of fiduciary duty and shareholder primacy. Any efforts to advance the state of learning health must account for these obligations and incentives.
In addition, despite the fact that the vast majority of Americans express the desire to have their health data used to advance knowledge, we have collectively failed to develop a convincing paradigm for broad participation and data sharing at the level of the individual person in the context of routine health care delivery. Concerns about privacy and confidentiality continue to dominate public discussions, and the lack of agreement on data sharing even among third parties who have secured access to patient data remains a significant barrier. What is lacking is a clear delineation of the reciprocal obligations of those who benefit from data access, one that not only ensures sufficient privacy and confidentiality but also encourages data sharing widespread enough to secure much-needed answers to common clinical questions.
Regulatory Design
Federal regulations governing research with human subjects are vital for protecting patients and other participants in research. However, current guidelines from the Department of Health and Human Services (HHS), as well as the Food and Drug Administration (FDA), were developed to respond to ethical issues in traditional clinical research with experimental products and are not yet optimized to facilitate embedded PCTs and adjacent evidence-generation activities, including other uses of real-world evidence. For example, PCTs are generally lower risk and less intrusive than explanatory trials in ways that may ethically justify modifications of consent and other oversight requirements. More generally, many PCTs study licensed interventions that, as research interventions, pose little, if any, additional risk beyond that of ongoing care. Neither HHS nor FDA regulations currently offer guidance on whether or when studies of this sort might be categorized as minimal risk, a determination relevant to what types of oversight are appropriate. These issues need the joint attention of federal agencies, the research community, the health care delivery ecosystem, and patient advocates.
The Absence of a National Health System
Countries with national health systems and national insurance schemes can achieve ongoing learning in a way that is not currently possible in the United States. In such systems, most health care data can be aggregated in a uniform format. The ability to then analyze questions of importance to everyday health—from the extent to which COVID hospitalizations were among vaccinated versus unvaccinated adults, to the number of readmissions among patients who did and didn’t have someone walk them through medications upon discharge—is far more straightforward and far more efficient. Furthermore, such systems can be designed to be “evidence generation ready” so that prospective studies can be launched quickly at much lower cost than traditional human studies.
In the United States, we don’t have a broad-based national health system. For us, a collective commitment to the promise of learning health care, accompanied by innovative thinking, hard work, and a heavy dose of good will, is needed to realign incentives in American health care and its relationships with the increasingly important private sector in data aggregation.
Getting there requires a view that crosses the proprietary boundaries of health systems and businesses involved in health care and is oriented toward collaboration across the health care ecosystem. This will require more innovative structures for data sharing across institutions and, to the extent possible, new incentives to build the sophisticated infrastructure necessary to enable this work, including funding for collaboratories that invigorate the conduct of clinical trials, as well as training programs for embedded researchers and experts in implementation science.
In the meantime, there is also much more work for the bioethics community to do of the sort ably represented by the two target articles in this issue. For example, prior ethics thinking on learning health has focused on the obligations of researchers and clinicians and the health care systems that employ them. Yet today, much real-world evidence is generated and aggregated by a diverse range of entities beyond those that deliver health care, including insurers, pharmaceutical companies, biotechnology firms, data brokers, mHealth apps, wearables, and other for-profit and nonprofit entities. Many of these entities operate outside the sunshine of traditional clinical research and the corresponding requirements for ethical oversight and informed consent. Their aggregation and analysis activities typically involve deidentified data, can be highly profitable, and are primarily aimed at informing business decisions. What are the implications for the ethics of real-world evidence generation when these very different actors—most of whom never directly deliver care to patients and rarely subject their health care business analyses to public scrutiny—are driving much of the learning? An additional area of inquiry relates to how best to demonstrate respect, including and beyond the process of informed consent.
Questions also persist about the range of permissible—and optimal—ways to solicit consent for embedded research and data uses, as well as the conditions under which obligations to obtain consent may ethically be waived. How much do the responses to these questions turn on assessments of risk, and how much on the likelihood that findings will be translated into real-time health care improvement? Further, how should such determinations be made, and by whom?
Pragmatic trials and other embedded research offer the potential to generate information that improves clinical practice and public health policy and, in turn, to enable major improvements in the health status of Americans. Bioethics can play a critical role in this endeavor. Resolving ethical challenges in real-world evidence generation, both perceived and actual, helps to eliminate those obstacles to the conduct of embedded research, and to the resultant move to learning health, that stem from misunderstandings of what is ethically at stake.
The author(s) reported there is no funding associated with the work featured in this article.
Robert M. Califf, Ruth Faden, Nancy Kass, Stephanie Morain, and Matthew Crane
U.S. Food and Drug Administration