Upstream Ethical Mapping of Germline Genome Editing


This piece appears as an editorial in the August 2020 issue of The American Journal of Bioethics.

by Jodi Halpern MD PhD & David Paolo

In his lead article, Cwik derives categories for the upstream mapping of germline genome editing (GGE). Some peer commentators refine specific aspects of his analysis or recommend additional dimensions. Our concern, instead, is with the suitability of his methods and the resulting categories for setting an ethical agenda for GGE. We join several peer commentators (de Melo-Martín 2020; Evans 2020; Weber et al. 2020; Wrigley and Newton 2020).

There is value in delineating current research along Cwik’s clinical/technical dimensions—target, goal, outcome, mechanics—to sort out safety and technical issues, and to contribute to discussions of clinical research concerns at IRBs. However, the categorical distinctions that Cwik derives—correction, revision, transferring—should not shape policy debates for a societal intervention like GGE.

Cwik’s method decontextualizes GGE categories from relevant social facts, including health disparities and risks to human rights. Cwik recognizes these societal issues, but his model pushes them downstream in the ethical deliberation and fails to address how social evaluations implicitly influence his upstream categories. We argue here that the social context must be made explicit upstream, in the categorization schema itself.

CWIK’S CATEGORIES ARE NOT ETHICALLY NEUTRAL

…[I]t is not the scientific purpose or technique, but the justification for its use that generates much of its ethical relevance.

In order to formulate his framework, Cwik takes current GGE translational research projects as inputs and uses their sorting according to target, goal, outcome, and mechanics to derive more general categorical distinctions: correction, revision and transferring. Cwik claims that this categorization schema is the result of a “natural sorting…based on how they differ from each other along these four dimensions,” suggesting that he sees his reasoning as not ethically, socially or politically situated. However, Cwik also claims that the results of his analysis are ethically significant: “these differences are not just descriptive—they substantively affect the ethical issues raised by potential applications, both in degree and kind.”

Crucially, Cwik distinguishes correction from revision from transferring for the purpose of delineating which ethical questions are relevant. Although Cwik states that he is agnostic about downstream permissibility decisions, this is in tension with his statements that these fine-grained distinctions allow for finer-grained lines of permissibility, even if he isn’t determining those lines here. This leads to our core question: How does an ostensibly neutral scientific mapping produce ethically distinct categories? Our answer is that such dimensions and categories were derived from implicit value-related distinctions in the first place, and those distinctions must be explicitly examined.

Specifically, there are two levels at which Cwik’s seemingly “natural” sorting actually embodies existing social valuations of ethical import. First, the input to his model is existing GGE research. But the limited group of existing translational research projects was not neutrally chosen and funded—rather these projects were chosen according to judgments about feasibility, clinical and economic value, and other contextualized factors that we know instantiate power relations and social value judgments. This matters—current research studies may not represent acceptable uses of GGE (if any exist) for individual patients and families or for society, and the mapping they suggest may then be skewed in problematic ways.

Second, Cwik’s charting of existing research under-determines his derivation of the categorical distinction between correction and revision. Given this under-determination, we propose that some normative reasoning must contribute (the alternative is arbitrariness). We speculate, as Evans suggests, that perhaps Cwik draws an ethical line between returning a gene to “normal” versus revising it to a rarer type based on implicit thinking that a revision would be more likely to create an advantage or even be an enhancement. In this case, perhaps Cwik holds a view of fairness as each person deserving a fair chance for a life that is “normal” or common, but not for a life with rare advantage. Note that there are influential theories based on such thinking, for example, Norman Daniels’ theory of normal species functioning/fair chances as the basis for distributive justice. Of course, we may be wrong and Cwik’s views may be based on entirely different normative assumptions. Our point is that whichever normative assumptions helped him derive his categories should be made explicit.

Notably, Cwik is aware of the inherently social, contextual nature of GGE science and clearly states that his upstream categories were never meant to determine downstream decisions of permissibility. However, while upstream perceptions do not fully determine permissibility, they do influence which downstream questions become possible to think, and this implicitly shapes downstream reasoning about permissibility. Indeed, it is because upstream perceptions promulgate intuitive moral beliefs that they can do what Cwik says his categories do, which is help us “see” which ethics questions are relevant (Wrigley and Newton come to a similar conclusion). Such value judgments are often made pre-reflectively in the form of “perceptions of moral salience.” The agenda-setting that results from these perceptions is an act of power—it determines what will be discussed and what is dismissed—and creates a moral responsibility to reflect upon the intuitive and implicit assumptions that are the basis of categorization.

CATEGORIZATION, LIKE TECHNOLOGY ITSELF, SHIFTS OUR VALUES

Not only do technologies embody particular values, they also shape and transform reality, and influence our practical options and our reasons for action.

Like technologies, new categories or conceptual mappings not only contain value judgments but shift the standpoint for practical reasoning going forward and may create new moral duties or the perception thereof. For example, within a new category of corrective GGE, parents may feel pressure to ensure that their future children have “normal” genetic profiles and do not have “serious” or “preventable” genetic diseases.

Additionally, while there may not be a philosophical slippery slope, we must contend with an empirical slippery slope, whereby each step makes the next step more likely. So there are two distinct ways that categorization influences ethics—by determinations of moral salience upstream through directing attention toward some things and away from others, and then through an empirical slippery slope that shifts our standpoint. Evans and de Melo-Martín effectively show that we cannot readily recapture the situated concerns that were divided up by our initial categorization and that, in fact, this empirical slippery slope (which begins upstream) does influence downstream judgments of permissibility.

UPSTREAM ANALYSIS OF GGE SHOULD BE CONTEXTUALIZED

When a technology is poised to change human societies, it is irresponsible to allow implicit social valuations to set the agenda. If an upstream evaluation tool to create precision is needed, it must be one that delineates or maps out these social valuations and ensures transparency and inclusion in assessing these values. Societal values that we view as essential to consider upstream include: fairness/justice, inclusion and human rights (Cwik mentions these as concerns but not as upstream influences on categorization).

Regarding fairness, we argue that the effects of GGE on health disparities are not a secondary or downstream ethical consideration, for the following reason: In the US, due to structural determinants of health, life expectancy is reduced by more than ten years for people in poorer communities, and this gap is widening. COVID-19 deaths, too, have brought home the severe racial disparities in health in our country. If expensive GGE were to economically stratify familial genetic opportunities on top of this severe SES and racial stratification, this would intensify injustice across every aspect of people’s lives, from education, to housing, to bodily possibilities. Creating such a totalistic form of injustice is unacceptable; even if GGE were to eventually become affordable and trickle down, those left out for the foreseeable future are being treated unjustly.

GGE also would differentially affect marginalized communities and disability rights, as well as how disabilities are socially constructed. Cwik rightfully mentions the Deaf (culturally identified) community. Risks to this community range from shorter-term—decreasing the number of deaf people diminishes participation in Deaf culture—to longer-term—stigmatization or even ostracism of families who are unedited and the withdrawal of current assistive services (related concerns have come up with cochlear implants but GGE could shift the magnitude of these issues). These ethical issues must be considered upstream.

ASSESSING HUMAN RIGHTS

By now the reader is likely asking how we can take contextualized factors into account in a coherent upstream categorization.

The go-to approach for contextualizing the ethical debate about GGE is to turn to community engagement and democratic deliberation. NASEM calls for this as an essential upstream step. While democratic deliberation is necessary for fairness and inclusion, we believe that it is not a sufficient basis for evaluating technologies that could harm vulnerable populations. Publics can hold internalized noxious norms and cultural biases. Majority populations undervalue the perspectives of marginalized groups, especially groups whose lives they do not understand well. Additionally, there are deeply entrenched cognitive biases that make able-bodied people unable to assess the quality-of-life of people with disabilities. We saw this when Oregon engaged in deliberative democracy to ration health care, which resulted in the devaluing of disabled lives and led the state to be sued under the Americans with Disabilities Act. GGE is precisely the type of technology where the protection of, and respect for, people with disabilities is central, and yet such protection and respect is unlikely to follow from deliberative democratic processes alone.

Therefore, we suggest that if upstream categorizations are needed, they be based on a modified version of the Human Rights Impact Assessment (HRIA) (among other contextualized assessments). The HRIA is used when a public health intervention has intrusive societal effects, infringing on some people’s rights in order to protect the rights of others. For example, the HRIA is useful for decisions about involuntary quarantine for people with dangerous infectious diseases. Given that GGE is a societally intrusive technology, we have argued elsewhere that a modified version of the HRIA asks the right kinds of upstream questions for categorizing the level and type of ethical scrutiny that should then follow in specific applications.

Regarding the risks mentioned above to the Deaf community, the HRIA helps distinguish the harm done to future deaf people in a world with fewer deaf people and less Deaf culture (as parents opt for GGE) from the harm done by the withdrawal of needed services and ostracism. The HRIA clearly identifies the latter, and not the former, as an impermissible rights violation, and this would serve as a full stop until such rights were otherwise fully protected in law and practice.

Note, we are not saying that GGE should go forward if such laws were put into place—the HRIA itself is not a sufficient basis for assessment. Other important factors that require attention include the social construction of disability (which informs what is nominated for editing in the first place) and the history of eugenics. The point is that the HRIA is a more apt tool than “neutral” technical/clinical categories for assessing the upstream questions regarding a societally intrusive technology.

CONCLUSION

By arguing that democratic processes and especially the HRIA have key roles upstream, prior to downstream decisions about permissibility, we are not denying that distinctions of the sort Cwik suggests may also have an upstream role as part of an array of assessment tools. Their usefulness would depend upon a more careful examination of the value judgments influencing their derivation and implementation. While we are in no way advocating for GGE for any application, we have proposed that the HRIA—along with other evaluation tools for fairness, inclusion, and rights—is an important way to assess GGE upstream, though it would take more time and collective effort to delineate exactly what that might look like. We thank Cwik and the peer reviewers for extremely thoughtful work seeking to bring clarity to the complex questions surrounding GGE.
