Translating Commercial Health Data Privacy Ethics into Change

Authors

Kayte Spector-Bagdady and W. Nicholson Price II


Note: This editorial accompanies The American Journal of Bioethics, Volume 23, Issue 11 (2023).

Hundreds of articles have been written over the past several decades delineating the ethical tensions of health data commercialization, empirically querying the preferences of data contributors, and offering paths forward for ethical commercial health data privacy policies. In this issue, McCoy et al. do an excellent job synthesizing this large corpus of literature into three main substantive ethical considerations for companies’ collection and use of personal data: minimizing harm, fairly distributing benefits and burdens, and respecting individual autonomy. Because these principles will always require tradeoffs, they also stress the importance of procedural principles to “specify how organizations ought to make decisions…,” which include transparency, accountability, and inclusion. Yet, as McCoy et al. point out, comprehensive data governance remains elusive, as the ways in which companies track and collect consumer data only continue to increase.

Many commentaries in this issue expand and build on McCoy et al.’s foundational approach. Others express frustration with the lack of action on the part of industry or government to ensure that the principles McCoy et al. espouse actually improve practice. Here, we explore more deeply some of the reasons why industry self-regulation has stagnated and offer additional thoughts on how to translate ethical frameworks into improved data sharing governance.

Challenges to Translating Ethics into Practice

While McCoy et al. importantly advance the conversation about how to frame and analyze the appropriate ethical basis for commercial data governance, the underlying concepts for an ethics of data commercialization have been around long enough that we already have a sense of their uptake. The two main options for enshrining an ethical framework into commercial health data privacy protections are generally formulated as self-regulation or government regulation. We take the challenges to each in turn.

Why Industry Self-Regulation Has Not Worked

McCoy et al. emphasize that “self-regulation is an inadequate response to the misuse of personal data,” but potentially “one of the only pathways to incrementally improving data practices” given the dearth of government regulation. Unfortunately, however, there are several key reasons why self-regulation has not been fruitful in the past.

Privacy Generally Doesn’t Sell

Whether privacy sells is ultimately an empirical question, and the empirical literature demonstrates that it often does not. Consumers might report that they value data privacy, but leaving the question there creates a false binary between “privacy yes” and “privacy no.” For enhanced privacy practices to result in increased sales, consumers would need to know and understand privacy protections well enough to make informed decisions and, when privacy protections are weighed against other types of benefits, to value privacy more. (It also raises the critical question of whether self-governance in the health industry is ever normatively appropriate, but we defer to Jonathan Marks’s excellent book on that topic.)

Consumers do not read or understand data privacy policies. We cannot fully summarize the rich empirical and normative literature here, much of which was written by the McCoy et al. authors themselves, but we will make a few points to outline the story. Patients rarely have the kind of information needed to make an informed choice regarding future uses of their data. They neither read nor understand informed consent forms. The vast majority of Americans think the Health Insurance Portability and Accountability Act (HIPAA) prohibits health apps from selling their health data. Only one in 1,000 consumers visits a website’s terms of service, and only one in 10,000 does so when access requires two clicks; the median policy reading time is 29 seconds. Digital terms and conditions regarding privacy practices can be so complicated that legal scholars have debated whether they should even be considered valid.

In addition, even if Americans did read (or could understand) privacy practices, they do not choose privacy over other kinds of functionality. Americans report that they are concerned about their data privacy and would choose commercial companies that prioritize it; 81% report that the potential risks of commercial data sharing outweigh the potential benefits. But this is not how they act. A large literature explores this “privacy paradox,” whereby people report being concerned about their privacy but do not behave in ways that actually protect it. People are willing to freely share personal data in exchange for other kinds of benefits, whether convenience-related, economic, or reputational, which “suppress the perception of risks while over-emphasizing the perceived benefits of privacy disclosure”. People express more concern about highly regulated academic research than about sharing health data with wellness apps. Researchers of the “paradox of control” have likewise found that giving users granular control over their data makes them more, not less, likely to share it, a fact not lost on developers.

Last, even if companies think that privacy might sell, there is little indication that they will be held accountable for promised practices. As Deeney and Kostick-Quenet summarize, “As long as a company’s data processing practices themselves are within the law, companies are free to decide where to position their disclosures along a sliding scale of transparency in ways that best support their business model”.

Industry Is Not Adequately Motivated to Avoid Future Regulation

It is also reasonable to assume that the health data industry will self-regulate because avoiding future regulation is in its interest. But the fact that commercial data companies have not done so in the past implies that this is not a strong enough inducement. For the most recent example of commercial data companies plunging forward without self-regulation or ethical principlism, we need look no further than the generative artificial intelligence (GenAI) boom. Google, for example, has public principles for ethical AI (be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold standards of scientific excellence, and be made available for uses that accord with these principles). But far from Google modeling better behavior for the industry, in late 2020 and early 2021 it fired the two co-leads of its “ethical AI” team (Timnit Gebru and Meg Mitchell) for what Gebru described as raising concerns about data biases. Around the same time, Gebru, Mitchell, and colleagues warned major GenAI developers that they were putting winning the technology race and potential profit before critical ethical protections. Again, they were dismissed. And far from being shunned by the public for lacking even the most basic data privacy protections, ChatGPT has become one of the fastest-growing consumer applications in history.

Government Regulation

Because of the limits of self-regulation, scholars have argued, here and elsewhere, that government regulation is the most promising path forward for industry health data governance. In this issue, Fowler et al. argue “It is in circumstances such as this—in which the public cannot protect itself, and system-level changes are necessary to achieve the intended outcome—that government action is most warranted”.

While we wholeheartedly agree, government translation of ethical principlism has its own challenges. In his 2015 book, Schneider recounts how U.S. researchers’ horrific abuse of human subjects inspired the original Belmont Report (which McCoy et al. also use to ground their own framework). Regulators then attempted to translate Belmont into the human subjects research regulations, including the Common Rule, with the goal of codifying such ethical principlism into law. Schneider argues, however, that the unaccountable and variable Institutional Review Board system has largely been a failure. Regulators also struggled to apply the Belmont principles consistently during the recent updating of the Common Rule. In this issue, Parasidis also goes into depth regarding barriers to comprehensive data commercialization regulation, including disagreements over whether consumers should have a private right of action, federal preemption of more protective state laws, and how much power to grant enforcement agencies. Therefore, while we fully support moving regulation forward, comprehensive and modern data privacy legislation faces notable hurdles, perhaps the most convincing evidence of which is that none has yet been passed.

Which brings us to our last question.

What Are Other Options for Effecting Change?

If neither self-regulation nor government regulation is achieving the philosophical values within proposed frameworks, what are the other options? While we unsurprisingly lack a magic answer, several other approaches are gaining traction in the literature.

A first, discussed by McCoy et al., is to focus efforts on the more modest goals of achieving mandates for disclosure and transparency in privacy practices. Such disclosure can enable effective enforcement and focus later policy reform on the most widespread and significant issues. Deeney and Kostick-Quenet argue that existing tools, such as encryption-based mechanisms for tracking data provenance and transactions, algorithmic approaches, and computational auditing and investigation, can already be deployed to enable such scrutiny. Gross et al. also describe their biobank pilot, which substantiated the McCoy et al. argument that “transparency of data use redistributes power, and begets further obligations for accountability and inclusion”.

Second, it is important to recognize that prospective legislation and regulation are not the only ways the law can achieve change. Individual lawsuits can drive change through state or federal law (though the latter is mired in ongoing debates about whether and when private individuals can sue). For example, in Dinerstein v. Google, litigated in Illinois, Dinerstein sued the University of Chicago Medical Center (where he was a patient) and Google (with which UChicago had shared patient data for commercial purposes). Dinerstein was ultimately unsuccessful, but there remains some limited hope for state case law if patients can establish that an invasion of their privacy resulted in financial injury, and state tort law has expanded patient rights beyond those found in federal regulation before. Recently, the U.S. Department of Justice also sued Google in a civil antitrust suit over its alleged monopolization of digital advertising technologies.

Last, commercial data use can be shaped by one place particularly subject to influence by academic bioethics: academia itself. There are health data regulatory regimes in this country—they just generally don’t apply to commercial industry. Researchers who must follow the U.S. Food and Drug Administration’s or the U.S. Office for Human Research Protections’ human subjects research regulations face very specific rules and enforcement mechanisms for health data use and sharing. Ironically, the government’s own regulations put its funded researchers at a competitive disadvantage vis-à-vis private industry in terms of collecting data. But academic medical centers (AMCs) hold massive assets in their gold-standard health data troves, which they can (and we think should) use to negotiate better protections for patient and participant data. Industry needs AMCs. Academics write the papers that validate the findings used to market the drugs prescribed in the medical centers that make commercial data valuable in the first place. As one of us has argued in the past, “instead of waiting for industry to self-regulate its production of valuable health data and biospecimens, academia should self-regulate its own consumption”.

It is invigorating that astute bioethics scholars continue to invest their energy in reforming the commercial health data industry. Moving forward, we will likely also need to be more creative in how we translate such ethical frameworks into improved practice. The articles in this issue of AJOB explore creative ideas to propel this field forward.

Kayte Spector-Bagdady and W. Nicholson Price II

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Funding

This work was supported by the National Human Genome Research Institute (K01HG01049) and the National Center for Advancing Translational Sciences (R01TR004244, UM1TR004404).
