by Craig Klugman, Ph.D.

On Monday I attended a symposium on inter-professional education. During a session on new technologies in medicine (telemedicine, wearables, and mobile devices) I raised the question of preserving privacy. A representative of the foundation sponsoring the event replied, “There is no such thing as privacy. It’s dead.” For someone who works in bioethics, serves on an IRB, and was formerly a journalist, this notion is scary. Perhaps I have simply been in denial. After all, I use a mobile phone that tracks my position, syncs with the cloud, and provides much convenience. In exchange, my information is collected, analyzed, sorted, and used for marketing and more.

Another sign of the end of privacy is the Open Humans Network, a project of the Harvard Personal Genome Project. The project has researchers at Harvard, NYU, and UC San Diego, backed by grants from the Knight Foundation and the Robert Wood Johnson Foundation. It seeks to connect people willing to share their personal information with researchers on an open platform.

As the study protocol states: “The value of sharing data is abundantly clear, and technical barriers are now surmountable thanks to the Internet and advances in information technology. The remaining barriers are legal and ethical, not technical.” The protocol goes on to explain that promises of privacy “restrain” researchers in “data silos.” Such “silos” mean there are limitations on sharing data between studies and researchers on different projects, between subjects, and with the public.

If that were not concerning enough, consider that participants are called “members,” not subjects. This makes people feel more like part of a club than objects of study. People can share demographic information, genomic data, and location data. This is enough information for someone to re-identify participants.

Members select what information to share and what to keep private. These choices can be made for their public profiles as well as for each research study. They can choose which research studies their data will be used in (there are three at the moment). Members’ identities are publicly available, and mini-profiles allow them to share what studies they are involved with. As the consent form says:

“When you share data publicly on the Open Humans website, it will be publicly visible and associated with your member profile. We anticipate researchers are likely to use this data, but there is no restriction: anyone may download it and it may be used for any purpose.”

In exchange, members have access to aggregated raw data from the studies. This is Research 2.0: full interaction, where a subject not only contributes data but can also take part in analysis and discovery. Subjects can share their data with each other, the general public, companies (such as direct-to-consumer testing groups), and even fitness-tracking services.

The cost of Research 2.0 is that “We do not guarantee privacy.” Participants may be identifiable depending on what they choose to share. The risks of participation include “identity theft, embarrassment, discrimination, and data may later become sensitive.” However, members do have the option to withdraw at any time, and their data will be removed from the database.

Open Humans is not the first attempt to democratize personal information for research. The 1000 Genomes Project is a notable example. Earlier this month, Apple announced its “ResearchKit” for collecting medical data from mobile devices. The information collected will be used by researchers writing medical apps and making new discoveries. Similar to Open Humans, ResearchKit users can choose what information will be shared with which apps and studies. One of the advantages claimed for these systems is that they make informed consent easier.

A feature of signing up for Open Humans is its consent process, which poses seven quiz questions to make sure that you have read and understood the consent document. If you do not answer them correctly, you cannot complete your sign-up.

The notion of crowd-based research is not new. You can already donate your idle computer time to some 40 research projects: climate modeling, space modeling, protein modeling, mapping the brain, calculating large prime numbers, even sorting through radio waves for signs of intelligent life elsewhere.

Research 2.0 may not take place in expensive labs and clinics. It may take place on the mobile phones and wearables collecting health data, and in the computers in our homes and offices analyzing that data. Perhaps the rise of the citizen-scientist will expose biases and blind spots perpetuated by how we train scientists to go about their work.

This kind of open platform networking may be the future of medical research. But it seems to me that this is a technology created to make researchers’ lives easier rather than to boost protections for potential participants. I may be old fashioned, but I am not ready to give up my privacy and confidentiality just yet. As a society we need a conversation on the role of privacy and confidentiality in distributed human subjects research. It could be a decade before the rules and regulations catch up with this technology. In the meantime, researchers should voluntarily develop guidelines that put subject (member) protection first, even if that presents a bit more of a barrier. Realistically, though, with declining dollars for research, this model is a creative approach that will prove itself if it leads to scientific and medical innovation.
