
Posted on January 10, 2019 at 4:03 PM

by Craig Klugman, Ph.D.

Threats to privacy and confidentiality swirl around. Each day the newspaper seems to report on stories that show the erosion of this fundamental human right. This week alone was a report on the ability of an artificial intelligence to accurately diagnose disease simply by looking at a photographic portrait. The National Institutes of Health is exploring whether all newborns should automatically have their DNA sequenced to screen for genetic disease. Another article explains how Facebook is reporting to law enforcement officials when there are posts and patterns of behavior that indicate a user may be suicidal (part of the smartphone psychiatrist movement). The Consumer Electronics Show introduced a slew of new smart devices that claim to make your life easier and more efficient, all for the cost of accessing your private information. And a DJ admitted to murdering a young woman when his relative uploaded their DNA to a public database that police scour for familial links in order to solve cold murder cases.

Last week, after I wrote that privacy can inhibit machine learning for developing medical AIs, I received a note suggesting that privacy ends when there is a benefit to society to be gained. This question has led me to wonder if we are in the post-privacy age. When our locations are tracked by cell phone GPS, our conversations are recorded by home hub assistants, and de-identification is no longer realistically possible with research and personal medical records, perhaps we have come to a point where the notion of privacy is no longer helpful. Instead, we should value full transparency; providing our personal data for the common good may outweigh any privacy rights.  After all, one of the hallmarks of younger generations is their propensity to share secrets in public, especially via social media.

The notion of privacy is a product of the modern Enlightenment, the idea that people could own their own ideas and thoughts, and the notion that the individual was the prime locus of import in a society. A right to privacy only developed when people were viewed as having moral, social and political value rather than being mere vassals or property. For example, in the U.S., a right to privacy appears in the U.S. Constitution’s Fourth Amendment (ratified 1791), “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated.” However, this notion of privacy changed as available technologies were developed. A legal right to privacy as might be recognized today appears in a Harvard Law Review article by attorneys Samuel Warren and Louis Brandeis in 1890, where they define privacy as “the right to be let alone”. The future Supreme Court Justice and his colleague were concerned that the new technologies of photography and the growth of newspaper publications meant that people were losing control over their images, ideas, property, bodies, and families. In 1948, the United Nations’ Universal Declaration of Human Rights spelled out this right for all in Article 12, “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation.” Then in 1967, Alan Westin, professor of public law and government at Columbia University, created our modern notion of privacy as an individual civil right, “the claim of individuals…to determine for themselves when, how and to what extent information about them is communicated”.

A Violable Privacy Duty
Bioethics was born out of individualism and political liberalism, both of which support a view of privacy and confidentiality belonging to the individual. In medicine, privacy has existed since the Hippocratic Oath and in bioethics since its founding (whether that be traced to the Nuremberg Code, the founding of the Hastings Center, or some other arbitrary historical marker). As I was taught and have practiced in bioethics, privacy and confidentiality are imperatives. Yet there are many exceptions to this duty, such as the Tarasoff rule, required reporting—potential harm to self, need to protect the public’s health (reporting communicable diseases), child or elder abuse—or for law enforcement. Therefore, privacy and confidentiality are not viewed in modern medicine as absolute duties.

Art by Craig Klugman

Viewing privacy as inviolable (unless various conditions are met) is a problematic position as it is one that focuses solely on the individual rather than on the greater common good. After all, the medical data I keep secret today could have led to a disease cure tomorrow as part of a big dataset. There are times where other duties are more important than privacy and confidentiality.

Obligation to Participate in Research
One reason for ending privacy is that such data is valuable to research—advancing scientific knowledge to cure disease. In this case, immense amounts of records are fed into intelligent systems which can then calculate correlations and causations. In medicine, this can mean discovering new mechanisms of disease, new treatments, and new cures. Much of the data comes from electronic health records—information that people provided to their physicians for help in diagnosing and treating their health conditions. My concern has been that using patient medical records without notifying patients, gaining their explicit informed consent for this use, or providing an opt-out to participating is a violation of consent, privacy, and confidentiality. Keeping this information private and confidential, however, may hinder research.

An early bioethicist, Father Richard McCormick, argued that people (specifically children) have a moral obligation to participate in medical research. Since we all benefit from the products of such research, we should all be part of the production of such knowledge. He elucidated his argument in a series of letters to Protestant bioethicist Paul Ramsey. In the 21st century, professor of social medicine Stuart Rennie explored the idea of this requirement and found it an imperfect obligation at best. He argued that required participation may cause greater harm to vulnerable populations and pose threats to “informed consent, participant recruitment, and the ethical review of research”. In other words, obligating participation in research would change the way we have been doing things. We change how we do things all of the time. Consider that our current notion of research ethics is less than a century old. In the late 19th century, human subjects research was only considered ethical if done on a researcher and his (usually a “his”) family. The view was that one should never do to others what one would not do to one’s family. Today, the idea of experimenting on one’s own family is problematic because it violates the objective gaze and creates conflicts of interest.

If privacy is a prima facie duty, then it can be overridden by a more important duty such as caring for the common good, pursuing cures for disease, or solving old crimes.

A Post-Privacy World
In Dave Eggers’s book, The Circle, a tech company pursues total transparency where people broadcast every moment of their life (minus bathroom breaks). Privacy is limited to a location (bathroom) and an activity (number 1 and 2). The reader is supposed to be horrified by this turn of events, but perhaps if we move beyond the emotional reactions, this transparency has benefits. The interconnected society in which we now live is both the ultimate expression of capitalism (our opinions, thoughts, and ideas are for sale) and a sign of the limits of our current social and economic systems. University of Notre Dame political scientist Patrick Deneen, in Why Liberalism Failed, suggests that liberalism is not tenable in the long term because it favors the individual over the group.

Would the end of privacy mean that we all wear bodycams and broadcast our lives 24/7? Not likely. But it would mean that there is an assumption of sharing instead of an assumption of confidentiality. In this reality, medical records and DNA scans would be shared with researchers, data analysts, and pharmaceutical companies without demonstrating a benefit to us and without our consent. School records, online search patterns, GPS logs, financial records, and criminal records would all be available. We would opt out of sharing instead of opting in. According to my students, they already live assuming that every moment could be captured and broadcast. We would not assume that anything we share or say remains confidential. One result would be a clear and full disclosure of conflicts of interest, an idea that Arthur Caplan has suggested for researchers in the form of the Electronic Long-form Disclosure (ELF) Statement.

Of course this vision is terrifying because we live in a society that often discriminates against people who are perceived as different. We lose jobs, health insurance coverage, family and friends when our secrets are revealed. On the other hand, there would not be any secrets, meaning we would be less likely to have family and friend drama. For example, the DNA test showing misattributed paternity is no big deal because everyone already knew that. If there had never been a closet, then I would not have had to come out of one. But we will need a set of strong laws that protect us from any of our private information being used against us. The only law that comes close to addressing this idea, GINA, lacks the teeth to be effective and to be enforced. Strong federal laws would have to protect us from being discriminated against, fired (or not hired), thrown out of an apartment (or not given one), and so forth because of who we are and what we do.

The larger legal and ethical conversations raised by new technologies and norms that make privacy a problem have to take place soon. The one solution that will not work is assuming our 20th-century privacy rules will work for the 21st century and beyond.
