
Three otherwise healthy patients go to the emergency department with severe acute respiratory failure. Only one ventilator, required to sustain life until the worst of the coronavirus infection has passed, is available. Who gets the vent?

That’s what “A Framework for Rationing Ventilators and Critical Care Beds During the COVID-19 Pandemic,” a Viewpoint just published in the Journal of the American Medical Association (JAMA), addresses. Douglas White, MD, MAS, Endowed Chair for Ethics in Critical Care Medicine at the University of Pittsburgh School of Medicine, and Bernard Lo, MD, of the University of California, San Francisco, wrote the Viewpoint, which links to a full policy document that has been in the works since 2009. It is being implemented in several states and can easily be adapted to any hospital, Dr. White said in a webinar on March 27.

The impending shortage of ventilators during a surge of viral infections evokes a scene from William Styron’s 1979 novel Sophie’s Choice (adapted to film in 1982). Upon arriving at Auschwitz, the title character, a young Polish Catholic mother played on screen by Meryl Streep, must choose which of her two children will be gassed immediately and which will be allowed to live. The decision haunts her for the rest of her days.

Intensivists – physicians trained in critical care medicine – now face the dilemma of choosing who gets the ventilator. There’s precedent in allocating organs for transplant and, more generally, slots in clinical trials. But nothing has happened at this scale or in this time frame: a looming tsunami of need. Said Dr. White:

In traditional medical ethics, it’s a treating physician’s obligation to address the well-being of individual patients and to respect the preferences of the patient. In a public health emergency, ethics shifts from an individual patient to focus on maximizing the well-being and outcomes of a population of patients.

Triage

The rationing framework replaces the old approach, which categorically excluded certain groups of individuals, with a 1-to-8 scale, the lower number indicating higher priority in getting the ventilator. Previous protocols excluded people with severe chronic lung disease, end-stage kidney disease, heart failure, metastatic cancer, or severe cognitive impairment, and in some places, simply old age.

The “multiprinciple allocation framework” applies the 1-to-8 score to everyone. It is based on projecting what will happen after a patient survives being on the ventilator.

“Just getting the most patients out of the ICU is not enough. If they get out and they have weeks or months left to live, we are not capturing all the things we think are important. No single principle adequately captures the values we take into account when we make these decisions,” Dr. White explained.

The process begins with triage, a staple of disaster medicine.

First, people who have previously stated that they would not want mechanical ventilation in the face of catastrophic or end-stage illness would be asked if they still feel that way. If so, they’re taken out of consideration. A second Viewpoint in the March 25 online JAMA, from J. Randall Curtis, MD, MPH, of the University of Washington, Seattle, and colleagues, emphasizes the importance of knowing someone’s DNR wishes.

Second, a team separate from the doctors directly treating the patients makes the triage decisions. “That (treating) doctor has an obligation to the patient right in front of her or him and is knee-deep in keeping patients alive,” Dr. White said. The triage team coordinates with other hospitals, perhaps moving a patient to a facility with an available ventilator before starting down the Sophie’s choice pathway.

Third comes the 1-8 scale that reduces a complex comparison to two general criteria:

  • likelihood of survival after hospital discharge
  • number of life-years gained.

“It compares folks who are most likely to survive after discharge and gain life-years to those who have almost no chance of surviving after discharge and accruing life-years,” Dr. White explained. Decisions are driven by resources rather than by the exclusion of groups, he added.

To assign the scores, doctors consult commonly used “severity of illness” rating scales to compare just how sick the patients are. That may entail a bit of apples-to-oranges comparison, though. Is a person who recently underwent chemotherapy more likely to survive a prolonged ventilator stint than someone who’s had a recent heart attack?
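
To make the scoring mechanics concrete, here is a minimal sketch in Python. The SOFA-score bands and comorbidity points are assumptions for illustration, loosely patterned on the published multiprinciple framework; an actual hospital protocol would define its own cutoffs.

```python
def priority_score(sofa: int, major_comorbidity: bool = False,
                   likely_fatal_within_year: bool = False) -> int:
    """Illustrative 1-8 priority score (lower = higher priority).

    Combines the framework's two criteria:
      1. Likelihood of survival to hospital discharge, proxied here
         by bands of the SOFA severity-of-illness score (assumed cutoffs).
      2. Expected life-years gained, proxied by comorbidity status.
    """
    # Criterion 1: short-term (in-hospital) survival, 1-4 points.
    if sofa <= 8:
        points = 1
    elif sofa <= 11:
        points = 2
    elif sofa <= 14:
        points = 3
    else:
        points = 4

    # Criterion 2: long-term survival, 0-4 points.
    if likely_fatal_within_year:   # severe comorbidity, death expected within a year
        points += 4
    elif major_comorbidity:        # substantially limits long-term survival
        points += 2

    return points                  # 1 (highest priority) through 8 (lowest)
```

Under these assumed bands, an otherwise healthy patient with moderate organ failure scores 1 or 2, while a patient with severe organ failure and a likely fatal comorbidity scores 8.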

As the states wrestle with triage protocols, the fear persists that certain groups will receive lower status, even with the use of rating scales and the best intentions. The Arc of the United States, for example, which advocates for people with intellectual and developmental disabilities, has reportedly filed complaints with the Department of Health and Human Services (HHS) Office for Civil Rights (OCR) about plans under development in Washington state, Alabama, and Tennessee. “We’re in the process of opening investigations right now,” said Roger Severino, director of OCR, in a briefing with reporters on March 28.

The three competing patients revisited

Returning to the three hypothetical patients competing for one ventilator, assume that they all score a 2. Next, two other factors come into play.

“The ‘life cycle principle’ states that all other things being equal, priority is given to the person who has had the least chance to live through life. If one person is 20 and the other two elderly, there would be a strong argument to give priority to the younger patient. It’s not because anyone has more worth or value than anyone else,” said Dr. White.

The second tiebreaker is ‘critical worker status,’ based on what Dr. White calls the “concept of instrumental value.” That is, health care workers who fall ill, and those who enable them to work, receive priority:

Nurses, respiratory therapists, doctors, and the people who clean the rooms between patients in the ICU – by prioritizing these individuals we may augment the response of the health care system and save more lives. For the risks these individuals are taking, the health system should ensure they’re taken care of if they get sick.
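
A hedged sketch of how the two tiebreakers might be applied: the Patient fields, the age bands, and the example patients are illustrative choices, not specifics from the published framework. Following the article’s order, the life cycle principle breaks ties first and critical worker status second.

```python
from typing import NamedTuple

class Patient(NamedTuple):
    name: str
    score: int             # 1-8 multiprinciple score (lower = higher priority)
    age: int
    critical_worker: bool  # clinician, respiratory therapist, room cleaner, etc.

def life_cycle_band(age: int) -> int:
    """Assumed age bands for the 'life cycle principle': lower bands
    have had less chance to live through the stages of life."""
    for band, upper in enumerate((12, 41, 61, 75)):
        if age < upper:
            return band
    return 4

def allocation_order(patients: list[Patient]) -> list[Patient]:
    # Primary key: the 1-8 score. Ties broken first by the life cycle
    # principle (youngest band first), then by critical worker status.
    return sorted(patients, key=lambda p: (p.score,
                                           life_cycle_band(p.age),
                                           not p.critical_worker))

# The article's scenario: three patients, all scoring 2.
queue = allocation_order([
    Patient("elderly patient A", 2, 78, critical_worker=False),
    Patient("elderly nurse B", 2, 80, critical_worker=True),
    Patient("20-year-old C", 2, 20, critical_worker=False),
])
# -> C first (youngest life-cycle band), then B (same band as A,
#    but a critical worker), then A.
```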

Ventilator math and a crystal ball

Stress on the health care system to provide enough ventilators emerges from two factors: the huge number of infected people, and the fact that the sickest need to be on a vent about twice as long as people treated for other respiratory conditions – up to 12 days.
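
The throughput arithmetic behind that stress is simple, using the article’s 12-day figure and assuming the comparison course is about half as long (an illustration only):

\[
\frac{30\ \text{days}}{12\ \text{days per patient}} = 2.5\ \text{patients per ventilator per month},
\qquad \text{versus} \qquad
\frac{30\ \text{days}}{6\ \text{days per patient}} = 5.
\]

Doubling the duration of ventilation halves the number of patients each machine can serve, just as the caseload surges.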

A peculiarity of COVID-19 that impacts ventilator allocation is that some patients seem to do okay once taken off the vent, and then crash.

“There are subgroups who, when we extubate, look good initially and pass the parameters (for discontinuation) and then they get hypoxic and require reintubation. The rate is higher than we’re used to,” said Michelle N. Gong, MD, professor of Medicine and Epidemiology and Population Health at Albert Einstein College of Medicine and Chief of Critical Care Medicine at the Montefiore hospital group in New York City, at an earlier JAMA webinar.

Dr. White agrees. “COVID-19 requires a long duration of ventilation for improvement. It’s important that when we reassess a patient that we don’t release her or him too early. The first 96 hours is too soon to get a signal that separates those who will survive from those who will die. This is really challenging and we don’t yet have good empirical data. So we err on the side of longer ventilation than shorter.”

If patients destined to crash can be identified, their “spot” can go to someone in equivalent distress who is not likely to die when taken off the ventilator.

The unthinkable: taking a vent from one to save another

Does Sophie’s choice extend to removing people already on ventilators who aren’t doing as well as others who are waiting?

“If we get to a point where there are far more patients who need vents than there are vents, after triage we would pick from among those present every day. But we’ll also need to reassess patients on vents to see if they are improving or if their prognosis is worse than those in the queue. If so, we would need to withdraw mechanical ventilation for those with poor progress in order to give it to patients who are waiting and have better prognosis,” Dr. White said.

The setting is strikingly different from the more typical situation of a family deciding to take an end-stage cancer patient, sick for many months, off of a ventilator when no one else is waiting. A need for ventilators, in many people at once, to fight a COVID-19 infection that detonated just days earlier is a different beast entirely.

And so ever-evolving assessment tools, like the 1-8 scale, are being developed and deployed to help clinicians and bedside bioethicists make these tough decisions.

No one likes to talk about these sorts of situations, but the conversations are being forced. Dr. White, clearly uncomfortable, admits the doctors in Italy likely made Sophie’s choices.

Sharing ventilators?

Can ventilators be split, like sticking two straws into a can of Coke? Not easily, say experts.

“From a technical standpoint, that’s not something most hospitals will know how to do,” Dr. White said.

The reason is that each patient is different. “It’s an incredible calculation problem to figure out how to appropriately ventilate patients with different lung characteristics with one ventilator. Some have stiff lungs while others have compliant lungs. It would be great to see that capability, of sharing vents, developed, but we’re not there yet,” Dr. White added.

Bioethicists are also discussing ways that a family can remain connected when a COVID-19 patient is taken off a ventilator.

The best-case scenario is having enough PPE for loved ones to use so they can be at the bedside. The next best solution is a remote bedside vigil via video. “We try to somehow allow the family to be with the patient and have some closure,” Dr. White said.

Choices are going to be necessary, akin to Sophie’s.

This blog was originally posted on the Genetic Literacy Project at https://geneticliteracyproject.org/2020/04/01/sophies-choice-in-the-time-of-coronavirus-deciding-who-gets-the-ventilator/

Full Article

The possibility of artificial womb technology (ectogenesis) is no longer hypothetical. Three years ago, scientists put a premature lamb fetus in an artificial womb and it was able to develop normally to term. Scientists and others today are working on developing an artificial womb for humans.

There has been much discussion in the bioethics literature recently about whether ectogenesis would be empowering for women, freeing them from their traditional role as child-bearer and child-rearer. Indeed, some claim that the root of gender inequality is the fact that ciswomen experience pregnancy, whereas cismen do not. According to this argument, if pregnancy were no longer associated with a particular gender, then gender inequality would be eradicated.

Yet I find it unlikely that new reproductive technologies alone will engender gender equality without significant social changes as well. In other words, if ectogenesis were to become the new normal for all pregnancies, this would not necessarily sever ties between women and traditional women’s work (e.g. childcare, housework, etc.). This is because women’s oppression is not based on just one obstacle but rather is a multifaceted interlocking system. Feminist philosopher Marilyn Frye uses the analogy of a birdcage to explain oppression, which I quote at length because she so adroitly explains why oppression is so difficult to recognize and to overcome:

“If you look very closely at just one wire in the cage, you cannot see the other wires. If your conception of what is before you is determined by this myopic focus, you could look at that one wire, up and down the length of it, and be unable to see why a bird would not just fly around the wire any time it wanted to go somewhere. Furthermore, even if, one day at a time, you myopically inspected each wire, you still could not see why a bird would have trouble going past the wires to get anywhere. There is no physical property of any one wire, nothing that the closest scrutiny could discover, that will reveal how a bird could be inhibited or harmed by it except in the most accidental way. It is only when you step back, stop looking at the wires one by one, microscopically, and take a macroscopic view of the whole cage, that you can see why the bird does not go anywhere… It is perfectly obvious that the bird is surrounded by a network of systematically related barriers, no one of which would be the least hindrance to its flight, but which, by their relations to each other, are as confining as the solid walls of a dungeon. It is now possible to grasp one of the reasons why oppression can be hard to see and recognize. One can study the elements of an oppressive structure with great care and good will without seeing the structure as a whole, and hence without seeing or being able to understand that one is looking at a cage and that there are people there who are caged, whose motion and mobility are restricted, whose lives are shaped and reduced... As the cage-ness of the birdcage is a macroscopic phenomenon, the oppressiveness of the situations in which women live our various and different lives is a macroscopic phenomenon. Neither can be seen from a microscopic perspective. But when you look macroscopically you can see it – a network of forces and barriers which are systematically related and which conspire to the immobilization, reduction and molding of women and the lives we live.”

Ectogenesis is not the only type of reproductive technology that has been portrayed as something that will minimize gender inequalities and augment women’s reproductive autonomy. In the last decade, “social” or “elective” egg freezing has been described as a form of “reproductive affirmative action” that will level the playing field for women by allowing them to delay childbearing. However, many feminist scholars, including myself, argue that this portrayal of egg freezing is deceptive and inaccurate, not only because egg freezing offers no guarantee of a future child, but also because such technologies address only one aspect of the various and multifaceted challenges women face in balancing careers and families.

Reproductive technologies like ectogenesis and egg freezing generally do not solve social problems because they do not address the root of the issue, which is social in nature, not medical. Egg freezing is often presented as allowing women time to focus on their education and careers. But empirical research demonstrates that most women are “delaying” childbearing because they lack a partner, not because they need more time to focus on their professional lives. This is a social issue that egg freezing cannot address. Similarly, while ectogenesis may “free” women from pregnancy, it will not, on its own, rewrite deeply entrenched gender norms that align femininity with traditional private realm activities like childcare and household chores.

There are many benefits to reproductive technologies, but we should be careful about claims that they will “cure” gender inequalities, which result from oppressive power systems. Reproductive technologies like ectogenesis and egg freezing may remove one wire from the birdcage, but they will not dismantle the entire oppressive system.

Full Article

In 1976, the U.S. Supreme Court ruled that jails and prisons must provide medical care to incarcerated people on the grounds that “deliberate indifference to serious medical needs” violates Eighth Amendment protections against cruel and unusual punishment (Estelle v. Gamble, 429 U.S. 97). Prior to this, the only medical care offered in 65% of U.S. jails was first aid (Steinwald et al. 1973, in Rold 2008). The case, Estelle v. Gamble, made incarcerated people the only group of Americans, other than Native Americans, with a constitutionally protected right to health care. However, because federal law prevents Medicaid and Medicare from paying for care for “inmate[s] of a public institution,” most of the cost of jail and prison health care falls to state and county departments of corrections (42 U.S.C., in Rold 2008, p. 18). One result is that the quality of what is often called “correctional care” varies widely across states and facilities.

Because the U.S. does not have universal health coverage, some correctional institutions end up serving as medical safety nets for people who have poor access to health services when not incarcerated (Sufrin 2014). Others offer dangerously understaffed and/or substandard care (SPLC 2014, Brown v. Plata 563 U.S. 493, 2011). When Estelle was first decided, care in prisons was offered by clinical staff employed directly by state departments of corrections. Since then, many states have switched to using private subcontractors to provide some or all health care delivered inside their prisons (Pew 2017). All but eight states require a co-pay to see a clinician while in prison. The average co-pay across all 50 states is $3.47. Although this may seem like a small amount compared to typical insurance co-pays, it is equivalent to nearly 25 hours of paid labor inside prison, where the average wage is 14 cents per hour (Sawyer 2017).
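
The equivalence follows directly from the cited figures:

\[
\frac{\$3.47}{\$0.14\ \text{per hour}} \approx 24.8\ \text{hours of prison labor.}
\]

For comparison (an illustrative assumption, not a figure from Sawyer’s brief), a free worker earning $20 per hour would face an equivalently burdensome co-pay of roughly $500.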

Estelle v. Gamble did not mandate that incarcerated people receive excellent or even compassionate health care. As I stated above, it held merely that prisons not show “deliberate indifference to serious medical needs”. For incarcerated persons seeking to legally assert their constitutional right to care, this wording means that medical negligence alone is often not enough for a successful legal claim. Rather, a plaintiff must prove “deliberate indifference” by prison officials or clinicians. Lower courts have described “deliberate indifference” as medical care that is so poor or inadequate that it “shock[s] the conscience” or is “intolerable to fundamental fairness” (Hurst et al. 2019). Some states have enacted laws or policies setting a higher standard. For example, Washington state law requires the provision of “medically necessary” care to people in state correctional facilities (WAC 137-91-010).

Estelle and subsequent rulings established the legal duty to provide health care for incarcerated persons. But what ethical obligations are owed to incarcerated patients? In caring for people in jail and prison, we can do better than the legal minimum standard laid out by the courts. As patients and human beings, incarcerated people hold the same moral status as non-incarcerated patients. They should have the same rights to self-determination (autonomy), treatment in their best interest (beneficence), protection from harms (nonmaleficence) and fair treatment (justice). However, in practice, care for people in prison is rarely ethically equivalent to care provided to free patients because it must be delivered within limitations set by correctional authorities.

For example, a number of the rights typically encoded in patient bills of rights, such as the right to privacy and the right to know one’s own medical records, may not be honored in correctional settings. Incarcerated people who are transported to community hospitals for treatment are routinely shackled during their hospital stay, even at the end of life (DiTomas et al. 2019). Physician autonomy is also limited in correctional contexts, as recommendations made by medical staff are routinely reviewed by prison administrators, who usually have the ultimate say. Given the inherent values conflict between care and custody, it may, in fact, be impossible to deliver ethically equivalent care within a context of punishment. But there is certainly room for improvement.

 

DiTomas, M., Bick, J., & Williams, B. (2019). Shackled at the end of life: We can do better. The American Journal of Bioethics, 19(7), 61–63.

Hurst, A., Castañeda, B., & Ramsdale, E. (2019). Deliberate indifference: Inadequate health care in U.S. prisons. Annals of Internal Medicine, 170(8), 563.

Pew Charitable Trusts. (2017). Prison health care: Costs and quality (p. 140).

Rold, W. (2008). Thirty years after Estelle v. Gamble: A legal retrospective. Journal of Correctional Health Care, 14, 11.

Sawyer, W. (2017). The steep cost of medical co-pays in prison puts health at risk [Briefing]. Prison Policy Initiative.

SPLC. (2014). SPLC files federal lawsuit over inadequate medical, mental health care in Alabama prisons. Southern Poverty Law Center. Retrieved February 12, 2020, from https://www.splcenter.org/news/2014/06/17/splc-files-federal-lawsuit-over-inadequate-medical-mental-health-care-alabama-prisons

Sufrin, C. (2014). Jailcare: The safety net of a U.S. women’s jail. University of California Press.

Full Article

This past weekend, I watched A Dangerous Son, Liz Garbus’ documentary about the overwhelming obstacles that U.S. parents, especially mothers, face in getting help for their mentally ill children. The film follows three mothers who, in the course of the filming, each face a barrage of insults, death threats, and violent behavior from their critically mentally ill adolescent sons. In the face of this, each of these mothers advocates fiercely for her son to gain access to mental and behavioral health services while simultaneously trying to keep herself and other family members safe at home. Viewers are granted intimate, and at times deeply painful, access to the devastating realities of day-to-day life with severe mental illness and the toll it takes on the entire family unit.

The specters of gun violence and recent mass shootings loom in the background of the film. Garbus makes explicit reference to the 2012 Newtown, CT and Aurora, CO massacres, and the viewer is primed to realize that the threats of violence toward self or others that each of the profiled boys makes during the course of the filming could be empty threats or the next national tragedy. This troubling uncertainty is one of many: the uncertainty of gaining access to quality treatment, the uncertainty that treatment will prove effective for these boys, the uncertainty surrounding whether these boys will reach adulthood and what that adulthood will look like.

Uncertainty also extends to the social and policy realms, as professionals and laypeople alike struggle with how to provide mental health care in an effective and cost-efficient way. The abiding stigma of mental illness certainly complicates things further and likely leads to the shame, isolation, and disintegration of relationships we see in all three of the film’s featured families. But one thing carries no uncertainty at all: untreated or undertreated mental illness damages the individual, the family, and the greater society.

A shift in public attitudes toward a more nuanced understanding of mental illness will not solve everything, but it is an essential feature of moving forward.  Toward the end of the documentary, Dr. Andrew Solomon, a psychologist and mental health activist, highlights what I take to be the critical lesson of the film.  He says:

There is a sort of politics and a reality that are often in conflict. Most people with mental illnesses, most people with autism, most people with any of this variety of conditions, which we largely describe as brain diseases of one kind or another, will never hurt anyone. If we talk too much about those dangerous situations, we stigmatize people we shouldn't. If we take a politically correct standpoint, and we don't acknowledge those situations, then we end up with families in which a child is terrifying and violent and nobody believes them, and they don't understand what it is they have to deal with. It's a very fine balance we need to strike. I think what we forget most of all when someone is violent and when they have a serious mental illness, is that we've failed them…We need to understand that treatment before tragedy is not only possible, but it should become our reality. And that's—it's gonna take some tough conversations.

A Dangerous Son helps to initiate this tough conversation.  Now it’s our turn to keep the conversation going.

 

Full Article

Among the most fundamental concerns regarding medical, biomedical, and bioethical decision making are the concepts of risk and benefit. Of course, benefit is better than risk so this might seem to be a fairly easy balance to calculate. But it is not. I...

Full Article

Two children (Kent and Brandon Schaible) have died of treatable pneumonia and dehydration because their parents (Herbert and Catherine Schaible) resorted to prayer instead of medical care. In another particularly egregious case, members of the Faith Assembly Church denied medical care to a 4-year-old with an eye tumor the size of the child’s head. Law enforcement officials found blood trails along the walls of the girl’s home where she, nearly blind, used the walls to support her head while navigating from room to room. Seth Asser and Rita Swan have documented 172 cases of child deaths from preventable medical complications between 1975 and 1995. The report does not include the seventy-eight faith-healing deaths reported in Oregon from 1955 to 1998, or the twelve deaths in Idaho from 1980 to 1998. As recently as 2013, five child deaths were reported in Idaho among families whose religious beliefs prevented them from seeking medical treatment. What sort of religious beliefs might lead a parent to refuse medical treatment for their child?

Christian Scientists base their refusal on the religious belief that medicine is fundamentally mistaken in thinking the ultimate cause of disease is biological; the real source of disease is spiritual disorder, and a spiritual problem calls for a spiritual solution. The reality of sickness is not denied (e.g., you really do have pneumonia); however, the ultimate cause of that pneumonia is a spiritual disorder that can only be properly cured by spiritual interventions. Because medicine is preoccupied with the biological level, it is unable to bring about change at the spiritual level where real healing occurs. Sometimes specific scriptures are cited and interpreted as encouraging the practice of faith healing (e.g., Epistle of James 5:14-15, Mark 16:18) (Campbell 2010). Believers see an obligation to act as an exemplary witness in the presence of illness by appealing to prayer, anointing, and vigils alone for healing. Some scriptures are even interpreted as casting recourse to medicine as an act of rebellion against God (2 Chronicles 16:12, Luke 8:43-48). Others make more straightforwardly empirical claims, arguing that faith healing is simply more effective than modern medicine and citing the high number of annual iatrogenic deaths in hospitals (200,000-225,000 by some estimates).

Currently, most states offer a legal shield from child abuse and neglect statutes for parents who refuse medical treatment for children on religious grounds (see: https://www.pewresearch.org/fact-tank/2016/08/12/most-states-allow-religious-exemptions-from-child-abuse-and-neglect-laws/). Prior to 1974, failing to seek medical care for a child on religious grounds was considered child abuse. However, the Christian Science Church sparked a national movement to add religious exemptions to child abuse and neglect statutes after a member of the church was convicted of manslaughter for failing to seek medical care for their child. These efforts succeeded in 1974 with the passage of the Child Abuse Prevention and Treatment Act. Several revisions have subsequently been made to the act, which now defers to states to decide whether to include religious exemptions to child abuse statutes.

These legal exemptions ought to be overturned, and secular clinical ethicists ought to continue recommending the override of religiously motivated medical refusals for children. A growing consensus in clinical ethics cites the harm principle as the proper justification for overriding these refusals in pediatrics. However, debate continues over how to interpret the harm principle in such cases. Aside from locating a proper physical threshold of harm (some suffering, significant suffering, permanent disability, death), ethicists have also considered whether non-physical forms of harm ought to be taken into consideration. For example, does a parent refusing requested puberty-blocking therapy for a trans adolescent cross a psychological or dignitary harm threshold that should also trigger state action? These are the sorts of questions that continue to engender lively debate in clinical ethics.

Full Article

What is Artificial Intelligence? This central question has captivated the minds of specialists – mathematicians, computer scientists, cognitive scientists, and the like – and passive observers since the days of Alan Turing and John von Neumann. In this discussion I will distinguish between three types of Artificial Intelligence – human level, superhuman, and domain specific. Through this exercise I hope to shed light on the difficulties in conceptually defining the term Artificial Intelligence, as well as dispel misconceptions about the state of the art in Artificial Intelligence. To what end? I hope that this blog will spark a discussion about the ethics of today’s Artificial Intelligence, considered in light of tomorrow’s Artificial Intelligence.

We will start at the beginning, with Alan Turing’s definition of human level artificial intelligence. Turing’s famous test, popularized by the 2014 movie The Imitation Game, is a test of a machine’s ability to exhibit intelligent behavior comparable to, or indistinguishable from, human-level intellect. In doing so, the test pits human against machine. The test involves three players, two human and one machine, each of whom is separated from the other players. Once separated, players are tasked with holding a conversation with their counterparts. One human player, the evaluator, is tasked with determining which of the other players is a human and which is a machine. The evaluator knows that just one of his conversational partners is human. With that knowledge, if the evaluator cannot reliably distinguish between the machine and the human, then the machine passes the Turing test. Such a machine would be said to possess human level intellect.

For the sake of argument, let’s say that a machine exists with human level intellect. In such a case the machine would necessarily have been created by humans. It is tautological that, since human level intellect engineered human level intellect, human level intellect is capable of engineering human level intellect. From this it follows that, once a machine possesses human level intellect, it should also be able to engineer human level intellect. Furthermore, in our hypothetical, the creation of human level intellect would have been an iterative process comprising repeated attempts, failures, and modifications to realize progressively greater intelligence. From this it follows that a machine with human level intellect could also engineer intelligent machines through the same iterative process. The difference is that machines could run this process many orders of magnitude faster than humans, thereby enabling them to quickly advance human level intelligence into superhuman intelligence – an entirely new class of intelligence that humans could neither match nor understand.

Now let’s bring our discussion back to reality. As it stands, humans have developed relatively sophisticated Artificial Intelligence, especially in healthcare. Humans have developed AI capable of outperforming physicians in predicting psychosis onset in patients with prodromal syndrome,[1] and in finding a higher percentage of clinically actionable therapeutic options for cancer patients.[2] Most recently, on January 1, 2020, researchers working at Google’s AI lab, DeepMind, published a journal article describing a new AI-based healthcare tool capable of surpassing human physicians in the diagnosis of breast cancer.[3] The paper, published in Nature, claims the system reduces false negatives by up to 9.4% and false positives by up to 5.7%.[4] Further, when pitted against six human radiologists, Google’s diagnostic AI outperformed all of them.

These types of systems, which can outperform humans at a single task or in a single domain, are known as domain specific AI. Hierarchically, domain specific AI is the least sophisticated of the AI types discussed herein. Irrespective of that fact, domain specific AI is currently the state of the art and can be an extremely powerful tool within its specific domain, or a suite of tools across their specific domains. Accordingly, we will begin our ethical discussion here. There are a number of ethical conflicts with regard to domain specific AI, each with sufficient depth to merit its own blog. In the interest of brevity, we examine only one: the tension between justice, the equitable distribution of benefits and burdens in society, and non-maleficence, a physician’s duty to do no harm.

In all of the foregoing examples of domain specific AI – breast cancer diagnosis, psychosis prediction, and identification of therapeutic options – the AI outperforms physicians. In domains where a physician diagnostician is less effective or accurate than an AI diagnostician, a physician diagnosing a given patient is doing relative harm to that patient. Over a sufficiently large sample size, the physician will make mistakes that the AI would not, mistakes that impact lives. From the perspective of non-maleficence, the physician arguably has an affirmative duty to cede responsibility to the AI.
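
A back-of-the-envelope calculation shows why scale matters. Every number below is an illustrative assumption, not a figure from the cited studies; the point is only that a modest per-case accuracy gap compounds into many affected patients across a screening population.

```python
def expected_missed(n_screened: int, prevalence: float, fn_rate: float) -> float:
    """Expected missed cancers = cancers present * false-negative rate."""
    return n_screened * prevalence * fn_rate

# Illustrative assumptions (hypothetical, for the sake of the argument):
N = 100_000                 # mammograms read per year
PREVALENCE = 0.005          # ~5 cancers per 1,000 screens
FN_HUMAN = 0.20             # assumed human reader false-negative rate
FN_AI = FN_HUMAN - 0.094    # AI cuts false negatives by 9.4 points

extra = expected_missed(N, PREVALENCE, FN_HUMAN) - expected_missed(N, PREVALENCE, FN_AI)
print(f"Cancers missed by the human reader but not the AI: {extra:.0f} per year")
# -> about 47 per 100,000 screens under these assumptions
```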

On the other hand, a major contemporary problem in the development of AI is the incidence of biased decision-making. In October 2019, the journal Science published the results of a UC Berkeley team’s analysis of over 50,000 medical records.[5] The results showed racial bias in a predictive algorithm used by many United States health providers to determine the patients most in need of extra medical care.[6] Though the researchers did not name the specific algorithm, the Washington Post reported it to be Optum, by UnitedHealth, a tool that impacts more than 70 million lives.[7] To compound the issue, the Berkeley team identified the same bias-inducing flaw in ten other widely used healthcare algorithms.[8] From the perspective of justice, physicians should not use AI that perpetuates and systematizes discriminatory biases.

Numerous questions manifest as a consequence of the ethical conflict between justice and non-maleficence. Do we use AI because not using it may harm the patient? Do we refrain from using AI unless we can be sure it is unbiased? Do we meet somewhere in the middle, or does one side win out? In my opinion, these questions, like many of our greatest questions in the 21st century, require interdisciplinary collaboration: computer scientists to advise on ways to resolve biases and their likelihood of success, statisticians to calculate the actual impact of the benefits and drawbacks of these technologies, and lawyers to balance the equities. Of course, it is incumbent upon bioethicists to devise the ultimate answers to these questions, if that is possible. Nonetheless, it would be wise to use expert counsel from other disciplines in doing so.

I will leave you all with a hypothetical. If we accept the premise that AI with human level intellect would quickly develop AI with superhuman intellect, does that change the ethical calculus for their predecessors, the domain specific AI of today? Should justice be prioritized so as not to encode biases into the intellectually transcendent AI of the future? Should autonomy become less of a priority because AI will do much of the decision-making anyway? With more questions than answers, the ethics of AI in healthcare, and of AI generally, is a field ripe for discourse, and I urge you all to take part in it.

Full Article


The right of a person to live out his or her own particular life plans is an important value we all hold dear. To the extent possible we should honor the prior expressed wishes of individuals after they have lost capacity and provide medical care consistent with those wishes. In general, it seems to me that a patient’s right to refuse any and all medical treatments while having capacity should extend to the future during the time of incapacity. 

The scope and authority of an advance directive is an important matter for many patients who fear being over-treated and having the dying process drawn out on machines in the ICU. But many aging individuals also fear becoming demented from Alzheimer’s disease or other forms of dementia and living for years while having lost all connection with their past identities and relationships. More and more people are finding the prospects of living into dementia intolerable. As Norman L. Cantor writes in the Hastings Center Report, “(f)or…people, like myself, protracted maintenance during progressive cognitive dysfunction and helplessness is an intolerably degrading prospect. The critical question for those of us seeking to avoid protracted dementia is how best to accomplish that objective.” It’s important for the aging public, as well as the physicians advising them, to be clear about the options that are available.

It's important to note that many individuals diagnosed with dementia still have capacity and therefore have the right to have their diagnoses and prognoses disclosed to them by their physicians. Unfortunately, according to the Alzheimer’s Association, such disclosure occurs just under half the time. This means many patients may miss the opportunity to assert their right to complete an advance directive and to spell out their wishes about future medical care. For those who do complete advance directives at this point, it seems to me that patients have the clear right to say that they wish to refuse any and all life-prolonging medical treatments from that time on. Obviously, this includes avoiding intubation, G-tubes, dialysis, etc. But what if the patient, after entering into a state of dementia, is actually doing well? What if he or she seems happy living in a facility or at home, socializes, takes walks, eats heartily, and is even interested in sexual activity? What if such a patient contracts pneumonia and has an advance directive indicating no life-prolonging treatment? Does this apply to antibiotics that would reverse a life-threatening disease and return the patient to his or her former baseline? My answer is yes. But for such simple, non-invasive measures as antibiotics, individuals should state explicitly in the advance directive, while they have capacity, that they wish to refuse them.

But there is still the possibility of a person remaining physically healthy for many years while living in a state of dementia. Unfortunately, advance directives cannot prevent this eventuality. The only option, while the individual newly diagnosed with dementia still has capacity, is to forestall living into advanced dementia by voluntarily stopping eating and drinking (VSED), as Cantor points out in his article. I see no ethical or legal reason why an individual could not make such a decision and act on it.

One final issue that comes up for those who feel strongly about not extending their lives during dementia is around food. One point should be clear: individuals cannot say in an advance directive that they do not want to be given food during dementia. This would mean that caregivers would have to deny food to hungry patients, thereby allowing them to starve. That seems clearly ethically, and probably legally, unacceptable. Giving food to a person who is physically healthy, even with dementia, is a basic form of comfort care, not directly a life-prolonging type of support. The sticky issue arises after patients are unable to feed themselves and handfeeding becomes necessary. Can someone say in an advance directive that he or she does not wish to be hand-fed under any circumstances?

To me, the answer to this question also goes beyond what should be included in an advance directive. Food at this stage of life should be viewed entirely as a source of comfort and should not become a burden. Whether food is a source of comfort or is becoming a burden is a clinical determination. Some patients who have stopped self-feeding may still enjoy food via handfeeding. But when a patient begins to show indifference, it is time to stop. This seems to me a matter of providing the standard of care, which can ensure all patients are treated humanely until death.

In conclusion, advance directives are important to avoid unwanted medical interventions, including simple interventions that address life-threatening conditions. But advance directives cannot keep someone from living into dementia; to do that, one needs to consider VSED while still having capacity. Once dementia is reached, comfort care meeting the standard of care, with an appropriate strategy of feeding, should be provided.

 

Full Article

Over the past decades, increasing emphasis on individual autonomy has led to the view that competent adults should decide for themselves how they want to be treated medically.  This shift in practice and policy has been accompanied by the adoption of advance directives that allow competent adults to specify in advance how they want to be treated, with the goal of extending respect for autonomy into periods of decisional incapacity.

Advance directives are written instructions about health care treatment made by adult patients before they lose decision-making capacity. These instructions are completed ahead of time and apply only when decision-making capacity is lost. Examples of advance directives are a health care proxy document and a living will. A health care proxy is a document that allows you to appoint another person as your health care agent to make health care decisions if you are no longer able to do so. You may give your health care agent authority to make decisions for you in all medical situations if you cannot speak for yourself. Thus, even in medical situations you did not anticipate, your agent can make decisions and ensure you are treated according to your wishes, values, and beliefs. A living will is a document that contains your health care wishes and is addressed to unnamed family, friends, hospitals, and other health care facilities. You may use a living will to specify your wishes about life-prolonging procedures and other end-of-life care so that your specific instructions can be read by your caregivers when you are unable to communicate your wishes. A living will cannot be used to designate a health care agent; the health care proxy document must be used for this designation.

An advance directive should be considered a “gift” to loved ones, giving them peace of mind, minimizing stress, and reducing potential conflicts among family members. It all starts with a conversation with the family members and friends who are best positioned to represent the patient’s wishes. The conversation should clarify the patient’s values and beliefs, framing medical wishes around them and addressing questions such as:

  • What’s important to the patient?
  • What contributes to the quality of life the patient may want?
      ◦ What activities are essential to having this quality of life?
  • How does the patient want to spend their final years, weeks, or days?
      ◦ What role does faith play in making these decisions?
      ◦ How much medical care is the patient willing to have to stay alive?
      ◦ What kind of medical risks is the patient willing to take?
  • When would the patient want to shift from treatment to comfort care?

The conversation should not end here, as it’s impossible to predict every scenario. It is important to continue to share wishes and preferences, explaining views that will give loved ones the information to make decisions on behalf of the patient.

One of the challenges this presents is that patients may not know what treatments they want in the future, and many do not complete an advance directive. When patients become unable to make or communicate crucial health care decisions for themselves, health care providers look to others to speak on the patient’s behalf. A health care agent is someone who has been designated by the patient to make health care decisions should the patient become unable to do so for him/herself. The agent’s responsibility begins when the patient loses capacity (the specifics may vary by state). What if the patient has not designated a health care agent and loses decision-making capacity? In New York State, the Family Health Care Decisions Act (FHCDA) provides a hierarchy of priority classes from which the provider must choose a surrogate. The role of the surrogate is identical to that of the health care agent: to represent the patient’s expressed and implied wishes using the substituted-judgment standard or, if the patient’s wishes are unknown, the best-interest standard. In the ideal scenario, the agent or surrogate is someone who has a close, loving relationship with the patient; someone who has intimate knowledge of the patient’s preferences and values; someone the patient chose or would choose to make health care decisions on his or her behalf; and someone with whom the patient had previously discussed preferences for care.

The most important takeaway from this blog is that it all starts with a CONVERSATION. Discuss with family or a loved one what is important to you: your values, beliefs, wishes, and preferences. Then frame your medical wishes and preferences around these values and beliefs.

 

THIS MAY BE ONE OF THE MOST IMPORTANT THINGS YOU EVER DO FOR YOURSELF AND YOUR LOVED ONES!

 

Full Article