
Author Archive: Bioethics Today


This past weekend, I watched A Dangerous Son, Liz Garbus’ documentary about the overwhelming obstacles that U.S. parents—especially mothers—face in getting help for their mentally ill children.  The film follows three mothers who, in the course of the filming, each face a barrage of insults, death threats, and violent behavior from their critically mentally ill adolescent sons.  In the face of this, each of these mothers advocates fiercely for her son to gain access to mental and behavioral health services while simultaneously trying to keep herself and other family members safe at home.  Viewers are granted intimate—and at times deeply painful—access to the devastating realities of day-to-day life with severe mental illness and the toll it takes on the entire family unit. 

The specters of gun violence and recent mass shootings loom in the background of the film.  Garbus makes explicit reference to the 2012 Newtown, CT and Aurora, CO massacres, and the viewer is primed to realize that the threats of violence toward self or others that each of the profiled boys makes during the course of the filming could prove empty or could become the next national tragedy.  This troubling uncertainty is one of many—the uncertainty of gaining access to quality treatment, the uncertainty that treatment will prove effective for these boys, the uncertainty surrounding whether these boys will reach adulthood and what that adulthood will look like.    

Uncertainty also extends to the social and policy realms, as professionals and laypeople alike struggle with how to provide mental health care in an effective and cost-efficient way.  The abiding stigma of mental illness certainly complicates things further and likely contributes to the shame, isolation, and disintegration of relationships we see in all three of the film’s featured families.  But one thing is not uncertain: untreated or undertreated mental illness damages the individual, the family, and the greater society.  

A shift in public attitudes toward a more nuanced understanding of mental illness will not solve everything, but it is an essential feature of moving forward.  Toward the end of the documentary, Dr. Andrew Solomon, a psychologist and mental health activist, highlights what I take to be the critical lesson of the film.  He says:

There is a sort of politics and a reality that are often in conflict. Most people with mental illnesses, most people with autism, most people with any of this variety of conditions, which we largely describe as brain diseases of one kind or another, will never hurt anyone. If we talk too much about those dangerous situations, we stigmatize people we shouldn't.  If we take a politically correct standpoint, and we don't acknowledge those situations, then we end up with families in which a child is terrifying and violent and nobody believes them, and they don't understand what it is they have to deal with. It's a very fine balance we need to strike. I think what we forget most of all when someone is violent and when they have a serious mental illness, is that we've failed them…We need to understand that treatment before tragedy is not only possible, but it should become our reality.  And that's—it's gonna take some tough conversations. 

A Dangerous Son helps to initiate this tough conversation.  Now it’s our turn to keep the conversation going.

 

Full Article

Among the most fundamental concerns regarding medical, biomedical, and bioethical decision making are the concepts of risk and benefit. Of course, benefit is better than risk, so this might seem to be a fairly easy balance to calculate. But it is not. I...

Full Article

Two children (Kent and Brandon Schaible) have died of treatable pneumonia and dehydration because their parents (Herbert and Catherine Schaible) resorted to prayer instead of medical care.  In another particularly egregious case, members of the Faith Assembly Church denied medical care to a 4-year-old with an eye tumor the size of the child’s head.  Law enforcement officials found blood trails along the walls of the girl’s home where she, nearly blind, used the walls to support her head while navigating from room to room.  Seth Asser and Rita Swan have documented 172 cases of child deaths from preventable medical complications between 1975 and 1995.  Their report does not include the seventy-eight faith-healing deaths reported in Oregon from 1955 to 1998, or the twelve deaths in Idaho from 1980 to 1998.  As recently as 2013, five child deaths were reported in Idaho among families whose religious beliefs prevented them from seeking medical treatment.  What sort of religious beliefs might possess a parent to refuse medical treatment for their child?  

Christian Scientists base their refusal on the religious belief that medicine is fundamentally mistaken in thinking the ultimate cause of disease is biological; the real source of disease is spiritual disorder, and a spiritual problem calls for a spiritual solution.  The reality of sickness is not denied (e.g., you really do have pneumonia); however, the ultimate cause of that pneumonia is a spiritual disorder that can only be properly cured by spiritual interventions.  Because medicine is preoccupied with the biological level, it is unable to bring about change at the spiritual level where real healing occurs.  Sometimes specific scriptures are cited and interpreted as encouraging the practice of faith healing (e.g., Epistle of James 5:14-15, Mark 16:18) (Campbell, 2010).  Believers see an obligation to act as an exemplary witness in the presence of illness by appealing to prayer, anointing, and vigils alone for healing.  Some scriptures are even interpreted as casting recourse to medicine as an act of rebellion against God (2 Chronicles 16:12, Luke 8:43-48).  Others make a more straightforward empirical claim, arguing that faith healing is simply more effective than modern medicine and citing the high number of annual iatrogenic deaths in hospitals (200,000-225,000 by some estimates).

Currently, most states offer a legal shield from child abuse and neglect statutes for parents who refuse medical treatment for children on religious grounds (see: https://www.pewresearch.org/fact-tank/2016/08/12/most-states-allow-religious-exemptions-from-child-abuse-and-neglect-laws/).  Prior to 1974, failing to seek medical care for a child on religious grounds was considered child abuse.  However, after a member of the Christian Science Church was convicted of manslaughter for failing to seek medical care for their child, the church sparked a national movement to add religious exemptions to child abuse and neglect statutes.  These efforts succeeded in 1974 with the passage of the Child Abuse Prevention and Treatment Act.  Several revisions have subsequently been made to the act, which now defers to states to decide whether to include religious exemptions in their child abuse statutes. 

These legal exemptions ought to be overturned, and secular clinical ethicists ought to continue recommending the override of religiously motivated medical refusals for children.  A growing consensus in clinical ethics cites the harm principle as the proper justification for overriding these refusals in pediatrics.  However, debate continues over how to interpret the harm principle in such cases.  Aside from locating a proper physical threshold of harm (some suffering, significant suffering, permanent disability, death), ethicists have also considered whether non-physical forms of harm ought to be taken into account.  For example, does a parent refusing requested puberty-blocking therapy for a trans adolescent cross a psychological or dignitary harm threshold that should also trigger state action?  These are the sorts of questions that continue to engender lively debate in clinical ethics. 

Full Article

What is Artificial Intelligence? This central question has captivated the minds of specialists – mathematicians, computer scientists, cognitive scientists, and the like – and passive observers since the days of Alan Turing and John von Neumann. In this discussion I will distinguish between three types of Artificial Intelligence – human level, superhuman, and domain specific. Through this exercise I hope to shed light on the difficulties in conceptually defining the term Artificial Intelligence, as well as dispel misconceptions about the state of the art in Artificial Intelligence. To what end? I hope that this blog will spark a discussion about the ethics of today’s Artificial Intelligence, considered in light of tomorrow’s Artificial Intelligence.

We will start at the beginning, with Alan Turing’s definition of human level artificial intelligence. Turing’s famous test, popularized by the 2014 movie The Imitation Game, is a test of a machine’s ability to exhibit intelligent behavior comparable to, or indistinguishable from, human-level intellect. In doing so, the test pits human against machine. The test involves three players, two human and one machine, each of whom is separated from the other players. Once separated, players are tasked with holding a conversation with their counterparts. One human player, the evaluator, is tasked with determining which of the other players is a human and which is a machine. The evaluator knows that just one of his conversational partners is human. With that knowledge, if the evaluator cannot reliably distinguish the machine from the human, then the machine passes the Turing test. Such a machine would be said to possess human level intellect.

For the sake of argument, let’s say that a machine exists with human level intellect. Such a machine would necessarily have been created by humans. It is tautological that, since human level intellect engineered human level intellect, human level intellect is capable of engineering human level intellect. From this it follows that, once a machine possesses human level intellect, it too should be able to engineer human level intellect. Furthermore, in our hypothetical, the creation of human level intellect would have been an iterative process composed of repeated attempts, failures, and modifications realizing progressively greater intelligence. From this it follows that a machine with human level intellect could also engineer intelligent machines through such an iterative process. The difference is that machines could run this process many orders of magnitude faster than humans, thereby enabling them to quickly advance human level intelligence into superhuman intelligence – an entirely new class of intelligence that humans could neither match nor understand.

Now let’s bring our discussion back to reality. As it stands, humans have developed relatively sophisticated Artificial Intelligence, especially in healthcare. Humans have developed AI capable of outperforming physicians at predicting psychosis onset in patients with prodromal syndrome,[1] and at finding a higher percentage of clinically actionable therapeutic options for cancer patients.[2] Most recently, on January 1, 2020, researchers at Google’s AI lab, DeepMind, published a journal article describing a new AI-based healthcare tool capable of surpassing human physicians in the diagnosis of breast cancer.[3] The paper, published in Nature, claims the system reduces false negatives by up to 9.4% and false positives by up to 5.7%.[4] Further, when pitted against six human radiologists, Google’s diagnostic AI outperformed all of them.
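
Reported figures like these are absolute reductions in false positive and false negative rates, which come straight from a confusion matrix over a common set of screens. Here is a minimal sketch in Python of that arithmetic; the counts are hypothetical, invented only to echo the reported reductions, and are not the paper's data.

```python
# Minimal sketch: false positive / false negative rates from a
# confusion matrix. All counts below are hypothetical, chosen only
# to echo the reported reductions; they are not the Nature paper's data.

def rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate)."""
    fpr = fp / (fp + tn)  # healthy patients incorrectly flagged
    fnr = fn / (fn + tp)  # cancers missed
    return fpr, fnr

# Hypothetical reads of the same 5,000 screens (500 true cancers).
human_fpr, human_fnr = rates(tp=400, fp=450, tn=4050, fn=100)
ai_fpr, ai_fnr = rates(tp=447, fp=194, tn=4306, fn=53)

print(f"human reader: FPR {human_fpr:.1%}, FNR {human_fnr:.1%}")
print(f"AI reader:    FPR {ai_fpr:.1%}, FNR {ai_fnr:.1%}")
print(f"absolute reduction: FP {human_fpr - ai_fpr:.1%}, "
      f"FN {human_fnr - ai_fnr:.1%}")
```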

These types of systems, which can outperform humans at a single task or within a single domain, are known as domain specific AI. Hierarchically, domain specific AI is the least sophisticated of the AI types discussed herein. Nevertheless, domain specific AI is currently the state of the art, and it can be an extremely powerful tool within its specific domain, or suite of tools across their specific domains. Accordingly, we will begin our ethical discussion here. There are a number of ethical conflicts surrounding domain specific AI, each with sufficient depth to merit its own blog post. In the interest of brevity, we will examine only one: the tension between justice (the equitable distribution of benefits and burdens in society) and non-maleficence (a physician’s duty to do no harm).

In all of the foregoing examples of domain specific AI – breast cancer diagnosis, psychosis prediction, and identification of therapeutic options – the AI outperforms physicians. In domains where a physician diagnostician is less effective or accurate than an AI diagnostician, a physician diagnosing a given patient is doing relative harm to that patient. Over a sufficiently large sample size, the physician will make mistakes that the AI would not, mistakes that impact lives. From the perspective of non-maleficence, the physician arguably has an affirmative duty to cede responsibility to the AI.
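
The "sufficiently large sample size" point is simple expected-value arithmetic: multiply the gap in error rates by the number of patients seen. A back-of-envelope sketch, reusing the hypothetical miss rates from the previous snippet:

```python
# Back-of-envelope expected harm: extra missed cancers per year if the
# higher-miss-rate reader screens every patient. All numbers are
# hypothetical (the miss rates reuse the sketch above).

n_screens = 100_000        # screens per year at a large program
prevalence = 0.005         # true cancers per screen
fnr_human, fnr_ai = 0.20, 0.106

extra_misses = n_screens * prevalence * (fnr_human - fnr_ai)
print(f"expected extra missed cancers per year: {extra_misses:.0f}")  # ~47
```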

On the other hand, a major contemporary problem in the development of AI is the incidence of biased decision-making. In October 2019, the journal Science published the results of a UC Berkeley team’s analysis of over 50,000 medical records.[5] The results showed racial bias in a predictive algorithm used by many United States health providers to determine which patients are most in need of extra medical care.[6] Though the researchers did not name the specific algorithm, the Washington Post reported it to be Optum, by UnitedHealth, a tool that impacts more than 70 million lives.[7] To compound the issue, the Berkeley team identified the same bias-inducing flaw in ten other widely used healthcare algorithms.[8] From the perspective of justice, physicians should not use AI that perpetuates and systematizes discriminatory biases.
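
The mechanism the Science team described is worth pausing on: the algorithm predicted future healthcare costs as a proxy for health need, and because less has historically been spent on the care of Black patients at the same level of sickness, an accurate cost model became a biased need model. The toy sketch below illustrates that proxy-label mechanism; the population, the spending gap, and the referral rule are all invented for illustration and are not the study's data.

```python
# Toy sketch of proxy-label bias, loosely inspired by the Science
# study: ranking patients by predicted *cost* under-serves a group
# whose care has historically cost less at the same level of need.
# The population, spending gap, and referral rule are all invented.

import random

random.seed(0)

def patient(group):
    need = random.uniform(0, 10)               # true health need
    spend_rate = 1.0 if group == "A" else 0.7  # assumed access gap
    return {"group": group, "need": need, "cost": need * spend_rate}

patients = [patient("A") for _ in range(1000)] + \
           [patient("B") for _ in range(1000)]

# "Algorithm": refer the top 20% by cost (a perfect cost predictor).
cutoff = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
referred = [p for p in patients if p["cost"] >= cutoff]

for g in ("A", "B"):
    flagged = [p for p in referred if p["group"] == g]
    mean_need = sum(p["need"] for p in flagged) / len(flagged)
    print(f"group {g}: {len(flagged)} referred, "
          f"mean need among referred: {mean_need:.1f}")
# Group B is referred far less often, and only when far sicker --
# equal accuracy on cost, unequal treatment on need.
```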

Numerous questions manifest as a consequence of the ethical conflict between justice and non-maleficence. Do we use AI because not using it may harm the patient? Do we refrain from using AI unless we can be sure it is unbiased? Do we meet somewhere in the middle, or does one side win out? In my opinion, these questions, like many of our greatest questions in the 21st century, require interdisciplinary collaboration: computer scientists to advise on ways to resolve biases and gauge their likelihood of success, statisticians to calculate the actual impact of the benefits and drawbacks of these technologies, and lawyers to balance the equities. Of course, it is incumbent upon bioethicists to devise the ultimate answers to these questions, if that is possible. Nonetheless, it would be wise to draw on expert counsel from other disciplines in doing so.

I will leave you all with a hypothetical. If we accept the premise that AI with human level intellect would quickly develop AI with superhuman intellect, does that change the ethical calculus for their predecessors, the domain specific AI of today? Should justice be prioritized so as not to encode biases into the intellectually transcendent AI of the future? Should autonomy become less of a priority because AI will do much of the decision-making anyway? With more questions than answers, the ethics of AI in healthcare, and of AI generally, is a field ripe for discourse, and I urge you all to take part in it.

Full Article


The right of a person to live out his or her own particular life plans is an important value we all hold dear. To the extent possible, we should honor the prior expressed wishes of individuals after they have lost capacity and provide medical care consistent with those wishes. In general, it seems to me that a patient’s right to refuse any and all medical treatments while having capacity should extend into future periods of incapacity. 

The scope and authority of an advance directive is an important matter for many patients who fear being over-treated and having the dying process drawn out on machines in the ICU. But many aging individuals also fear becoming demented from Alzheimer’s disease or other forms of dementia and living for years while having lost all connection with their past identities and relationships. More and more people are finding the prospects of living into dementia intolerable. As Norman L. Cantor writes in the Hastings Center Report, “(f)or…people, like myself, protracted maintenance during progressive cognitive dysfunction and helplessness is an intolerably degrading prospect. The critical question for those of us seeking to avoid protracted dementia is how best to accomplish that objective.” It’s important for the aging public, as well as the physicians advising them, to be clear about the options that are available.

It's important to note that many individuals diagnosed with dementia still have capacity and therefore have the right to have their diagnoses and prognoses disclosed to them by their physicians. Unfortunately, according to the Alzheimer’s Association, such disclosure occurs just under half the time. This means many patients may be missing the opportunity to assert their right to complete an advance directive and to spell out their wishes about future medical care. For those who do complete advance directives at this point, it seems to me that patients have the clear right to say that they wish to refuse any and all life-prolonging medical treatments from that time on. Obviously, this includes avoiding intubation, G-tubes, dialysis, etc. But what if the patient, after entering into a state of dementia, is actually doing well? What if he or she seems happy living in a facility or at home, socializes, takes walks, eats heartily, and is even interested in sexual activity? What if such a patient contracts pneumonia and has an advance directive indicating no life-prolonging treatment? Does this apply to antibiotics that would reverse a life-threatening disease and return the patient to his or her former baseline? My answer is yes. But for such simple, non-invasive measures as antibiotics, individuals should state explicitly in the advance directive, while they still have capacity, that they wish to refuse them. 

But there is still the possibility of a person remaining physically healthy for many years while living in a state of dementia. Unfortunately, advance directives cannot prevent this eventuality. The only option for an individual newly diagnosed with dementia who still has capacity is to preclude entering into dementia by voluntarily stopping eating and drinking (VSED), as Cantor points out in his article. I see no ethical or legal reason why an individual could not make such a decision and act on it.

One final issue that comes up for those who feel strongly about not extending their lives during dementia is around food. One point should be clear: individuals cannot say in an advance directive that they do not want to be given food during dementia. This would mean that caregivers would have to deny food to hungry patients, thereby allowing them to starve. This seems clearly ethically, and probably legally, unacceptable. Giving food to a person who is physically healthy, even with dementia, is a basic form of comfort care, not directly a life-prolonging type of support. The sticky issue arises after patients are unable to feed themselves and hand-feeding becomes necessary. Can someone say in an advance directive that he or she does not wish to be hand-fed under any circumstances?

To me, the answer to this question also goes beyond what should be included in an advance directive. Food at this stage of life should be viewed entirely as a source of comfort and should not become a burden. Whether food remains a source of comfort or is becoming a burden is a clinical determination. Some patients who have stopped self-feeding may still enjoy food via hand-feeding. But when a patient begins to show indifference, it is time to stop. This seems to me a matter of providing the standard of care, which can assure that all patients are treated humanely until death.

In conclusion, advance directives are important for avoiding unwanted medical interventions, including simple interventions that address life-threatening conditions. But advance directives cannot preclude someone from entering dementia; to do that, one needs to consider VSED while still having capacity. Once dementia has set in, standard comfort care, with an appropriate feeding strategy, should be provided.

 

Full Article

Over the past decades, increasing emphasis on individual autonomy has led to the view that competent adults should decide for themselves how they want to be treated medically.  This shift in practice and policy has been accompanied by the adoption of advance directives that allow competent adults to specify in advance how they want to be treated, with the goal of extending respect for autonomy into periods of decisional incapacity.

Advance directives are written instructions about health care treatment made by adult patients before they lose decision-making capacity.  These instructions are completed ahead of time and only apply when decision-making capacity is lost.  Two examples of advance directives are the health care proxy and the living will.  A health care proxy is a document that allows you to appoint another person as your health care agent to make health care decisions if you are no longer able to do so.  You may give your health care agent authority to make decisions for you in all medical situations if you cannot speak for yourself.  Thus, even in medical situations you did not anticipate, your agent can make decisions and ensure you are treated according to your wishes, values and beliefs.  A living will is a document that contains your health care wishes and is addressed to unnamed family, friends, hospitals and other health care facilities.  You may use a living will to specify your wishes about life-prolonging procedures and other end-of-life care so that your specific instructions can be read by your caregivers when you are unable to communicate your wishes.  A living will cannot be used to designate a health care agent; the health care proxy document must be used for that designation. 

An advance directive should be considered a “gift” to a loved one, giving them peace of mind, minimizing stress, and reducing potential conflicts among family members.  This all starts with a conversation with the family members and friends who are most likely to represent the patient’s wishes.  The conversation should clarify the patient’s values and beliefs, framing medical wishes around those values and beliefs and addressing questions such as:

- What is important to the patient?

- What contributes to the quality of life the patient may want?

  - What activities are essential to having this quality of life?

- How does the patient want to spend their final years, weeks or days?

  - What role does faith play in making these decisions?

  - How much medical care is the patient willing to have to stay alive?

  - What kind of medical risks is the patient willing to take?

- When would the patient want to shift from treatment to comfort care?

The conversation should not end here, as it’s impossible to predict every scenario.  It is important to continue sharing wishes and preferences, explaining views that will give loved ones the information they need to make decisions on behalf of the patient. 

One of the challenges this presents is that patients may not know what treatments they will want in the future, and many do not complete an advance directive.  When patients become unable to make or communicate crucial health care decisions for themselves, health care providers look to others to speak on behalf of the patient.  A health care agent is someone who has been designated by the patient to make health care decisions should the patient become unable to do so for him/herself.  The agent’s responsibility begins when the patient loses capacity (the specifics may vary by state).  What if the patient has not designated a health care agent and loses decision-making capacity?  In New York State, the Family Health Care Decisions Act (FHCDA) provides a hierarchy of priority classes from which the provider must choose the surrogate.  The role of the surrogate is identical to that of the health care agent—to represent the patient’s expressed and implied wishes using the substituted judgment standard or, if the patient’s wishes are unknown, the best interest standard.  In the ideal scenario, the agent or surrogate is someone who has a close, loving relationship with the patient; someone who has intimate knowledge of the patient’s preferences and values; someone the patient chose or would choose to make health care decisions on his or her behalf; and someone with whom the patient had previously discussed preferences for care.

The most important takeaway from this blog is that this all starts with a CONVERSATION.  Discuss with family or a loved one what is important to you: your values, beliefs, wishes, and preferences.  Then frame your medical wishes and preferences around those values and beliefs. 

 

THIS MAY BE ONE OF THE MOST IMPORTANT THINGS YOU EVER DO FOR YOURSELF AND YOUR LOVED ONES!

 

Full Article

Children are not small adults. This is a phrase we say in pediatrics on a regular basis. The reason for such a seemingly absurd comment is that we are constantly faced with medical decisions that force us to rely on adult data to inform our practice. Pediatric patients face unique diseases and metabolize medications differently from adults. Their ability to recover from injury is often superior to that of their adult counterparts, and thus their quality-adjusted life years can be substantially different.  I argue that we have an ethical imperative to conduct pediatric research because 1. research brings forth generalizable knowledge for the good of the pediatric population and 2. research guides clinical practice, and the physician has a duty to provide the best possible care to the patient.

Research in human subjects is justifiable if it satisfies the following conditions: 1. a goal of valuable knowledge, 2. a reasonable prospect that the research will generate the knowledge that is sought, 3. the necessity of using human subjects, 4. a favorable balance of potential benefits over risks to the subjects, 5. fair selection of subjects, and 6. measures to protect privacy and confidentiality. Nothing on this list suggests a condition that could not be applied to pediatric research subjects, as they all promote fairness, beneficence and non-maleficence. Typically, we do not consider clinical research as something that provides significant benefit to the individual subject. Generally, a clinical trial is initiated under equipoise about efficacy; without equipoise there would be no need for the study, as the outcome would already be known. Thus, if a clinical trial were to test the efficacy of a given medication, the human subject could benefit from being in the trial if the medication works OR could suffer from side effects and receive no benefit from participation. So, although there is potential for individual benefit from participating in the study, the main purpose is to benefit future patients and add to the general knowledge of the topic.

A utilitarian approach would look at the potential consequences of pediatric research and determine it to be right or wrong based upon the balance of good or bad. Pediatric research must go through the same pathways as adult research, with all human subjects trials needing a careful justification that the benefits outweigh the risks. Utilitarian theory is focused on value, and thus the best action is the one that promotes the most good for all involved. This is precisely the goal of research. Research is conducted to effect change and improve care across a larger population. Through clinical practice, a physician may help hundreds to thousands of patients over the course of a career. However, through research, that same physician could affect the lives of thousands to millions of patients over the course of her life and far beyond. Therefore, in a carefully planned and executed pediatric clinical trial where the potential benefits outweigh the risks, the outcomes could be exponentially beneficial to the pediatric population, and thus supported by a utilitarian approach.

The physician takes an oath to promote the patient’s best interest and to avoid harm. Currently, the majority of treatments we provide to pediatric patients are not evidence based, because no pediatric clinical trials have been conducted to test them. Therefore, each day we care for pediatric patients without evidence of treatment safety or effectiveness, we are potentially causing harm. Pediatric researchers support a Kantian theory in which actions are judged morally by their motives. Thus, if a researcher wants to conduct a clinical trial only for the fame of discovering the essential element, and not out of a desire to promote good, this would not be a morally worthy endeavor. Therefore, if we examine the physician’s obligation to her pediatric patient, a duty-based ethical theory would tell us to treat each patient with the best possible practice available. Research guides our clinical practice and helps to ensure that we are minimizing harms and maximizing benefits. If we support a Kantian approach, then in order to satisfy our obligations we must utilize the available data to make the best possible decisions, and thus the available data must come from clinical trials on pediatric patients to ensure scientific rigor.

One objection to pediatric research is that these patients are a vulnerable population, incapable of independent informed consent and at risk of exploitation. Due to this fear, special protections have been placed on pediatric research, leaving an even more limited pool of subjects to study. Some justify this by stating that we should proceed with caution and only permit research when the potential for benefit is extremely likely. However, if the criteria are too narrow, we risk excluding potential participants from research and unjustly shifting the risks onto the broader, unstudied pediatric population. Rights theorists would protect the pediatric patient against oppression, unequal treatment, etc. Therefore, upholding the decision to conduct pediatric research allows the rights of all patients to be upheld and allows providers to contribute to the overall body of science, and thus to practice safer and more effective medicine to the benefit of all patients. When we practice medicine on children without data, we are unnecessarily exposing them to the very risks we fear, and doing so without the oversight and protection embedded in clinical trials.

In conclusion, I hold that pediatric research is both essential and ethical because 1. it generates a generalizable body of knowledge to the benefit of the pediatric patients of the future and 2. clinical practice is best guided by empirical data generated through research. Clinical ethics outlines our responsibilities as physicians to our patients, including veracity, privacy, confidentiality, and fidelity, while upholding our role as clinicians. In contrast, research ethics contributes to the greater good by promoting generalizable knowledge for all future patients.

Full Article

Last month, PBS aired the remarkable documentary College Behind Bars. This four-part film follows men and women incarcerated in New York state as they make their way through a rigorous liberal arts degree program offered by Bard College. The Bard Prison Initiative is one of several programs offering college classes within the U.S. correctional system – a system which carries the dubious distinction of holding nearly twenty-five percent of the world’s prisoners. (The U.S. comprises 5% of the world’s population).

I have taught courses in several prison higher education programs, including a course on bioethics.  People have often asked me, “what is it like to teach ethics to prisoners?”  While I understand their curiosity, I dislike this question. Teaching in prison does differ from teaching on campus in a number of important ways – most saliently, through restrictions on classroom technologies. But the question implies that there is something special or different about teaching morally laden subject matter to students who have been convicted of serious and sometimes violent crimes. People are not inquiring about the impact of the prison environment on teaching and learning bioethics, but rather about people in prison as moral beings. The question suggests that prisoners are somehow morally different from those of us on the outside.

In fact, people in prison are us. One in two Americans has a family member who has spent time in jail or prison. Those of us who have not are likely white and/or financially well-off, as mass incarceration has disproportionately targeted low income communities and people of color.  As Evelyn Patterson notes, people in prison “represent a small proportion of those who commit delinquent acts. Prisoners are the people who were caught, indicted, and punished via incarceration.” (Think of the example of sexual violence, a crime experienced by more than 1 in 3 women and nearly 1 in 4 men in the United States. The vast majority of perpetrators do not even enter the criminal justice system, much less serve time.)

That said, I did find that teaching bioethics in prison differed from teaching bioethics on campus, and not just because I couldn’t stream video clips or email my students. Incarcerated students are immersed within the ethically fraught ‘total institution’ of the prison, a space that anthropologist Lorna Rhodes described as “designed to activate a sense of threat to the coherence of the self” (2004, 56).  Students were living and witnessing the moral failures of mass incarceration on a daily basis. When they sought medical assistance, they faced the bioethical challenges of delivering and receiving care in a context of punishment.  These challenges occur at all levels of the health care system, from the micro-level of clinician/patient trust to institutional and society-level questions about access to and deservingness of treatment.  In future posts, I’ll address some of these ethical challenges more specifically, as well as explore social and historical issues related to health and mass incarceration. 

 

In the meantime, PBS will be streaming College Behind Bars for free through late January. 

 

Full Article
