Randomized Trials are Deeply Offensive


This post appears by special arrangement with the American Journal of Bioethics.

by John D. Lantos, MD

In this issue, Macklin and Natanson examine some of the controversies that arise in randomized clinical trials (RCTs). They are particularly concerned that researchers may misrepresent novel interventions as “usual care.” This is problematic, they claim, both ethically and scientifically. Ethically, it misleads potential research participants into believing that studies have known and minimal risks. Scientifically, such studies may lead to erroneous conclusions about the relative efficacy of specific practices.

Implicit in their critique are two ideas. The first is that we can know what “usual care” is. Their definition, however, is both broad and vague. They write, “(P)hysicians must use clinical judgment in choosing a treatment and varying or changing an aspect of the treatment as the condition of seriously ill patients fluctuates. Such clinical judgments can be based on shared clinical experience over time in the field, as well as on the current literature, including preclinical studies. In addition, physicians in clinical practice often use physiological endpoints that are based on the accepted scientific aspects of physiology.” In other words, usual care can only be defined by considering many different sources.

The second implicit idea is that the usual care that practicing clinicians provide should be assumed to be the safest and most effective care. This assumption is common in critiques of RCTs. It is based on the idea that doctors’ current knowledge and practices are a good indicator of all that we know about the risks and benefits of current treatment.

To illustrate the problems that arise when these two ideas are ignored, they describe three studies in which, they claim, study design did not include a true “usual care” arm but, instead, randomized study subjects to treatments that were unusual, thereby increasing risk and leading to invalid conclusions.

A curious aspect of the paper is that, in these three studies, we can estimate the actual risks of study participation because, in each, there is a plausible comparison group of patients who would have been eligible for the study but did not enroll. In one of the studies (ARMA), those patients had better outcomes than patients in the study. In a second (TRICC), some subgroups of patients in the study did better than those outside the study and others did worse. The study led to changes in usual care that are associated with better outcomes. In the third (SUPPORT), patients in both arms of the study had better outcomes than eligible patients who were not enrolled.

One might ask, then, whether the fears raised by Macklin and Natanson are actually borne out by the results of the studies that they critique. I think not. Instead, and seemingly inadvertently, the data that they present supports the very different conclusion that these studies were appropriately designed, answered important questions about the safety and efficacy of treatments, and did not predictably harm study participants.

The controversies turn on questions of whether doctors’ actual practices should define the limits of what we know to be efficacious. If so, then any RCT comparing usual practice to an alternative would be deeply offensive. To do such studies, doctors would have to admit to themselves and to their patients that, contra Macklin and Natanson, they really do not know which treatment is best. They would need to believe that so strongly that they would be willing to assign treatments at random rather than by using their own clinical judgment and medical expertise.

George Annas beautifully summarizes the ways in which this is offensive. He says, “Randomization always deprives research subjects of the judgment of their physician.” This violates our deeply held belief that “… medical education means something. We put (doctors) through residency and fellowships. We want their judgment.”

Joffe and Miller echo Annas’ concerns. They believe that researchers, unlike clinicians, are not obligated to do what is best for patients. Hellman and Hellman suggest that researchers who follow protocols and assign treatments at random do so at the cost of honoring their ethical commitments to individual patients. The Office of Human Research Protections (OHRP) echoes this view. In a letter to the SUPPORT investigators, OHRP noted, “Doctors are required, even in the face of uncertainty, to do what they view as being best for their individual patients. Researchers do not have that same obligation.”

These accusations of disloyalty to patients illustrate the deep ways that RCTs offend common sense and upset our psychological well-being. We hate to think that doctors don’t know what is best. If doctors truly don’t know what is best, the thinking goes, then they must not be good doctors. Or, more disturbingly, if they do know what is best and they are still willing to choose a treatment by flipping a coin, then they are abdicating their fiduciary responsibilities.

The odd thing about such controversies is that everybody knows that doctors are often wrong and that medical recommendations change over time. Whole books have been written about situations in which doctors confidently proclaimed, for years, that they knew what was best, only to have their firm convictions overturned by the evidence from a clinical trial. Interventions such as pulmonary artery catheters, coronary stents, and post-menopausal hormone replacement therapy were all widely used and widely thought to be beneficial. Then they all fell into disfavor because studies were done comparing them to approaches that were not widely used. Physicians clearly do not always know what is best, and the strength of their beliefs about what is best is not an accurate measure of the rightness of those beliefs. Doctors can be massively and collectively wrong.

So how can doctors know whether what they think they know for sure is actually true? In most situations, doctors do not rely on clinical trials to understand the relative risks and benefits of different treatments. Instead, they rely on experience, anecdote, expert opinion, and other informal information gathering. Randomized trials are deployed only in the rare and specific circumstances in which experts disagree, existing evidence is conflicting, and informal methods of evaluating outcomes are recognized to be inadequate. Trials are expensive. Study design is difficult.

Trialists take all the information that goes into clinical judgments, all the skepticism and humility that comes from knowledge of medical history, and then make judgments that are essentially similar to clinical judgments about how, exactly, to design the study. In doing so, they are seamlessly blending their fiduciary obligations to the patient with their humble assessment of the limitations of their own knowledge. This is not, as is commonly asserted, a conflicting loyalty to science that compromises their loyalty to patients. It is, in fact, a deeper and more honest loyalty to patients, one that requires doctors to admit the limitations of their own knowledge.

What should the regulators of clinical research do when expert clinicians, with all their medical education, come to the conclusion that they really do not know which among a variety of widely used treatments is the best? What should we do when those well-trained, highly educated doctors whose judgment we trust further decide that the best way to find out which is best is to do a formal study in which patients are randomized to one treatment or the other and the outcomes carefully assessed?

The current approach is to rely on the judgments of practicing clinician-scientists to design studies that answer the questions they perceive as most vexing. Under this approach, we do not accept just any study design. Instead, study designs are reviewed by IRBs and by separate groups of experts at funding agencies, who must spend their limited funds on only the studies that they believe are most important. This careful review of each study has made clinical research remarkably safe. Meta-analyses of outcomes show that research participants generally have outcomes comparable to those of similar patients who are not in studies. As noted, the studies critiqued by Macklin and Natanson show the same thing. Study subjects were not systematically harmed by participating, though, as expected, in each of the studies some patients had better outcomes than others.

Macklin and Natanson suggest a different approach. We should, they say, quantify actual clinical practices and assume that whatever most doctors are doing must be the best and therefore that any deviation from such customary practice must increase risk. That view is an understandable heuristic that logically follows from the false but deeply held belief, beautifully articulated by Annas, that doctors ought to know what is best and that their clinical judgment matters. That is a tough belief to give up.

Is the current approach as problematic as they suggest? As they illustrate, it may sometimes lead to the conclusion that current treatments are best. But it also, frequently, leads to changes in usual care. Their elegant arguments don’t deal with the many studies in which the results were quite different and in which commonly used practices turned out to be less effective or less safe than other practices. They don’t deal with the many studies showing that, overall, patients in clinical trials have outcomes that are similar to, or better than, those of eligible patients who are not enrolled in trials. They don’t tell us how, exactly, we should determine which current practices ought to be questioned and which ought to be endorsed. Surely, it cannot be the case that all current practices should always be considered the best. If we assumed that, we would never need to do any studies, would never discover problems with existing treatments, and would never develop new or better treatments.

In the end, then, critiques such as those of Macklin and Natanson depend upon hindsight. They are convincing only if we imagine that we could have, and should have, known before the study what we learned only by doing it.
