Posted on April 12, 2013 at 11:20 AM
David Magnus, PhD
When we go to the doctor and they recommend a treatment, we like to believe that there is good evidence that it is the best treatment available. The reality is that there are often several treatment options (different drugs, different dosing, different procedures) and which treatment you get may depend upon who your doctor is (Dr. A is sure drug X is better while Dr. B thinks drug Y is better—though neither view is well supported by the evidence). A really honest and knowledgeable physician might flip a mental coin to decide which treatment option to provide (all of which we call “clinical judgment”).
In light of this uncertainty, it is imperative for more and better quality comparative effectiveness research (CER) to take place so that we really know what works best (rather than basing medicine on whether a patient goes left or right when they get off the elevator, as a colleague has said).
With this in mind, OHRP’s recent action, which has created controversy over some very important CER, could have an unintended chilling effect on this valuable research.
OHRP sent a letter to investigators at the University of Alabama at Birmingham (UAB) about a large multi-center clinical trial to determine appropriate oxygen saturation levels in severely premature neonates. The history of practice in this area is complicated, and the standard of care has varied over the past decades. Clinical management of these vulnerable infants is tricky—too much oxygen produces toxicity (resulting in blindness and possible lung damage) while too little oxygen is very dangerous (producing risk of neurological damage and death). By the mid-2000s, it was not clear what levels of oxygen saturation should be the goal. In other words, a perfect opportunity for CER.
Neonatologists called for a large study to better understand the risks and benefits of the differences in oxygen levels. UAB investigators took the lead in a large trial to try to get the answers. At all participating institutions, infants were receiving anywhere from low (85-89%) to high (91-95%) oxygen levels (in accord with the standard of care). Researchers proposed that instead of allowing random or non-evidence-based factors to determine where in that range infants’ levels were set, infants would be randomized to groups where effort would be made to maintain oxygen at the low end of the normal range (85-89%) or the high end of the range (91-95%). Since both of these ranges are within the standard of care (85-95%), many researchers argued that this is clearly minimal risk research (a fact that should be reflected in the consent documents).
OHRP disagrees. They claim that limiting the range of oxygen levels (instead of allowing it to range randomly across the spectrum) alters the risks and benefits to the infants. However, there is absolutely no evidence to support this claim. In fact, the NEJM article based upon this data included a comparison with a non-randomized control group receiving standard practice (85-95%) and found a benefit to being in the trial. One can debate the validity of a non-randomized control, but at a minimum, it is clear that there is no basis for the claim that there was increased risk from being randomized within the standard of care.
OHRP’s letter has already produced misleading headlines about risky research on infants without parental consent. This only serves to make it less likely that valuable CER will take place.
Whether a physician flips a mental coin or investigators flip one (randomization) has no bearing on the risks participants are exposed to in research (as opposed to the risks of clinical care). CER should not be hindered by overenthusiastic regulators who make consent forms less accurate (by making research sound riskier than it is).
Disclosure: After I began working on this I discovered that Stanford was one of the 23 sites for the trial.