Posted on May 4, 2016 at 6:07 PM
by Arthur Caplan, Ph.D. and Bruce Levin, Ph.D.
The Wall Street Journal and many other media outlets chose to beat on the FDA for its recent decision to deny approval of eteplirsen, a treatment for Duchenne muscular dystrophy being developed by Sarepta Therapeutics.
Actually, it was the FDA’s scientific advisors, not the FDA itself, who concluded that there wasn’t sufficient evidence that the drug was effective.
Still, the WSJ sneered: “Here’s the gist of FDA’s objection: 12 patients are too few, and thus we don’t know if the drug helps boys walk longer or if the results are skewed. FDA prefers large trials that inject half of patients with a placebo. The agency has all but told patients to blame the lack of approval on Sarepta for its trial design.”
It is not just the FDA but most of the scientific community that prefers large placebo-controlled trials, which produce stronger evidence. But the WSJ, the other FDA bashers, and the FDA’s scientific advisors all need to move the ball forward on trials. It is not placebo controls but randomization that ought to be the issue for demonstrating safety and efficacy in any trial using small numbers.
In the comparison of eteplirsen-treated boys to the matched control group, there was no randomization.
Consider this fourfold table:
             Still walking   Not still walking   Total
Treated           10                  2            12
Comparison         1                 10            11
The P-value for testing the null hypothesis of no difference is 0.0006. The likelihood ratio weight of evidence in favor of an efficacy hypothesis and against the null hypothesis is 71.6, which is generally considered quite strong evidence.
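The quoted P-value can be checked directly from the table’s fixed margins. The sketch below, in plain Python, assumes the 0.0006 figure comes from a two-sided Fisher exact test (summing the probabilities of all tables no more probable than the observed one under the hypergeometric null); the likelihood-ratio weight of evidence of 71.6 depends on the authors’ choice of alternative hypothesis and is not reproduced here.

```python
from math import comb

# 2x2 table with fixed margins:
#              walking  not walking
# Treated        10         2        (n = 12)
# Comparison      1        10        (n = 11)
N, n_treated, n_walking = 23, 12, 11
observed = 10  # treated boys still walking

def hypergeom_pmf(k):
    """P(k treated walkers | fixed margins) under no treatment effect."""
    return comb(n_walking, k) * comb(N - n_walking, n_treated - k) / comb(N, n_treated)

# Two-sided Fisher exact test: sum the probabilities of every table
# no more probable than the observed one.
p_obs = hypergeom_pmf(observed)
lo = max(0, n_treated - (N - n_walking))
hi = min(n_walking, n_treated)
p_value = sum(hypergeom_pmf(k) for k in range(lo, hi + 1)
              if hypergeom_pmf(k) <= p_obs + 1e-12)

print(round(p_value, 4))  # 0.0006, matching the figure in the text
```

The exact fraction is 871/1,352,078 ≈ 0.00064, which rounds to the 0.0006 reported above.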
It is not the smallness of n = 23 boys that is the concern; it is that the comparison was not randomized. Indeed, had the company started an RCT four years ago with 24 boys appropriately randomized, the FDA’s advisors and the FDA itself would have been very inclined to approve the drug.
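For concreteness, here is one simple way a 1:1 randomization of 24 boys could be generated. Permuted blocks of four are a common scheme for keeping small arms balanced, but nothing in the post specifies a design, so this is purely a hypothetical illustration, not Sarepta’s actual protocol.

```python
import random

def permuted_block_assignments(n_subjects, block_size=4, seed=2016):
    """Hypothetical 1:1 treatment assignment using permuted blocks.

    Each block contains equal numbers of each arm in random order,
    so the two arms stay balanced throughout enrollment.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    assignments = []
    while len(assignments) < n_subjects:
        block = (["eteplirsen"] * (block_size // 2)
                 + ["placebo"] * (block_size // 2))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_subjects]

arms = permuted_block_assignments(24)
print(arms.count("eteplirsen"), arms.count("placebo"))  # 12 12
```

Because each block of four contributes two boys to each arm, any multiple-of-four enrollment is exactly balanced, which is what makes a 24-subject comparison interpretable without a model for how controls were matched.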
What the world of drug approval needs is not less FDA or bigger trials but consensus about a quantitative theory of when a non-randomized comparison is likely to be valid and when it is not. Right now, we have only qualitative assessments of potential bias. What is lacking is a method that tells the FDA, “There is a 99% probability this non-randomized comparison is valid,” or “You’re right to be skeptical; this study is garbage.” Until we have such tools, we’ll have the FDA insisting on randomizing many subjects to placebo, advisors unwilling to credit results from studies that don’t do so, and continued pointless frustration among regulators, companies, parents, and the media.