The essence of responsible conduct of research is to ensure that science concerns itself with the identification and clarification of objective truth. I have spent some time this past week trying to read a recent study by Fanelli and Ioannidis entitled “US studies may overestimate effect sizes in softer research,” published in the Proceedings of the National Academy of Sciences.
This is a densely written statistical paper, but still mostly understandable even to a mere scientist such as myself. Many of you have probably heard of John Ioannidis. He has gained prominence by performing theoretical analyses of the studies of others and using his results to conclude that most biomedical research is wrong, or at least biased. I may be wrong, or at least biased, but upon reading this work I have come to believe that Dr. Ioannidis is wrong and maybe biased as well.
Fanelli and Ioannidis have examined a group of meta-analyses in health-related biological and behavioral areas. For those of you who do not know, meta-analysis is a family of methods whereby investigators combine the results of similar studies in order to provide stronger evidence to address specific research questions. This works when the studies are similar and are individually too underpowered to provide definitive results. Those who read this blog regularly know I am not a fan of meta-analyses. They are subject to all sorts of biases, especially selection bias. The biggest problem, however, is that it is rare for studies to be done in a similar enough manner to be legitimately combined. The math (generally a mean of effect sizes weighted by the size of each study) works regardless of the similarity of the studies. Thus it is difficult to fathom the legitimacy of combining dissimilar studies based upon their effect sizes, but this is exactly what Fanelli and Ioannidis have done.
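To make the combining step concrete, here is a minimal sketch of a fixed-effect meta-analysis of log odds ratios using inverse-variance weights (a common way of weighting by study size and precision). The three studies, their effect sizes, and standard errors below are entirely invented for illustration; they do not come from the paper.

```python
import math

# Hypothetical studies: each reports a log odds ratio and its standard
# error. Larger, more precise studies have smaller standard errors.
studies = [
    {"log_or": 0.40, "se": 0.20},
    {"log_or": 0.25, "se": 0.15},
    {"log_or": 0.55, "se": 0.30},
]

# Inverse-variance weights: more precise studies count more. Note the
# arithmetic runs whether or not the studies were actually comparable.
weights = [1.0 / (s["se"] ** 2) for s in studies]
summary_log_or = sum(w * s["log_or"] for w, s in zip(weights, studies)) / sum(weights)
summary_or = math.exp(summary_log_or)

print(round(summary_log_or, 3))
print(round(summary_or, 3))
```

The point the sketch makes is the one in the paragraph above: the weighted average is computed the same way no matter how dissimilar the underlying studies are.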
They have used only meta-analyses which express the effect sizes of individual studies as odds ratios (the odds of an event occurring in the experimental arm divided by the odds of the event occurring in the control arm). They then divide the result of each individual study (the primary effect size) by the result of the corresponding meta-analysis (the summary effect size). At this point they have a ratio of ratios lacking any units. Their primary outcome is then the log of this ratio of ratios. They graph this outcome by each study's classification as behavioral, non-behavioral, or bio-behavioral, as well as by geographic distribution.
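The outcome measure described above can be sketched in a few lines. The odds ratios below are invented for illustration; the only point is the shape of the calculation: a value of zero means a study matched its meta-analysis summary, and a positive value means it reported a larger effect.

```python
import math

# Hypothetical summary effect size from one meta-analysis, and the
# primary effect sizes of three of its constituent studies.
summary_or = 1.40
primary_ors = [1.10, 1.55, 2.10]

# The paper's outcome: log(primary / summary), a unitless quantity
# two steps removed from any raw experimental result.
log_ratios = [math.log(or_i / summary_or) for or_i in primary_ors]
print([round(x, 3) for x in log_ratios])
```

Note how far this number sits from the original data: it is the log of a ratio of two quantities that are themselves ratios of odds.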
This probably has you confused by now, and I do not blame you; it is very complex. I really just want to make the point that the analysis is based upon highly processed numbers that are very far removed from the actual experimental results. Yet the authors use the broader distribution of this log of a ratio of ratios as the basis to impute bias to behavioral scientists from the United States, which they call the US effect. This appears to be a giant leap of faith. I suggest that there is no basis here for a conclusion of bias: no data in the paper suggest or support it, and the contention is, in my view, unreasonable.
Yet I am already seeing this paper cited as evidence of bias. All of this leads me to return to my suggestion above: these authors are likely wrong, and it appears, at least to me, that they are themselves biased. It is difficult to conclude that this is an objective study revealing objective truth. These authors have not, in my opinion, held themselves to the standard to which they appear to hold others. I do not consider this to be responsible conduct of research.
The Alden March Bioethics Institute offers a Master of Science in Bioethics, a Doctorate of Professional Studies in Bioethics, and Graduate Certificates in Clinical Ethics and Clinical Ethics Consultation. For more information on AMBI’s online graduate programs, please visit our website.