Rand Paul is About to Legislate Peer-Review: Scientists Need Not Apply

by Craig Klugman, Ph.D.

Senator and former presidential candidate Rand Paul introduced S. 1973, a bill that would change how scientific grant proposals are reviewed. Besides eliminating the NSF oversight office, it would require that research proposals be reviewed by an office that includes non-scientific reviewers drawn from the community. The aim is to ensure that research is approved by the people who pay for it: non-scientist taxpayer advocates. Such an advocate could not be a professional in academia or research, or an expert on the topic under review. Paul's goal is to eliminate "silly research" and ensure that tax dollars deliver useful research.

More broadly, Paul is responding to a concern that the government funds research that is not applied, has no guaranteed return, and is therefore a "waste" of money. Similar sentiments were behind the creation of the CTSA system: the sense that taxpayers have spent enough money on science and it is about time they got some return on their investment.

He is not the first to try to move science in this direction. As of October 31, funded scientists cannot serve on EPA review boards. Any scientist who receives an EPA grant (a feat that would suggest one is a good scientist) is forbidden from serving on a panel that reviews other grant proposals. EPA Administrator Scott Pruitt said that the move was meant to reduce conflicts of interest.

The task of ensuring that funded science has value, meets standards of rigor, and is undertaken by people with the skills to complete the work successfully falls to peer-review panels. Traditionally, these panels are populated by other scientists working in the same area or discipline, who have the background and knowledge to judge the quality and value of the proposed work. They are charged with ensuring that the money granted has a high likelihood of providing a benefit: "Peer review did not develop simply out of scientists' need to trust one another's research. It was also a response to political demands for public accountability."

Peer review is the hallmark of good science. A peer-reviewed grant or publication is supposed to represent the most rigorous and most accurate work. Peer review is what separates competitive federal grants (high prestige) and high-impact-factor journals from predatory journals and from funders who expect results to support their preconceived notions and biases. Peer review can detect errors in work, or even fraud. The system is built on trust that reviewers will act ethically, will be impartial, and will uphold the rigor of scientific research.

For most review panels and journals, learning how to peer review has been haphazard. In some cases, a professor might "teach" a student how to peer review by inviting him or her to co-review a manuscript. When I was invited to review my first article, I spoke with colleagues and did some Google research to figure out what I was supposed to be doing. The lack of organized training has led to inconsistency in how such reviews are approached. Some journals have begun developing and instituting peer-review training, though such efforts are recent and still nascent.

In our current anti-intellectual climate, some politicians and other pundits have received airtime for claiming that "peer review is broken." A recent article by science writer Gina Kolata on the pressures that encourage the use of predatory journals suggests that the problem with a lack of peer review is "academic fraud that wastes taxpayer money, chips away at scientific credibility, and muddies important research." There are indeed problems with peer review, for the simple fact that it is done by humans, who are subjective by nature. Sometimes the biases we bring are subtle: we may find articles on topics that interest us to be worthier, or we may be harder on proposals in an area close to our own because we know more about it and thus have higher expectations. And sometimes the bias is blatant: after all, science is competitive. There is limited space to publish in high-impact journals and an increasingly limited pool of research dollars to go around. Thus, there is a temptation to hold back competitors and even to steal ideas. Although they are not supposed to, reviewers have in a few cases rejected a competitor's proposal and then taken a key element to further their own work and reputation. When proposals are blinded, there is a risk that a reviewer has an unacknowledged conflict of interest with the unknown authors. When proposals are open so that conflicts can be avoided, personal connections can come into play (supporting mentors, mentees, and colleagues; opposing competitors). And assessing someone else's work can be difficult, especially if you are not an expert in that method and area, which sometimes happens in peer review.

Peer review can be improved. For one, little research has been done on what makes for good peer review and how it can be improved; quality-improvement research here could be beneficial. Second, we should train peer reviewers, both in graduate programs and in the specific methods and standards of each journal and review panel. In qualitative social science research, we often look to inter-rater reliability when coding transcripts. This measure asks how similarly two or more people code the same section of text. Raters are trained to minimize their differences, and when differences do exist, they talk through them. Perhaps a similar check could be adopted for peer review; a rough sketch of the idea follows below. This is a small and incomplete list, but it does suggest that scientists can find ways to improve peer review.
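To make the inter-rater reliability idea concrete, here is a minimal sketch in Python (not from the original post) that computes Cohen's kappa, a standard agreement statistic, for two hypothetical coders labeling the same transcript excerpts. The coder names, codes, and data are invented purely for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical codes to the same items."""
    assert len(rater_a) == len(rater_b) and rater_a, "raters must code the same items"
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, estimated from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1:  # raters each used a single identical code; agreement is trivially perfect
        return 1.0
    return (observed - expected) / (1 - expected)

# Two hypothetical coders labeling the same ten transcript excerpts.
coder_1 = ["barrier", "benefit", "barrier", "other", "benefit",
           "barrier", "benefit", "other", "barrier", "benefit"]
coder_2 = ["barrier", "benefit", "benefit", "other", "benefit",
           "barrier", "barrier", "other", "barrier", "benefit"]
print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")  # about 0.69
```

A kappa near 1 indicates strong agreement beyond what chance would produce; by analogy, a review panel could track how closely independent reviewers' scores align and flag proposals where they diverge for discussion.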

There may be problems with peer review that are inherent to the scientific enterprise, a practice that is highly competitive and conducted by humans. But bringing in people who cannot understand what researchers propose is likely to prevent real science from happening. Rand Paul's solution is to add reviewers who may or may not know anything about science. They most certainly will not be academics or researchers, nor will they be people who work in science. Many proposals could benefit from clearer language that non-experts can understand, but is it reasonable to ask a person inexperienced in the fine points and techniques of a narrow scientific pursuit to judge the nuances and methods of a proposal? This effort is not about improving the review of science but about giving credence to voices that are anti-intellectual and anti-science. He wants to turn science into a popularity contest. Perhaps grants will be given to those who appear on reality TV shows: "Researching with the Stars." This bill is nothing less than an effort to silence educated voices in an attempt to win with populist sentiments.
