Overthrowing the Tyranny of the Journal Impact Factor

David Magnus, Ph.D.

When my wife (a librarian) was first working at an academic library, I learned a lot about collection development, including how librarians decided which journals to subscribe to. Twenty-five years ago, reference publications told librarians which were the "top" journals in each field. In philosophy, information was available describing which journals a library should carry if it had space for only a small number. Most of those reference publications were weak on methodology and lacked transparency in their process. They have largely vanished, replaced by more rigorous methodologies, such as those developed by ISI, for measuring the value of journals as a tool to help librarians make collection development decisions. By far the most prominent of these was the Journal Impact Factor (JIF).

Over time a funny thing happened. The JIF, a tool designed to help make collection development decisions, turned into a widely used method of evaluating researchers for appointment, promotion, and grant funding. Now a growing chorus of researchers, journals, publishers, and even professional societies is saying enough, calling for an end to over-reliance on a single metric that was never designed for the uses to which it is increasingly put.

I can personally attest to the tyranny of the JIF. I am required to report the JIF for every publication by my faculty in my annual report. Here at Stanford, the JIF is incredibly important for appointment, promotion, and merit raises. I counsel my faculty accordingly: I have told them to look up the JIF for any journal they consider publishing in, and it is an ever-present cloud hanging over all of our publishing decisions.

There are a number of problems with the journal impact factor as a measure of scholarship. First, it confuses evaluation of a journal with evaluation of articles. Since the JIF is an average, it can be dramatically boosted by a small number of highly cited articles. Conversely, a highly cited article can appear in a journal whose JIF is dragged down by many articles with few citations. This is an especially interesting challenge for bioethics. Looking at my own citations and those of my colleagues, there is some (imperfect) association between an article's citations and the JIF of the journal it appeared in. But it is very much an open question for bioethics how many citations our articles receive in high impact factor medical and scientific journals such as JAMA, NEJM, Science, Nature, and Cell, compared with articles published in more specialized journals (Fertility and Sterility, Critical Care Medicine, Genomics and Medicine) or bioethics journals. This is made even more challenging by the fact that some of the higher impact factor journals typically publish our work in special categories: policy forums or perspective sections. A former colleague once published a very interesting "Piece of My Mind" in JAMA (a very high impact factor journal), yet that article remains far less cited than an article on the same topic we published later in AJOB, though JAMA's JIF obviously dwarfs ours.
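To see why the average misleads, it helps to spell out the calculation. The standard two-year JIF for a reporting year is, roughly:

JIF(2014) = (citations received in 2014 by items the journal published in 2012 and 2013) / (number of citable items the journal published in 2012 and 2013)

Because this is a mean, a single outlier can dominate it. With invented numbers purely for illustration: suppose a journal published 100 citable items across 2012 and 2013, one review collected 400 citations in 2014, and the other 99 items collected 100 citations between them. The 2014 JIF is (400 + 100) / 100 = 5.0, even though the typical item was cited about once.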

Second, it is especially hard to compare JIFs across fields. As a library collection development tool, that may not really matter, since librarians compare journals within a field. But the way the JIF is typically used means that researchers are compared across fields that are very different. Many medical and scientific fields have tens of thousands of researchers and hundreds of journals. An influential article there has the opportunity to be cited many, many times as a very large literature is constantly produced. In contrast, in fields like bioethics, with a much smaller base and a relatively small number of journals, it is much harder for articles to attract a large number of citations (and hence bioethics journals will tend to have lower JIFs than, say, genetics journals). Given the size of our field, I am proud of how high AJOB's JIF is, but it is still small compared to many journals in larger fields.

Third, the JIF is a very specific metric that does not capture all of the citations a journal receives. The San Francisco Declaration on Research Assessment (DORA) recommends that publishers highlight and publicize a broader range of metrics, including the 5-year Impact Factor (which counts different citations than the JIF; it is not simply a five-year average of JIFs), EigenFactor, SCImago, and others. While AJOB scores roughly as well on these metrics as it does on the JIF, it has always bothered me that one of the most influential aspects of the Target Articles we publish doesn't "count" anywhere (except in the little-noted "Immediacy Index"). One of the great benefits of publishing in AJOB is the open peer commentaries that accompany publication. Given the realities of citation patterns in bioethics, these commentaries represent a huge amount of very directed attention from scholars in the field toward these articles, yet because of the way the impact factors are calculated, neither the JIF nor the 5-year IF takes these citations into account.
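The citation windows explain why. Again roughly, using 2014 as the reporting year for illustration:

5-year IF(2014) = (citations in 2014 to items published in 2009 through 2013) / (citable items published in 2009 through 2013)

Immediacy Index(2014) = (citations in 2014 to items published in 2014) / (items published in 2014)

An open peer commentary appears in the same issue as its Target Article, so its citation falls in the article's publication year. Same-year citations are excluded from both the two-year and the five-year windows; they show up only in the Immediacy Index.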

Fourth, there are other ways of measuring or assessing quality and "impact" beyond citation patterns. Articles that influence medical practice or policies are particularly important for our field, though admittedly that influence is much harder to measure. Promotion and appointment always include peer assessment as part of the process. Yet in a field as interdisciplinary as ours, the DORA principle that scientific articles should ultimately be judged on their content is challenging: some of my social science colleagues have a low opinion of most philosophical articles, and vice versa, and qualitative and quantitative empirical researchers often hold one another in contempt. At the journal level, other metrics that we should all strive to make more transparent include acceptance/rejection rates and readership size. We have shared this information in our annual report to our Editorial Board, and we intend to make it available to all of our readers.

Those of us responsible for hiring and promotion are in a very challenging situation, stuck between institutional demands and the need for cultural change. DORA is an important start in a process that may resolve this tension, and everyone in bioethics should consider signing on.
