BMC and SpringerOpen sign the San Francisco Declaration on Research Assessment

More information relating to this announcement can be found here.

In 1999 BMC made high-quality research open to anyone who wanted to access and use it. By making open access sustainable, we changed the world of academic publishing. A core part of our role has always been to distribute and communicate the research we publish beyond its original audience. We want our authors’ research to be as widely read, cited, and talked about as possible, and so we have a real interest in how academia measures the impact of research.

The Impact Factor (IF) is the traditional and most widely used method for gauging the quality of journals. In use since 1975, the IF is far older than BMC. A journal’s Impact Factor for a given year is calculated by dividing the number of citations received that year by articles the journal published in the preceding two years, by the total number of citable articles it published in those two years. The attraction of the IF is its supposed simplicity; many conclude that the higher the number, the ‘better the journal.’
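To make that concrete, the calculation can be written out as follows (the figures below are hypothetical, chosen purely for illustration):

\[
\mathrm{IF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y\!-\!1 \text{ and } Y\!-\!2}{\text{citable items published in years } Y\!-\!1 \text{ and } Y\!-\!2}
\]

So a journal that published 200 citable articles across 2015 and 2016, and whose content attracted 500 citations in 2017, would have a 2017 IF of 500 / 200 = 2.5.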

Used properly and in context, a journal’s IF poses no problem, and many academics find it a very useful tool, especially when wading through the thousands of journals available and trying to decide where to submit.

The problem emerges when the IF is used improperly or over-enthusiastically. Judgements about academic CVs are often, and inappropriately, based on Impact Factors rather than on the quality of the published research itself, and publishing in journals with the right Impact Factor can positively affect the outcome of grant applications, tenure applications, and, in some regions, even monetary bonuses. It’s no wonder that 93% of scientists told us in a recent survey that the IF will be important to them when deciding where to submit in the future (although, from anecdotal conversations, I’m sure many of them feel conflicted): this over-reliance is reflected in both the culture and structure of academia. Some argue that even the name ‘Impact Factor’ is problematic, setting unrealistic expectations.

We’re proud of our journals, and of how well they perform in the Journal Citation Reports. But over-reliance on the IF has never felt right to us. No single metric should be the be-all and end-all. IFs also vary wildly between fields, which can be problematic for those working on interdisciplinary projects.

The IF cannot tell you how likely it is that your article will be downloaded by your peers, shared on social media, or read by policymakers. It tells your boss very little about your publication history (as the h-index might). It doesn’t even tell you how many of the articles cited in the journal have been retracted. This is why we include Altmetrics on every article page.

But we want to go further. And that’s why we’re signing the San Francisco Declaration on Research Assessment (DORA).

For those of you who haven’t heard of DORA, it recognises “a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties.” Signing it means that we pledge to “greatly reduce emphasis on the journal Impact Factor as a promotional tool by presenting the metric in the context of a variety of journal-based metrics.” Hundreds of other organisations, including institutions and publishers, have already signed DORA, so we will be in good company.

We have chosen not to stop reporting the IF entirely, and we will not remove IFs from our journal websites. Our authors tell us time and time again that they need to know IFs, and we believe and trust in author choice (after all, our authors represent some of the world’s best and brightest minds).

Nor will we be fully compliant immediately. It will take time to audit every one of our 300 or so journals, and our website is undergoing some changes right now, so we’ll update as we go.

But we promise that by the end of 2017, the way we describe our journals will become less reliant on the Impact Factor and will show more alternative metrics, and data, which scientists can use to make their own informed choice regarding where to publish.
