Are metrics meaningful?

How do we measure the impact of research? And are the metrics we measure of value? These questions, and more, were discussed at this year’s COPE European Seminar.


“Two years ago, there wouldn’t have been the impetus for COPE to have a seminar on metrics and how they shape publication (mis)behavior,” commented Ginny Barbour, Australian Open Access Support Group and chair of COPE.

There has been considerable growth of alternative metrics (other than citations) in recent years to measure the ‘impact’ of a published article. This growth is coupled with continued pressure on scientists to publish in order to earn a PhD, hold a tenured position, or simply retain their job. The time was right for participants at COPE’s annual European Seminar (#COPE2015) to discuss what the new metrics measure and what they all mean.

Euan Adie (Altmetric) and Lisa Colledge (Elsevier) introduced ways in which metrics are used (and abused) in scholarly publishing. Should metrics be weighted in some way and aggregated to form a ‘popularity score’, or should multiple metrics be used individually? Clearly, there are pros and cons to both approaches, and no single metric reflects the full attention an article receives.

A panel discussion with Jonathan Montgomery (Nuffield Council on Bioethics), Mike Thelwall (University of Wolverhampton), and Ginny Barbour provoked debate on how metrics could be used to foster ethical publishing practices. Views from the panel highlighted the pressure put on scientists in the UK when their research is evaluated in research assessment exercises.

…the culture of scientific research is undermining what researchers are being asked to do.

A report from the Nuffield Council on Bioethics describing the current state of the culture of research practice in the UK confirmed what has long been suspected: the culture of scientific research is undermining what researchers are being asked to do. Similar pressure in other regions may have driven the unethical manipulation of peer review processes, which has culminated in a number of retractions.

And of course Editors are under pressure too, with a pivotal, if somewhat daunting, role in deciding whether an author’s paper should be published, and with the impact this decision in turn can have on future funding.

Questions and comments from the floor followed the panel discussion, as Editors grappled with the danger of ‘overdoing’ metrics while appreciating that not having any metrics at all is just as problematic (Roger Jones; British Journal of General Practice). Should there be a metric for peer review turnaround times? Could researchers’ peer review activities be included as a measure of scholarship? Do metrics even correlate with actual uptake of the research? The CONSORT and PRISMA Statements have been cited thousands of times, with an overwhelming number of tweets and blogs, yet uptake remains poor (Larissa Shamseer; Systematic Reviews).

We heard how, when conducting research assessments, looking at metrics can be an efficient ‘stand in’ for actually reading and assessing an individual paper (Anthony Wilkinson; European Biophysical Societies Association). The panel felt that this may be possible within certain disciplines but suggested that the real challenge lay in capturing how individuals’ ‘interdisciplinary collaborative approaches’ could be assessed.

Sarah de Rijcke (Leiden University) gave some real-life examples of how metrics are continuing to influence the behavior of research groups and individual academics, now published here, prompting Sabina Alam (BMC Medicine) to question whether COPE could educate against this practice and change behavior. Ginny speculated on the potential role of COPE in this area and whether it should be part of their remit. Jonathan felt that COPE was in a unique neutral position to tackle this, but only if it could provide an alternative solution.

As always after a workshop, one is left with questions: what is the take home message and what practical steps should journal editors take (as Martyn Rittman of MDPI asked)? I personally wonder if the growth of metrics will just add to the pressure already put on scientists. Certainly, some of Sarah de Rijcke’s examples of lab conversations were depressing.

All in all, it seems that the research community needs transparent metrics that are easy to understand and that come from sources that can be readily audited. Journals were encouraged to experiment, perhaps with a ‘dashboard of metrics’, and to recognize that researchers contribute in a variety of ways, not just by publishing articles but by peer reviewing them as well. And quite rightly so. Thank you COPE for a stimulating and thought-provoking seminar.
