The world of scholarly metrics is abuzz!
In one manic week in April 2015, all of the following took place:
- The Committee on Publication Ethics held their annual European Seminar focused on How Metrics Shape Publication Behaviour (summarized in a blog here)
- The Royal Society rang in 350 years of Philosophical Transactions with meetings focused partially on the impact of metrics on research assessment
- A Nature publication offered up the Leiden Manifesto – ten guiding principles for best practice in metrics-based research assessment.
The misapplication of metrics
A metric is defined as a ‘standard of measurement’. One popular metric, the journal impact factor, has become a widely used mechanism for comparing journals across fields. It is not uncommon for authors to select a journal for publication consideration based on its Journal Citation Reports impact factor or ranking within the field.
The appropriateness of this approach is questionable, given that it does not account for relevance and readership of the journal, among other things. Nonetheless, the seed for such thinking is planted quite early on in a researcher’s career.
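For readers unfamiliar with how it is calculated, a journal’s impact factor for a given year is, roughly, the ratio:

$$
\text{IF}_{2015} = \frac{\text{citations received in 2015 by items the journal published in 2013 and 2014}}{\text{number of citable items the journal published in 2013 and 2014}}
$$

In other words, it captures short-term citation volume for the journal as a whole, not the relevance, readership, or quality of any individual article.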
Research from the Evaluation Practices in Context group at Leiden University found that life science post-docs tend to value grant money, number of publications and journal impact factors as ‘capital’ – influencing not just where they publish their research, but also which research questions they ask, where they work, and even their social interactions.
Over time, new metrics have emerged quantifying article-level impact, such as views, downloads and altmetrics, and researcher-level impact, such as the H-index. With so many ways to measure research output, some stakeholders (institutions, funders) are using metrics as measures of merit for research and researchers – although few will admit to it.
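For those unfamiliar with it, the H-index is the largest number h such that a researcher has h publications each cited at least h times; a researcher whose five papers have been cited 10, 7, 4, 3 and 1 times, for example, has an H-index of 3.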
Not only do we have to contend with problems in how metrics are calculated and manipulated; more critically, their misapplication as a measure of quality is troubling. The implication is that researchers or research articles with more impressive metrics (i.e. those that are more popular) are perceived to be more impactful and tacitly misconstrued as higher quality and worthy of reward. Having not done a systematic review on the topic, I cannot say for certain, but this is surely not a sound hypothesis on which to base research evaluation.
Are metrics really leading us away from research transparency?
Despite the plethora of metrics, and certainly as a result of the misdirected importance placed on them as assessments of quality, transparency – probably the most important measure of quality of all – is not covered by any metric.
In my view, this is highly problematic. I have spent the last several years honing my scientific chops by applying my epidemiological skills to evaluating the completeness, bias, and transparency of reporting of biomedical research – a primary focus of the EQUATOR Network, with whom I work.
For the uninitiated, transparency in reporting is all about understandable, complete, available/accessible, reproducible, and usable biomedical research. Indeed, transparency may just be the X factor of evidence-based medicine – the single largest variable on which the reliability and trustworthiness of the entire evidence base hinge.
Like metrics, research transparency is very much in vogue, but the impetus is different. As a whole, biomedicine is failing on all transparency counts. A fair proportion of research is either discontinued or never made public; much of what is published is littered with reporting biases or not reported in a usable manner; and essential components (such as individual patient results) are often inaccessible.
Achieving transparency in research should be straightforward, but it is not. Many have speculated about who is at fault and have set priorities for what needs to be done to improve the situation. Essentially, a systemic culture change is needed within the scientific enterprise, and this seems to have (finally) caught the attention of funders, regulators, publishers and the lay media, all at the same moment. Improvements – or promises to that effect – seem to be on the horizon.
As a young researcher, I am troubled that the system in which we are trained, and later expected to work in, is weighted so heavily towards bean counting and away from transparency.
Two years ago I chose a small piece of the transparency pie to explore further in a PhD in epidemiology. Beyond my good fortune in having mentors who share my research interests, there seems, more broadly, to be little focus on educating young scientists in best practices for research transparency – on ensuring that, at a minimum, the research they (or we) produce will be a usable and reproducible contribution.
There is, however, omnipresent encouragement to publish at an early stage as training for the academic careers ahead of us. This imbalance – encouraging, and even rewarding, numerous publications in high-impact journals over transparent research practices – amounts to an unintentional brainwashing of epic proportions, with disastrous consequences (namely, the impairment of truly evidence-based medicine).
What can be done?
Transparent and ‘high impact’ research can, and do, co-exist, but there does seem to be a trade-off between the two. The incentives and motivations in academia towards ensuring transparency are far fewer than those rewarding numerous publications in high-impact journals.
The current range of impact and popularity metrics reveals nothing about the inner workings of a research article or its potential reporting biases. What if the two came together, such that articles could be measured in terms of their transparency in a way that is truly meaningful and cannot be readily manipulated or misapplied?
A new transparency metric?
Scientists like measuring things. Wouldn’t it be neat if we could motivate them to produce transparent research by measuring their publications on the basis of transparency, completeness and reproducibility, rather than (or in addition to) the number and type of ‘beans’ they accrue?
This is not a second call for the previously suggested transparency index, aimed at grading the transparency of journal policies. Instead, what if there was an article-level metric that would provide a snapshot of the transparency of the research contained within?
Consider a metric derived from the following features in a publication (a rough sketch of how such a score might be computed follows the list):
- Inclusion of a signed declaration of transparency by authors
- Compliance with an appropriate reporting guideline (checked and signed off by the publishing journal)
- Link to a pre-study registration record
- Link to a date-stamped public protocol
- Link to fully accessible individual patient results
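As a very rough sketch of how such a score might be computed – the feature names, equal weighting and simple fraction below are my own illustrative assumptions, not an existing standard or tool:

```python
# Hypothetical sketch of an article-level transparency score.
# Feature names and equal weighting are illustrative assumptions only.

TRANSPARENCY_FEATURES = [
    "signed_transparency_declaration",
    "reporting_guideline_compliance",
    "prestudy_registration_link",
    "datestamped_public_protocol_link",
    "individual_patient_data_link",
]

def transparency_score(article: dict) -> float:
    """Return the fraction of transparency features an article satisfies (0.0 to 1.0)."""
    met = sum(1 for feature in TRANSPARENCY_FEATURES if article.get(feature, False))
    return met / len(TRANSPARENCY_FEATURES)

# Example: an article with a registration record and a public protocol,
# but no signed declaration, guideline sign-off, or accessible patient data.
example_article = {
    "prestudy_registration_link": True,
    "datestamped_public_protocol_link": True,
}
print(transparency_score(example_article))  # 0.4
```

In practice, the weighting of each item, how compliance is verified, and resistance to gaming would all need far more thought than this toy calculation suggests.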
It may seem far off, but an emerging tool in the publishing industry may be starting up the transparency engines. PRE-val aims to bring transparency to the journal peer review process by making it apparent to readers that an article has been peer reviewed. It also provides a snapshot of whether the journal endorses various publication ethics standards or emerging practices.
Perhaps the PRE-val model could be replicated, or even extended, to include the items above, with a badge bearing the article’s transparency metric accompanying each publication. Ultimately, such a badge may enable readers to see what was done, whether it deviated from the intended plan (and to form a judgment about the impact of any changes), what was found, and whether ethical reporting practices were in place.
These are just the ramblings of a young researcher. Perhaps one day we will get there, and the beans we are counting will be so transparent that there’ll be no need to count them anymore.