This week the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) hosted a workshop on publication bias (supported by the Wellcome Trust). The aim was to bring together researchers, funders and journals from academia and industry to discuss ways in which we can reduce publication bias in animal research (follow on Twitter with #publicationbias).
Publication bias occurs when the research that reaches the publication domain is not representative of the research that is done as a whole; typically, null or ‘negative’ results are suppressed. Evidence-based medicine pioneer and founder of the Cochrane Collaboration Sir Iain Chalmers has been vocal on the topic since the 1980s, and praised the NC3Rs for bringing the issue to the forefront once more.
What drives publication bias?
Emily Sena (University of Edinburgh) opened the workshop and discussed her research with Malcolm MacLeod on initiatives to identify potential sources of bias in animal work. They set up CAMARADES, a Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies (more information in an interview we published here).
Intriguingly, most data are lost through ‘researcher bias’ — that is, scientists never submitting their results for publication — rather than ‘editor bias’, where editors deem results unworthy of publication! Indeed, journals dedicated to publishing and disseminating ‘negative’ and non-confirmatory results have existed for over 10 years.
A sad truth was Emily’s acknowledgement that science is set up with perverse incentives that reward scientists for ‘impact’ and ‘productivity’ rather than for the quality of their research or the ability to replicate studies.
Jonathan Kimmelman (McGill University) explained that we are all in the midst of a ‘replication crisis’ in biomedicine, with many studies defying replication. John Ioannidis, a champion of the reproducibility cause, has previously estimated that up to 85% of research resources are wasted because of such problems.
Jonathan advocated preclinical study registration as a key step in reducing publication bias, arguing that we have a moral obligation not to ‘waste’ animals’ lives by failing to share results.
However, the challenge is that a one-size-fits-all solution may not exist. Academics may well have different aims for the preclinical data they sit on than a large company, and a large company may have different aims again from a small one.
Trish Groves from the British Medical Journal gave her perspective on how registration of clinical studies came about in 2005. More recently BMJ co-founded the AllTrials campaign with the simple message ‘all trials registered, all results reported’.
Trish emphasized the ethical rationale for registering a trial. She also pointed out that had the ‘big’ medical journals not got behind the idea and made registration mandatory, there would have been no spike in uptake by researchers. There may nonetheless be reasons why retrospective trial registration is now sometimes justifiable too (BioMed Central’s policy is here).
Registration is one way to combat publication bias, but is it enough?
We also heard about ways in which journals can drive innovation and help reduce publication bias. Chris Chambers (Cardiff University) talked about a recent initiative from Cortex called ‘Registered Reports’, in which the methods and proposed analyses of a study protocol are pre-registered and reviewed before the research is conducted.
However, this appears to have been inspired by BioMed Central, which has been pioneering the approach since 2001 (see here for our author instructions). The beauty of such initiatives is that protocols can ultimately become the first element in a sequence of ‘threaded’ electronic publications, connecting all digitally published content relating to the evidence about a particular trial. BioMed Central has been at the forefront of publishers putting this ‘linked data’ approach into practice.
Susanna-Assunta Sansone (Oxford e-Research Centre) talked about the need to motivate researchers to publish data. Her personal opinion was that a ‘carrot and stick’ approach is needed. She described the approach taken by Scientific Data in publishing articles called ‘Data Descriptors’ which as the name suggests are only about data.
They give authors credit for sharing their data in the form of a citation, and comprise two parts: a narrative component (the typical sections of an article) and a structural component (which is machine readable). Susanna is also on the Editorial Board of our GigaScience journal, which has pioneered this approach since 2012 with Data Note articles that similarly use data review, curation and rich, interoperable metadata.
Christophe Bernard, Editor-in-Chief of the Society for Neuroscience’s new journal eNeuro, mentioned the journal’s double-blind peer review process where reviewers don’t know who the authors are and vice versa. Nature also recently announced their intention to offer this peer review process too.
Although double-blind peer review is claimed to reduce bias (by forcing reviewers to judge a manuscript on its merits rather than on the gender, standing or affiliation of the researcher), it has drawbacks too.
Surely openness on both sides, where authors and reviewers are known to each other, would be preferable? The medical journals in the BMC series have operated open peer review for the past 14 years, and we have found that reviewer report quality is higher too.
There are lots of ways to publish data
We heard about ways to find and view data, e.g. PLOS’s collection of negative, null or inconclusive results. This is also an area in which BioMed Central has been working hard. Journal of Negative Results in BioMedicine, launched in 2002, acknowledges Karl Popper’s realization that science advances through a process of ‘conjectures and refutations’. Similarly, BMC Research Notes launched in 2008 with the aim of freeing ‘dark data’.
More recently BMC Psychology launched in 2013 with the explicit pledge to publish repeat studies and negative results in a field that has been historically plagued by the under-reporting of both replications and null findings.
We heard how F1000Research ran an APC-free period to encourage submission of negative results, and how its ‘living figures’ initiative encourages other researchers to submit data to a published article and evaluate it.
Mark Hahnel from FigShare also talked about new ways to make article content discoverable. The days of the static publication finally seem to be numbered: GigaScience has been using discoverable, citable DOIs to make similarly interactive content findable, publishing interactive visualizations and workflows as well as downloadable virtual machines.
We also heard the perspective of a funder, the National Institute for Health Research, which provides open access to protocols while trials are underway. The voice of industry was included too, with worrying stories about how selective presentation of data by investigators can lead to poor quality science. See more here.
Prospective registration of preclinical studies is needed
Attendees at the workshop could make their views heard on the central question of the meeting: is prospective registration of preclinical studies necessary to reduce publication bias? Proponents of this view initially had the audience on their side (a pre-debate vote showed majority support for prospective registration).
However, its opponents produced sufficiently convincing arguments to swing the vote after the debate. The debate set the stage for breakout sessions examining, for the various stakeholders (researchers, journals, funders, industry, institutions), the benefits and limitations of prospective study registration, publishing models and repositories, and plenty of feedback was generated.
It seemed that funders have a pivotal role to play in making prospective study registration a condition of grant funding; the benefits this would bring to other stakeholders would encourage cooperation too.
As publisher of the ISRCTN clinical trials registry, BioMed Central is well aware of the challenges. Advocating registration is one thing; defining a minimum dataset is another. The NC3Rs, however, would be well placed to take the lead, given their role in formulating the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines. Thank you NC3Rs for a stimulating workshop!
Thanks to Kam Arkinstall, Scott Edmunds, Helene Faure, Amye Kenall and Daniel Shanahan for their comments on this blog.