The 8th (my 1st) Peer Review Congress had just finished with emotional tributes and a standing ovation for Drummond Rennie, who inaugurated the Congresses, in recognition of all his efforts over the years to champion research into peer review. It had been an interesting and intense few days packed with talks and reunions with colleagues.
I couldn’t shake from my head the wistful lyrics of the song that closed each session (“Budapest” by George Ezra) – “Give me one good reason why I should never make a change” – but there was no chance to be sad, as the satellite session on “Under the microscope – transparency in peer review” was just about to start. With a swift nose blow and a sprint for a cup of tea (the aircon was fierce!), I joined the other panellists and the people who had stayed on for the discussion. The discussion was recorded and you can watch it again here.
Alice opened the session by describing the aims and aspirations of Peer Review Week 2017, and asked the panellists for their take on transparency.
For Irene, this refers to transparent actions from all those involved in peer review – journals sharing adequate policy information and peer review timelines; authors being transparent about their conflicts of interest, funding and research materials. Irene also advocated for reviewer report content and editorial correspondence to be published alongside articles, to help distinguish journals that follow good practices from those that do not*. I loved the quote Irene shared from a paper by Noah Moxham and Aileen Fyfe (author final version here) referring to the Royal Society’s move in 1832 to make public the reports on papers under consideration at the Transactions. The Duke of Sussex noted that public reports “were often more valuable than the original communications upon which they are founded”.
Transparency for me refers to the particular peer review model a journal uses, with transparent peer review specifically meaning that reviewer report content (but not reviewer names) accompanies publication of an article. EMBO and Nature Communications practise this, and Genome Biology recently announced a trial.
However, there are various “shades of transparency”. Some journals share reviewer names but not the reports (e.g. Frontiers and Nature). eLife publish a synthesised decision letter, sometimes with reviewer names (if reviewers allow). Arguably, open peer review – where reports are signed and accompany publication (as practised on 70 BMC journals from 1999 onwards) – is the most transparent form of peer review. Open peer review makes reviewers and editors accountable, and reports can be cited, giving reviewers recognition for the work they do (see commentary from Fiona Godlee in 2002).
In the discussion afterwards we acknowledged that there are field-specific variations in the uptake of transparency (see data from Nature Communications and the abstract from Maria Kowalczuk and Michelle Samarasinghe presented at the Peer Review Congress). There are other ways of defining what is meant by open peer review, e.g. involving open interactions or open crowdsourcing, each solving slightly different problems with peer review (see Tony Ross-Hellauer’s systematic review). Then there is the role of the Editor (is it transparent or acknowledged?) and article versions (are they all shown?).
Andrew felt that transparency was such a weird thing to define that it was easiest to start with what is not transparent. For him, that is any journal-based review process where review information is not shared: peer review may or may not be happening, and if it is happening then the reviews live in a silo. He used the philosophical example “if a tree falls in the forest, does it make a sound?” and advocated for recognising reviewers by demonstrating their contribution to the review process, without necessarily having to share the reviews themselves. Anything that moves beyond journal-based silos for peer review is moving along a continuous spectrum towards more transparency in the process.
Carly explained her perspective as a funder, supportive of open research and of accelerating the pace of research. She was interested in how to give credit for the research they fund and for individuals who do a lot of reviewing. She emphasised that transparency in peer review is important, and is something to think about for funders more generally, where there can be a lot of black boxes.
Next, Alice asked the panellists and the audience for any fun facts about peer review they could share.
Carly mentioned the new Crossref initiative to provide a schema and persistent identifiers for peer reviews. This will further help facilitate recognition for peer review.
Andrew was concerned about the increasing burden placed on peer reviewers to do more and more reviewing. Publons now has over a million verified reviews and has developed a measure of ‘review inequality’ to look at what proportion of researchers are doing peer review. Interestingly, this is very consistent across countries but differs across research fields: the burden in Biomedicine is evenly spread, while in Chemistry and Engineering it is more unequal and concentrated on certain individuals.
In the discussion afterwards, Tim Vines (Origin Editorial) shared research he had published in Research Integrity and Peer Review, drawing on data from six large ecology journals spanning 2004–2016, which showed that the proportion of people asked to do more than one review a year was flat. But while journals were not asking reviewers to do more than a decade earlier, reviewer agreement rates were declining, so more people had to be invited to undertake each peer review.
I was interested to know who was doing peer review, and shared some insights from Janne-Tuomas Seppänen from the Peerage of Science community which pointed to post-docs doing most of the peer review.
Irene shared data from Publons on the average length of a peer review. From 200,000 verified first-round reviews, the mean length of a report was 457 words and the median was 321 words (slightly longer than the opening two paragraphs of this blog). Irene felt that, although the length of a review does not equate to its quality, this was still surprisingly short.
Alice then opened up discussion from the floor and here are some of my recollections.
Ivan Oransky (Retraction Watch) felt that all peer review reports should be made available when articles are retracted (irrespective of the peer review model of the journal). The trouble here, as explained by Carly, was that this would put too much pressure on researchers and might deter people from reviewing. Peer review would also be held to account for issues it was not designed to address (e.g. detecting sophisticated plagiarism, fraud, duplicate publication, figure manipulation or honest errors).
The role of training reviewers in how to review was also raised, and organisations are trying to provide the necessary tools and support (e.g. Publons has a training course, as does Nature Masterclasses, both aimed at peer reviewers, while COPE and EASE provide guidance primarily for Editors). Mersiha Mahmić-Kaknjo (Cantonal Hospital Zenica) emphasised the important role that supervisors play in providing the support students need to undertake peer review (especially from her own experience of open peer review), and this led to conversations about recognition for peer review and the need for feedback.
My perspective is that while we could all advocate for more transparency in peer review, it’s clear from our discussion that one size does not fit all in terms of peer review model. However, in the spirit of the Peer Review Congress and initiatives like PEERE, people are experimenting with peer review, which is great to see. Let’s make those experiments data-driven and adopt the initiatives that make a change for the better.
Thank you to Alice Meadows and the Peer Review Week team for all their help facilitating this event.
*I am grateful to Irene Hames for clarifying that she was advocating for the sharing of reviewer report content and editorial correspondence, but not necessarily reviewer identities. The blog has been amended to reflect this.