Peer review of peer reviewing


Often referred to as ‘the worst system imaginable except for all others’, peer review itself was under review on Monday 21 April, at the Experimental Biology conference in Boston. A panel, organised by BioMed Central and energetically chaired by Gregory Petsko, took on the issues faced by academics and editors in peer review.

The Oxford English Dictionary definition of peer review is ‘the evaluation of scientific, academic, or professional work by others working in the same field’. Participants in the panel discussion painted a rather more vivid picture of the painful reality of that evaluation, with its consequences for funding and careers.

Commentaries highlighting the inefficiencies, even failings, of peer review have seemingly been on the increase since Martin Raff, Alexander Johnson and Peter Walter protested about ‘painful publishing’ in Science in 2009; panellist Hidde Ploegh, whose comment in Nature two years ago called for an end to the ‘tyranny’ of reviewer experiments, represented a widely shared feeling among academic scientists. By way of introduction to the issues the panel hoped to address, Petsko summarised a number of other commentaries.

Problems include lengthy review processes that hold up publication without ultimately improving the quality of research or its presentation; an insistence that each paper present a ‘complete story’ – as one discussant put it, “the one thesis, one paper problem”; and requests from journal editors for reviewers to assess a manuscript’s impact or significance, which may not become apparent for years after a paper’s publication.

Given that most authors are also reviewers, and most reviewers are also authors, this is a problem for the whole scientific community – as Petsko says, ‘we have seen the enemy and it is us’. But what is the solution?

It is clear that there is no panacea, and encouraging that journals are addressing the problem in a variety of ways. Miranda Robertson explained BMC Biology’s re-review opt-out policy, intended to relieve the pressure on reviewers and minimize delays to publication: once a paper has been revised in response to reviewers’ comments, the authors may choose whether it is seen by the reviewers again, or whether a decision is simply made by the editors. Emilie Marcus explained how Cell sets standards and carefully discusses each decision under the traditional reviewing process.

The eLife approach is different: all academics involved in editing are well-known scientists, and the aim is to encourage communication between reviewers and reach a consensus view. Not only does this make the rogue-reviewer problem less likely, but it also makes possible a collective decision about what is reasonable. This does make substantial demands on the time of the academics involved, for which they (though not the reviewers) are paid by the three major funding agencies that publish eLife; and some felt it was not clear how generalizable this model is.

Elsewhere open peer review is proving successful. Biology Direct, now in its 8th year, aims to increase the responsibility of reviewers by including both their names and reports in the published article. As stated in the launch Editorial, why hide the “priceless discussions often… providing us with new perspectives and fresh ideas for our research” behind closed doors?

Perhaps open review and new platforms of communication will provide a solution. Laurie Goodman, Editor-in-Chief of GigaScience, described the review of one manuscript late last year, in which a reviewer blogged his review – something that would traditionally be cause for alarm. The paper was already available for comment on a preprint server, and contrary to the expected negative effects, the ensuing discussion of the paper and review by the author, editors, tweeters and bloggers added to the paper significantly. Goodman says: “In 20 years of peer review, this was the first time where the review process was an absolute blast.”