Last month, guest blogger Fiona Russell – a postdoctoral researcher studying chronic pain, inflammation, and arthritis – attended a Sense about Science peer review workshop (in which our Biology Editor, Elizabeth Moylan, took part as a panellist). Here, she writes about the need for young researchers to develop their peer review skills and how journals could do more to recognize and incentivize reviewers.
I remember the worry the first time I was asked to peer review a paper: Would I miss something crucial? Would I attach too much importance to something inconsequential?
It wasn’t until I received the final decision from the editor, with the other reviewer’s report attached, that I was reassured I had done a satisfactory job.
That is why, when I participated in the Sense about Science peer review workshop last month, I was most interested in peer review training. I asked the expert panel whether journals or editors give much feedback or formal training to reviewers.
The consensus seemed to be that this rarely happens. Some journals rate reviewers for their own purposes, but often this is not fed back to the reviewers themselves. Surely, just as a paper can be improved by review, reviewers can benefit from constructive criticism.
I think there should be a system whereby PhD students have to review at least three papers during their training. The supervisor would officially review the paper for the journal, but the student would write a review to be compared with the supervisor's. Feedback could then improve the student's next review.
It is also important that reviewers get acknowledged for their hard work. At the moment my CV lists journals I’ve reviewed for, but it would be better if I could add that I was a 5* reviewer – or find out that I wasn’t. Knowing I was being rated, I would likely work harder to ensure a good ranking. (I have just recently seen exciting news from PeerJ about their partnership with Publons to enable reviewers to get credit.)
Punctuality of reviews could also be rewarded. Panel member Alice Ellingham from the Editorial Office talked about the problem of chasing AWOL reviewers, and we all know the frustration of delayed reviews. Knowing that someone was a 5* reviewer with 100% punctuality would be great (though it could lead to the top reviewers being inundated with papers to review).
During the workshop we took part in a group discussion about the strengths and weaknesses of the current peer review system and potential alternatives.
We discussed how peer review can improve a paper if the reviewers give constructive criticism, and someone mentioned how peer review should detect plagiarism.
I don’t agree with this last idea, as I feel it is the job of the journal to do this using automated plagiarism checkers. As a reviewer you do not always have intimate knowledge of the topic you are reviewing, so it is unrealistic to expect reviewers to have read all the relevant papers in order to detect plagiarism.
My favorite alternative to the current system has been successfully pioneered by F1000Research, a new open access journal that does post-publication open peer review. All the reports are available to read, and it is easy to see the discussion between authors and the named reviewers. This is a useful teaching tool too, as young scientists can read the reports and learn what a review looks like.
The lack of anonymity in peer review is also a good thing in my mind. Interestingly, one of the speakers, Elizabeth Moylan from BioMed Central, talked about research that showed when reviewers were named, reports were more constructive with a greater number of comments on the methods, and comments backed up with evidence.
During the workshop we also chatted about the public perception of peer review and how people need to realize the system isn’t infallible. Everyone needs to recognize the difference between peer-reviewed research and scientific claims based on little or no evidence and no peer review (something Sense about Science is doing very well with its Ask for Evidence campaign).
Overall the workshop provoked much discussion on peer review. I have only touched on some of it in this post but would welcome more comments and ideas.
Thank you for participating in the workshop, and I like your blog! Just a few more thoughts from me…
I think a lot of publishers keep track of their reviewers for punctuality etc., but this is something that isn’t broadcast externally. I guess reviewers sometimes have very good reasons for being a bit delayed returning their report from time to time.
However, we do share reviewer reports with all reviewers on a manuscript after the initial decision, so that they can see what others said. Hopefully this is useful feedback. And of course on our journals with open peer review – e.g. the medical journals in the BMC series (https://www.biomedcentral.com/about/editorialpolicies#PeerReview), and biology journals such as Biology Direct (https://www.biologydirect.com/) and GigaScience (https://www.gigasciencejournal.com/) – the reading public can see the named reviewer reports too, as part of the pre-publication history. We just do peer review before publication, not afterwards.
Part of the motivation for Janne-Tuomas Seppänen starting Peerage of Science (https://www.peerageofscience.org/) was to provide feedback on the reviewer reports themselves – ‘peer review of peer review’. You might be interested in our interview with Janne here: https://www.biomedcentral.com/biome/progress-in-peer-review-janne-tuomas-seppanen-discusses-peerage-of-science/
Open peer review also gives credit: everyone can see who the reviewers were. However, it’s great to see the steps F1000Research and PeerJ are taking to make reviews fully citable!
At BioMed Central we also publish citable reviewer acknowledgements: https://www.biomedcentral.com/series/reviewerack/. In this way reviewers can still receive credit for their work even if the manuscript they review isn’t ultimately accepted for publication.
Elizabeth Moylan, Biology Editor, BioMed Central
Good points. I run an annual workshop on peer review for postdocs and PhD students. I’m frequently disappointed in the process (when it is my paper, I get wonderful and appropriate critiques, but sometimes a clunker).
My advice is to first learn the journal, its scope, and the thresholds for publication. Look at other comparable work. Does the paper fit? Are the hypothesis, data, interpretations and conclusions globally in line with that journal? Are the experiments, methods and statistics appropriate?
Too many reviewers resort to demanding the next experiment, mostly because we can all easily think of ten more things to do, and it is the lazy way to critique the work. The only reason to recommend additional research is if the work falls short of the journal’s standards.
The success of peer review depends on careful, thoughtful and critical analysis. We also need to cleanse the process of hostility and the cranky tones it has taken on lately.
You ask a very important question, Fiona – the answer is a resounding ‘yes’, but very few researchers receive training in peer review. This is starting to be addressed. For example, the Biochemical Society recently held an Understanding Scientific Publishing workshop which included an interactive session on reviewing papers.
I’d urge PhD students and early-career researchers (ECRs) to watch out for similar events their own societies or institutions may be holding and sign up.
Last year at COPE (the Committee on Publication Ethics) we produced some Ethical Guidelines for Peer Reviewers to help increase researchers’ awareness of the basic principles and standards to which all peer reviewers should adhere during the peer review process. ECRs may find them particularly useful – there’s a link to them on this page.
I give a lot of workshops to researchers, and one of the most common complaints I hear from PhD students and post-docs is that they do a lot of reviewing for their supervisors (way beyond what could be considered mentoring/training) but receive no credit for this – their names are never passed on to the journals. It’s very important to build up a reviewing record with journals: so the journals are aware of the individuals who have actually done reviews for them and can contact them in their own right, so reviewers can receive any rewards and acknowledgements those journals give out, and so they stand a chance of being considered in future for editorial boards. With this in mind, we included a point in the COPE guidelines that any ECRs who find themselves in such situations can show their supervisors if they haven’t been able to resolve, or don’t know how to address, the problem:
‘Peer reviewers should not involve anyone else in the review of a manuscript, including junior researchers they are mentoring, without first obtaining permission from the journal; the names of any individuals who have helped them with the review should be included with the returned review so that they are associated with the manuscript in the journal’s records and can also receive due credit for their efforts.’
Having accurate reviewing records tied to the people who have done them is set to become even more important in the light of new initiatives to recognise peer reviewing efforts; for example, a working group is currently looking into how to acknowledge peer review activity in ORCID records: https://orcid.org/blog/2014/04/08/orcid-and-casrai-acknowledging-peer-review-activities.