The problem of fake peer reviewers is affecting the whole of academic journal publishing and we are among the ranks of publishers hit by this type of fraud. This has been covered by Retraction Watch several times, including here, here, here and here, as well as by the New York Times.
The spectrum of ‘fakery’ has ranged from authors suggesting friends who agree in advance to provide a positive review, to elaborate peer review circles in which a group of authors agree to review each other’s manuscripts, to impersonating real people, to generating completely fictitious characters. From what we have discovered amongst our journals, it appears to have reached a higher level of sophistication. The pattern we have found, where there is no apparent connection between the authors but there are similarities between the suggested reviewers, suggests that a third party could be behind this sophisticated fraud. We are still investigating what has happened here, and will provide more details when we know.
BioMed Central has a Research Integrity team, a group of experienced editors who work with the Editors-in-Chief, Editorial Board Members and in-house Editors to ensure that the peer review in BioMed Central journals is the best it can possibly be. We are pleased that our careful checks meant that we discovered this scam after only a handful of articles had been accepted and published, and that even where these suggested reviewers were used, only a few manuscripts were reviewed by two author-suggested peer reviewers: many had already been rejected or were still under review. However, it is unnerving to find this happened at all.
Of course, the whole process of scientific research and publication is based on trust, and journals accept what they are told by authors at face value unless there is reason to be suspicious. So journal editors assume that authors’ suggestions for peer reviewers are genuine, because authors have a genuine desire to have their work peer reviewed. Journals do not wish to operate on the premise that all authors are guilty until proven innocent and that all author-suggested reviewers must be treated with suspicion, but there will always be some individuals, motivated by greed, laziness, or the pressure to publish, who will find ways to play the system.
One obvious solution is for editors never to invite author-suggested reviewers. Following our discovery, we have immediately disabled this option across the majority of our journals while we investigate. We are concerned, however, that this solution is not ideal in the long term.
Assuming that the vast majority of authors are honest and straightforward, to ignore their suggestions risks slowing down the peer review process. It makes sense that authors who know their fields know who would be appropriate to peer review their work. To ignore their suggestions and attempt to find the same people independently makes little practical sense, especially in small fields. Furthermore, all journal editors are experiencing a massive increase in submissions, particularly from newer markets, not yet matched by an increase in reviewers. For this reason, the practice of allowing author-suggested reviewers, which pre-dates open access, may be needed more than ever.
So what can journal editors do? While a completely fictitious peer reviewer should be easy to spot, this is less easy where authors have fabricated an email address to represent a real person, and the task of authenticating every email address would add a considerable burden to the already overburdened journal editor. Even worse, peer reviewer circles, where a group of authors peer review each other’s work across several journals, are challenging for an individual journal editor to detect. An all-encompassing solution is going to be hard to find.
Technology might provide this solution, with automated methods to cross-check author-provided email addresses or to track patterns of peer review activity. Peer review training and a register of trained peer reviewers may help to authenticate peer reviewer suggestions. These, however, are not immediate solutions.
In the meantime, an alternative way to address the problem of fake peer reviewers might be for genuine peer reviewers to take measures to protect their identity, for example via initiatives like ORCID. This is an option we are pursuing, together with a review of our policy on author-suggested reviewers. As for the 50 manuscripts we have put on hold, we will be investigating to gain some insight into the source of the peer review suggestions and will share our findings.
In any case, authors who do successfully publish with fake peer reviews should not become complacent. Discovery, especially with open peer review, is always a possibility and journals should not hesitate to conduct post-publication peer review and retract if necessary whenever fake peer review is discovered.
This is not merely a result of absent or misguided ethical standards of individual researchers. The entire system is based on relative anonymity and lack of transparency. There is virtually no technical or ethical reason not to publish the review process alongside the final publication (e.g. as supplemental information). Naming the reviewers would avoid typical problems currently encountered with peer review, including sloppy and unprofessional reviews and, of course, outright fraud. It seems curious to me that we adhere to publication standards that were dictated by the inherent limitations of the pre-digital publishing process.