I’d like to use a medical analogy. Imagine there is a test for diagnosing a serious disease. The test is 90% accurate, which means it misses 10 in 100 cases. It’s therefore a very useful test, but not a perfect one.
Doctors using the test are aware that it misses about 10% of cases. People who are diagnosed and treated promptly can be cured, but those whose diagnosis is missed die an agonizing death.
In this situation there is little point in publishing case studies describing these agonizing deaths, since they add nothing to anybody’s knowledge or wellbeing. However, if somebody notices that the test accuracy is only 85% in some centers but 95% in others, and investigates the reasons for this difference, that might make an interesting publication. Or if somebody develops a new test with greater accuracy, that would also be worth publishing.
Presenting all sides of the story
It is well known that peer review is not perfect. This fact is not only recognized by authors but also acknowledged by editors and publishers. Therefore, we see no point in publishing individual cases in which authors feel that peer review has failed or let them down.
Such descriptions do not help us improve peer review. We also fear that such cases would only ever describe situations in which authors had their work rejected by journals (analogous to the false-negative results from the diagnostic test) but would rarely describe situations in which flawed work was published but should not have been (in other words, the ‘false-positives’). Thus, these anecdotes would not only be unhelpful but would also be a biased selection.
Lastly, such case descriptions nearly always present only one side of the story, so the reviewers or editors who are ‘accused’ get no chance to explain their actions, which seems unfair.
Misconduct and poor reporting
As well as declining specific case studies about peer review, RIPR will not generally consider specific cases relating to research integrity (or misconduct) or poor reporting. For suspected misconduct, we do not believe that a journal is the right place to make accusations, and the problem of giving a proper ‘right to reply’ is of even greater concern than in cases about peer review and journal decisions.
The only exception might be if an institution were to describe a specific misconduct investigation that used a novel technique, or produced especially beneficial or harmful results and therefore provided lessons that might apply to other investigations.
However, when we say we will consider research, we want to stress that we are happy to consider reports of small projects done at single journals or institutions. Research integrity, effective research reporting, publication ethics and peer review interact in such complex ways that even apparently small changes in process may have important effects.
For example, one journal used a randomized trial to show that the design of authorship forms can affect the truthfulness of researchers’ descriptions of their contributions to a publication. We encourage journals to make their policies ‘evidence-based’ by testing their effects.
Returning to the analogy of peer review as a diagnostic test, the good news is that, unlike the patients in the scenario described above, authors who feel they have been failed by peer review at one journal can always submit their work to another journal and get an independent ‘diagnosis’. A bad decision might sting, but it need not be fatal.