When you are a scientist, the worst thing that can happen professionally is the retraction of a paper you published. The amount of damage depends on the reason for the retraction: if it is the result of fraud, it means the end of your scientific career. The number of retractions per year is rising. Interestingly, high-impact journals such as Nature and Science have proportionally more retractions than less prestigious journals.
A widely publicized retraction may even boost the impact factor of a journal: the infamous Wakefield article in The Lancet has, according to Google Scholar, been cited more than 2300 times.
Authors run the risk of citing a retracted paper. This is particularly problematic when part of the paper’s argument rests on content that derives from the retracted paper. It may even result in the retraction of the citing paper.
My colleague Harm Nijveen (Wageningen University) and I wanted to know how real this danger is. How easy is it to spot whether an article is retracted?
Another, equally nagging question is: do retracted results propagate? In other words, even if you do not cite a retracted article yourself, might you still be using content that has been retracted?
For example, article A is retracted, article B cites article A, and article C, which includes the retracted result, cites B but not A. It is then difficult to find out that part of what C says is, in fact, retracted.
We investigated this scenario for one particular article published in Nature. Our results were published today in the new BioMed Central journal Research Integrity and Peer Review. Although (spoiler alert!) we did not find propagation beyond direct citations in our study, I would be very surprised if it did not happen at all.
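For readers curious how such indirect propagation could be traced automatically, here is a minimal sketch in Python. It assumes a hypothetical, toy citation graph (who cites whom); everything in it, including the article labels, is made up for illustration. It simply walks the citation links outward from a retracted article and records the citation distance, so that articles at distance two or more, like article C above, can be singled out for closer inspection.

```python
# Minimal sketch: find articles that may carry a retracted result, directly or
# indirectly. The citation graph below is hypothetical toy data: cited_by maps
# an article to the articles that cite it (A <- B <- C, as in the example above).
from collections import deque

cited_by = {
    "A": ["B"],  # the retracted article A is cited by B
    "B": ["C"],  # C cites B but never cites A itself
    "C": [],
}

def exposed_articles(retracted, cited_by):
    """Return every article reachable from `retracted` through citation links,
    with its citation distance: 1 means it cites the retracted paper directly,
    2 or more means indirect exposure, the hard-to-spot case."""
    distances = {}
    queue = deque([(retracted, 0)])
    while queue:
        article, distance = queue.popleft()
        for citing in cited_by.get(article, []):
            if citing not in distances:
                distances[citing] = distance + 1
                queue.append((citing, distance + 1))
    return distances

print(exposed_articles("A", cited_by))  # -> {'B': 1, 'C': 2}
```

In reality, the hard part is not the traversal but deciding whether the retracted content actually made it into the citing text, which is where text mining would have to come in.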
Because citing a retracted paper may invalidate an argument, it is of the utmost importance to keep track of which papers are retracted and why.
Literature search engines should ideally provide not only search results but also retraction information.
However, with the exception of PubMed, search engines still do not consistently indicate the retracted status of an article (for details, see our publication). And no search engine provides information about why an article has been retracted.
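As a small illustration of what retraction awareness can look like today, the sketch below queries PubMed through the public NCBI E-utilities efetch endpoint and checks whether "Retracted Publication" appears among an article's publication types. The PMID is a placeholder; error handling, API keys, and rate limiting are deliberately left out, so treat it as a sketch rather than a ready-to-use tool.

```python
# Sketch: ask PubMed, via the public NCBI E-utilities efetch endpoint, whether
# an article is flagged as a "Retracted Publication". The PMID below is a
# placeholder; substitute the article you actually want to check.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def is_flagged_retracted(pmid):
    """True if PubMed lists 'Retracted Publication' among the record's publication types."""
    query = urllib.parse.urlencode({"db": "pubmed", "id": pmid, "retmode": "xml"})
    with urllib.request.urlopen(f"{EFETCH}?{query}") as response:
        tree = ET.parse(response)
    publication_types = [pt.text for pt in tree.iter("PublicationType")]
    return "Retracted Publication" in publication_types

if __name__ == "__main__":
    print(is_flagged_retracted("12345678"))  # placeholder PMID
```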
As for the why: the retraction notices of academic journals are frequently noncommittal. In 2010, in an attempt to fill this gap, Ivan Oransky and Adam Marcus started the blog Retraction Watch, which does invaluable service not only by keeping track of retractions but also by trying to find out the reasons behind them.
In 2014, Oransky and Marcus received a grant to set up a database of retractions. Such a database is a good starting point for research into automated support for plagiarism discovery, retraction identification, and retraction monitoring.
In a paper that has not (yet) been peer-reviewed, Elisabeth Bik and co-authors employ image detection techniques to find a startling number of inappropriately duplicated images in biomedical publications. They estimate that something is wrong with the images in one out of every 25 publications.
High on my wish list is the addition of retraction marks in the bibliographies of published articles. Papers are digital these days. When one of the papers cited in a published article gets retracted, it is technically possible, and scientifically desirable, to add a mark to that reference. I can also imagine text mining techniques being used to monitor the propagation of retracted results through citation chains.
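Such a bibliography check would not need much more than a reliable list of retracted identifiers. The sketch below is a thought experiment: it assumes a hypothetical set of retracted DOIs (for instance exported from a retraction database) plus a reference list, both made up here, and marks every reference whose DOI appears in the set.

```python
# Sketch: add retraction marks to a reference list. Both the retracted-DOI set
# and the references are made-up examples; in practice the set could come from
# a curated retraction database export.
retracted_dois = {
    "10.1000/retracted.example",  # hypothetical DOI of a retracted paper
}

references = [
    {"doi": "10.1000/retracted.example", "title": "A retracted study"},
    {"doi": "10.1000/sound.example", "title": "An unaffected study"},
]

def mark_retracted(references, retracted_dois):
    """Attach a 'retracted' flag to every reference whose DOI is in the retracted set."""
    return [
        {**ref, "retracted": ref["doi"].lower() in retracted_dois}
        for ref in references
    ]

for ref in mark_retracted(references, retracted_dois):
    flag = " [RETRACTED]" if ref["retracted"] else ""
    print(f"{ref['title']}{flag}")
```

The real challenge, of course, is keeping such a list complete and up to date, which is exactly why a curated retraction database matters.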
The phenomenon of retractions thus becomes a treasure trove of research opportunities.