While there is still progress to be made on this front, the development of clinical trial registries such as ClinicalTrials.gov and the EU Clinical Trials Register, together with policies mandating the use of these resources, has given the scientific community the tools needed to largely solve this problem.
Unfortunately, far less attention has been given to the question of how best to use research data after they have been made publicly available. PubMed currently includes more than 26 million citations. With such an enormous number of available articles to choose from, it is often easy to find individual studies which support directly contradictory views on a single clinical topic.
This is problematic because it allows both researchers and laypeople to make claims that are supported by individual published studies, but which do not reflect the total available scientific evidence on a given topic.
In particular, it has long been suspected that individual studies are more likely to be cited by other researchers when they are positive – i.e., when the new drug or device being studied looks beneficial – than when they are negative. This is called citation bias.
When citation bias occurs, positive results receive more and more emphasis, while negative or neutral results are downplayed. Over time, the results from positive studies may come to be regarded as truisms, even when other studies suggest less dramatic or even contradictory results, thereby exposing readers to a biased perspective.
Citation bias can also expose patients to interventions that are less effective than clinicians believe them to be, because clinicians' impressions are shaped by the disproportionate attention paid to positive studies.
Furthermore, biomedical research is an incremental process, in which prior work forms the foundation for future hypothesis generation and testing. Basing new hypotheses on biased assumptions threatens the validity of future studies, and risks wasting the limited resources available for biomedical research.
We recently analyzed trials of thrombolytic agents (i.e., "clot-busting" medications), which are used to treat patients with acute strokes, to look for evidence of citation bias.
Because this has been a topic of very active research over the last several decades, it is unusual in that there have been multiple large, well-run clinical trials which have shown contradictory results. This gave us the opportunity to compare citation patterns between the trials which were positive and those which were not.
We found that despite similarities in study size and quality, trials which showed that thrombolytic agents were beneficial were cited up to ten times as often as the trials which showed that they were harmful. This degree of citation bias is striking, and illustrates how dramatically a study's results can affect the attention it receives from other authors and researchers.
While our study was limited to a narrow collection of stroke trials, citation bias is likely a widespread issue. Finding a solution, however, is much more difficult than identifying the problem.
Our results are a reminder of the importance of relying, whenever possible, on high-quality systematic reviews that attempt to summarize all the available data on a particular topic, rather than on individual studies. Often, however, such reviews are not available.
Authors and peer reviewers, who are generally content experts, also have an obligation to ensure that the citations used in scientific manuscripts reflect a complete and balanced picture of the available evidence. More work is needed to identify ways to reduce the selective citation of medical research.