What is reporting bias?
It is often forgotten that science is made up of two activities. Scientific education tends to focus on the first – the background and craft needed to conduct good experiments. But a second, equally important, aspect of science is being able to accurately communicate the results of those experiments to others.
Reporting bias occurs when the way experiments are reported distorts the conclusions drawn from them. Identifying reporting bias can be difficult because there is always a subjective element to how a scientist writes about their results.
But, despite this overall subjectivity, distinct sub-types of reporting bias can still be identified, such as ‘publication bias’ – where only a limited subset of results is published – and ‘outcome reporting bias’ – where only specific outcomes are reported.
Studies have shown that the majority of reporting bias is caused by researchers simply not having enough time to write up their papers, rather than by malicious behavior per se.
What problems does this cause?
This depends upon the audience. Perhaps the most serious problem is when reporting bias occurs in the peer reviewed scientific literature. This type of literature forms the main knowledge base that other scientists use to design their next experiments.
If the literature is incorrect or misleading, a significant amount of time and money can be wasted trying to build upon an inaccurate understanding of previous work. Critically, clinicians also use the scientific literature to inform decisions about the best treatment for patients.
Here, if the scientific literature is misleading, patients can be prescribed drugs or procedures that may not be appropriate. This causes unnecessary suffering and, in some documented cases, unnecessary deaths.
Furthermore, health systems can be misled into wasting large amounts of precious budgets on drugs or procedures that simply do not work. The problem of reporting bias thus extends far beyond just the scientific community.
What were your main research findings and how are NHS research ethics committees in a good position to detect reporting bias?
As chair of a NHS research ethics committee (REC) I was stung by criticism that we were being remiss by not following up how research had been published. RECs are key gatekeepers that ensure all research conducted using NHS patients or resources is carried out to high ethical standards.
With the help of a research assistant, I decided to look at projects given a favorable opinion by my committee that had notified us of completion during the 2010 and 2011 calendar years.
We chose these dates to give the scientists a couple of years to write up and publish their work. Using just the normal literature search engines available to all scientists, we first looked to see how many research papers had been published, before comparing the outcomes that the researchers told us would be measured in their original ethics application with the outcomes reported in the final papers.
We found that only 32% of the completed projects had published their results, and for the 28 projects where we could obtain the original REC application forms, 57% had outcome inconsistencies.
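The outcome comparison described above amounts to checking, for each project, whether the set of outcomes declared in the ethics application matches the set reported in the published paper. A minimal sketch of that check is below; the function name and the outcome names are invented for illustration, not taken from the actual study.

```python
# Hypothetical sketch of an outcome-consistency check.
# The outcome names below are invented examples, not real study data.

def compare_outcomes(declared, reported):
    """Compare outcomes declared in the ethics application
    with outcomes reported in the final paper."""
    declared, reported = set(declared), set(reported)
    return {
        # declared in the application but never published
        "unreported": sorted(declared - reported),
        # published but never declared in the application
        "undeclared": sorted(reported - declared),
        # consistent only if the two sets match exactly
        "consistent": declared == reported,
    }

result = compare_outcomes(
    declared=["pain score", "length of stay", "adverse events"],
    reported=["pain score", "quality of life"],
)
print(result["consistent"])  # False: this project has outcome inconsistencies
print(result["unreported"])  # ['adverse events', 'length of stay']
```

A project counts as having outcome inconsistencies whenever either list is non-empty, i.e. whenever `consistent` is false.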
How could this issue be improved in the future?
The numerical results we found were not a particular surprise as they were consistent with previous studies. However, the unique aspect of this study was our ability to conduct it from within the NHS ethics system.
The UK is relatively unusual in having a national health service and, with it, a publicly funded ethics system. Indeed, it is a legal requirement for the majority of NHS research that protocols be submitted to ethics committees, and as such the NHS’s Health Research Authority (HRA) holds a comprehensive (albeit confidential) archive of the research being conducted, including the scientific protocols.
As chair of an ethics committee I was able to access this archive for audit purposes, and thus overcome a major hurdle encountered by other groups that try to monitor reporting bias (such as the Cochrane Collaboration), which often do not have access to protocols and in many cases do not even know whether studies were conducted in the first place.
This work has therefore shown that the HRA is in an excellent position to monitor reporting bias as part of an internal audit, and potentially place pressure on researchers to make sure they accurately publish their results.
The next step is to consider whether ethics committees could (or should) be used to monitor this important issue, and perhaps then act to ensure that all research that they review gets published accurately.