Is science broken? The reproducibility crisis

In response to growing concerns about a ‘reproducibility crisis’ in science, UCL this week opened a discussion on the extent to which research practices are failing science and what we, as a community, can do to fix this.


Led by a talk on pre-registration from Professor Chris Chambers, an event at UCL earlier this week culminated in a lively panel discussion among academics in the fields of psychology and cognitive neuroscience. The question at hand: “Is science broken? If so, how can we fix it?”

The problem

Psychology and neuroscience are evolving at a rapid pace, with emerging technologies empowering the scientific community to finally address the impact of poor research practices. However, in recent decades, the reproducibility of a shocking number of studies has been called into question.

Chris Chambers, Professor of Psychology and Neuroscience at Cardiff University, blamed this problem on the pressure to publish ‘good’ results. Too often, the quality of science is measured by the perceived level of interest, novelty and impact of the results. This leads to a number of problems in the research process – publication bias; significance chasing; ‘HARKing’ (hypothesizing after the results are known); a lack of data sharing and replication; and low statistical power.

The crux of this problem, argued Dorothy Bishop, Professor of Developmental Neuropsychology at the University of Oxford, is using methods that presuppose hypothesis testing while conducting exploratory research. This leads to a strong bias towards false positives. Reflecting on her experience of working in the field of ERP/EEG, Bishop maintained that there was no agreement on which analyses to conduct and too much flexibility in the process. Anyone who hypothesizes after looking at the data can find something and label it an ‘effect’.
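To see why that flexibility matters, here is a minimal simulation sketch (our illustration, not part of the talk). It assumes an analyst with ten candidate outcome measures, two groups drawn from the same distribution (so there is no true effect anywhere), and the freedom to pick whichever measure looks ‘significant’ after inspecting the data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_simulations = 2000   # simulated "studies" with no true effect
n_per_group = 20       # participants per group (illustrative choice)
n_outcomes = 10        # candidate measures the analyst can choose from

false_positive_studies = 0
for _ in range(n_simulations):
    # Both groups come from the *same* distribution: any "effect" is noise
    group_a = rng.normal(size=(n_outcomes, n_per_group))
    group_b = rng.normal(size=(n_outcomes, n_per_group))
    # Test every candidate outcome and keep the best-looking one (HARKing)
    p_values = [stats.ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b)]
    if min(p_values) < 0.05:
        false_positive_studies += 1

print(f"Studies reporting a 'significant effect': "
      f"{false_positive_studies / n_simulations:.0%}")
# With 10 free choices of outcome, roughly 1 - 0.95**10 ≈ 40% of these
# null studies can report p < .05 if the hypothesis is chosen after
# seeing the data.
```

Under these assumptions, around 40% of studies with no real effect can still report a ‘significant’ finding, which is exactly the inflation of false positives that Bishop described.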

Similarly, Neuroskeptic, a neuroscience, psychology and psychiatry researcher and blogger, reflected on entering the lab and becoming disillusioned by poor practices. He referred to a “tacit decision” among scientists to accept methods that they would not dream of teaching to undergraduates.

Sophie Scott, Professor of Cognitive Neuroscience at University College London, hauled the discussion back to the ‘bigger picture’. Rather than becoming too concerned with single papers and processes, she argued, we need greater emphasis on where our research will be in 100 years, and how our unconscious biases are really influencing the science we do. We need to “dig where we stand” and focus on obtaining meaningful results that have longevity. From her point of view, the problem is largely a cultural one, which she argued needs to be addressed with incentives.

It wasn’t all doom and gloom, however. Offering a slightly different perspective, Sam Schwarzkopf, Research Fellow in Experimental Psychology at UCL, suggested that science is not broken and is actually working better than ever before. Science is constantly evolving and has become increasingly open and transparent. Although science is by no means perfect, Schwarzkopf believes that we should be asking ourselves a different question: “How do we make science better?”

So how can we fix it?

More emphasis on the process, less on the results

Chambers, as the Registered Reports Editor of Cortex, discussed how pre-registration can provide a partial solution to the problem.

Registered reports are built on the philosophy of hypothesis testing, looking at the question being asked and the quality of the method itself, not the results it generates. The intention is to move away from considering serendipitous results to be synonymous with quality.

He explained that Registered Reports work in two stages. In Stage 1, the authors submit an introduction, proposed methods and a detailed analysis plan, along with pilot data if possible. This is peer reviewed and, provided it meets the requirements, the journal offers in-principle acceptance (IPA), regardless of the study outcome. After completing the research, the authors submit a Stage 2 manuscript, now including the results and discussion, which is then published on the proviso that they have followed the pre-approved protocol.

This led to an interesting discussion about the merits and practicalities of Registered Reports, and how the concept has long been embraced by clinical researchers (e.g. BioMed Central’s study protocols and the ISRCTN registry).

However, Registered Reports are not the only solution. Indeed, Chambers highlighted that there are many other outlets that care less about the results and place more emphasis on the methods (with BMC Psychology and PLOS ONE given special mention), as well as various initiatives for improving the reporting of methods (e.g. the NIH’s principles and guidelines for reporting preclinical research).

Sam Schwarzkopf suggested that Registered Reports may be too focused on treating the symptoms rather than the root cause, and may risk denigrating exploratory research. He proposed that equal weight should be placed on exploration and replication.

Change can only happen with the right incentives

Although pre-registration may help to incentivize transparent practices across scientific disciplines, more needs to be done to change the way the quality of science (and scientists) is measured. Neuroskeptic suggested that the ranking of a journal could indicate the quality of an author’s research, provided that the study has been pre-registered. In contrast, Chambers commented that we need to move away from measures such as citation counts and consider alternative metrics, such as Oransky’s ‘R-index’.

Reviewers can also contribute substantially to these efforts by encouraging open practices (e.g. the Agenda for Open Research), and the publication of reviewers’ reports was discussed as a way of recognizing this contribution.

Collaboration, not competition

The lack of data sharing in science, referred to by Sophie Scott as a catastrophic problem, seems to be, in part, due to a lack of motivation to collaborate. Although there are legitimate ethical concerns regarding the sharing of some clinical data, as noted by Bishop, a cultural shift is needed.

Collaboration, particularly with respect to replication and data sharing, is already commonplace in the physical sciences. The fields of psychology and neuroscience need to follow suit, argued Schwarzkopf.

Whether you believe that science is in need of fixing – or that it’s working better than ever – it’s clear that change is coming. As a community, we need to work together, with the ultimate goal of increasing openness, transparency and reproducibility in science.

Liz Bal

Associate Publisher at BioMed Central
Liz completed an MSci in Biology at Imperial College London, before joining BioMed Central in 2010. Now, as an Associate Publisher in the Biological Sciences team, she is responsible for the development of a portfolio of neuroscience, biotechnology and cell biology journals.
