What is wrong with this picture? – reproducibility and realism

As part of BMC Biology’s ongoing editorial series on the importance of well-designed figures, Graham Bell discusses transparency and pragmatism in reproducibility.

The original idea of our series ‘What is wrong with this picture?’ was to do something helpful and constructive to meet the current concern about reproducibility of research results. Hence our collection of short examples of how to avoid inappropriate analysis, misleading presentation of data, and inadvertent crimes of omission.

But we also have to recognize that the ‘ideal’ ways to handle data can be complicated by the practicalities of the laboratory or the field, or thwarted by biological reality.

Let’s be realistic

Take, for example, our piece on biological vs technical replicates: a fairly simple picture that becomes more complicated when you consider the different ways that data might actually be collected.

In our example, we explained that taking a measurement 7 times on one mouse gives 7 technical replicates, while measuring 7 different knockout mice gives 7 biological replicates. But what about 7 different mice that all came from the same knockout mother? Technical or biological? We conferred with our Consultant Editors…

Tiered replicates and transparency

…and our understanding is that there can be different ‘tiers’ of biological replication. The mice from one knockout mother would count as biological replicates – they represent biological variation rather than technical variation – but at a lower tier, since they come from the same mother and are not ‘as independent’.
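For readers who want to see how a ‘lower tier’ of replication might be handled in an analysis, here is a minimal sketch in Python using statsmodels, in which littermates are grouped under a random effect for their mother rather than treated as fully independent. The data and column names (measurement, genotype, mother) are hypothetical, purely for illustration – this is one way to model nested replicates, not a prescription.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: pups nested within mothers (littermates share a mother).
    data = pd.DataFrame({
        "measurement": [4.1, 4.3, 4.0, 5.1, 5.4, 5.0, 4.2, 4.4, 5.2, 5.3],
        "genotype":    ["wt"]*3 + ["ko"]*3 + ["wt"]*2 + ["ko"]*2,
        "mother":      ["m1"]*3 + ["m2"]*3 + ["m3"]*2 + ["m4"]*2,
    })

    # Fixed effect of genotype; random intercept for each mother, so that
    # littermates are modelled as correlated rather than fully independent.
    model = smf.mixedlm("measurement ~ genotype", data, groups=data["mother"])
    result = model.fit()
    print(result.summary())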

It can be even trickier with cell cultures – where all cell lines ultimately derive from the same source – and a useful primer can be found on LabStats.

There are also practical constraints on experiments. If an experiment measures 3 biological replicates, but 2 of these were measured on one day and 1 on the next, how should that be handled in the analysis? In this sort of situation – best avoided, but realistically inevitable – the replicates are still biological, and the right thing is for the author to explain exactly how the experiment was performed and analysed. Then readers can understand what was done and judge for themselves whether it seems reasonable. Perfection may not be achievable, but transparency is.
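One way to carry that transparency into the analysis itself is to record the measurement day and include it as a covariate, so any systematic day-to-day shift is estimated rather than hidden. A hedged sketch, again with made-up data and variable names:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: two genotypes measured across two days.
    data = pd.DataFrame({
        "value":    [4.1, 3.9, 5.2, 5.5, 4.0, 5.3],
        "genotype": ["wt", "wt", "ko", "ko", "wt", "ko"],
        "day":      [1, 1, 1, 1, 2, 2],
    })

    # C(day) enters the measurement day as a categorical batch factor.
    fit = smf.ols("value ~ genotype + C(day)", data).fit()
    print(fit.summary())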

Who needs statistics anyway?

The question of what is realistic or necessary is most squarely confronted in the issue of biological vs statistical significance.

Sometimes, showing qualitative data like immunofluorescence without quantification can be a problem.

But equally, it may not be necessary to quantify absolutely everything just to be able to point to an asterisk marking a statistical difference, if the difference is unquestionably clear – and if it is not, statistical significance may not mean much. A highly respected scientist is said to have once remarked to a student about some graphs, “If I can’t see the difference from the door, there isn’t one.”

When nothing is wrong with the picture – as such

And then, the picture may not be the problem.

The analysis or representation may be perfectly sound in itself, yet fail to reveal a flaw in the underlying methodology. Once the experiment is done, it may be too late to apply an appropriate statistical test. As the great Ronald Fisher remarked, “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.”

One may repeat an experiment a few times, obtain a p value below 0.05, and then – having hit the magic number – not do it again. That is the wrong way round. The right way is to decide at the outset that n will be (say) 50, and accept whatever significance that delivers. A recent paper in BMC Research Notes reported that the distribution of p values in the medical literature is heavily skewed, with many more values reported immediately below 0.05 than immediately above it, suggesting a problem at some point in the research process.
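To see why ‘stop as soon as you hit 0.05’ is the wrong way round, a small simulation (my own illustrative sketch, not taken from the paper) shows how testing after every new sample inflates the false-positive rate even when there is no true difference at all:

    import numpy as np
    from scipy import stats

    # Both groups are drawn from the same distribution, so every
    # "significant" result here is a false positive.
    rng = np.random.default_rng(0)
    false_positives = 0
    runs = 2000

    for _ in range(runs):
        a = list(rng.normal(size=5))
        b = list(rng.normal(size=5))
        # Keep adding samples and re-testing until p < 0.05 or n = 50.
        while len(a) < 50:
            if stats.ttest_ind(a, b).pvalue < 0.05:
                false_positives += 1
                break
            a.append(rng.normal())
            b.append(rng.normal())

    # The nominal rate is 5%; peeking after every sample pushes it far higher.
    print(f"False-positive rate: {false_positives / runs:.1%}")

Fixing n in advance, and testing once, keeps the error rate at its nominal level; stopping on the first significant result does not.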

Blind alleys and covered tracks

A bar chart or pretty confocal image is a mere representation of the underlying data and may not quite reflect the messiness of biology and the inevitable imperfections of real lab work performed by real people. We can’t have perfection – but conscientious transparency in reporting research allows people to decide for themselves the strength of the data, and to build confidently on reported investigations.

In this connection, I can’t resist quoting the opening lines of Richard Feynman’s Nobel lecture from 1965: “We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover all the tracks, to not worry about the blind alleys or to describe how you had the wrong idea first, and so on. So there isn’t any place to publish, in a dignified manner, what you actually did in order to get to do the work.”

Graham Bell

Senior Editor at BMC Biology
Graham received a PhD in Developmental Biology from UCL/Cancer Research UK in 2014. He joined BMC in 2015 to become a member of BMC Biology's editorial team, where he is a Senior Editor.
