One well-drawn graph or diagram can speak volumes about the way an experiment was carried out, how the data were analyzed, or sample identity and size. A poorly thought-out figure, on the other hand, can confuse and mislead – and, more fundamentally, can expose poor study design. Either way, poor figures can present obstacles to their own authors in publishing their work (a point that has been eloquently made by BMC Biology's Editorial Board member Leslie Vosshall).
In our new editorial series What is wrong with this picture?, we explore some common ways that figures can mislead or obscure information – including problems with axis scaling, color schemes, and statistical tests. The series was inspired in part by issues that arose during the development of a reporting checklist that, from this month, authors submitting to BMC Biology will be asked to fill in. The checklist responds to growing awareness of the need for transparency and greater reproducibility in research publications, and the right illustrations can be half the battle.
In putting the series together, we have benefited greatly from the input of our guest editors, John Brookfield and Shinichi Nakagawa, drawing on their own interests and expertise as researchers – and on their bugbears as reviewers. (You can see what other members of our Editorial Board have said about thorny statistical and other issues by reading our blog revealing their views on our reporting checklist.)
From my own experience as a Ph.D. researcher, I know it can be challenging to produce compelling figures, particularly in the face of ever more complex data generated in cutting-edge research, and especially as most undergraduate and postgraduate life sciences teaching includes little formal instruction on producing figures and manuscripts for publication. It seems that as new experimental techniques and technologies develop, ways to present the information they produce don’t always keep pace.
Or, conversely and more insidiously, software packages allow the unwary to generate figures without thinking about their design – and so to mislead inadvertently.
In this series, we suggest some specific solutions to the problems we present, which we hope will be particularly useful as a guide for early-stage researchers. Giving some thought to figures in the early stages of a new experiment may even expose missing control groups or problems with statistical analysis, and lead to modifications of the investigation itself.
We welcome suggestions and comments on the series, and hope to provoke debate and discussion. A picture may be worth a thousand words – but does it always tell the right story?