Our new editorial series ‘What is wrong with this picture?’ has attracted some gratifying attention on social media since it was launched in August, as well as in Mozilla Science’s Week in review and Weekend reads on Retraction Watch.
There has also been feedback from readers suggesting, in a way, that we’re the wrong people to be delivering its message.
The series, aimed at authors, illustrates some common ways in which scientific figures can mislead or obscure information, and was part of our recent efforts on reproducibility, accompanying the launch of a pilot ‘reproducibility checklist’ that authors are now required to complete before submitting a manuscript.
But the most important point to emerge from the feedback from readers so far (see samples below) is that the lessons it teaches ought to be taught, and learned, long before people are reading editorial comments in journals.
“Congratulations on this little series of misleading presentations! This was overdue and is needed not only for scientists but also for students who frequently do not learn how to critically read papers. I posted this on several scientific Facebook sites and will use it for teaching”. – Wolfgang Nellen
Some readers, of course, already teach students to appraise published figures critically:
“I teach a module on ‘data interpretation’ where I use images of data from publications to help our students become more aware of understanding how to interpret data that they see. The ‘unspoken agenda’ is the assumption that learning to interpret data from graphs and figures will lead to a better presentation of the students own scientific data. Part of the module is for students to interpret each other’s data.” – Dean Goldring
And others pointed out that sometimes figures aren’t misleading, but instead are simply uninterpretable:
“Worse than being ineffective, some diagrams are ambiguous, so it’s impossible to be sure of their content. Here is my paper about ambiguity in diagrams.” – Bob Futrelle
While our articles on problems with bar charts and processed data attracted the most debate and discussion on Twitter, ‘Over the Rainbow’, which explored the pitfalls of representing continuous data with a rainbow color scale, provoked more detailed comments by email.
One reader described a way of working around his own color blindness in fluorescence imaging that could also benefit non-color-blind researchers presenting their data:
“I have found that black and white images for each color channel [in immunofluorescence images] contrast better than colored images. I have also realized that many digital projectors are not well calibrated and often project poor images of data where the color is important for understanding. … [Black and white] images, in general, are projected in a manner where the calibration of the machine no longer influences the interpretation of the data” – Dean Goldring
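The suggestion above can be sketched in a few lines. The following is a minimal illustration, not the reader’s actual workflow: it assumes NumPy arrays with intensities scaled to [0, 1], and the function names are hypothetical. It splits a multi-channel image into per-channel grayscale panels, and, as a related colorblind-safe option for merged views, re-maps a red/green overlay to magenta/green.

```python
import numpy as np

def channels_as_grayscale(stack):
    """Split an H x W x C fluorescence image into one 2-D grayscale
    array per channel, so each channel can be shown on its own
    without relying on color reproduction."""
    return [stack[..., c] for c in range(stack.shape[-1])]

def magenta_green_merge(red_ch, green_ch):
    """Build an RGB overlay using magenta (red + blue) in place of
    pure red; magenta/green pairs remain distinguishable to viewers
    with red-green color blindness. Inputs are 2-D arrays in [0, 1]."""
    rgb = np.zeros(red_ch.shape + (3,))
    rgb[..., 0] = red_ch    # red component of magenta
    rgb[..., 2] = red_ch    # blue component of magenta
    rgb[..., 1] = green_ch  # green channel unchanged
    return rgb
```

Either output can then be passed to any image display routine; because the per-channel panels are plain grayscale, a projector’s color calibration no longer affects how they are read.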
Ideas for the future
It wasn’t all good. We confused at least one reader with our liberal interpretation of the designation ‘Abstract’: the ‘abstracts’ accompanying each article are really introductions, and certainly not summaries; the designation is an artefact of an unfortunate inflexibility in the format of our articles. As that reader put it: “Sorry. My mistake. I mistook a bad abstract for the wrong abstract”.
The irony of misleading readers in a series on misleading figures is not lost on us, and we will make adjustments for future contributions.
As we had hoped, we have also received some good ideas for future articles in this series. Further suggestions are welcome, and we hope readers will not hesitate to criticize as well, where criticism is due.