The media is full of research about how our brains work, but how can we know whether to believe the spin that the news puts on neuroscience findings? In this guest post, we asked Professor Gina Rippon, a neuroscientist from Aston University to give us her take on how to sort the neurotrash from the neurotreasure. Her talk for ScienceGrrl at the Women of the World festival at Southbank Centre explored the media coverage of research into the (lack of) differences between men and women’s brains.
The development of brain imaging techniques offers wonderful opportunities for cognitive neuroscientists to really get to grips with what patterns of brain activity they observe when subjects perform the behaviours they are interested in. And their findings can now be communicated in seductively beautiful images, with intricate patterns of activation, colour-coded and superimposed on cross-sections of the brain.
There has been an explosion of public interest in these findings, and ‘neuroscientists say’ or ‘neuroscientists have found’ type articles can be found almost every week in the press. Flattering as this is to a neuroscientist like me, I am also concerned about the misunderstanding and misrepresentation evident in many of these articles. This might not be of such concern if we are reading about (say) the neuroscience of Bob Dylan’s genius, but alarm bells should ring if neuroscience findings are being used to support stereotypes or prop up prejudice.
1. Does the journalist suggest that we are looking at real-time ‘photographs’ of actual brain activity?
The wonderfully colour-coded maps that illustrate populist neuroscience articles give the impression of being real-time representations of brain activity, as if brain scanners are like high-resolution cameras or video-recorders. But brain images are actually the end product of a great deal of image manipulation, involving, for example, setting thresholds, ‘smoothing and normalising’ differences between individuals, complex statistical comparisons (not to mention the ‘settings’ choices that were made when the data were actually being collected). These would always be accurately and openly reported when published in a scientific journal, but rarely make it into the populist press.
2. Is there a ‘hint’ of neophrenology?
Is there a suggestion that activity in a very specific area of the brain is associated with a very specific (and possibly unique) function (e.g. tool use)? The brain is not organised in this way and stories couched in these terms should be viewed with suspicion.
3. Is there a ‘whiff’ of ‘biology is destiny’, and no acknowledgment that our brains are plastic?
Almost all brain structures and functions can be altered throughout life by experiences ranging from simple exposure to social attitudes to much more extreme training experience such as learning to juggle. Reports should acknowledge that social, educational and cultural factors as well as biology could contribute to any reported differences.
4. Who is being studied and what do we know about them?
This relates to the previous point. Given what we know about the impact of a wide range of quite measurable factors such as socioeconomic status and educational experiences on brain development, check whether these factors as well as sex and age have been included in selecting participants or controlled for in analysing the data.
5. How big are the groups being studied?
Sample size is an important issue in functional neuroimaging studies. Too few participants, and the reliability of the results should be questioned. Some experts say there should be between 15 and 30 in any group being studied.
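The link between group size and reliability can be sketched with a quick power calculation. This is an illustration only, not part of the article: it uses a normal approximation for a two-sample comparison, and the function name and the assumed 'medium' effect size of 0.5 are my own choices.

```python
import math

def approx_power(n_per_group, effect_size, alpha_z=1.96):
    """Approximate power of a two-sample comparison (normal approximation).

    n_per_group: participants in each group
    effect_size: standardised difference between group means (Cohen's d)
    alpha_z:     critical z for a two-sided test at alpha = 0.05
    """
    # Standard normal CDF via the error function
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    return phi(noncentrality - alpha_z)

# With an assumed 'medium' effect (d = 0.5), 15 per group detects the
# difference well under half the time; roughly 64 per group is needed
# for the conventional 80% power.
for n in (15, 30, 64):
    print(n, round(approx_power(n, 0.5), 2))
```

The point is not the exact numbers but the shape of the curve: groups at the small end of that 15-30 range are likely to miss real differences and, worse, to exaggerate the size of any differences they do find.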
6. How big are their heads?
It is probably not surprising that head size makes a difference to brain size, but it may be surprising that many of the earlier brain imaging studies did not allow for this. If group differences in particular aspects of the brain are reported, they should only be based on comparisons of relative brain size, i.e. brain size relative to the head size of each participant.
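As a minimal sketch of the kind of correction involved: comparing brain volume as a proportion of total intracranial (head) volume rather than as a raw figure. The volumes and field names below are invented for illustration, not taken from any study.

```python
# Hypothetical raw volumes in cm^3 (invented for illustration)
participants = [
    {"brain": 1260, "intracranial": 1440},
    {"brain": 1130, "intracranial": 1290},
]

for p in participants:
    # Relative brain size: brain volume scaled by head (intracranial) volume
    p["relative"] = p["brain"] / p["intracranial"]

# The raw brain volumes differ by 130 cm^3, but the relative sizes are
# nearly identical -- the apparent 'difference' largely reflects head size.
print([round(p["relative"], 3) for p in participants])
```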
7. How big is the difference being reported?
Almost all empirical studies report differences between groups in terms of the probability that any difference is not just due to chance (a statistically significant difference). But what matters more is the effect size, i.e. how big the difference is: are the groups only just distinguishable, or is the difference so great that there is virtually no overlap between their scores? Many studies do not report this and, even when they do, it rarely makes it into the media's reporting.
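The distinction between 'statistically significant' and 'big' can be made concrete with a standardised effect size (Cohen's d) and the overlap it implies between two normal distributions. The numbers below are illustrative, not drawn from any particular study.

```python
import math

def cohens_d(mean1, mean2, pooled_sd):
    """Effect size: standardised difference between two group means."""
    return (mean1 - mean2) / pooled_sd

def overlap(d):
    """Overlapping coefficient of two equal-variance normal
    distributions whose means are d standard deviations apart."""
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return 2 * phi(-abs(d) / 2)

# A 'statistically significant' difference can still mean the groups are
# barely distinguishable: a small effect (d = 0.2) leaves about 92% of
# the two distributions overlapping, whereas a very large effect
# (d = 2.0) leaves only about 32% overlap.
print(round(overlap(0.2), 2))
print(round(overlap(2.0), 2))
```

So when a headline announces that one group's brains 'differ' from another's, the question to ask is where on this scale the difference sits; with large samples, even a near-total overlap can reach statistical significance.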
8a. Are findings interpreted in terms of what was being measured in the study?
This would seem obvious but sometimes researchers may rely on assumptions about the behavioural differences between the groups they are studying without actually measuring these differences as part of the study. For example, male-female differences in brain structure may be explained in terms of ‘known’ differences in visuo-spatial or verbal ability.
8b. If behavioural differences are being assumed, are they real?
Systematic reviews and meta-analyses of many years of research on stereotypes such as male-female differences in cognitive skills have revealed that they are small or non-existent.
9. Were any differences found predicted in advance or was there a bit of a fishing expedition?
So-called post-hoc findings need to be acknowledged as such. If the differences originally looked for were not found and what is being reported is actually a 'surprise' finding, then we should be told. And was it made clear that there were null findings, that 'expected' differences were not found? Reporting these prevents the so-called 'publication bias', where a lack of difference doesn't make it to press.
10. Has another version of this study been done in another lab?
Replication is very important in science. Given the cost in time and money of functional neuroimaging, however, replication is not common practice. If what is being reported is a 'one off' study, then this should be made clear.