Exploring ethnic differences in medical exam outcomes

It has been observed in the UK and internationally that Asian medical students receive, on average, lower examination scores than their white counterparts. But what are the reasons behind this phenomenon known as differential attainment? New research published today in BMC Medicine explores whether examiner bias might be the cause.

Medical students from Black and Minority Ethnic (BME) backgrounds have slightly lower exam scores, on average, compared to white medics. This differential attainment is unfortunately very common in medical education: it appears in both undergraduate and postgraduate exams, in written and clinical exams, and in many countries around the world.

To the best of our knowledge, it was first described in the UK in 1994, at the University of Manchester’s Medical School: 10 students failed their final clinical exams. All were male, all had Asian surnames, and all had passed their written exams. This led to the contention that these were not “weak students” but had suffered discrimination.

This claim was difficult to prove or disprove, but Manchester took pioneering steps by raising the issue nationally, and by instituting measures to reduce the potential for examiner bias.

A 2011 meta-analysis confirmed that differential attainment exists across a number of contexts, and researchers tried to find out why. Mostly they used observational techniques (i.e. studies using existing exam data) to see whether differential attainment in clinical examinations persisted after taking other measures of students’ performance into account, such as their A-level results and written exam scores.

One study looked at an exam where candidates were examined by two examiners (sometimes of different ethnicities) at the same time. These pairs of examiners gave the same scores regardless of whether those candidates were white or BME.

It is unclear whether differential attainment is due to examiner bias in clinical examinations

The results of these studies suggested examiners were rarely biased (if at all), but despite being carefully conducted, the methods used meant it still wasn’t possible to tell with certainty whether students from BME backgrounds were performing less well (for whatever reason) or examiners were biased. Furthermore, evidence from social psychology suggests racial bias is nearly ubiquitous and often operates beyond people’s conscious awareness. So it seemed plausible that, despite the evidence from these observational studies, examiners could be biased.

The first experimental study of clinical examiner bias

Our study is the first to use an experimental rather than an observational design to test the question of examiner bias directly.

We created versions of the same performances played by professional actors of different ethnicities – in this case White British and British-Asian. The performances were carefully scripted and filmed in small sections. The actors watched each other and carefully replicated the same intonation and non-verbal communication, as well as sticking to the same script.

We then recruited experienced examiners from medical schools around the country and randomly allocated them to judge the same performances but by “medical students” of different ethnicities. To avoid affecting their judgements, we deliberately did not tell the examiners what we were looking for.

As well as looking at the scores examiners gave “students”, we tested whether examiners subconsciously brought to mind a stereotype of “Asian-ness” using a psychological method called a lexical decision task. This task relies on the idea that humans can more rapidly make sense of words related to a concept that has already been brought to mind than words related to something they are not thinking about. So by responding to words associated with an Asian stereotype more quickly than to neutral words, examiners gave us evidence that an Asian stereotype was active in their minds. However, there was no evidence that this stereotype influenced the scores or feedback they gave students. Neither did it affect how they remembered the students’ performances at the end of the exam.
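The core comparison in a lexical decision task can be sketched in a few lines: a priming effect shows up as faster reaction times to concept-related words than to neutral ones. The sketch below uses entirely hypothetical word lists and reaction times for illustration; it is not the study’s data or analysis.

```python
# Minimal sketch of a lexical-decision comparison.
# All reaction times (ms) and word categories below are hypothetical,
# invented for illustration only.
from statistics import mean

# Reaction times for correct "is this a word?" responses.
# Faster responses to stereotype-related words would suggest the
# stereotype concept is currently active in the examiner's mind.
rt_stereotype = [512, 498, 530, 505, 520]  # words linked to the primed concept
rt_neutral = [548, 560, 542, 555, 551]     # unrelated control words

# A positive value means faster responses to stereotype-related words.
priming_effect = mean(rt_neutral) - mean(rt_stereotype)

print(f"Mean stereotype-word RT: {mean(rt_stereotype):.0f} ms")
print(f"Mean neutral-word RT:    {mean(rt_neutral):.0f} ms")
print(f"Priming effect:          {priming_effect:.0f} ms")
```

In the study, evidence that the stereotype was active came from exactly this kind of difference; the separate question was then whether that activation predicted the scores examiners gave, which it did not.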

No evidence of examiner bias – so what is the cause?

Our study provides evidence that examiners of undergraduate clinical examinations were not biased against Asian medical students. While a single study can only reassure us about the particular conditions under which it was conducted, taken alongside the previous observational studies, it provides much stronger evidence that differential attainment by students from BME backgrounds doesn’t arise due to examiner bias.

So why does differential attainment occur? After all, there is no doubting that it is real. Research by one of our authors (Dr Katherine Woolf) has investigated ways in which students’ ethnicity might influence their learning within clinical workplaces. She has found medics from BME backgrounds can face negative stereotyping in the clinical learning environment; their friendships with other medics differ from those of white medics; and if BME medics are perceived not to “fit the mould” they may experience reduced support and encouragement from seniors.

Next steps

Differential attainment by BME medical students and doctors is a significant problem, but examiner bias is unlikely to be a substantial cause. While ongoing vigilance is needed with exams, our efforts to enhance equity may be better focused on understanding how ethnicity impacts negatively on medical students’ and doctors’ experiences of learning, and on developing means to address this.
