The recently published trial by de Kock, Noben, and colleagues, from the Netherlands, is a good example of something we often see in family medicine: a reasonably sized trial of a plausible, complex intervention that produces a negative result. In this case, the paper reports a trial of teaching additional skills to family physicians. The aim of these additional skills was to support people who were currently not working because of poor health to return to work. We know that, in general, being in work is good for well-being (and wealth), so helping people return to work is a worthwhile aim. We also know that family physicians play a role in maintaining patients’ sickness absence, either directly through certification or indirectly through the advice they give for self-management. It is therefore plausible that an educational intervention for family physicians should have an effect. However, the trial found none.
So what went wrong?
The study was certainly well designed: it was a cluster randomized trial (280 patients from 26 practices) and it was conducted thoroughly. The investigators recognized that the trial might not be big enough to show a difference in hard outcomes such as time off work, so they also used an intermediate outcome measure that was closer to the intervention. Specifically, they used a work-related measure of self-efficacy that correlates well with actual return to work. This meant their findings should have been less susceptible to individuals’ circumstances, such as their personal and occupational situation. But there was no effect on either outcome. And perhaps that is not surprising, because previous studies had suggested that patient factors affect sickness certification more than practitioner factors, and that collaboration between the general practitioner and the occupational physician had no effect.
What does this study tell us?
I suggest that this study teaches us two things. First, it reminds us that we need to focus less on “complex interventions” and more on the “complex systems” within which these interventions happen. In their discussion, the authors fall into the trap of thinking “if only we had a better intervention” and suggest that better-tailored interventions might work. This is an appropriate way of thinking about relatively simple interventions (“if only we knew which drug to use for which patient…”), but with complex interventions it may miss the point.
That is because complex interventions are mostly interventions in complex systems, not just in isolated individuals. And complex systems, such as patients’ networks of health, illness, home, and work, have the property of absorbing (or buffering) most things that might change them. This doesn’t happen all the time; indeed, sometimes complex systems change dramatically in response to a single input, but usually they simply absorb it. The result is that well-designed complex interventions frequently fail to show the neat, statistically significant effect that we expect. That is not because they are bad interventions, but because they are trying to change one aspect of something (or someone) within a much larger network of things or people. And the effect of that complex network, or system, is usually to absorb the single change.
Second, this study reminds us that we need to share, internationally, what works and what doesn’t work in family medicine. Politicians, commentators, and think tanks across the world will keep coming up with ideas for making things better through family medicine. Not all of these ideas will be as smart as they first seem, and many will repeat the mistaken assumptions of earlier initiatives elsewhere. Papers like this give us the opportunity to share knowledge between healthcare systems of what doesn’t work, which is just as important as sharing knowledge of what does.