The problems with EBM are not dissimilar from those facing science as a whole. There is an overemphasis on results – not health outcomes, mind you, but the kind that makes for a catchy headline once written up in an academic journal. There is also the ever-present issue of generalizability.
In fact, there were those who proposed that eminence-based medicine is actually better for individual patient encounters. Still, while the problems that plague EBM were widely discussed, it is perhaps the proposed solutions that warrant special attention.
It just so happens that your friend here is only MOSTLY dead. There’s a big difference between mostly dead and all dead. Mostly dead is slightly alive.
Miracle Max, The Princess Bride
Richard Peto launched straight into this debate with an appeal for reliable evidence of moderate effects on mortality. There are no panaceas; instead it is up to the researcher to distinguish whether a new intervention is mostly useless, or all useless.
Taken together, these moderate effects can still have a profound impact on patients. A good example of this is breast cancer, where lots of small gains have resulted in a 50% reduction in mortality. Certainly nothing to sniff at.
Of course, if you are looking for moderate effects, bias must be negligible. Meta-analyses of small trials are often refuted by large randomized controlled trials (RCTs), as was the case with magnesium for heart attacks, which Peto likened to ‘human sacrifice’. However, even large, streamlined RCTs cannot entirely overcome bias.
Peto ended his talk with an appeal for clinicians to trust the relative risk of overall results, not a subset of them. If you interrogate the data hard enough, you can always find something; as Peto said “virtually all subgroup analyses are rubbish”. Otherwise we have some bad news for Libra and Gemini…
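Peto’s warning about subgroups is, at heart, a multiple-comparisons problem. A minimal simulation (illustrative only, not from the talk; all numbers are assumptions) shows how a trial of a drug with no effect whatsoever can still throw up “significant” subgroup findings purely by chance:

```python
# Illustrative simulation: a trial with NO true treatment effect,
# sliced into many subgroups, still yields "significant" results.
import math
import random

random.seed(42)


def two_proportion_p(e1, n1, e2, n2):
    """Two-sided p-value for a difference in event rates (normal approximation)."""
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    # Two-sided p from the standard normal CDF, via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))


n_per_arm = 500      # patients per arm within each subgroup (assumed)
event_rate = 0.10    # identical in both arms: the drug does nothing
n_subgroups = 20     # age bands, sex, ... star signs

false_positives = 0
for _ in range(n_subgroups):
    treated = sum(random.random() < event_rate for _ in range(n_per_arm))
    control = sum(random.random() < event_rate for _ in range(n_per_arm))
    if two_proportion_p(treated, n_per_arm, control, n_per_arm) < 0.05:
        false_positives += 1

print(f"'Significant' subgroups out of {n_subgroups}: {false_positives}")
```

With 20 subgroups tested at the conventional 5% threshold, roughly one spurious “effect” is expected even when nothing is going on – which is exactly why the overall result, not a hand-picked subset, deserves our trust.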
Patrick Bossuyt then stepped up to the plate, asking where the true value of diagnostic test accuracy really lies. The field has come a long way from the early days of “judo with 2×2 tables”, but can we justify RCTs for diagnostics?
A diagnostic test confers no direct benefit; its effects come through the result and the management decisions that follow. The test itself can also have an impact on the patient, both physically and emotionally.
Using the example of amyotrophic lateral sclerosis (ALS), Bossuyt asked if we should, in good conscience, recommend genetic testing knowing there is no treatment.
Our interest in diagnostics is the potential impact on health. We need to move from an essentialist view, where Bayes’ theorem rules all, to a consequentialist view, where the value of a test is inherent in the associated health outcomes, and variability is of interest.
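The “essentialist” view can be sketched in a few lines: from sensitivity, specificity and prevalence, Bayes’ theorem gives a post-test probability – but it says nothing about whether that number changes management or outcomes. The figures below are illustrative assumptions, not from the talk:

```python
# Minimal sketch of the essentialist view of diagnostics:
# Bayes' theorem turns test accuracy into a post-test probability.

def post_test_probability(sensitivity, specificity, prevalence):
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)


# The same "accurate" test (95% sensitive, 90% specific, assumed values)
# behaves very differently depending on who is being tested:
for prevalence in (0.01, 0.10, 0.50):
    ppv = post_test_probability(0.95, 0.90, prevalence)
    print(f"prevalence {prevalence:.0%}: post-test probability {ppv:.0%}")
```

At 1% prevalence the post-test probability of disease is under 10% despite the test’s headline accuracy – a reminder that even within the 2×2-table framework, a test’s value depends on where and on whom it is used, before we even ask the consequentialist question of whether the result changes anything.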
This was a very similar message to that espoused by Howard Bauchner in his discussion of clinical practice guidelines. Guidelines, like the evidence they are based on, should inform but not dictate, guide but not enforce, and support but not restrict.
Clinical decision making, he argued, is more complicated than ever. It is based on patient preference, clinician experience and the published evidence. However, not all of these factors are created equal: patient preference should take the lead in management of chronic disease, but you would rely far more on the evidence when treating acute conditions.
Trisha Greenhalgh echoed these statements, saying we need to practice ‘patient-based medicine’. It takes judgement to decide which guideline(s) to follow and why – clinicians need to consider the totality of the evidence we have on the patient when making decisions.
Using a personal example from when she was thrown from a bike at speed, but managed according to a ‘trips and falls in the elderly’ guideline, Greenhalgh advocated asking ourselves “Is the management of this patient, in these circumstances, an appropriate application of the evidence?”
This, of course, requires that the evidence is presented in such a way that you can reliably evaluate any potential bias or lack of generalizability. In short, it is again an issue of transparency. But transparency goes both ways, as Fergal Ó Regan said – if we are to practice shared decision making, patients and the public also have a right to access RCT data.
It is important that the pharmaceutical industry understand that they lose out if confidence in the system is diminished.
Fergal Ó Regan, EU Ombudsman
Drawing on examples from the European Ombudsman’s work, Ó Regan pushed for all RCT data to be made available to support the decisions based on it. Despite many protests of ‘commercially sensitive information’, he simply stated that “it is important that the pharmaceutical industry understand that they lose out if confidence in the system is diminished”.
But transparency, while important, is not an end in and of itself, argued Ó Regan. The benefit of transparency is that it allows accountability. It was here that the redoubtable Ben Goldacre took the stage, supporting this sentiment and maintaining that the single biggest thing missing from current transparency movements is audit.
Publication bias is research misconduct. There is no difference between suppressing certain results within an experiment to produce the trend you want and suppressing a whole experiment, yet for some reason researchers often don’t view it this way.
We need to celebrate those who make their results available and censure those who don’t, said Goldacre, but the only way we can do this is if we know. Carl Heneghan also advocated auditing of trial registration and publication, calling on all present to “get your house in order”.
To my mind, the conference painted a picture not of a movement in crisis, but of one that has only begun to realise just how large a challenge it faces. To quote Síle Lane, “we have changed the world”… but we’re not done yet.