Embedded recruitment trials: creating Russian dolls

If you fail to recruit, your clinical trial will fail. This harsh reality has led to an increased number of embedded recruitment trials, assessing which recruitment strategy works best in what context. Research recently published in Trials outlines guidelines for reporting these embedded recruitment trials and here, Shaun Treweek, who was involved in the project, discusses how these guidelines not only impact reporting but should also influence embedded recruitment trial design.

Doing trials is hard work.

You need a good research question and a funder willing to give you enough money to answer it. You need to recruit the astonishingly high number of participants that your statistician says is required, and then you have to keep them all on board for the next five years. You have to get it all approved, prepare data collection forms, develop a data management system to support those forms, provide training, deal with logistics; the list goes on.

But really it’s recruitment that bothers you. Everyone connected with trials worries about recruitment and the reason is obvious: without participants you have no trial.

What’s more, you can burn your way through an awful lot of time and cash before the hopelessness of recruitment becomes clear.

A 2011 study of a single academic medical center in the US looked at the cost of trials that were closed having recruited either zero or one participant. The authors found 260 trials in this category and estimated the cost at just under $1 million. That’s $1 million for no scientific benefit.

In the UK, Sully and colleagues found that only 55% of trials funded by the Medical Research Council and the Health Technology Assessment program met their recruitment target, although to be fair, three-quarters did reach 80% of their target. Almost half of all trials received an extension of one kind or another; those that did were no more likely to have recruitment success.

Given that an estimated 25,000 trials are published every year, essentially all of which need to recruit participants, it’s strange that we are not better at it.

In fact, despite trialists being a community of evaluators, we are remarkably poor at evaluating our own methods. The current Cochrane review of strategies to improve recruitment to trials presents good-quality evidence for, perhaps, three things trialists could reasonably expect to improve their recruitment. It’s hardly Christmas.

Which is where the recent article in Trials by Vichithranie Madurasinghe, Sandra Eldridge, Gordon Forbes and colleagues (conflict of interest note: I am one of these colleagues) on the reporting of embedded recruitment trials comes in. What these authors want to do is:

  1. encourage trialists to evaluate their recruitment strategies by running embedded trials-within-trials
  2. encourage those trialists to report their findings well

As you might expect with a trial reporting guideline, the authors lean heavily on CONSORT (CONsolidated Standards of Reporting Trials), and the development process and presentation of the guideline will be familiar to anyone who knows CONSORT, which, if you are involved in clinical trials, I guess is most of you.

The guidance is sensible: have a look, see what you think. There are two bigger-picture things to take from this paper, though.

Firstly, Madurasinghe and colleagues are imploring us to apply the same enthusiasm and methodological rigor to evaluating our own methods as we apply to the treatments, therapies and health care initiatives we evaluate together with our clinical and policy colleagues.

We need to do more embedded trials, and we need them to be good. That last point leads on to my second big-picture thought.

There is a not unreasonable tendency to look at reporting guidelines when you are, well, reporting. I think this is a mistake, or rather, leaving it this late is a mistake.

The reporting standard presented by Madurasinghe and colleagues describes what an ideal report of the results of an embedded recruitment trial would look like.

It presents what others, such as fellow trialists and systematic reviewers, need to know to make judgments about your embedded recruitment trial and its relevance to their own context and work.

It is not simply a document describing how to report; it is a document describing how to design. The time to consider reporting standards is right at the start: these are the things that you will need to do, and then describe, for others to consider your trial relevant to them.

So, if you are uncertain about your recruitment strategy, why not build in an embedded recruitment trial? We certainly need more. And if you do, look at the paper by Madurasinghe and colleagues before you start.

If you really wanted to put a cherry on top, it would be great to let the Trial Forge team know too; there might be other trialists who would like to test the same recruitment intervention. All our trials need Russian dolls.
