The noisy, ugly sister of trial processes is recruitment. Everyone worries about recruitment, fretting that numbers are unachievable. Parts of the UK’s NHS research support infrastructure have their funding linked directly to recruitment numbers, a link that focuses minds on recruitment but not on what happens next.
What happens next is often that participants don’t provide primary outcome data. Stephen Walters and colleagues report the median proportion of randomized participants retained for primary outcome analysis to be 89%. This sounds pretty good until you realize it means that half of trials lack primary outcome data for 11% or more of their participants. Walters and colleagues looked at UK Health Technology Assessment (HTA) Programme trials, meaning their data come from a very good bunch of trials and teams. For trials in general, the picture is likely to be worse.
Primary outcome data matters because poor retention threatens trial validity, especially as participants for whom data are missing may differ from those who have provided data. Trials may end up underpowered and, to add to our woes, relatively modest dips in retention can lead to very shaky trial conclusions.
The Fragility Index, developed by Michael Walsh and colleagues, measures how fragile a trial conclusion is: it counts how many events would need to go the other way before a statistically significant result becomes non-significant. Often the answer is a handful. Crucially, these authors found that for 53% of trials, the number of event swaps needed to change the conclusion was less than the number of participants lost to follow-up.
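To make the idea concrete, here is a minimal sketch of how a Fragility Index can be computed for a two-arm trial with a binary outcome, using a two-sided Fisher’s exact test (the test Walsh and colleagues used). The function names are our own, and the sketch assumes the first arm passed in is the one with fewer events, the convention for choosing where to flip; it is an illustration, not a reimplementation of their tool.

```python
from math import comb

def fisher_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed one.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def prob(x):
        # Hypergeometric probability of x events in the first arm.
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Count how many non-events must be flipped to events before a
    significant result loses significance.

    Assumes the first arm is the one with fewer events (the arm in
    which events are added, per Walsh and colleagues' convention).
    """
    p = fisher_p(events_a, n_a - events_a, events_b, n_b - events_b)
    if p >= alpha:          # not significant to begin with
        return 0
    flips = 0
    while p < alpha and events_a < n_a:
        events_a += 1       # flip one non-event to an event
        flips += 1
        p = fisher_p(events_a, n_a - events_a, events_b, n_b - events_b)
    return flips
```

For example, a trial with 1/100 events in one arm versus 20/100 in the other starts out clearly significant, and the index reports how few flipped events would erase that; when the number lost to follow-up exceeds it, the conclusion is resting on thin ice.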
Improving trial participant retention
The directors of UK clinical trials units consider retention to be one of the top three (out of 55) trials methodology research priorities. Published in Trials, Anna Kearney and colleagues go that one step further and identify some concrete priorities within trials retention research.
They asked 75 chief investigators of UK HTA funded trials and the directors of 47 UK clinical trials units about the strategies they use to improve trial participant retention. A Delphi was also done with trials units to gain consensus as to which strategies should be prioritized for future evaluation research.
There’s a lot in the paper so let’s concentrate on four things:
- chief investigators’ assessment of the reasons for missing data
- the strategies used most often by trials units
- the evaluation evidence behind those strategies
- missing stakeholders
Chief investigators suggested many reasons for missing data. Formal participant withdrawal came top of the list (84% of trials), though this is not the same as saying it accounts for 84% of missing data; that figure is not reported as far as we can see. Losing touch with participants (61%) and participants not returning questionnaires (49%) came next.
For trials units, newsletters come top of the table of strategies routinely used to support retention (70% of CTUs reporting their use), followed by a timeline of participant visits for sites (58%) and prepaid envelopes sent with questionnaires (58%). Indeed, 15 of a total of 57 different retention strategies were routinely used by over 40% of units responding to the Delphi.
> We don’t think any of the 15 [retention] strategies has compelling evidence to support its use without thinking about further evaluation.
This brings us nicely to evaluation. Of those 15 strategies, five have never been formally evaluated. Only two of the others have been evaluated more than once (twice, in both cases). We don’t think any of the 15 strategies has compelling evidence to support its use without thinking about further evaluation. In other words, large numbers of the UK’s trials units are routinely using retention strategies without knowing whether this is a good idea or not. This is hardly their fault; there is very little evidence to support most trial conduct decisions, and decisions must be made regardless.
Kearney and colleagues are absolutely right, then, to propose more evaluation and to offer a prioritized list of strategies for it. Although newsletters didn’t make the cut (their first appearance is at a lowly number 39 on the priority list), we’d like to see them getting some attention: they are used more than any other reported strategy, they chew up time and they might not work. They would also be easy to evaluate.
Finally, a key stakeholder view is missing: that of patients and the public. Most of the reasons for missing data put forward by chief investigators, and certainly the top three, will be difficult to address without a patient and public perspective. Their involvement in a priority-setting process might lead to a different list from the one presented by Kearney and colleagues. Retention, however, is a Cinderella that needs all the attention it can get, whatever the order of the list.