Excuse me Trialist, what are your thoughts?

New research published today in Trials explores the first-hand experiences of trialists conducting and reporting clinical trials. What did they say, and what can we do to tackle the challenges they face?


Randomized controlled trials are praised as one of the highest forms of evidence in healthcare. However, to be valuable, all research must use valid methods and be reproducible and usable.

The usability of research is affected by a multitude of factors, many stemming from research not being reported in sufficient detail. Clinical trials in particular have come under heavy fire recently, with a few very high-profile cases highlighting some of these issues.

The Tamiflu case, where the full evidence suggested no additional benefit over paracetamol, highlighted the major impact of publication bias and selective outcome and analysis reporting, whereas the case of selective serotonin reuptake inhibitors (SSRIs) increasing the risk of suicide emphasized the disastrous effect that inadequate reporting of harms can have on patients.

Media hype stemming from such stories has created the caricature of a malevolent trialist, gleefully suppressing data for their own publication record.

But, in a world dominated by regulation, funders and grant applications, has anyone actually asked the trialists?

Excuse me Trialist, what are your thoughts?

Rebecca Smyth and colleagues set out to do just that. They identified 286 trialists who had either published a trial covered in the Cochrane systematic reviews used in the ORBIT project or were randomly sampled from PubMed between 2007 and 2008, and interviewed the 59 who replied.

These interviews highlighted issues for trialists relating to five major stages of the clinical trial process:

  • Framing the research question,
  • Defining key outcomes,
  • The role of the study protocol,
  • Conducting the research,
  • Getting the research published.

Many of the reported challenges have been highlighted before, but the importance of qualitative studies in gaining insight from those on the front line of research, as well as adding depth and detail to the evidence base, cannot be overstated.

There are some findings that will always be disheartening. Thirty-three trialists commented that, independent of outcome, they had never chosen to withhold from publishing their results; however, sixteen of these (48%) said they had been involved in a trial that was never published, 3 of which were industry-funded:

“… that was an industry sponsored trial… Not only was it a negative result but the study was stopped early because of safety. And that has not been published.”

Interviewed trialist (Smyth and colleagues, Trials 2015)

So, what can we do?

Existing initiatives concentrate on combating issues at the final stage of a clinical trial, i.e. publishing the results. But Smyth and colleagues identified issues for trialists that occur in many of the stages leading up to, and ultimately contributing to, this final stage.

Reporting guidelines, such as those from the EQUATOR Network, including CONSORT, set out the minimum set of reporting items needed to ensure that research is transparent and reproducible. However, if a trialist does not encounter these guidelines until their article is rejected, it is too late. Trialists need to be aware of them throughout the trial process for them to have their intended impact.

Therefore, maybe we need to focus our efforts not just on the final stage of a clinical trial, but equally on all the stages that lead up to and shape it.

Some projects are already underway with the goal of supporting better trial design and reporting practices, including the National Institute for Health Research Clinical Research Network and Trial Forge, but do they go far enough?

In particular, those interviewed highlighted the difficulty of defining key outcomes. The COMET Initiative aims to tackle this by developing standardized sets of outcomes for specific conditions: an agreed minimum of what should be measured during clinical trials.

These should be incorporated into the study protocol, which should serve as a trial’s ‘how-to guide’ and form the core of its eventual report. The deliberation and detail that go into creating a protocol mean that adherence should assist all stages of the clinical trial process. However, some trialists mentioned that, after completion, the protocol was shelved and forgotten:

“…You write the protocol you spend months doing it, and you have got a really good grasp of it and then as the trial progresses the protocol fades…”

Another recurring theme was recruitment and retention, something that will feel all-too-familiar to many trialists out there. Many of those interviewed commented on difficulties achieving their target sample sizes:

“… it was supposed to be a huge trial, looking to recruit 1800 men, and we closed after two years with 35 randomized, so clearly you are not going to get anything useful out of that…”

While this is a well-known bugbear, perhaps it is time to acknowledge that some of these trials should never have gone ahead. Does this show we are doing too many trials? Instead, maybe we should be conducting more pilot studies with predefined feasibility outcomes for factors such as recruitment, to give a clear go/no-go for definitive trials.

Whatever the future may hold for the clinical trial process, we need to ensure that the voices of all parties involved are heard.

30/01/15: *Update to text*: paragraph 9 has been updated. The original paragraph read, “There are some findings that will always be disheartening. For example, 48% of trialists interviewed stated they had been involved in a trial that was never published:”


5 Comments


Confused Mike

I doubt you were trying to be intentionally misleading, but I feel the following statistics may be relevant, given the quote chosen to illustrate why trials have results that are not published.
“There are some findings that will always be disheartening. For example, 48% of trialists interviewed stated they had been involved in a trial that was never published:
“… that was an industry sponsored trial… Not only was it a negative result but the study was stopped early because of safety. And that has not been published.”

From the article:

“Three of the 33 trialists had, however, been involved as site investigators in industry-funded trials that were never published.”

“Those authors who had failed to publish any findings from a trial (n = 16) cited many reasons: negative study findings, lack of time, lack of resources, recruitment problems, rejection by journals, unclear results, or failure to complete the trial.”

Is my math correct that only 9% (3/33) of the respondents said they ever failed to report an industry sponsored trial compared to 39% (13/33) who said they have ever not reported results for a non-industry sponsored trial?

Ella Flemyng (Editor)

Thanks for highlighting this, Confused Mike! You’re right, I had not intentionally meant to be misleading. I’ve updated the paragraph and stated that I have done so at the end of the blog.

David Welch

I am still confused by the author’s language in this paragraph, because of the “near” double negative: “Thirty-three trialists commented that, independent of outcome, they had never chosen to withhold from publishing their results; however, sixteen of these (48%) said they had been involved in a trial that was never published, 3 of which were industry-funded:”

It is unclear what “never choosing to withhold from publication” really means. As I disentangle this, I read it to mean “Choosing to (always) publish”… which is laudable, but not really surprising since this is what we should all aspire to do.

Second, and less pedantic, 3 of 16 unpublished trials had industry involvement. This seems to me to be a very small fraction (19%, or 1 in 5) of what industry’s involvement in funding trials may actually be. In spite of the dramatic following statement that one industry trial may not have been published despite finding potentially life-threatening consequences for patients, overall, I would think that industry involvement in only 1 in 5 unpublished trials is probably less than their proportional involvement in all trials. To me, this would be evidence for industry not trying to slant reported outcomes to the positive. This may simply be a small-sample result that would not be supported by a bigger study, but my interpretation of the data quoted would be that industry may not be deliberately slanting the results as much as might be suspected. A comment on this perspective would be useful.

Terry

Having personally been gouged and lied to by laser companies, I saw 6,900 abstracts disappear from PubMed, which had 7,200 abstracts. It seemed 95 percent of them were lies or misleading. I blamed the laser companies at first, but realized later it was the doctors that paid 200,000 for a lie and a prayer.
