Tracking the unknown unknowns of trial registrations and their published results

Links between registration records of clinical trials and their published results are essential for assessing publication bias and reporting bias. Unfortunately these links are often missing. Here, Rabia Bashir and Adam G. Dunn, authors of an article published today in Systematic Reviews, talk about the processes researchers use to find trial reports from registration records, through links and manual searching, and discuss why it’s vital to have a reliable infrastructure in place.

Clinical trials produce a diverse range of information as they are planned, registered, undertaken, and reported, but it can be difficult to track all of that information down.

Clinical trial registries like ClinicalTrials.gov help to consolidate that information, and can be used in studies that measure publication bias and outcome reporting bias caused by missing or incomplete publication of results. They are also a critical source of information for systematic reviewers looking for unpublished data.

To measure these biases and understand what data might be missing from published trial reports, researchers make use of links between trial registries and bibliographic databases. But many of these links are missing, and it is time-consuming to determine whether there is a missing link or if there is simply nothing to look for on the other end.

In our article, published today in Systematic Reviews, we reviewed studies in this area to quantify the processes researchers used to find published trial reports starting from a set of registry entries, and to find registry entries starting from a set of published trial reports. We were especially interested in how they used automatic links: the machine-readable links that sit in the metadata and can be analyzed without any extra manual effort.
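
As a concrete illustration of what we mean by an automatic link, here is a minimal sketch (ours, not part of the published review) of one way such links can be checked programmatically: querying PubMed's E-utilities for articles whose metadata records a given ClinicalTrials.gov identifier in the Secondary Source ID ([si]) field. The endpoint and field tag reflect our understanding of how these links are commonly exposed, and the NCT number in the example is made up.

```python
import json
import urllib.parse
import urllib.request

# E-utilities search endpoint for PubMed (assumption: standard NCBI esearch API).
EUTILS_SEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_ids_linked_to(nct_id):
    """Return PMIDs whose metadata carries the given trial registry identifier."""
    query = urllib.parse.urlencode({
        "db": "pubmed",
        "term": f"{nct_id}[si]",   # [si] = Secondary Source ID field
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS_SEARCH}?{query}") as response:
        result = json.load(response)
    return result["esearchresult"]["idlist"]

# Hypothetical identifier: an empty list means no automatic link was found in
# the metadata, not necessarily that the trial's results were never published.
print(pubmed_ids_linked_to("NCT01234567"))
```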

Finding these links appears to depend on how investigators look for them. Besides using automatic links, researchers also inferred links by manually searching registries and bibliographic databases, and by contacting trial investigators to ask about what they could not find themselves.

Our results showed that researchers used both automatic and manual processes in at least 24 of the 43 studies that looked for published results from registrations. In these studies, researchers found that 23% of registrations had automatic links to published reports. But when they went on to search manually, they found that an additional 17% of registrations had published reports with no automatic links.

This tells us that relying only on automatic links may lead to very large over-estimates of non-publication. It also means that when searching for links between registrations and reports, the more effort you put in, the more you get out.
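
To make the size of that over-estimate concrete, here is a back-of-the-envelope calculation that uses only the proportions reported above; the cohort of 100 registrations is purely for illustration.

```python
# Illustration using the proportions reported above (23% and 17%).
registrations = 100   # hypothetical cohort size, chosen to keep the percentages simple
auto_linked = 23      # registrations with automatic links to published reports
manual_only = 17      # registrations whose reports were found only by manual searching

apparent_unpublished = registrations - auto_linked                      # 77%
unpublished_after_search = registrations - (auto_linked + manual_only)  # 60%

print(f"Non-publication estimate using automatic links only: {apparent_unpublished}%")
print(f"Non-publication estimate after manual searching:     {unpublished_after_search}%")
```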

The results also helped us to map out several reasons why machine-readable links between trial registrations and published reports may be missing from the metadata. Journals may not always require trial registration, or they may not add the registry identifier to the abstract or metadata even when it is included in the full text of the article. Trial investigators may not update the registry record when the article is published.

There are important reasons why we need to improve the linking of trial information. Finding biases in what has or has not been reported can tell us why it can take many years to identify safety issues after drugs are approved, even when the right clinical trials have been completed.

Initiatives like the Linked Reports of Clinical Trials are already available to support links between trial registry identifiers and publications. But improvements in linking will require changes to both policy and practice. Bibliographic databases need to work with journals and publishers to make registry identifiers a mandatory part of the metadata provided when articles are indexed. Funding agencies can strengthen policies to require that registry identifiers appear in the abstracts of trial reports, and to require that trial investigators keep their registrations up to date as a condition of funding.

If we can improve the system so that trial information is more accessible, more transparent, and better linked, we can ensure that the results of all future trials are made available as quickly as possible. To get there, it will take a concerted effort from everyone involved in the funding, undertaking, reporting, and synthesis of trials.


Found this post interesting? You may also be interested in the ‘Linked Reports of Clinical Trials’ cross-publisher initiative, developed in collaboration with CrossRef.
