Science is about objectivity, openness and truth. Marketing is about rhetoric, half-truths and sales. When marketing creeps into clinical trials, science is the loser – and with it, good medicine and patient care.
By no means are all drug industry trials undone by marketing. One need only look at dementia, where a market worth billions awaits yet drug after drug has failed in pivotal trials, to appreciate that many commercial trials have scientific value.
And this matters, because there are growing calls from industry to replace pivotal drug approval trials with an “adaptive pathways” system, which would permit drugs to be marketed without strong evidence they work. That evidence would come from subsequent real-world use data. I can see potential value in this as a counterpart to trials, but if mismanaged the approach could usher in an era of snake oil and quackery.
So we need trials of commercial drugs, but trials unsullied by marketing. To achieve this, we must understand the interaction between marketing and trials.
Defining “marketing trials”
Building on Barbour et al’s study, published in Trials, I start by looking at concepts of “marketing trials”. Previous work has taken a yes/no approach – either a trial is a marketing trial, or it is not. But in truth, marketing and science coexist in the same research. The quintessential “marketing trial” merely marks the end of a spectrum of commercial influence. It is the entire spectrum which should concern us.
I also draw a distinction between the marketing functions of trials, and their marketing-related features. By “functions”, I mean how a trial is used for marketing. For instance, trials provide data and publicity, introduce prescribers to drugs, and help manufacturers recruit academic opinion leaders.
Conversely, by “features” I mean what marketing considerations do to the trial. For instance, they may shape the research question, study design, number of study sites or the way the research is reported.
Companies as authors, not sponsors
I conducted new analyses on Barbour et al’s study cohort. The key step was to focus on those trials within the cohort which were funded exclusively by drug manufacturers.
I found that the large majority of these trials had direct company involvement in their design, statistical analysis and reporting. We are in the habit of referring to companies as mere “sponsors” of trials, but this term should be discarded. Companies are direct corporate authors of their trials – and their proprietors too, owning the data and able to shut them down at the drop of a hat.
I also examined product seeding. This is a notorious marketing vice, in which trials are used to get clinicians into the habit of prescribing a drug. Seeding involves using a large number of investigator sites, with a small number of patients at each one. I found that features consistent with seeding were commonplace, even in these top industry trials in prestige journals.
Thirdly, I studied attributional spin. This happens when, for marketing credibility, a company plays down its own involvement in a trial, but plays up the role of the academics it has enlisted into the project. Attributional spin involves various tricks, but I assessed one feature only – the status of the lead author.
I checked all 70 industry-financed articles in which both academics and industry employees were listed as co-authors. The number with an industry employee as lead author was zero.
Playthings of marketing
Taken together, these observations illustrate how trials become playthings of marketing. The companies have mastery: they own and plan the trials, play a direct authorial role throughout, and own the data. But when it comes to publication, the academic recruits, who could readily be replaced by different academics without any real impact on the output, are usually positioned as leaders, and their status endorses the work.
To make matters worse, many of these academics have financial relationships with the manufacturer. That does not mean they are willfully biased, or mere “guest authors” who contribute little. Most make substantial, honest contributions – and yet with the arrangements I describe, biases in framing, methodology, analysis, interpretation, reporting and attribution can inevitably accumulate.
The greatest problem with marketing is that these biases are often subtle or even unidentifiable from the limited information in the published article.
The only real solution is the one Barbour et al advocate – independent clinical trials. One interesting model for this is Italy’s Mario Negri Institute, recently discussed by Donald Light, Antonio Maturo and Tom Jefferson.
It is unrealistic to expect revolutionary change, but legislators should explore ways to increase the role of independent evaluation, particularly for assessing the true clinical effectiveness of important drugs already on the market.
And if wholly independent trials are not attainable, then the more independent the academic leads are, the better. Institutional review boards should set stringent standards for the methodology of trials, their value to patients, and the independence and authority of the academic investigators. Academics should not be permitted to receive company payments for consultancy or other activities beyond the trial itself.
The academic team should be insulated from the company, with a copy of the database in their own institution, free to conduct whatever analyses they choose. The company should not participate in analysis and interpretation, use its employees as co-authors, or hire trade writers to write the manuscript. The study protocol, clinical study reports and patient-level data should all be available to independent academic scrutiny.
Many of these measures are already employed in the best industry trials. And when companies do play direct roles – and this is often inevitable for secondary analyses – this involvement should be highlighted at the outset of the published manuscript, not disclosed by a mumble in the small print.
My commentary discusses various remedies, such as stronger reporting guidance, bias assessment tools and a broader definition of research integrity, but I particularly emphasize the need for medicine’s journals to do better. Journals must label industry trials as the commercial enterprises they are, and do more to root out bias. The best approach is, I think, to oblige authors to self-report biases through a mandatory checklist. Journal editors and publishers also need to establish a new, more authoritative publication standard, supplementary to the marketing-lenient editorial guidelines that prevail today.
That is a matter of scientific self-respect, but most of all, of respect for patients, who do not enter clinical trials to help big business spin science into gold.