Imagine, if you will, that you are developing a promising new drug which, judging by its performance in in vitro and animal studies, may extend the lives of patients with a certain type of cancer. Before you can launch this drug onto the market, you will need to show that, for its target population, it works as intended and its side effects are bearable. To do that, the design and conduct of clinical trials are essential. Broadly speaking, there are two approaches you can take when designing and conducting such trials.
The first approach involves running your trials under idealized (and thus artificial) circumstances, to test whether the medicine in question is at all capable of exerting its desired effects. You can create these circumstances by exercising strict control over what happens during the studies. Practically speaking, this could mean, for example, recruiting only participants who satisfy a long list of carefully chosen selection criteria, limiting the research setting exclusively to university hospitals, and defining in detail how investigators should administer the investigational product. By doing so, you will generate data that provide an indication of the efficacy of that product.
Conversely, the second approach consists of carrying out your trials under conditions which reflect as closely as possible how the experimental agent would be applied in a real-life context, outside of a research environment. If you want to follow this approach, you may for instance decide to restrict the number of exclusion criteria to a minimum, to also rely on community hospitals for patient recruitment, and to leave it up to each investigator to choose how they employ said agent in their clinic. The trial results will then offer insights into the effectiveness of the therapy under investigation.
Studies set up according to the first approach are sometimes referred to as ‘explanatory’, while those designed according to the second are often given the descriptor ‘pragmatic’. Although the two approaches, each valid in its own right, occupy opposite ends of a spectrum in theory, in practice many trials display both explanatory and pragmatic features. A useful tool, the PRECIS-2 instrument, was developed to quantify the degree of pragmatism a given trial exhibits: by scoring the study on a scale of one (very explanatory) to five (very pragmatic) across each of the nine PRECIS-2 domains, its position on the pragmatic-explanatory continuum can be determined.
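To make the scoring procedure concrete, the sketch below summarizes one trial's PRECIS-2 assessment in Python. The nine domain names follow the published PRECIS-2 tool, but the example scores are entirely hypothetical and not drawn from any real trial; using the median of the domain scores as the trial-level summary is one common convention, not the only possible one.

```python
from statistics import median

# Domain names per the PRECIS-2 tool; all scores below are hypothetical.
PRECIS2_DOMAINS = [
    "eligibility", "recruitment", "setting", "organisation",
    "flexibility (delivery)", "flexibility (adherence)",
    "follow-up", "primary outcome", "primary analysis",
]

def precis2_summary(scores):
    """Return the median PRECIS-2 score for one trial.

    `scores` maps each of the nine domains to an integer from
    1 (very explanatory) to 5 (very pragmatic); the median locates
    the trial on the pragmatic-explanatory continuum.
    """
    if set(scores) != set(PRECIS2_DOMAINS):
        raise ValueError("scores must cover all nine PRECIS-2 domains")
    if not all(1 <= s <= 5 for s in scores.values()):
        raise ValueError("each domain score must lie between 1 and 5")
    return median(scores.values())

# Hypothetical trial: pragmatic in eligibility and setting, explanatory elsewhere.
example = dict.fromkeys(PRECIS2_DOMAINS, 2)
example["eligibility"] = 5
example["setting"] = 4
print(precis2_summary(example))  # median of [5, 2, 4, 2, 2, 2, 2, 2, 2] -> 2
```

A summary of 2 would place this hypothetical trial on the explanatory side of the continuum, even though two of its domains are quite pragmatic, illustrating why reporting all nine domain scores is more informative than a single label.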
However, despite the availability of the PRECIS-2 instrument, multiple prior analyses of trials undertaken in a variety of medical fields have shown that many trialists use the term ‘pragmatic’ rather liberally, without properly justifying why their studies deserve that label. We wanted to know whether this finding holds true in the oncology sphere as well. To test it, we searched the literature and subjected every trial we could find that investigated an antitumor treatment and carried the ‘pragmatic’ tag to a full PRECIS-2 evaluation, extracting the necessary information from the relevant study documents.
What we observed was striking but not surprising: in our sample of 42 supposedly pragmatic trials, the median total PRECIS-2 score at the individual trial level barely exceeded 3, the midpoint of the PRECIS-2 scale. In other words, the median study included in our analysis turned out to be no more pragmatic than it was explanatory. Moreover, none of the studies we examined adequately explained why the ‘pragmatic’ label was warranted, and none reported any PRECIS-2 scores whatsoever. Additionally, in a majority of cases a complete PRECIS-2 assessment could not be performed, mainly because the research setup was not described thoroughly enough in the source materials consulted.
Clearly, many oncology trialists, just like their colleagues in other disciplines, lack a clear understanding of what pragmatic trials are. The implications of this observation should not be underestimated: false claims of pragmatism can mislead readers into thinking that real-world patients will experience the same therapeutic outcomes as the trial participants, which is far from guaranteed given the efficacy-effectiveness gap. Such claims may also deceive decision-makers such as payers and health technology assessors, to whom pragmatic studies are of particular interest owing to the strength of the evidence they produce.
What our work demonstrates is that we have to be more idealistic about pragmatism in clinical research by affirming that it is a multifaceted concept which encompasses all aspects of a trial’s organization.
Journal editors and reviewers must play an active role in this regard and insist that sponsors who assert that they have conducted a pragmatic study substantiate that assertion by determining and publishing the study’s PRECIS-2 scores and the motivations underlying them, allowing for independent validation by third parties. Only by taking resolute action to prevent the misuse of the ‘pragmatic’ label can we avoid potentially harmful misunderstandings and misinterpretations.