“People on a trial tend to do better”. Ordinarily, when I see someone make that point, I agree. But last week, when I saw that in a tweet, I shook my head. I no longer think it’s that simple – or even that we know the answer to the question, beyond, “it depends”.
Before we go any further, it’s important to be clear I’m not talking about first-in-human, early stage trials – phases 1 and 2. I wrote about the history of those in vaccines early in the pandemic. This is about the bigger, phase 3 randomized trials that are the last regulatory hurdle to clinical practice for drugs and devices – and the basic test for everything else.
For example, we’re watching trials shake out treatments for people with Covid-19, helping push down the disease’s mortality rate. The first clinical practice guidelines from the US National Institutes of Health (NIH) in April recommended against routine use of corticosteroids for people mechanically ventilated for Covid-19. Now they recommend using one of them, dexamethasone, for everyone on mechanical ventilation or needing oxygen supplementation. All because of the people pulling together in the UK’s RECOVERY trial, revealing this relatively cheap and widely available drug could substantially drop the risk of people with severe complications dying. Respect! That trial is an enormous gift that just keeps on giving.
Meanwhile, some widely reported incidents in the trials of the Oxford/AstraZeneca vaccine show the complexity. The reported death from Covid-19 of a young doctor in a placebo group highlighted the stakes of waiting for large trials before using something. On the other hand, the trials were paused because of a pair of serious adverse events – one of which wasn’t determined to be vaccine-associated, but wasn’t 100% cleared either. Even a rare serious harm can affect large numbers of people if billions of people are exposed to the risk, and trials are a risk minimization strategy for that.
The widespread judgmentalism towards countries that have introduced emergency use of Covid-19 vaccines before phase 3 data is out makes it clear we’ve decided as societies that we benefit from clinical trials. The daily burden of Covid-19 around the world is excruciating, but we’re holding off, waiting. What about the people participating in the trials, though? People do it for various reasons – sometimes because it’s the only way they can get access to a new treatment, and generally out of altruism; wanting to help others and contribute to the betterment of health care and other people’s lives. Why did I believe for years it was in their personal interests, too, to be in the trials, and now I think it’s more complicated than that?
Let’s go right back. A lot of our mores and attitudes to human experimentation were shaped by abuses and atrocities in the 1930s and 1940s. Those in Germany were dug into right after World War II (leading to the Nuremberg Code), but the repercussions continued for years – such as from the Tuskegee study, begun in the ’30s, though only reaching public awareness and action in the 1970s. That means a lot of beliefs were deeply culturally embedded well before the 1960s when large clinical trials began to be expected. It was only in 1959, at a landmark meeting in Vienna, that the methods of randomized trials as we know them were codified.
The ethics regulation complex surrounding clinical trials is predicated on protecting people from risks of research participation. And that created what Iain Chalmers calls a double standard: If a clinician wants to experiment with using hydroxychloroquine or anything else on patients with Covid-19, they face no special ethical hurdle – they only face that if they approach experimenting formally in a way that could lead to reliable knowledge.
Those of us in Chalmers’ camp – promoting clinical trials and the use of the knowledge they provide – got a boost to the cause in the 1990s. David Sackett dug through medical literature in 1999, coming up with 25 studies looking at the question of how people who participate in clinical trials do in comparison to people who do not: 23 of them, he said, “documented better outcomes” for the trial participants. And he helped establish a group to systematically review the research on the effects of clinical trial research on participants’ outcomes.
Now the die was really cast. When the Cochrane systematic review he’d been supporting came out, out of the 10 comparisons with statistically significant differences, 8 showed trial participants doing better. Although the authors, Gunn Vist and colleagues, didn’t outright conclude the evidence proved it was better to be in a trial than not, large bunches of us became convinced that trials weren’t just the right thing to do for society and health care – they were, on balance, better for the participants too. Over the years, studies that confirmed it kept getting attention and catching my eye. I’d made up my mind, and didn’t think I needed to keep up with the evidence about this belief.
But then I recently wrote in a draft article that being in clinical trials was good for people, and decided I needed a link to back up the statement. And that’s when I realized my belief had been based on a combination of confirmation bias and not keeping up with the evidence.
First, the Cochrane review had been updated and Vist & co’s conclusions were different. The scales had shifted since the first version. There were now 21 statistically significant outcomes, split 10 and 11. And with its search for studies having been done in 2007, the review was too old to rely on, assuming the evidence was still growing. So there was that sinking, oh-jeez-I’ve-probably-been-wrong, feeling.
I didn’t find a recent, rigorous systematic review on the question, and the body of evidence probably isn’t adequate anyway. There are biases in studies of bias, just as for other kinds.
There’s a systematic review by Natasha Fernandes and colleagues from 2014, though, that shapes my opinion on this question now, which is: there’s not enough evidence, and it’s complicated. There’s still no proof that being in a phase 3 trial is an inherently beneficial proposition, because, of course, many trials end up finding no important benefit from the intervention they studied.
So it all depends on how often potential treatments turn out to be effective, doesn’t it? If an intervention turns out to be effective, and only the people in a trial can get it, then trial participants are at an advantage. And that’s what Fernandes & co found from 9 studies of that scenario.
In the recent past, that means people in phase 3 vaccine trials probably benefited, when vaccination wasn’t an option outside the trial: 85% of vaccines have gone on to be approved for general use. But it’s more like an even split on drugs being investigated for treatment. Although close to 60% went on to approval, some of those would turn out later to be ineffective in the general population – and about a quarter of new drugs get safety warnings or are withdrawn for what may be safety reasons within a few years.
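To see why the odds of approval matter so much, here’s a toy expected-value sketch. Everything in it – the function, the 50:50 allocation assumption, the payoff sizes – is my own illustration, not from the trial literature; only the rough percentages echo the figures above.

```python
# Toy sketch: the average advantage of joining a trial when the investigational
# product is unavailable outside it. All numbers and the model itself are
# illustrative assumptions, not real estimates of benefit or harm.

def expected_advantage(p_effective, benefit, p_harmful=0.0, harm=0.0,
                       p_allocated=0.5):
    """Average gain for a participant versus a non-participant, assuming
    50:50 allocation and no access to the product outside the trial."""
    return p_allocated * (p_effective * benefit - p_harmful * harm)

# Vaccines reaching phase 3: ~85% later approved (figure cited above).
vaccine = expected_advantage(p_effective=0.85, benefit=1.0)

# Drugs: closer to an even split, with roughly a quarter later getting safety
# warnings (the payoff sizes here are invented for illustration).
drug = expected_advantage(p_effective=0.5, benefit=1.0,
                          p_harmful=0.25, harm=0.3)

print(vaccine, drug)  # the vaccine scenario comes out well ahead
```

The point of the sketch is only the comparison: with the same simple model, being in a vaccine trial looks like a much better bet than being in a drug trial, purely because vaccines that reached phase 3 succeeded more often.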
What it means for people in trials of surgery and all other forms of health care is likely to be different again. And it would vary from health issue to health issue, too. For a hypothetical disease where no treatment helps much but all carry substantial risks, joining a trial would be an act of hope and generosity, with grim prospects. Fortunately, though, many trials move us forward, and the hope of progress is often justified. So there is one absolute certainty. We owe the people participating in trials our gratitude.
Learn more about trials – in multiple languages – at Testing Treatments, including a free book to read.
Update November 1: I changed the word “treatment” in 2 paragraphs to reflect a point made by Craig Gedye on Twitter – to avoid using that term for investigational drugs when a sentence referred specifically to that. Thanks, Craig! (Not that every approved drug is an effective treatment, of course.)