
The Renewed Debate About Blinding in Clinical Trials

 

[Cartoon: a woman wearing a blindfold]

 

 

Things can go really off the rails in clinical trials. Take the most influential trial of the Mediterranean diet (PREDIMED). It had to be retracted because of a bunch of screw-ups with randomization. Here’s one of them:

A researcher at one of the 11 clinical centers in the trial worked in small villages. Participants there complained that some neighbors were receiving free olive oil, while they got only nuts or inexpensive gifts.

So the investigator decided to give everyone in the same village the same diet. He never told the leaders of the study what he had done.

“He did not think it was important”….  [Source]

Wince.

Randomization and blinding at various steps along the way are meant to prevent bias from creeping in and distorting the results. (Explained here.) And even when bias doesn’t distort the results, its obvious presence threatens a trial’s credibility.

Several of the steps related to blinding are among the pillars of rating the risk of bias in a clinical trial. And those ratings can play a pivotal role in how much weight is put on a trial’s results.

Enter MetaBLIND: a meta-epidemiological study by Helene Moustgaard and colleagues, published in January 2020. They took the ratings made by Cochrane reviewers in a year’s worth of systematic reviews. After further detailed analysis, they found 142 meta-analyses including both blinded and unblinded trials – 1,153 trials altogether. They expected to find that a lack of blinding tended to skew trial results, but they didn’t:

No evidence was found for an average difference in estimated treatment effect between trials with and without blinded patients, healthcare providers, or outcome assessors. These results could reflect that blinding is less important than often believed or meta-epidemiological study limitations, such as residual confounding or imprecision.
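
For readers who haven’t met the method before, here’s the shape of what a meta-epidemiological study like this computes. Below is a deliberately crude sketch with invented numbers – not the authors’ code, and MetaBLIND’s actual analysis used more sophisticated Bayesian modelling (hence the “credible intervals” quoted further down) – but the core comparison is the same: pool the blinded and the unblinded trials separately within each meta-analysis, compare the two pooled estimates, and then average those comparisons across meta-analyses.

```python
import math

# Crude sketch of a meta-epidemiological comparison (all data invented).
# Each trial: (log odds ratio, variance of the log OR, was it blinded?).
meta_analyses = [
    [
        (math.log(0.70), 0.04, True),
        (math.log(0.75), 0.05, True),
        (math.log(0.60), 0.06, False),
        (math.log(0.65), 0.05, False),
    ],
    [
        (math.log(0.90), 0.03, True),
        (math.log(0.80), 0.04, False),
        (math.log(0.78), 0.06, False),
    ],
]

def pooled_log_or(trials):
    """Fixed-effect (inverse-variance weighted) pooled log odds ratio."""
    weights = [1.0 / var for _, var, _ in trials]
    return sum(w * lor for w, (lor, _, _) in zip(weights, trials)) / sum(weights)

log_rors = []
for ma in meta_analyses:
    blinded = [t for t in ma if t[2]]
    unblinded = [t for t in ma if not t[2]]
    # Ratio of odds ratios (ROR): with a beneficial outcome coded as OR < 1,
    # an ROR below 1 means the unblinded trials showed an exaggerated benefit.
    log_rors.append(pooled_log_or(unblinded) - pooled_log_or(blinded))

mean_ror = math.exp(sum(log_rors) / len(log_rors))
print(f"Average ratio of odds ratios: {mean_ror:.2f}")
```

The headline result above is, in effect, that this kind of average came out close to 1 (no difference) for blinded versus unblinded patients, healthcare providers, and outcome assessors.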

Firstly, it’s important to remember what these authors are not saying. They are not saying they found definitive proof that blinding has no impact – only that they did not find one in their analyses of these particular trials. There were only 18 meta-analyses for the analysis of patient-reported outcomes, for example, and 29 where the healthcare practitioners making outcome assessments were blinded. That means the results could have been different with a different or larger sample, as the authors point out:

In all instances the credible intervals were wide, including both considerable difference and no difference.

 

[Cartoon: caveman]

 

MetaBLIND also wasn’t set up to determine whether blinding was important at the stage of entering people into a trial (allocation concealment). But a post hoc subgroup analysis found “about 10% exaggeration of the odds ratio in trials without adequate concealment of the allocation sequence”.
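
To make that number concrete (my own illustrative arithmetic, with invented odds ratios – not figures from the paper): these comparisons are usually expressed as a ratio of odds ratios, and with a beneficial outcome coded as an odds ratio below 1, a 10% exaggeration looks like this:

$$\mathrm{ROR} = \frac{\mathrm{OR}_{\text{inadequate concealment}}}{\mathrm{OR}_{\text{adequate concealment}}} \approx \frac{0.63}{0.70} = 0.90$$

In other words, on average the treatment looked about 10% more effective in the trials where the allocation sequence wasn’t adequately concealed.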

This study had a lot of strengths. For example, the authors didn’t just scoop up the Cochrane reviewers’ assessments – they re-assessed the trials themselves, and contacted trial authors to try to get more information. It had several weaknesses as well, in addition to the sample size. For example, the authors point out that studying only meta-analyses that included both blinded and unblinded trials might itself have skewed the results: meta-analysts might have been more likely to keep studies combined when their results weren’t too different.

In the accompanying editorial, Aaron Drucker and An-Wen Chan agree with the authors about strengths and limits of the study, stressing some of their points, like:

The very nature of the study restricts its generalisability; the MetaBLIND study could not include subject areas with only blinded trials (eg, interventions and areas of medicine where blinding was deemed essential and the effect of blinding might have been most pronounced) or only non-blinded trials (eg, interventions that by nature were not amenable to blinding).

They recommend studies within trials to assess the potential impact of people’s beliefs. That adds another load to a clinical trial. In another article published alongside MetaBLIND, Rohan Anand and colleagues argue that blinding can already sometimes make trials “unnecessarily complex”, or harm them in other ways. For some trials, blinding could deter people from participating or staying the course, leading to underpowered trials or more drop-outs in one group than another. Blinding in some circumstances can be elaborate and expensive, too:

Money spent on blinding has opportunity costs if it reduces funding to optimise other features that would have more influence on the trial’s robustness such as the training of trial staff, boosting the sample size, and comprehensively measuring outcomes.

So how does MetaBLIND fit in the bigger picture of research on blinding? Its authors wrote:

Meta-epidemiological studies are often used to assess empirically dimensions of bias in randomised trials, but they could themselves be biased.

That means the MetaBLIND study, too, needs to be considered with other research on the question, and the authors suggested it needs to be replicated – which would mean a different sample, and a bigger one, given the findings. However, Jos Kleijnen has argued that the method they used has so many weaknesses, it doesn’t deserve to be repeated.

 

 

The authors cite two systematic reviews of meta-studies on trial characteristics, one from 2015 and one from 2016. Those reviews found only a few studies of blinding other than allocation concealment, and the results were mixed. For example, a 2008 study of 746 trials in 76 meta-analyses found that unblinded trials exaggerated effect sizes for some types of outcomes but not others.

Where does this leave us? The MetaBLIND authors were careful:

The implication seems to be that either blinding is less important (on average) than often believed, that the meta-epidemiological approach is less reliable, or that our findings can, to some extent, be explained by lack of precision.

Drucker and Chan agree that we don’t have a definitive answer:

The relevance of blinding to a study’s risk of bias warrants careful consideration on a trial by trial basis, but concrete recommendations for making this judgment cannot be provided based on current evidence.

I think this strengthens the case for being very careful about how much weight we place on some aspects of risk of bias assessment in systematic reviews. It’s not just that we don’t really know for sure that the same design element is important in each trial, or even most trials. There can also be a lot of variability in how individuals reach those judgments. Dismissing or minimizing the importance of particular trials needs more than superficial, pro-forma justification. That said, the case for the importance of allocation concealment in a randomized trial is holding firm.

 

 

~~~~

 

Update 3 February 2020: Added a mention of Jos Kleijnen’s opinion about this study after his tweet. Thanks Jos! The sentence originally read: 

That means the MetaBLIND study, too, needs to be considered with other research on the question, and it needs to be replicated in a different and larger sample.

The cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny and on Tumblr.)

 

 
