Study Publications Must Be Up-To-Date in Covid Time & That New Hydroxychloroquine Study
When I woke up yesterday morning, I saw that hydroxychloroquine had become a trending topic on Twitter again overnight. And – good grief! – it was author- and media-fueled hype of a problematic observational study, triggering a flurry of “maybe it works after all” speculation.
There’s good reason to say it’s not even worth discussing this one. But the paper and (social) media reactions highlighted a problem that’s not only about this type of study: journal editors need to be doing better at being on top of the evidence in real time when they lob another study at us.
Let’s zero in on the critical importance of these quotes from the study:
Currently there is no known effective therapy or vaccine…
Currently, randomized trials of hydroxychloroquine for treatment and chemoprophylaxis are underway… [4 citations]
Recent observational retrospective studies and randomized trials of hydroxychloroquine have reported variable results… [citations include 3 for randomized trials]
From Samia Arshad & colleagues (July 1, 2020) [notes added]
The most generous thing I can think of to say about these statements is that they are inaccurate and/or incomplete, and misleading. And all of this matters, in several ways, especially given the predictable impact of this study in the media.
The first statement is out of date, because during the peer review period for this manuscript, the UK RECOVERY randomized trial identified the first drug that reduces mortality of people with Covid-19 on oxygen or ventilation (press release, June 16; full results released in a preprint, June 22). As we’ll see later, that was important for several reasons. The version of the Arshad study revised after peer review was submitted to the journal on June 22. It was accepted for publication on June 29, and published on July 1.
The other 2 statements paint a picture of the evidence from trials about hydroxychloroquine that is more than just misleading.
The evidence around hydroxychloroquine and Covid-19 at the time Arshad submitted their manuscript was, indeed, “weak and conflicting” (per a systematic review by Adrian Hernandez & co). But Arshad left out critically important parts of that evidence as it evolved dramatically while the article was going through its editorial processes.
Here’s a timeline of the key events. It’s a lot, but that’s what life is like in Covid time.
- March 6: Trial 1 comparing the drug to standard care, with 30 people (cited by Arshad). Too small for a meaningful result on mortality (see the rough sample-size sketch after this timeline).
- April 10: Trial 2, with 62 people (cited by Arshad). Too small for a meaningful result on mortality.
- April 24: A trial comparing doses of hydroxychloroquine, stopped at 81 people because too many people taking the higher dose of the drug were dying (not cited by Arshad).
- May 4: Trial 3, with 150 people (cited by Arshad). Too small for a meaningful result on mortality.
- May 22: A large study claiming adverse effects of hydroxychloroquine was published by the Lancet. In response, the WHO put a “temporary pause” on the hydroxychloroquine arm of their Solidarity trial for Covid-19 treatments, and other trialists scrambled to review safety in their hydroxychloroquine trials. Soon after, amid concerns about the veracity of the Surgisphere data in the Lancet paper, the WHO announced it would resume the hydroxychloroquine arm. The Lancet paper was retracted on June 5. (Not cited by Arshad.)
- June 3: A trial of hydroxychloroquine to prevent Covid-19 after exposure, with 821 people. Found no benefit. (Not cited by Arshad.)
- June 5: Trial 4: findings of the RECOVERY trial, with 3,674 people, 1,542 of them taking hydroxychloroquine, were announced (not cited by Arshad). The hydroxychloroquine arm was stopped because the drug was having no benefit, despite the associated risk of adverse events. **As of this point, the evidence was no longer weak and conflicting.**
- June 15: The FDA revoked its emergency use authorization of hydroxychloroquine for Covid-19, based on the RECOVERY trial’s results, concerns about harm, and data casting doubt on the mechanism by which hydroxychloroquine had been thought to potentially work. (The revoking of FDA authorization was cited by Arshad, but not the evidence for it.)
- June 16: RECOVERY trial press release on the steroid, dexamethasone, reducing mortality. (Not cited by Arshad.)
- June 17: The WHO stopped the hydroxychloroquine arm of its Solidarity Trial on treatments for Covid-19, referring to the RECOVERY trial and similar results from the Solidarity trial.
- June 20: The NIH stopped its trial of hydroxychloroquine after a review of its results, which also showed no benefit (although no harm).
- June 22: Preprint released of the RECOVERY trial’s finding of some mortality reduction with dexamethasone. (Not cited by Arshad.)
- July 1 (the day the Arshad paper was published): The FDA released its review of safety of hydroxychloroquine in Covid-19, and cautioned against its use. (A few days later, WHO also announced they had some safety concerns from their trial results, publication coming soon.)
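A rough sense of why the small trials above are labelled “too small for a meaningful result on mortality”: with death as the outcome, even a fairly large relative effect takes thousands of participants to detect reliably. Here’s a minimal sketch, using the standard two-proportion sample-size approximation and purely hypothetical numbers (20% mortality without the drug, hoping to detect a 20% relative reduction) – none of this comes from the Arshad data or any particular trial.

```python
from math import ceil
from statistics import NormalDist

def patients_per_arm(p_control, p_treated, alpha=0.05, power=0.8):
    """Normal-approximation sample size for comparing two proportions.

    Hypothetical illustration only: how many patients per arm a trial would
    need to detect a drop in mortality from p_control to p_treated, at the
    usual 5% significance level with 80% power.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
    z_beta = NormalDist().inv_cdf(power)            # ~0.84
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_control - p_treated) ** 2)

# Hypothetical: 20% mortality in the control arm, hoping for a 20% relative
# reduction (to 16%) – roughly 1,400+ patients per arm, ~2,900 in total.
print(patients_per_arm(0.20, 0.16))
```

On assumptions like these, trials of 30 or 62 people could only hope to pick up effects so dramatic they would be hard to miss anyway – which is why it took a trial on the scale of RECOVERY to say anything solid about mortality.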
Findings of this observational study provide crucial data on experience with hydroxychloroquine therapy, providing necessary interim guidance for COVID-19 therapeutic practice…
From Samia Arshad & colleagues (July 1, 2020)
“It’s important to note that in the right settings, this potentially could be a lifesaver for patients,” Dr. Steven Kalkanis, CEO of the Henry Ford Medical Group, said at the news conference.
CNN (July 3, 2020)
Both in the paper and at the press conference when challenged, the position of the Detroit hospitals behind the Arshad paper is basically “we do hydroxychloroquine better.” But the state of the evidence was obscured in the paper. Could Arshad & co, and the journal’s editors, have known about the RECOVERY trial and its implications on June 22, when the manuscript they would accept a week later arrived? They certainly should have known.
If they didn’t – and that seems unlikely, given the massive publicity – it was at best carelessness: the FDA’s revocation of its emergency use authorization includes data from the RECOVERY trial, and the FDA action is noted and cited. The case for them not knowing that the evidence was no longer weak and conflicting relies on believing that they did not read the FDA document they cited – and that no one in the leadership of the hospitals, study, or journal noticed the media firestorm around hydroxychloroquine.
That said, there is also clearly some sheer sloppiness in all this that was accepted by the peer reviewers and editors. Arshad’s paper includes 4 citations for ongoing trials of hydroxychloroquine for treatment or prevention (chemoprophylaxis). Those 4 are, curiously:
- An NIH website that had apparently listed ongoing investigations – but it’s a broken link;
- A trial with 479 planned participants, and another with 620; and
- A letter speculating that hydroxychloroquine might prevent Covid-19, but it does not describe or propose a trial.
I haven’t done a thorough search for relevant information – just the amount of fact-checking I would do if I had been a peer reviewer for the Arshad paper. And I would have stopped at this point, because the picture of what was happening here was so clear. And it had direct bearing on the conclusions that could reasonably be drawn in this study – especially recommending a place for the therapy.
Among all the things wrong with the Arshad study, why am I focusing on their cherry-picking of the evidence, and being out of date? It’s because distorting the public record makes it so much harder for everyone to grapple with the fire-hose of evidence. You want to advance knowledge? Then you need to truly start from what we know. And that changes quickly with Covid-19. Authors, peer reviewers, and journals have to work in Covid time: something with potentially radical implications for a paper can happen late in the editorial process.
Cherry-picking the evidence matters enormously: it distorts the whole research pipeline, from grant and ethics review applications on. And it’s so easy to fact-check.
Leaving aside the issues about this study not putting its findings into even halfway-accurate context, what else was problematic? It wasn’t randomized, or controlled in any way, and that’s a massive limitation. Just take a look at what the journal editors knew were problems when they went ahead and published it in this form: the following list includes some of the points in the editorial by Todd Lee & co that they published alongside.
- Hydroxychloroquine and/or azithromycin was the standard for care in the hospitals in the study, and there were signs the 16% who didn’t receive this treatment were fundamentally different in several ways that bias towards worse outcomes – like possibly not initiating the drugs if people’s prognosis was particularly bad. People not given the drugs were more likely to die than go into intensive care, for example;
- Steroids were used twice as often in the people getting hydroxychloroquine as in the people who didn’t – and yes, the RECOVERY trial’s results are cited in the commentary;
- The authors didn’t take time factors into account, so there was likely to be some immortal time bias (see my explainer for that).
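For anyone who wants a feel for how immortal time bias arises, here’s a minimal simulation sketch – entirely hypothetical numbers, not modelled on the Arshad data. Every simulated patient faces the same daily risk of dying and the “drug” does nothing at all, but because treatment can only be started in people who survive the first couple of days, the treated group ends up with lower mortality anyway.

```python
import random

random.seed(0)

def simulate(n=100_000, daily_death_risk=0.01, treatment_day=3,
             treat_fraction=0.8, follow_up=28):
    """Hypothetical illustration of immortal time bias.

    The 'drug' has no effect: everyone has the same daily risk of death.
    Treatment is only started on treatment_day, in a random 80% of patients
    still alive by then – so anyone who dies earlier can only ever be
    counted as 'untreated'.
    """
    counts = {"treated": [0, 0], "untreated": [0, 0]}   # [deaths, patients]
    for _ in range(n):
        day_of_death = next(
            (day for day in range(1, follow_up + 1)
             if random.random() < daily_death_risk),
            None)
        alive_at_treatment = day_of_death is None or day_of_death >= treatment_day
        treated = alive_at_treatment and random.random() < treat_fraction
        group = "treated" if treated else "untreated"
        counts[group][1] += 1
        if day_of_death is not None:
            counts[group][0] += 1
    return {g: round(deaths / patients, 3) for g, (deaths, patients) in counts.items()}

print(simulate())   # 'treated' mortality comes out lower, with a useless drug
```

With these made-up numbers, mortality comes out at roughly 23% in the “treated” group and roughly 30% in the “untreated” group, purely because early deaths can never land in the treated group – which is why an analysis that ignores when treatment started cannot be trusted.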
I also wonder about the impact of some other factors. They only counted in-hospital mortality, for example, and only outcomes from people’s first admission: 121 (4% of all patients) were re-admitted. Another 0.6% were lost to follow-up – because they were transferred to other hospitals, for example. The upshot? Observational studies for a question like this are problematic: this one was particularly so.
It is, however, very sobering to note that the number of patients in this single observational study would have made a substantive contribution to any randomized controlled trial. While all healthcare providers feel a clinical imperative to offer patients treatment, there was little evidence to justify a hydroxychloroquine protocol at the outset of the pandemic.
Todd Lee & co, editorial accompanying the Arshad paper (July 1, 2020)
That brings us to another aspect of publication responsibility in Covid. Why do editors go ahead and publish papers without another revision, when they get an accompanying editorial like that one? Now that science journals, especially medical journals, are part of the general public discourse, some traditions like that one should be in the firing line.
There are major incentives for journals to publish hype. But just when we need the very best that the science community can deliver, medical journals are doing the opposite of raising the bar. After doing a case-control study of clinical research, Richard Jung and colleagues recently concluded that Covid-19 papers were more likely to have been rushed through, and to be lower in quality and higher in bias, than clinical research papers published in the same journals the year before (preprint). Here’s hoping that’s just a feature of the early stage of the pandemic.
The risk of dying from Covid-19 could, and should, be creeping a little lower now. But it will only be as low as it could be if people can set aside their egos, biases, and conflicts of interests. We should not let journals off the hook for their role in amplifying them.
~~~~
Disclosures: None. I write about Covid-19 studies, but am not involved in clinical research. My recently submitted PhD dissertation is about the impact of shifting evidence on the reliability of systematic reviews in biomedicine.
The cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny and on Tumblr.)
Normally one would wish to wait for results of RCTs, which take an average of 5.5 years (reference on request). In a pandemic, we must weigh the risk of treatment options versus the risk of severe complications and death in the absence of RCTs. If a treatment can prevent infection and/or disease with minimal side effects, it should be explored in the real world. Wringing hands over lost RCTs is like criticizing the menu on the Titanic.
I think you’re demonstrably wrong on all counts, here. It doesn’t matter what your reference says: as the RECOVERY trial has shown, it does not have to take 5.5 years in the middle of an outbreak of an acute infectious disease to get answers.
“If a treatment can prevent infection and/or disease with minimal side effects”: unless it’s a very dramatic effect, you cannot know that it can in fact do that without randomized trials. And if the effect is dramatic, then you would know very quickly, from very small numbers of people, that it worked.
Criticizing the fact that there were not more trials like RECOVERY is not like “criticizing the menu on the Titanic”: it’s like criticizing the Titanic’s operators for thinking they didn’t need to worry about providing an adequate number of lifeboats, because they believed they knew what they were doing, so there was no need to consider that the ship might sink.
I would take that reference; I’d guess the RCTs evaluated were for the (now more common) chronic, first-world diseases which do, in fact, take a long time to study. This is an acute, rapidly moving infectious disease. As we can see from the RECOVERY trial, it doesn’t actually take long to do a proper study in this scenario. It takes a willingness to admit that you don’t know whether something works, even if it’s become a de facto standard of care. Also, the risk of severe complications and death appears (at this point) to be on the side of no hydroxychloroquine; before the RECOVERY trial, I would have said that the risks were about even – and for me, that doesn’t justify using a treatment. A drug that is as likely to kill you as the disease is seems suboptimal to me; all you’re choosing is the method of the patient’s demise, not addressing the question of whether demise will occur.
Great blog, Hilde! Thanks for your painstaking research, which can’t be said of all principal investigators or “pushers” of one treatment over another.
Thanks! (And yes, it is painstaking!)