
From the Lab to Evidence Reviews to Communication (Sicily Conference Part 3)

[Photo: a hazelnut gelato cone in my hand, as I walk down a street among old buildings.]

The Sicily Evidence-Based Healthcare Conference was a treat, and not just because of the glorious location and hospitality. It was big enough to have a spectrum of people and views, and small enough to be highly social. That’s a great way to encourage connections and understanding.

This is the last in my series of posts. In this one, I cover a theme that got me thinking from each of the 3 days of the main conference: improving the reliability of preclinical research, plans for the future of the Cochrane Collaboration, and communicating about evidence.

. . . . .

It’s always enlightening to hear Emily Sena give a talk, so this was a special treat early in the conference. She’s professor of meta-science and translational medicine at the University of Edinburgh, and a key member of the CAMARADES research group for collaborative meta-analysis and review of preclinical research.

Sena described preclinical research as “modelling of human disease in the laboratory.” And translational failure is when a drug that seemed to work in vitro and in animals fails to work in human trials – or when drugs are discarded because they didn’t work in some preclinical research, but might have improved health in humans.

I’ve written before about the many reasons that preclinical research can be a poor guide to what will happen in humans. Sena gave a great example: Multiple Sclerosis. Animal studies begin the day that the disease is induced in the animals. That has no applicability to the human situation, where people generally don’t even get diagnosed till well after symptoms start. That can be years. It’s no wonder the results so often don’t translate to humans.

Sena spoke about what she called “the standardization fallacy.” This, she said, is the belief that you can improve reproducibility in preclinical studies simply by reducing variations between laboratories by standardizing lab conditions, the tests used, and genetics of the animals.

Sena argued this increases the risk of detecting apparent effects that won’t be real, as well as the risk of missing an actual effect. She made a strong case for multicenter animal studies, to allow for variations between conditions and practices at laboratories that you can’t measure or standardize for.
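To make the standardization fallacy concrete, here’s a toy simulation – my own illustration, not from Sena’s talk, and every number in it is made up. It assumes unmeasured lab conditions modify the apparent treatment effect (a lab-by-treatment interaction). A single standardized lab then “detects” its own idiosyncrasies as drug effects, while a multicenter design of the same total size, analyzed across labs, keeps false positives near the nominal rate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2023)

# All numbers below are hypothetical, chosen only for illustration.
TRUE_EFFECT = 0.0         # the treatment truly does nothing
LAB_INTERACTION_SD = 0.5  # how strongly unmeasured lab conditions modify the apparent effect
N_PER_ARM = 24            # total animals per arm, in both designs
N_LABS = 6
N_SIMS = 2000

def single_lab_pvalue():
    # One standardized lab: its idiosyncratic interaction is confounded with treatment.
    shift = rng.normal(0, LAB_INTERACTION_SD)
    control = rng.normal(0, 1, N_PER_ARM)
    treated = rng.normal(TRUE_EFFECT + shift, 1, N_PER_ARM)
    return stats.ttest_ind(treated, control).pvalue

def multicenter_pvalue():
    # Several labs: per-lab effect estimates are averaged, and the
    # between-lab spread feeds into the standard error.
    per_lab_effects = []
    for _ in range(N_LABS):
        shift = rng.normal(0, LAB_INTERACTION_SD)
        control = rng.normal(0, 1, N_PER_ARM // N_LABS)
        treated = rng.normal(TRUE_EFFECT + shift, 1, N_PER_ARM // N_LABS)
        per_lab_effects.append(treated.mean() - control.mean())
    return stats.ttest_1samp(per_lab_effects, 0).pvalue

fp_single = np.mean([single_lab_pvalue() < 0.05 for _ in range(N_SIMS)])
fp_multi = np.mean([multicenter_pvalue() < 0.05 for _ in range(N_SIMS)])
print(f"False-positive rate, one standardized lab: {fp_single:.2f}")  # well above 0.05
print(f"False-positive rate, multicenter design:  {fp_multi:.2f}")   # close to 0.05
```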

We already know a lot about what’s going wrong with preclinical research, Sena said. She had a long list, including some very basic things. But we need to know more about what improvement strategies work. She pointed to a new international project that’s just starting, aiming to test ways of improving reproducibility. It’s called iRISE.

Someone asked from the floor, is preclinical research even worthwhile then? Sena answered that if we improve the quality of the work, and there is still so little impact, then that becomes an important question.

. . . . .

From preclinical research, we now jump far along the spectrum of clinical research to systematic reviews of evidence. Jordi Pardo Pardo, interim chair of the Cochrane Collaboration’s board, spoke to us via video link from Canada in the morning – a ghastly time zone difference.

Upended by the double-whammy of the pandemic and a major funder setback for many of its structures, Cochrane has radically re-organized. You would be forgiven for believing, from the way people usually discuss the Cochrane Collaboration’s 30-year history, that it’s been a long upward trajectory of success after success, with at most a few bumps along the way. With that view, this turn of events might look drastic. But I saw several perilous moments from the inside over the years, and many predictions of Cochrane’s doom, from before it even started. So far, it’s been like the proverbial cat with nine lives. Though given it’s Cochrane, and international, let’s say a cat with a range of 6 to 9 lives!

There’s a website for their re-structuring process, called Future Cochrane, and that slogan was the theme of Pardo’s talk. From March 2020, early in the pandemic, Cochrane’s central focus shifted to evidence synthesis for pandemic-related questions. Pardo said they now see that process as the proof-of-concept for Future Cochrane.

One of the results of Cochrane’s massive Covid pivot was an increase in use of their work. Website usage in 2020 was 30% higher than in 2019. One of the Covid reviews became, he said, “the most talked about review in the history of The Cochrane Library.” This claim is based on Altmetric’s measurement. The review was on ivermectin for Covid, and in 2021, it ended up in the top 50 of their list of highest-scoring research outputs for the year.

The organization is in transition to a new model of review production and publishing. Some of its new structures – like thematic groups instead of the old review groups – will work alongside remaining “old Cochrane” structures, and it seems there will be a lot of pilots trying structures out. In some parts of this sprawling, complex organization, mixtures of old and new will co-exist for a few years. Instead of the editorial process, including peer review, happening at the level of a plethora of topic-specific review groups, “Future Cochrane” reviews will be edited centrally. And there will be a variety of ways a draft could reach that central service.

Here’s hoping this goes smoothly, and improves the variable quality that has always plagued Cochrane reviews. Improvement isn’t a given, though: the old model had strengths, with editors who combined high levels of methodological expertise with strong content knowledge and editorial skill.

Both old and future Cochrane models are quite intricate, so mixing the streams must be a massive headache. Switching to different funding models must be another. Yet another is the move towards open access publication. I parted ways with Cochrane back in 2012 over this issue. So their announcement a while ago that they would transition to open access by 2025 was a big one to me.

How they will do it, Pardo said, is still under discussion, but they don’t favor author-paid models. That was good to hear. I hope an alternative path is becoming clear, with 2025 just around the corner. They have quite a few years left on a 10-year deal with their commercial publisher, though, suggesting there’s a fair bit of confidence about the viability of their direction.

. . . . .

One of my past Cochrane lives was being the founding Coordinating Editor of its review group on communication evidence. And I spoke about that a bit in my plenary talk on the conference’s last day. The topic they gave me was “How to improve patients’ and consumers’ involvement in healthcare.” I couldn’t really give an evidence-based answer to that question, for the same reason I left the review group.

When we started that group, I was very keen: Oh, all the answers I would get on how to be an evidence-based communication practitioner! A large part of my career was writing and editing evidence-based patient information. But after a few years on the review group, I realized that while there was no shortage of randomized trials, systematic reviews of them weren’t going to help me in the foreseeable future. Serious changes were needed in this research field. They still are.

I gave a few examples of the widespread problems that bedevil evidence and evidence syntheses in this field. I started with the quality and transparency of the interventions themselves.

I’ve seen some excellent interventions in randomized trials – but more often, the quality of the writing or other components would never have gotten past my editorial desk. Often, you can’t see exactly what they were testing, even when the intervention isn’t a proprietary one behind paywalls.

That’s not only a problem of communication studies, of course, and there’s guidance for better reporting of interventions in the TIDieR checklist. However, there still aren’t reliable methods for assessing the quality of these interventions in systematic reviews. So you end up wondering, do conflicting trial results mean brochures, say, have mixed effects for unknown reasons – or does variation in the quality of those brochures explain a lot?

Variable quality can be in the writing, or in components such as data presentation. Communication is a complex intervention, and the quality of each part could make a difference. Some info producers use pictograph software for every one of their products, for example. Pictographs, or icon arrays, are those images with, say, 10 or 100 body or face icons, with a color or expression depicting who’s affected and who isn’t. There’s a ton of complexity in this, and we know using them can be problematic for some kinds of data, and in some of the ways they’re presented. Using them for everything could land people in hot water sooner or later. (This review from 2013 is good for understanding why.)
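For readers who haven’t met icon arrays, here’s a minimal sketch of one in Python – my own illustration of the format, not any particular producer’s software; a real patient-facing version needs far more care with layout, ordering, labels, and testing:

```python
import matplotlib.pyplot as plt

def icon_array(n_affected, n_total=100, cols=10):
    """Draw a simple icon array: filled circles for people affected, open circles for the rest."""
    fig, ax = plt.subplots(figsize=(4, 4))
    for i in range(n_total):
        row, col = divmod(i, cols)       # lay the icons out in a grid
        affected = i < n_affected
        ax.scatter(col, -row, s=120,
                   facecolors="tab:red" if affected else "white",
                   edgecolors="tab:red" if affected else "gray")
    ax.set_aspect("equal")
    ax.set_axis_off()
    ax.set_title(f"{n_affected} in {n_total} people affected")
    fig.savefig("icon_array.png", dpi=150)

icon_array(12)  # e.g. a risk of 12 in 100
```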

Another big issue is selection bias. Who is going to agree to be in a study where their comprehension of technical information is going to be tested?

Here’s a quick example to give you an idea of what happens when researchers don’t go to great lengths to avoid the obvious problem here. For a few years, I was following updates of the Cochrane review on patient decision aids – a practice that spread quite widely, because so many people say the evidence for them is strong.

The review doesn’t give an adequate indication of who was in the trials. So I was going to the included trials and extracting data on educational level of participants for a major outcome as a marker. (Participation in decision making, Analysis 5.1.)

I didn’t have time to update my tally for the latest version of the review when I prepared this talk, as there were too many new included trials. I did check the group of 4 trials that carried half the weight of the results, though, and for the 3 of them where I found this information, the results were pretty typical.

Think about what proportion of people in your country, and globally, have university-level education. In these trials, it was 60% to 70% of the participants.

I think some people suffer from trickle-down thinking about this, and that’s a dangerous trap. I don’t believe we know enough about what this selection bias means for applicability of this kind of evidence. What if we are systematically ditching useful interventions because they don’t seem to work, when they would actually make an important difference for the real spread of patients, who are less socially privileged? Or selecting for interventions that can only help patients with university-level education? We do know that the situation can be very different, for a range of reasons, and not just because of education – for example, when a clinician and patient aren’t of the same race. (See this 2023 review on shared decision making with Black patients.)

If you are thinking my example of selection bias could well suffer from my own selection bias, here’s another quick check, which I did when I got home, on some trials that were presented at the Sicily Conference. They’re from the GRADE group. Holger Schünemann presented 3 communication trials, and meta-analyses of them. That evidence is guiding their decisions about communication methods for patients and consumers.

One of the trials doesn’t seem to be published yet. Of the other 2, the rate of university-level education was 60% in a trial in youth, where the age limit was 24 – by that age, 10% of the group already had a master’s or doctoral degree. In the second trial, well over 80% of participants had post-secondary education – 34% had a master’s or doctoral degree. Sigh.

It’s not enough to say these are limitations of studies. We need studies that do better than that, and of course, some do.

I don’t need to end on a gloomy note about communicating evidence, though. Nasreen Jessani from Stellenbosch University in South Africa comes to the rescue here, with an awesome presentation on synthesizing evidence, and communicating it to achieve change.

The work she discussed was done in 5 Sub-Saharan African countries, by the German government-funded Collaboration for Evidence-Based Healthcare and Public Health in Africa (CEBHA+). They did systematic reviews of evidence for low- and middle-income countries on a range of topics, gathered data on local issues and needs, and did other research as needed. Jessani presented projects on non-communicable diseases.

This is a super-simplified explanation of what they did then. Methodological research, evidence, and knowledge translation theory were integrated to develop, implement, and study efforts to bring about high-priority changes. Highly targeted one-page issue briefs based on the evidence reviews were developed for the people who could act to address a problem. It could be a government minister, the head of a hospital department – whoever.

You can see some examples of these here – like, multisectoral structures to assess and implement WHO “Best Buys” population level diabetes and hypertension risk factor programs in South Africa; a physical activities program for Malawi’s Civil Service; making roads in Kampala safer for pedestrians with disabilities; a case for funding genomic research on esophageal cancer in Africans. And here’s one of Jessani & co’s papers on their approach.

That’s what really made this Conference great: seeing people and work as impressive as Sena and Jessani and their projects was just what I needed. Gathering together a selection of the major issues and controversies in evidence-based healthcare for us to chew over and debate was terrific, too. But the biggest treat was leaving Sicily feeling hopeful for the future of evidence synthesis and evidence-based healthcare.

~~~~

Correction: This post had a mistake in a footnote about the results of the Cochrane review on patient decision aids, so I deleted it. Many thanks to Dawn Stacey for alerting me to my error.

Disclosures: I was invited to give one of the keynote presentations at this conference, with travel support from the organizing group (the non-profit GIMBE Foundation). The categories of sponsors accepted for the conference are listed here, and do not include manufacturers of drugs etc. In terms of the evidence “tribes” with a strong presence at this conference, I have a long relationship with the Cochrane Collaboration and the BMJ, and I was a member of the GRADE Working Group for a time long ago. More info here, including my role in stakeholder engagement for Cochrane reviews on ME/CFS and on HPV vaccines. If you’re interested in the ME/CFS issue, there’s more here at Cochrane, including my reports, and here’s my project talk page.

The photo at the top of this post is my own “gelato selfie”, taken in Taormina, Sicily, in October 2023 (CC-BY-SA 4.0).

Discussion
  1. Thanks a lot, Hilda, for your insight; always enriching to read your thoughts on the state of the science and practice in health and social care.
    We have a new funded study of shared decision making in the global south; it will be led by Amédé Gogovor from Togo. Your thoughts will be helpful. SES is important, but also culture, language, context, etc. Also, I would like to highlight reviews by Ali Ben Charif from Iles des Comores as well as from Roberta Coroa from Brazil on the science and practice of scaling; there is a reverse conclusion here: LMICs have a wealth of know-how that HICs do not, and HICs should learn from LMICs. Thank you again and best regards. France

    1. Thanks, France! That’s great news – and I agree. I’ll have a look at those reviews. High income countries have always had a lot to learn from community development in health work, too.

  2. * Analysis 5.1, participation in decision making. In this latest version of the review, the pooled effect lands at worse outcomes for people in the decision aids group (RR 0.68 [0.55 to 0.83]).

    As a point of clarification, these outcomes are for clinician controlled decision making and as such people in the decision aid groups take on a more active role in decision making compared to those in the usual care group; hence, a better outcome for those in the decision aid group. These results have higher certainty in the 2023 in press update that included 209 randomized controlled trials (double the number of trials from 2017). For the 2023 update, clinician controlled decision making was measured in 21 studies involving 4,348 participants (RR 0.72 [0.59, 0.88]) ⊕⊕⊕⊕ high certainty according to the GRADE rating. The 2017 version is available on our research website and as soon as the 2023 version is published we will add it to our website.

    1. Thanks, Dawn – whether people in the decision aids groups fared better or not wasn’t what my point was about. I’ve corrected the footnote affected by this. Given how little is explained there, it takes some mental gymnastics to work out that the effect being larger on the “usual care” side means the result is in favor of decision aids.

  3. This sounds like a wonderful conference; it sounds like it touched on many areas of great interest to me. Thank you so much for the summaries and discussion of particular issues.

