
PLOS BLOGS Absolutely Maybe

4 Ways This Tweet Shows How Public Health Messaging Can Go Off the Rails

When I saw the tweet, it felt like watching pinball for misleading public health hype – ding! ding! ding! – hitting target after target in just one sentence and data image. Yet, the expert who pushed it into my timeline apparently couldn’t see any of that. Yikes!

The tweet came from a highly influential expert with half a million followers, Scott Gottlieb, but it hasn’t gone wildly viral at this point – when I posted this, it had more than 300 retweets and over 1,000 likes. It’s a relatively low-heat one, which I think makes it ideal for dissecting features that recur in problematic public health messaging.

So let’s break it down. The tweet has 3 parts: an important public health message about addressing the consequences of healthcare neglected in the pandemic, book-ended by a one-sentence data claim and a graph of survival rates. The issues I want to highlight are in those book-ends.

Body of tweet marked up to show its sections - quotes of the relevant sections follow below

1. A dramatic data claim for a false premise

Overwhelming data show that early detection of cancer is the key to long term survival.

(Emphasis added)

As wonderful as that would be, there is no “key” to living longer for all cancers – though fortunately, some types of cancers aren’t particularly life-threatening, very early cancers often don’t progress anyway, and screening for some cancers can definitely make a critical difference. Of course, the implication here is that the data in the graph represent the “overwhelming data”, but it doesn’t, and we’ll get to why soon.

What do the data say about the impact of early detection on cancer survival? Here's a quick way to get a rough idea.

According to the U.S. National Cancer Institute, there are more than 100 types of cancer. Not every one has a method that can detect it early, and, distressingly, not every one has tolerable treatment options that can materially lengthen the lives of most people who get it.

The U.S. Preventive Services Task Force (USPSTF) is the body that assesses the evidence for screening to detect diseases early. It has 13 sets of current cancer screening recommendations. They cover more than 13 cancers, but that's still only a small minority. And some of those are like this one: "Recommendation: Do not screen for thyroid cancer."

The Task Force assigns a grade to its individual recommendations that incorporates how overwhelming or not the data are, from A (high certainty of substantial benefit) down to D (moderate to high certainty of no benefit, or harms that outweigh benefits), and I for not enough evidence. The sets of recommendations carry 20 of these grades: only 2 of those 20 score an A (colorectal cancer and cervical cancer), whereas there are 7 D's and 4 I's.

That's not "overwhelming data" for the benefit of early detection – strong evidence of benefit exists only for a tiny number of cancers. Rather, the evidence supports the classic statement about screening: "All screening programmes do harm; some do good as well". Why? For several reasons – especially the burden of anxiety and potentially harmful treatments after diagnosing cancers that were never going to cause any harm if undetected.

Cartoon about the effects of being an early bird are over-rated
From my 2013 post “What’s So Good About Early Anyway?”

2. Misleading outcome measure that’s riddled with bias

Overwhelming data show that early detection of cancer is the key to long term survival.

(Emphasis added)

The way “long term survival” is used, you’d be forgiven for thinking it means you will live longer if your cancer is detected than you would have if you didn’t. But that’s not what it means. Which is why it leads so many people astray. Welcome to the phenomenon technically known as lead-time bias!

Here’s the thing. “Long term survival” is measured from the time of diagnosis, not the time you got sick (which is generally not known). And if you drag the needle of diagnosis earlier, the length of time you survive is longer, whether or not the length of your life changes by even a minute.

Below is a rough sketch of what this means for a person who developed a cancer at 40 and died at 80. If they were diagnosed when they were 76, their disease survival time was 4 years. If they were diagnosed any time before they were 75, then their survival time was longer than 5 years – even if the length of their life was the same.
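The arithmetic behind that sketch can be written out in a few lines. This is a toy illustration using the hypothetical person from the text, not real data – it assumes, as the example does, that the age at death is fixed at 80 no matter when the diagnosis happens:

```python
# Toy illustration of lead-time bias, following the example in the text:
# the cancer begins at age 40, and death comes at age 80 regardless of
# when the diagnosis is made.
AGE_AT_DEATH = 80

def survival_years(age_at_diagnosis):
    """Survival is measured from diagnosis, not from when the disease began."""
    return AGE_AT_DEATH - age_at_diagnosis

def is_five_year_survivor(age_at_diagnosis):
    """The common '5-year survival' statistic, for this one person."""
    return survival_years(age_at_diagnosis) >= 5

# Diagnosed at 76: 4 years of recorded survival -- not a "5-year survivor".
print(survival_years(76), is_five_year_survivor(76))
# Diagnosed earlier, at 74: 6 years of recorded survival -- now a
# "5-year survivor", even though their lifespan didn't change by a minute.
print(survival_years(74), is_five_year_survivor(74))
```

Moving the diagnosis earlier flips the 5-year survival statistic from "failure" to "success" without adding a single day of life – which is exactly why survival measured from diagnosis can't, on its own, tell you whether early detection helps.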

If you want to know whether screening increases the length and/or quality of people’s lives, you measure that in other ways – for example, what proportion of people died of the cancer. (If people live longer with a disease, more of them die of something else.) Establishing whether or not a screening program does more good than harm involves a chain of evidence, considering issues like how effective screening is at picking who has the condition without over-diagnosing it, and treatment effectiveness.

Sketch of the chunk of an 80-year-old's life before and after a diagnosis - explained in text

3. Using data visualization to make a claim more convincing

EXHIBIT 1: U.S. 5-year survival rates for localized vs. metastatic cancer

It sure is a striking image: huge differences in survival rates for 11 types of cancer for people whose cancers don’t spread versus those that do (metastatic). While that in itself says nothing about whether early detection shifts the odds, including it implies it’s some of that “overwhelming data”. Indeed, screening for some of those cancers, like ovarian and pancreatic, got D grades from USPSTF for being proven to either not help, or to actively harm people.

I don’t know if any people assumed that the stark differences in those bars on that chart indicated the risk of dying “with versus without” early detection. I don’t know how many people have the specific literacy needed to see that graph doesn’t prove anything – other than that cancer spreading is bad news. And I haven’t kept up with the evidence on the impact of whacking in an irrelevant graph when you’re trying to convince people of something. So I don’t know how much influence that graph is likely to have had on people’s impressions.

But I do know it doesn’t support the dramatic data claim in that tweet. And it’s an example of why critical literacy of data viz matters: a message isn’t more reliable just because there’s an impressive-looking graph with a scientific attribution.

4. Reliant on an unverifiable source

Source: Nature magazine, Bernstein analysis

There was no link – just that attribution. I could not locate the source, though I wanted to. I expected to, as well – I’m pretty good at this – and I really tried. For nearly half an hour. For example, one of the searches I did was in PubMed for every article in Nature with a Bernstein as first author or last author – no joy there. I think it’s fair to say the data source is not specified enough to regard it as verifiable.

Linking to, or at least adequately and accurately identifying the source of, data is truly basic. Verifiability of scientific claims is central to science. Science communication should be the same. But vague sourcing or no source at all is a plague in traditional and social media.

Which brings us to the spread of this kind of easily-refutable message. The effect on credibility isn’t the only reason hype is problematic, of course. Still, in the contest for trust, valid messaging is vital. Claims that are easy to debunk are great targets for those who want to sow distrust in authorities. When those of us with expertise jump on this kind of obviously BS bandwagon, the respect and credibility people have for us can take a meaningful hit.

We need to learn from snap errors of judgment like this – we all make them – and then remember to slow down and be sure before jumping on board. This episode is a good reminder of one of the most common thinking threats we face:

authoritative person +

message-I-believe-in +

data viz =

extreme confirmation bias risk

Cartoon public service announcement about confirmation bias

~~~~

Disclosures: I have written several blog posts critical of screening and communication about screening, and I have led the production of national public patient information on understanding evidence about screening, and on specific health screening programs. In 1998 I planned to do a systematic review with colleagues on improving understanding and minimizing psychological impact of screening: it wasn’t completed, so the protocol was retracted. I was a co-author of a systematic review on treatment of gestational diabetes as part of investigating the chain of evidence for screening for that condition, for the Institute of Quality and Efficiency in Health Care in Germany (Horvath 2010). My personal stance on specific screening programs varies, depending on which disease and what evidence (eg critical of Alzheimer disease screening, less critical than many of breast cancer screening). More about me.

The cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny.)
