“When you see the term ‘preprint’ in a scientific news article, what do you interpret it to mean?” That’s the key question…
More Pandemic Research on Public Trust and Scientists
I’m glad researchers didn’t waste this chance to study public trust while attention to scientists was so intense – and especially since many scientists were making high-profile errors of judgment. Those of us who have long argued that scientists should be more humble about what’s known, describe uncertainties, and be open about mistakes didn’t have much empirical data to guide us.
Recently I blogged about an experiment on communicating scientific uncertainty as a hedge against loss of trust when there’s new evidence that changes thinking (Dries 2024). Now there’s another interesting one. It comes from Ben Seyd and colleagues in the UK and US (Seyd 2024). They used a conjoint survey experiment to test how heavily various factors affect public trust in scientists in those two countries.
Conjoint surveys give people a series of different scenarios with varying attributes, and ask which they would choose. The aim is to draw out the value people place on those attributes. In this case, the researchers were testing how things like competence and perceived independence from political influence stack up against each other when people decide who is trustworthy. The participants were about 1,500 adults from each country, weighted to match the general population’s proportions of age, gender, income, working status, and geographical region.
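For readers unfamiliar with the method, here’s a minimal sketch of how a conjoint experiment can be simulated and summarized. Everything below – the attribute levels, the utility weights, the respondent model – is invented for illustration; none of it comes from Seyd and colleagues’ actual design or data. The sketch generates pairs of hypothetical scientist profiles, has a noisy simulated respondent pick one, and computes each attribute level’s marginal mean: the share of times a profile carrying that level was chosen.

```python
import random

# Hypothetical attribute levels for a two-profile choice task
# (illustrative only; not the levels used in the actual study).
ATTRIBUTES = {
    "competence": ["low", "high"],
    "transparency": ["not transparent", "fully transparent"],
    "independence": ["adjusted to politicians' views", "driven by science alone"],
}

# Assumed utility weights a respondent might place on each level (made up).
WEIGHTS = {
    "low": 0.0, "high": 1.2,
    "not transparent": 0.0, "fully transparent": 0.4,
    "adjusted to politicians' views": 0.0, "driven by science alone": 0.8,
}

def random_profile(rng):
    """Draw one scientist profile: a random level for each attribute."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def utility(profile, rng):
    """Sum of level weights plus Gaussian noise, standing in for a respondent's judgment."""
    return sum(WEIGHTS[level] for level in profile.values()) + rng.gauss(0, 1)

def simulate(n_tasks=20000, seed=1):
    rng = random.Random(seed)
    chosen = {level: 0 for levels in ATTRIBUTES.values() for level in levels}
    shown = dict(chosen)
    for _ in range(n_tasks):
        a, b = random_profile(rng), random_profile(rng)
        winner = a if utility(a, rng) >= utility(b, rng) else b
        for profile in (a, b):          # every displayed level counts as "shown"
            for level in profile.values():
                shown[level] += 1
        for level in winner.values():   # the chosen profile's levels count as "chosen"
            chosen[level] += 1
    # Marginal mean: probability a profile is chosen, given it carries this level.
    return {level: chosen[level] / shown[level] for level in chosen}

marginal_means = simulate()
```

In a forced choice between two profiles, the marginal means for the two levels of any attribute average to roughly 0.5, which is why paired figures like the paper’s 0.61 vs. 0.40 for independence can be read as a gap in the probability of being the trusted one.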
The attributes of scientists they got people to weigh up were competence, transparency, representativeness, benevolence, honesty, communication, independence, and values. As usual for this type of study, competence ended up as the strongest influence on trust. But before we get to that, I’ll mention a few other points that emerged strongly. The results from the two countries were similar, by the way – except that holding right-wing political views was associated with less trust in science in the US, but not the UK. (Though I think that might be more widespread than this one survey can address – see for example another study the authors cite from Germany by Bromme and co.)
Perceived integrity had an impact on which scientists people said they would trust: “Thus, a scientist presented as being fully transparent attracts a 10 percentage-point higher level of trust than a scientist presented as not transparent at all. A similar effect on trust is found for honesty; a scientist presented as always admitting to mistakes is 12 percentage points more trusted than one who rarely admits to such mistakes.”
Another factor making an impact was independence from partisan political considerations: “Where scientific decisions are presented as being driven by the science alone and as independent from political considerations, scientists attract a significantly higher level of trust (marginal mean: 0.61; 95% CIs (0.59, 0.62)) than where those decisions are presented as adjusted to reflect politicians’ views (marginal mean: 0.40; 95% CIs (0.38, 0.41)).”
This, argued Seyd and colleagues, was a critical issue in the pandemic, and scientists had to walk a tightrope: they needed to get close to political decision-making to influence it, but risked trust in their independence when they did. “[T]rust in scientists is lower where partisan considerations are seen to interfere with scientific decisions,” Seyd wrote.
This study leaves a trail of questions. Some of them come from the inevitable drawbacks of considering hypothetical decisions alone: even when a study’s scenarios are enriched with multiple attributes, they’re still far less complex than real life. What people believe about themselves (and therefore say) can differ from what their behavior suggests.
I struggled with that when, for example, Seyd concluded that a scientist holding positions similar to respondents’ own on questions like lockdown wasn’t strongly associated with trust. Hm. We see people flock in enormous numbers to partisan researchers all the time. Confirmation bias seems to be almost all-pervasive. Is the survey result a sign that people see themselves as less partisan and less biased than they are, or is my perception skewed by unusual behavior?
Seyd and colleagues reported: “When it comes to trust in scientists, we have pointed to the large difference (of 27 percentage points) associated with presenting scientific work as being of either low or high quality.” In theory, sure, but how do people decide what work is low or high quality? The issue of perceived competence is key, too, but how do people determine this?
For some with strongly held views, it seems as though competence is determined by purity tests: “If they’re not on my side of specific points, then they are incompetent. Case closed.” That’s perhaps just an extreme version of heuristics that we all have, to some degree. Most of us surely have some version of determining competence based on someone’s conclusions about something we already know. I’d like to see more research digging into the heuristics of perceptions of competence.
Seyd cites another article for background (Besley 2021). Besley writes about “competence-based components (e.g., expertise, knowledge, qualifications and intelligence).” Another of the studies they cite points to a classic heuristic – trusting medical practitioners from academic medical centers (Purvis 2021). What happens when these people strongly disagree, though? We’re seeing some research on this from climate science, but there is massive scientific consensus on key issues in that field.
We saw people operating outside their specialized fields still gaining massive influence in the pandemic, and highly partisan or contrarian scientists, too. Rapid sensational takes were a path to amassing huge followings. That those takes were so often wrong doesn’t seem to lessen many people’s appetite for the next take, and that’s frustrating to watch. Reading this paper, though, reminded me that we really need solid research into more representative slabs of populations to put things in perspective. I hope there’s lots more on the way.
I’ve added a tag of my posts on trust to help me keep track as more studies emerge.
You can keep up with my work at my newsletter, Living With Evidence. And I’m active on Mastodon: @hildabast@mastodon.online
~~~~
The cartoon is my own (CC BY-NC-ND license). (More cartoons at Statistically Funny.)