
Experiments on Naming Authors in Peer Review – Now With a New Kid on the Block

[Cartoon: finding peer reviewers with no conflict of interest about peer review]

It’s been 6 years since the last results arrived from a randomized trial of revealing or concealing authors’ names for journal peer review. And from the reception to this month’s new arrival, you’d be forgiven for thinking it was a real game-changer. According to a report in Science, for example, it’s “incredible” and “unusually robust”. I don’t think it’s either of those things.

Before we start digging in, a couple of things I think are critical. First, an effect on peer reviewers’ opinions isn’t the same thing as an effect on publication bias. Even in the rare event that all peer reviewers agree a manuscript should be rejected, editors can decide otherwise. And they know who the authors are. They can be swayed by bias, or they can simply want to publish an attention-getting article.

Second, the magnitude of the effect matters. Advantages need to outweigh disadvantages, both for individual scientists and for the quality of papers. Lack of transparency has costs too, including losing the potential to detect undisclosed conflicts of interest.

With that in mind, let’s look at the new trial. The authors are Jürgen Huber and colleagues, and there’s a preprint. They took a real, unpublished manuscript by 2 economists. One is a Nobel prize winner from a US university, described in Science as having a name that “is ‘American-sounding’ and he is white” (Vernon L. Smith), while the other is an early career researcher who is “a citizen of Niger and dark skinned” (Sabiou Inoua). So renown, seniority, and race are all in the mix.

Huber and his colleagues invited a large number of potential peer reviewers to review the pair’s manuscript for an economics journal, letting them know it was part of a study. The invitees had been randomized to emails with no named authors, or with either Smith or Inoua named (as corresponding author): no group got the full authorship details. Invitees were more likely to agree to review the manuscript attributed to the Nobel laureate, and least likely to agree to review the one by his colleague.

A small proportion went on to actually peer review, with the ones who had received the anonymized email randomized again among the 3 naming options for the full paper – so only a small proportion of peer reviewers weren’t given any author’s name. And again, Smith being named made a major difference in comparison to Inoua, who didn’t fare well against Anonymous, either. The disadvantage faced by Inoua’s manuscript wasn’t across the board, though. Those who had agreed to review the manuscript with him as corresponding author were, wrote Huber, “milder in their judgements” of it than reviewers who had received the anonymous invitation. Their recommendations were similar – e.g. 52% recommended the manuscript be rejected compared with 48% of those not given any name.
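To make that two-stage design concrete, here’s a minimal sketch of the allocation as I understand it from the description above. Everything in it – the arm labels, the pool size, and the stage_one/stage_two helpers – is my own illustration, not the trial’s actual procedure or code:

```python
import random

ARMS = ["no author named", "Smith named", "Inoua named"]

def stage_one(invitees):
    """Randomize review invitations across the three naming arms."""
    return {person: random.choice(ARMS) for person in invitees}

def stage_two(invitation_arm):
    """Reviewers invited with no author named are randomized again
    among the three naming options for the full paper; everyone else
    keeps the name (or lack of one) they were invited with."""
    if invitation_arm == "no author named":
        return random.choice(ARMS)
    return invitation_arm

# Placeholder pool size; in the trial, only the minority who agreed
# to review went on to the second stage.
invitees = [f"reviewer_{i}" for i in range(3000)]
invitation_arms = stage_one(invitees)
full_paper_arms = {p: stage_two(arm) for p, arm in invitation_arms.items()}
```

One consequence of this design, as the post goes on to discuss, is that only the subgroup that started with an anonymized invitation gives a clean three-way comparison at the review stage.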

The authors focus on the results for only the group of peer reviewers who originally received an anonymized email invitation (313 of the total 534 reviewers). Those rejection rates went from 23% (for Smith) to 48% (Anonymous) to 65% (Inoua).

Could this manuscript be an inherently divisive one? We can’t know, because no evaluation of the manuscript by methodological and content experts was reported, and the expertise of the peer reviewers for the particular subject matter wasn’t reported either. If this manuscript is an extremely unusual example – a divisive study, with an exceptionally renowned and trusted author – then it wouldn’t be a good way for us to get a handle on the potential effects of concealing authors’ names on the regular. And of course, piling hundreds of peer reviewers onto a single manuscript doesn’t reflect the normal situation, so it can’t give us a sense of the magnitude of any impact in practice.

Here’s what Huber and his colleagues conclude in their abstract:

“Our findings complement and extend earlier results on double-anonymized vs. single-anonymized review (Peters and Ceci, 1982; Blank, 1991; Cox et al., 1993; Okike et al., 2016; Tomkins et al., 2017; Card and Della Vigna, 2020) and strongly suggest that double-anonymization is a minimum requirement for an unbiased review process.”

Hmm. That string of citations looks impressive, but 1 is a review, not the results of a further study, and only 3 of the other 5 are studies of revealing or not revealing authors’ names to peer reviewers (Blank, Okike, Tomkins). Only 2 of those are about journal peer review – the Tomkins paper is about conference submissions. (See Stelmakh 2019 for a critique of that study.) And of those 2, only 1 is a randomized trial (Okike): the one by Blank uses alternation for allocation, not randomization. Meanwhile, there are 6 additional randomized trials of revealing/concealing authors’ names in journal peer review that they did not cite, as well as at least 1 more of full papers for conference submissions – and more for abstract submissions to conferences, too.

So what difference does this trial make to the body of randomized evidence for revealing/concealing authors’ names for journal peer review?

There’s a lot of variation in what outcomes were studied in the 9 trials, as well as in other aspects of study design and in their scientific quality. There’s a big difference, too, in how much effort went into concealing authors – from just removing identifying info from the top and tail of articles as is common practice at journals that say their peer review is “anonymized”, to combing the text and redacting clues to identity.

However, there’s not a lot of variation on the question of effect: there was mostly no impact on the outcomes measured (4 trials), or only a small effect (3 trials).

The 9 randomized trials of revealing or concealing authors’ names in peer review

Randomized trial    Field       Journals       Manuscripts   Peer reviewers
Alam 2011           Medicine    1              40            20
Borsuk 2009         Biology     Hypothetical   1             989*
Fisher 1994         Medicine    1              57            220
Godlee 1998         Medicine    1              1             221
Huber 2022          Economics   1              1             534
Justice 1998        Medicine    5              118           >200
McNutt 1990         Medicine    1              127           252
Okike 2016          Medicine    1              1             119
Van Rooyen 1998     Medicine    1              467           618

* Under- and post-graduate students

The 4 trials that found no effect include the largest so far (Van Rooyen 1998), and the only trial that included a large number of manuscripts at multiple journals (Justice 1998). Both had superficial masking of authors’ identities.

In the 5 trials that found some effect, the effect was small in 3 of them. In 2 of those, there were small differences, but only among reviewers who hadn’t guessed who the authors were (Godlee 1998, McNutt 1990). Concealment didn’t only fail because authorship identity was given away in the text: people can often recognize others in their field, especially if it’s a small world. In one of those 2 studies, the difference was that reviewers were more critical of a study’s methods; in the other, they were more likely to recommend rejection. But there were more outcomes with no effect in those trials.

In the third trial that found a small effect (Fisher 1994), the group not given the authors’ names leaned more towards recommending rejection than acceptance with major revisions. Editors’ final decisions were reported in this trial, and the editors rejected more manuscripts than either of the trial groups recommended: 47% didn’t make it into the journal, while the rejection rates recommended by the 2 peer reviewer groups were 21% and 30%. Fisher and colleagues speculated that editors and the group with the authors’ names revealed may have shared a bias towards authors with more previous publications.

What about the 2 that found a larger effect? The new Huber trial concluded there was a very large effect, as we’ve already seen. The other was Okike 2016. It tested peer review of a single article, too – a fabricated one, credited to 2 prestigious authors from prestigious institutions. When peer reviewers were shown these authors, they were more favorable toward the manuscript, and more likely to recommend accepting it: 87% vs 68% (relative risk 1.28 [CI 1.06-1.39]). That’s not as big a difference as the Nobel laureate versus Anonymous, but it’s still very high.
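As a quick check on that relative risk: it’s simply the acceptance rate in one group divided by the acceptance rate in the other, using the 2 percentages reported above.

```latex
% Relative risk of a reviewer recommending acceptance (Okike 2016):
\[
\mathrm{RR}
  = \frac{\text{acceptance rate, authors shown}}
         {\text{acceptance rate, authors concealed}}
  = \frac{0.87}{0.68}
  \approx 1.28
\]
```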

Where does this leave us? Back in 2015 – before the last couple of trials – I wrote that I didn’t think masking authors’ names from peer reviewers had been shown to reduce biased rejection of their articles, or improve the quality of published articles. Trying to conceal authors often didn’t work, I wrote, and doing it didn’t seem to be a powerful mechanism for reducing editorial bias. After catching up on the recent research, I still think that.

I would like to see a thorough and objective systematic review of a range of study types, though. But now might not be the best time to do that review. We shouldn’t have to wait 6 years to hear about the results of the next new kid on the block, because a trial from one of the British Ecological Society’s journals should be around the corner. The editors of Functional Ecology began a randomized trial in late 2019. For 2 years, every author submitting a manuscript to the journal had to provide a version with their identities thoroughly concealed. That version was then randomized to peer review with either only the authors’ names concealed, or with both authors’ and reviewers’ names concealed. Given the size of the journal, this will be the biggest trial of thorough blinding in real editorial processes yet. Fingers crossed this one actually moves us forward.

~~~~

[Cartoon: things we could know]

More on peer review at Absolutely Maybe:

Peer review research roundups

All posts tagged “Peer Review”

Disclosures: I’ve had a variety of editorial roles at multiple journals across the years, including being on the editorial boards of PLOS Medicine and Drugs and Therapeutics Bulletin for a time, and on PLOS ONE’s human ethics advisory group. I do some research on post-publication peer review, subsequent to my previous role as Editor-in-Chief of PubMed Commons (a discontinued post-publication commenting system for PubMed). Several of the trials in this post were run at BMJ, a journal with which I’ve had a long association, including recently occasionally blogging for them, and having contributed a chapter to their book, Peer Review in Health Sciences (2nd edition, 2003, edited by Tom Jefferson and Fiona Godlee).

The cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny.)
