
5 Things We Learned About Journal Peer Review in 2021

This post is part of a series that I started in 2019, recapping the scientific evidence on journal peer review up to that point. Peer review at journals is a critical part of science, but we’re spectacularly unscientific about it – and progress on addressing all the unanswered questions about it is still painfully slow.

Take biomedicine, where most of the experimental research on peer review has been done. A systematic review published in 2021 found only 3* more trials than a systematic review whose search for studies was run 5 years earlier. Across the whole of science, we can’t even count on each year bringing us a solid trial that advances knowledge enough to improve journal practices.

Of course, trials aren’t the only way to advance understanding of peer review. Other forms of research can be ideal, or close to it, for answering many questions. And we are getting useful contributions every year. Here’s my pick of 2021’s top 5.

* Those 3 trials: A 2016 trial report on single-blind versus double-blind peer review and author prestige; a trial on revealing conflicts of interest from 2019 (included in my 2019 “5 things we learned” post); and a 2020 trial report on giving invitees 1 versus 3 days to respond to the request to peer review.

[Cartoon: journal steampunk hype machine]

1. Generic advice to authors about how to reduce spin in the conclusions of their abstracts doesn’t seem to deter most hype.

[Randomized trial]

I wrote a post on science spin in 2019, and on how incredibly hard it is to get authors, journals, universities, and everyone else along the communication chain to stop it. So it’s not surprising when something doesn’t work. The intervention here was a set of instructions to authors on how to reduce 4 types of spin in the conclusions of their manuscripts’ abstracts. The instructions accompanied the peer reviews. The amount of spin remaining in the conclusions of the re-submitted manuscripts was assessed by 2 blinded researchers.

The study was done on manuscripts at BMJ Open, with 184 manuscripts randomly selected for the study. Of those, 108 were sent for revision after peer review, and those were randomized to get the instructions or not.

There was less spin in the conclusions in the intervention group, but the difference wasn’t much (57% versus 63%), and it’s far from certain that it was a real effect. The authors’ conclusions:

These short instructions to authors did not have a statistically significant effect on reducing spin in revised abstract conclusions, and based on the confidence interval, the existence of a large effect can be excluded. Other interventions to reduce spin in reports of original research should be evaluated.

Mona Ghannad and colleagues (2021). A randomized trial of an editorial intervention to reduce spin in the abstract’s conclusion of manuscripts showed no significant effect.

2. Spin in final research reports to a funding agency was lower after peer review, but the same spin was often back in journal publications of that research.

[Observational study of spin in a funding agency’s projects and subsequent journal publications]

More on the persistence of the urge to hype your own work, unfortunately. This was an analysis of the final research reports for 64 projects submitted to the funder, the Patient-Centered Outcomes Research Institute (PCORI) in the US. Peer reviewers and/or editors identified spin in the overwhelming majority of reports – 86% – and authors mostly fixed the problems pointed out to them. But when the authors submitted articles on their research to journals, the spin could be back – the same problems appeared in 40% of published articles. Sigh.

Evan Mayo-Wilson and colleagues (2021). Peer review reduces spin in PCORI research projects.

[Cartoon: editor scribbling]

3. What peer reviewers write to editors behind the authors’ backs might not be all that different to what they write “to” the authors – in one journal, at least.

[Content analysis]

I don’t recall seeing a comparison of confidential comments to editors with the peer review reports visible to authors before, so this was fascinating – and it would be reassuring if it turns out to be true at journals generally. It’s a study of the section in an online review system where peer reviewers can add comments that aren’t for the authors’ eyes – not of emails sent to the editors.

It’s for a health profession education journal, Perspectives on Medical Education. The authors analyzed the peer review reports for all new research, review, or “show and tell” manuscripts submitted between January 2019 and August 2020 – 358 reviews of 168 manuscripts. The researchers assessed content and tone, as well as the concordance between what reviewers wrote to the authors and to the editors when they recommended rejection.

Nearly half of the reviews included no confidential comments to the editor. From the authors’ conclusions:

Most comments included a summary or rationale for the reviewer’s overall evaluation of the manuscript and carried a critical or constructive tone… [I]n many cases lack of alignment in content reflected concerns unrelated to the manuscript or circumstances the reviewer felt important to call to the editors’ attention. These findings suggest opportunities for further investigation and discussion about the pros and cons of retaining or clarifying the purpose of confidential comments to the editors as part of peer review.

Bridget O’Brien and colleagues (2021). Transparency in peer review: Exploring the content and tone of reviewers’ confidential comments to editors.

4. Peer reviews for journals may be soaking up somewhere around 130 million hours or 15 thousand years of person-time annually as of 2020 – which could be worth more than a couple of billion US dollars globally.

[Bibliographic and economic estimates]

The authors of this article have gone to great lengths to come up with a reasonable estimate. Still, as they stress themselves, there is an awful lot of uncertainty along the way for every input.

For example, their final calculation assumes 3 reviews per accepted manuscript (counting re-submissions), and 2 reviews per rejected one, with 6 hours spent on each review. That time is based on what people reported in a couple of surveys.
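To see how those assumptions combine, here’s a minimal back-of-the-envelope sketch in Python. The 6-hour, 3-review, and 2-review figures are the article’s; the annual manuscript counts are round placeholder numbers I’ve picked purely for illustration – they are not the inputs Aczel and colleagues used, though counts in this ballpark land near the headline figure.

```python
# Back-of-the-envelope version of the reviewer-time estimate.
# Reviews-per-manuscript and hours-per-review are the article's assumptions;
# the manuscript counts are made-up placeholders, NOT the paper's inputs.

HOURS_PER_REVIEW = 6       # survey-based average used by the authors
REVIEWS_PER_ACCEPTED = 3   # counting reviews of earlier submissions
REVIEWS_PER_REJECTED = 2

accepted_per_year = 3_000_000   # hypothetical count
rejected_per_year = 6_000_000   # hypothetical count

total_reviews = (accepted_per_year * REVIEWS_PER_ACCEPTED
                 + rejected_per_year * REVIEWS_PER_REJECTED)
total_hours = total_reviews * HOURS_PER_REVIEW

# Express the total as continuous person-years (24 h/day, 365.25 days/year).
person_years = total_hours / (24 * 365.25)

print(f"{total_reviews:,} reviews/year -> {total_hours:,} hours "
      f"(~{person_years:,.0f} person-years)")
# -> 21,000,000 reviews/year -> 126,000,000 hours (~14,374 person-years)
```

Every input scales the total proportionally, which is why the uncertainty the authors flag matters so much – halving the hours per review halves the headline estimate.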

I very much doubt that the average peer reviewer spends 6 hours on every manuscript. I’ve seen plenty of peer review reports that were a few lines long and where I’m not convinced the people had even truly read the manuscript in question! On the other hand, many people spend considerably more than that.

This article is an interesting overview of the issue, and its authors certainly establish that the workload is enormous – and that the potential for saving time and money by sharing peer review reports could be huge, too. If peer reviews were truly portable, for example, the authors speculate:

Should the reviews of a previous submission be available to the journal of the new submission, reviewing time could be substantially reduced (presuming that the quality of review does not differ between journals – and it very likely does), but unfortunately this is not common practice. If we assume that the “passed on” or open reviews would reduce the requirements by one review per manuscript, then approx. 28 M hours (of our 85 M hour total estimate) could be saved annually.

Balazs Aczel and colleagues (2021). A billion-dollar donation: estimating the cost of researchers’ time spent on peer review.

5. A substantial proportion of critical peer review comments on rejected manuscripts are resolved when the articles are eventually published somewhere else – but most specific suggestions to improve the work fall by the wayside.

[Content analysis of rejections from one journal and comparison of the manuscripts and eventual publications]

This one is based on an analysis of the fate of 250 randomly selected manuscripts that received rejection letters from an orthopedic journal in 2012. (The journal had rejected 1,164 clinical manuscripts – I couldn’t find the name of the journal in the article, though.)

The authors were able to find 200 of them published in other journals by July 2018. They classified the issues raised in the original peer reviews as “actionable” or not – an actionable item being, they wrote, “a clear and unambiguous instruction to the authors where they could improve the body of the manuscript, tables, or figures, with a verifiable end point”. The action had to be a matter of substance: pointing out typos, or comments about style or grammar, didn’t count. Beyond that, though, the authors didn’t try to rate whether or not a suggestion was right or important – and I don’t assume all the actions should have been taken.

There were 609 of these proposed actions across the 200 manuscripts that were published, and 34% of the suggestions had been implemented. Of the authors, 11% had addressed every single actionable item on their manuscript(s). Since there’s no access to what went on at the publishing journal, we don’t know how often a manuscript had been submitted unchanged from journal to journal – so we don’t know what proportion of the improvements were made in response to the original peer reviewers’ suggestions.

Tom J. Crijns and colleagues (2021). The effect of peer review on the improvement of rejected manuscripts.

Other interesting 2021 publications on journal and grant peer review:

  • Development of a cross-disciplinary tool to measure aspects of quality of peer review reports.
  • Philosophers of science Remco Heesen and Liam Kofi Bright argue that prepublication peer review should be abolished, and discuss the philosophical literature on journal peer review.
  • An experimental study with NIH grants by Richard Nakamura and colleagues (published with an editorial by Michael Taffe): grant applicants’ identities, race, and institutions were redacted from a set of 1,200 real NIH applications, and the study was powered to detect whether redaction reduced a differential effect of applicants’ race. Researchers recruited for the study wrote review reports on either the original or the redacted version. Black applicants got the same scores either way, but scores for white applicants dropped, narrowing the racial gap by about half.
  • A randomized trial by Jan-Ole Hesselberg and colleagues at a funding agency in Norway found that providing more detailed feedback on grant peer reviewers’ assessments didn’t reduce peer reviewer variability.
  • There’s an open database of members of editorial boards.

~~~~

[Cartoon: things we could know]

This is the 5th post in a series on peer review research, which started with a couple of catch-ups on peer review research milestones from 1945 to 2018:

Peer review research roundups

All posts tagged “Peer Review”

Disclosures: I know at least one author of the first 4 articles I selected this year. I’ve had a variety of editorial roles at multiple journals over the years, including having been a member of the ethics committee of the BMJ, on the editorial board of PLOS Medicine for a time, and on PLOS ONE‘s human ethics advisory group. I wrote a chapter of the second edition of the BMJ‘s book, Peer Review in Health Sciences. I have done research on post-publication peer review, following my previous role as Editor-in-Chief of PubMed Commons (a discontinued post-publication commenting system for PubMed). I am currently on the editorial board of the Drugs and Therapeutics Bulletin, and have been advising on some controversial issues for The Cochrane Library, a systematic review journal which I helped establish and for which I was an editor for several years.

The cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny.)
