
Transparency, Recognition, & Innovation: The ASAPbio Peer Review Meeting

 

Cartoon of author reading comments from Reviewer Number 3

 

All those “Reviewer 2”s – can’t live with ’em, can’t live without ’em!

So how can we improve both the quality of “the scientific literature” and the role of peer review in it?

ASAPbio has gathered dozens of people to work on these issues from 7 to 9 February, with speaker sessions webcast, at the Howard Hughes Medical Institute (HHMI). And I’m live-blogging from the meeting, adding entries as we go – the most recently added at the top. The meeting’s hashtag: #bioPeerReview

Day 3 – Friday 9 February:

Going down the home straight: summaries from a half-day of unconference.

Stephen Curry on DORA: discussed best practices, especially biosketches – “narrative-based synopses”, including credit for peer reviewing. And training for this too – he pointed to the issue of gender, for example, with some needing to learn to promote themselves more, and others who could do “less grandstanding”.

Mike Eisen: we need to think about publishing as part of the science:

Peer review is part of the scientific process: it needs to be scrutinized, it needs to be rewarded… We should all be doing experiments with these kinds of processes, in different contexts.

We also need to think carefully about these experiments, how they’re done, how they’re reported.

A great outcome from that group: HHMI came up with a plan to register peer review experiments and track results – and others agreed to get on board.

 

 

Gary McDowell, early career researchers’ group: talked about the need for mentoring and training around peer review specifically – and more communication from journals about the peer review process. We need to diversify editorial advisory boards, the peer review pool.

Theo Bloom, group on separating the different roles of peer review: there are the technical issues, what you need to do to make your case, and what you need to do to get into a particular journal.

Prachee Avasthi, training students group: once people have participated, DOIs for their peer reviews, attached to their ORCID identifiers, would let others see their peer review skills. Peer review training and practice should be part of graduate education.
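Since that idea hinges on peer review records being attached to ORCID iDs, here is a minimal sketch of what looking up such a record could look like, assuming Python with the requests package and ORCID’s public v3.0 API. The peer-reviews endpoint and response field names are my reading of the public API documentation, and the ORCID iD shown is a placeholder, not a real record:

    import requests  # assumes the third-party 'requests' package is installed

    ORCID_ID = "0000-0000-0000-0000"  # placeholder ORCID iD, purely illustrative

    # ORCID's public API exposes a "peer-reviews" activities section for each record.
    # Endpoint and JSON field names follow the public v3.0 API docs and may differ
    # in other API versions.
    url = f"https://pub.orcid.org/v3.0/{ORCID_ID}/peer-reviews"
    response = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    response.raise_for_status()

    record = response.json()
    groups = record.get("group", [])  # reviews are grouped, e.g. by the convening journal or organization
    print(f"{ORCID_ID}: {len(groups)} peer review group(s) on the public record")

If review reports routinely got DOIs, a query like this would be enough for a committee to see someone’s reviewing activity alongside their publications.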

And that was a wrap! Thanks, ASAPbio!

….

End of the day on Thursday: reinvigorated, revitalized, with extra funding and a new board, chaired by Stephen Curry, DORA is tackling the challenge of better approaches to research assessment. There’s a re-vamped website too. Check out Curry’s op-ed here.

 

 

 

Ron Vale wrapped up the day, encouraging real action – with experiments on the options that “have the least barriers to implementation”. His “what do we want? when do we want it?” is publishing peer review reports, giving them DOIs, and giving credit to contributors. That should be standard practice in science communication, Vale argued: “Can we get there in a year?”

Mike Eisen called for people to start supporting f1000 Research and others who have moved to fully post-publication review: submit work to them, and do the post-publication reviews they need.

From Theo Bloom, “how about ‘sign every peer review’?” Another suggestion from the floor: write more structured peer reviews so that they are more portable. William Gunn agreed: “Silos – not good for data, not good for articles, and they won’t be good for peer reviews”. And Philip Campbell confirmed that Nature will have the option to post peer review reports.

 

 

 

Reports back from the afternoon breakout sessions on Thursday:

Open participation platforms: 1

This group defined it broadly – including Twitter in the mix. Opening up and diversifying the mix is both a strength and weakness: it is scattered, “dissipating the power of the comments”. Trolling behavior and reprisals are a big downside, lack of incentives to participate another.

“Robust pseudonymity” was considered – an alternative between open identity and anonymity.

Open participation platforms: 2

This group added a discussion of Hypothesis and PubPeer (plus a requiem nod to PubMed Commons). Only a very tiny minority of readers in all sorts of forums take the step of commenting. It takes time and thought. If authors would respond to comments, that would be an incentive to comment.

Linking comments to ORCID was popular in the group. But a lot comes down to culture – whether it’s commenting, what’s considered “constructive”, and what’s expected of moderators.

Improving traditional peer review & reducing reviewer burden: 1

Closed peer review systems “allow too much to fall on too few”. Tackling statistical issues in manuscripts at the pre-publication stage is a clear area where journals could provide a critical service.

There was support for sharing peer reviews among journals – debate arises on implementation issues. And there was acknowledgment that peer review is currently absorbing the extra burden of reviewing datasets as well as manuscripts.

Collaborative review between peer reviewers got a big thumbs up – but it’s resource-intensive. The role of editors in achieving high-quality peer review was a recurring theme in this group’s discussion.

 

Cartoon of editor who wants peer reviewers without conflicts of interest about peer review

 

Improving traditional peer review & reducing reviewer burden: 2

Expanding diversity and including more postdocs could play a critical role in increasing the reviewer pool. One of the issues increasing the rounds of peer review is poor matching – authors submitting articles that are fine, but choosing a journal where their work was out of scope. Better match-making between authors and journals could reduce the peer reviewer burden.

We shouldn’t expect there to be a universal solution that solves all these problems.

New ways of evaluating articles: 1

Lots of enthusiasm about journal clubs around preprints in this group. “Layered reviewing” is worth considering – parceling out the work, so you don’t have to tackle everything.

Could bioRxiv and repositories interact and be used for curating data, so it’s not just a data dump?

New ways of evaluating articles: 2

No love here for the journal impact factor, either – but people rely on it because it’s so convenient, and replacing it isn’t so easy. Article-level metrics relying on citations are one option, but those aren’t available when a paper is published – or even soon after.

With bioRxiv, people can’t rely on the journal as the indicator of quality: perhaps, they thought, it could be the venue for experimentation with different ways of judging a paper.

….

Lightning talk round again after lunch on Thursday….

….

Last talk of the session: Bodo Stern from HHMI. Can we move to article-specific indicators of quality and potential impact, and tag articles with the data?

When people have hundreds of applications to look at, nobody can read all the papers each applicant produced: people will use some kind of indicator or triage for the first round.

If we develop an indicator that’s a reliable proxy for quality, it would need to be easily generated and understood, easy to find, and able to change over time. The kinds of thing envisaged here are a technical quality score, a reproducibility score, a citation score. This is so tough!

 

….

Ron Vale discussed ASAPbio. Central, he said, is understanding the culture of science. First there was the advocacy of preprints – to unbundle the goals of disclosing work and evaluating it.

Peer review needs unbundling too, he suggested. There’s evaluation on one side, and curating a journal on the other. You can, he said, separate the service of improving scientific work from the “gatekeeper” role.

Could scientific societies play a role in author-driven “journal agnostic, assigned peer review” of preprints? In a way, he sees this as a revival of the old school process of sending your work around to get feedback before ever submitting to a journal.

….

“I know I have a reputation for wanting to burn things down, but…” Mike Eisen up next, on the APPRAISE project, to experiment with community-driven post-publication peer review. It’s not just about peer review for technical quality. It’s also about “providing information for audiences of readers, potential readers, and others”.

Who are the audiences and users of peer review? People looking for papers to read, people interested in a specific article, people evaluating the scientists, and the authors of the paper. (I would add meta-researchers to Eisen’s list.)

Reviews in the project will be signed by default, but there will be an option for certified anonymity. They are working with bioRxiv and PLOS. (More detail here.)

….

Second talk was Andrew McCallum, presenting by video courtesy of influenza. He uses his machine-learning expertise to try to improve peer review.

McCallum looked at the broader issue of accessibility to peer review processes and models in computer science: “Even stodgy communities are willing to try small steps of change”. Experience with anonymous open public review led to a trolling problem, though.

You can check this out at OpenReview.net. It’s a software platform with an open API, so communities can build their own review processes for their own purposes.

….

First up was Prachee Avasthi, on preprint journal clubs for students. Students learn critical evaluation and how to produce peer review reports. Central here is how to communicate criticism constructively. Then the reports are posted on the preprint – Avasthi reports that they’ve received great feedback from preprint authors. And a shoutout for Prereview.org, encouraging and enabling peer review of preprints.

 

 

….

After lunch discussion, Thursday:

We came back to a session gauging the level of feeling in the room, via polling and discussion: few people were against moving to publishing peer review reports along with papers – but there was still lots of concern about revealing names.

….

Some highlights from the report backs from Thursday morning’s small group discussions:

 

 

Scientists’ perspective: 1

There was a perception of a big shift recently in the idea of publishing peer review reports – again, less so on identity. There was also a lot of discussion about demystifying the peer review process: open peer review reports could serve an educational role.

Should reviewers be tagged to rejected papers and to preprints? No resolution of opinion there. Triple-blind peer review during the process was also discussed (editors, authors, and peer reviewers all in the dark), with reports published later.

How can we achieve what we aspire to, which is fully open with no repercussions? There’s a danger in ignoring the reality of the current hierarchies in science. No consensus on how to strike the balance.

Scientists’ perspective: 2

The group saw overwhelming benefits to moving towards more open peer review, and funders could shift this dramatically. Creating a more open and transparent process would be well-received by the public. Getting credit for your peer review matters – as in Publons, where you can be anonymous and get credit at the same time. But what about peer reviews of rejected papers?

They were unanimously supportive of publishing it all – but not on revealing the identities of participants.

Technology and journal perspective: 1

This group saw many benefits to more open peer review. Risks were discussed again around open identities, including legal liability. Would peer reviewers be harder to find in an open system? They agreed reviewers are already hard to find, so this wasn’t as big an issue for this group – but more data on experience would help.

Should all peer review reports get a DOI? Yes!

Technology and journal perspective: 2

Are people actually reading the open peer review reports? The sense was that they are, by people with a particularly intensive interest. There are concerns about public perceptions, and about how to increase public understanding that peer review is “a fault-finding exercise”.

There is a need for editing at times, too, if reports are going to be published – and that’s resource-intensive.

Again, there was unanimity on reports being published. (Not necessarily on identity, though.) “Do funders care, though? If they don’t, then credit is meaningless”.

Funders’ and universities’ perspective: 1

“What is the purpose of credit? Why do we care?” Answer: it’s critical to the scientific process, so people should get credit for being a good citizen in the process. And that could include participation on the open platforms, not just invited pre-publication journal peer review.

If it’s not open, we don’t have the data we need to study the review process. And that should include the diversity of participation, and whether new generations are being fully incorporated into the science community. Peer review reports need to be discoverable – not just at the journal, but at PubMed.

Funders’ and universities’ perspective: 2

Should people get credit? Yes – but are people going to read the reports? Funders and universities might have a strong interest in reading them.

Concern about potential harm and benefit for “junior” researchers is going to be critical. Openness by choice is a separate issue – it skews the experience – and needs to be considered separately, too.

A possible benefit of open peer review might be a trend towards including more peer reviewers. We need a systematic review of the entire process.

Could funders and universities want, and value, different models in peer review? For funders, asking a question about open peer review in grant applications could have a big impact, quickly.

If credit matters, then people’s uneven access to invitations to peer review becomes an issue.

Discussion ensued!

What about those separate peer reviews that are for editors’ eyes only? That should end, one group had decided – with rare exceptions.

For peer review – journal and grant – knowing who the great evaluators are is valuable information.

Oh, the rejected papers: what happens to those peer review reports? No real answers here, although some want them linked. But many of these problems, it was pointed out, come from keeping the basics of the old publishing model and trying to “shoe-horn” some openness into a fundamentally closed system. Preprints and the like change that landscape.

Open peer review reports might sometimes be weaponized: but it was argued that this would be far outweighed by the benefit of being able to counter claims that scientists are being secretive and hiding criticism.

 

A valiant attempt from the floor to out the elephant in the room: if everyone agrees peer review reports should be open, then why aren’t all the journals doing it? One brave soul ventured that it takes resources to change, and the pull to do it hasn’t been strong enough to overcome the resource question. Journals that have done it say it doesn’t take that much time – although for some it does add up to serious effort – and there are cheap ways to make things available: eLife’s software, for example, is open source.

 

 

….

“Who reviews the reviewers?”, asked Natalie Ahn. She spoke about the experience of a colleague who had made mistakes that weren’t widely understood in his field to be unacceptable – and paid a large career price for what was not considered research misconduct, but got dealt with quite savagely online.

….

Kafui Dzirasa spoke about culture and context. For example, what about the danger of bad peer reviews being weaponized in some countries? It doesn’t matter whether you believe there is bias and discrimination: many people who have experienced discrimination will expect to experience it again. We shouldn’t underestimate the concern about open peer review from early career researchers and others: “There is no science, without scientists”.

 

 

….

Andrew Preston from Publons, on developing ways to keep track of all the peer reviews you do – privately or publicly. You can connect all your work. They don’t show the article that you reviewed, though: you can say that you reviewed for a specific journal, but not which article. Reviews are added by Publons users, or integrated by publishers. You can get a downloadable record of your verified contributions – with information that shows how you perform compared to your peers.

 

Preston argued for more openness. If you can tap into things that are already happening, change and scale can be easier to achieve.

 

….

Jennifer Lin from CrossRef asked, “have we in fact regressed?” We need to improve what we do with metadata: for literature discovery, for reproducibility, for research integrity, and so much more. There are now 91 million publications captured in CrossRef.

CrossRef is now working with publishers who do open peer review, capturing peer reviews as a publication type. Since November 2017, they’ve added nearly 10,000.
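As a rough illustration of what that deposited metadata makes possible, here is a minimal sketch of querying CrossRef’s public REST API for works of the peer-review type, assuming Python with the requests package; the filter name and response fields are my reading of CrossRef’s public API documentation:

    import requests  # assumes the third-party 'requests' package is installed

    # CrossRef's REST API can filter works by type; deposited peer review reports
    # use the "peer-review" content type.
    url = "https://api.crossref.org/works"
    params = {"filter": "type:peer-review", "rows": 3}
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()

    message = response.json()["message"]
    print("Peer review records indexed so far:", message["total-results"])
    for item in message["items"]:
        title = (item.get("title") or ["(untitled)"])[0]
        print(item["DOI"], "-", title)

Because these reports get DOIs of their own, they become citable, countable objects – which is exactly the kind of infrastructure the credit discussions at this meeting keep coming back to.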

 

 

….

Next up was Joyce Backus, from the U.S. National Library of Medicine (NLM), talking about peer reviewing the journals, for indexing in PubMed (MEDLINE or PubMed Central).

“A journal is a living, breathing organism”, so things can change over time. Journals are sometimes de-indexed from PubMed.

Some PubMed Central journals (f1000 Research and eLife) are submitting open peer review history, and these are being included as “sub-articles”. (Scroll down to see the whole history in this example from eLife.)

….

Up third was Theo Bloom from the BMJ. They operate an open peer review system, with open identity, and very active post-publication commenting: you can see all previous versions, and the peer reviewers. “It’s hard work, but it isn’t rocket science”.

Peer reviewers are catching a lot of errors – she pointed to this publication – but they don’t catch everything. Bloom pointed to a systematic review on openness and peer review – and a shoutout to my posts (anonymity and logic). (Thanks, Theo!)

….

Lightning talk number 2 was Rebecca Lawrence, from f1000 Research – which is 5 years old now. They have shifted from being a publisher to more of a service provider – of tools, infrastructure, and editorial services. Partners include the Wellcome Trust, Gates, and others. (There are 5 now.)

The model involves a “preprint-like stage”, then post-publication peer review – formal, invited peer review: it just happens after publication.

It’s very fast: about 2 working days for the preprint-like stage, and 14-16 days to first publication. Post-publication peer review is signed: “complete transparency is important”, Lawrence said. After there is sufficient peer review approval, the articles are indexed in PubMed.

As of today, 2,184 articles have got through to PubMed.

Lawrence advocated for giving authors more control, and complete transparency so people get credit and the process is more constructive. She said they don’t have a big problem getting peer reviewers.

….

First up was Tony Ross-Hellauer, drawing on surveys of attitudes to open peer review, and this systematic review of types of open peer review: “It appears open review is already mainstream” – but there is “strong pushback” against open identity. The majority think reviewers should have a choice. Attitudes vary by age group – the younger are more enthusiastic, as are the over-65s – and there are differences by fields and disciplines, too. Ross-Hellauer:

If peer review is the bedrock of science, I think it is outrageous that we don’t have a [research] program on what works and what doesn’t…

Hear, hear! (My views on this, as well as pointers to posts on the research on open peer review: the fractured logic of blinded review.)

 

 

Photo of panel
Panel (left to right): Erin O’Shea, Jeremy Berg, Mike Lauer

 

Last session: speaker panel and audience discussion

How could the publishing culture start to change? O’Shea said their thinking was to do experiments with changes at journals willing to participate. Discussion also turned to peer review of grant proposals – but we keep hitting a bunch of unknowns, don’t we? It’s a startling feature of the science community, isn’t it, that it has not tended to be scientific about itself. There was a plug, though, for the meta-research community around the science of peer review. (More on that recent conference here.)

(Chatham House rules for the meeting’s discussion, so there’ll be no names other than the speakers’.)

From the audience: it’s not just that we need new models, but “we have to make sure that you can’t get an advantage by subverting things”. The first comment from the online audience was that it was ironic that this discussion was happening in the week that the NIH closed PubMed Commons. Why would you participate in post-publication evaluation when there’s no reason to do it, in the current context? We have to show people why it matters, and it has to matter, for people to do it. (Disclosure: the Commons was part of my day job.)

 

Debate turned to the role for professionals in peer review: at the major medical journals with large budgets, there is professional statistical review for manuscripts that get close to publication. But, it was argued, professional peer review isn’t the answer at the scale of science publication.

There was discussion this evening about the need to train peer reviewers: but the core is better teaching for scientists, and knowing when to seek advice from statisticians, for example.

Lauer pointed out that peer review itself has a small sample size: there are so few peer reviewers. For grants, there’s some evidence that you need 3 times as many peer reviewers as are used in practice.

The last question: what’s the disincentive for journal editors against publishing peer reviews? Why don’t journals do it? Jeremy Berg got this hot potato: he said he believes it’s because some reviewers don’t want to. But the experiences raised from journals suggest that’s only a minority of peer reviewers.

….To be continued on Thursday….

 

Final keynote on Wednesday: Mike Lauer, from the NIH, started with the subject of science’s over-reliance on small, underpowered studies. Required reading, he said, was Kate Button and colleagues’ paper on the subject of power failure. He showed data on how nearly universal problems like under-powered studies are in preclinical research:

If 99% of your peers aren’t paying attention to power size, it doesn’t matter how transparent your peer review is, it won’t solve the problem… Who’s going to fix this? The scientists? So far we haven’t done very well.

Lauer made a strong plea for better, professional statistical review – including post-publication statistical review.

The first question to the panel, then, was: why does the NIH fund all these under-powered studies?

 

Second keynote, Wednesday:

After Erin O’Shea’s impassioned plea for a massive change in the role of journals and editors, the editor of Science, Jeremy Berg, got to follow! He spoke about the value of sharing peer reviews among peer reviewers. And he didn’t disagree on the issue of the harm of impact factors: “Why on earth would anyone judge papers on where they are published? The impact factors are one of the great mysteries of my life”. He doesn’t see the journals as responsible for this: it’s the scientific community creating science culture.

 

Erin O’Shea, president of HHMI, gave the first keynote. What’s the purpose of scientific publishing in the first place, she asked? O’Shea listed a series of challenges that interfere with this:

  • Corrupting incentives – including “being evaluated based on where they publish, not what they publish”.
  • Lack of transparency and accountability. The lack of transparency “comes at a great cost”.
  • Waste of resources and time.
  • Limited access to research outputs.

O’Shea: “Hyper-competition pours oil on the flames”.

Time for a change, she argued: “The subscription model and editors as gatekeepers made sense when print distribution was necessary and expensive…But these limits don’t exist any more”. What’s the optimal system now?

She’d like to see the authors in charge – “authors as publishers”, not editors as gatekeepers. Peer reviews should be published; and we need post-publication commenting with article-specific tags. Finally, she argued, we need open access “at the time of publication”.

O’Shea has a lot of faith in authors! As well, she argued, “We need to treat peer review for what it is: scholarly activity…Peer review should be constructive and critical”. In her utopia, there will be fewer editors and journals: however, “they should play a critical role as curators”. Journals should play a role in post-publication evaluation, not gatekeeping. O’Shea: “We feel very strongly that a subscription is absolutely wrong for publishing services” – but ok for curating services.

 

 

First remarks, Wednesday night:

Ron Vale got off to an encouraging start about science as a career, but said we have a serious problem: “Young scientists are more anxious than ever about science”, coping with the “multiple cycles of rejection”.

How did we get to this “gridlocked” system of scientific publishing? “We’ve drifted here, little by little”, said Vale, “without sufficient attention being paid by our scientific community. But this is changing”.

Change can come faster than some think. Vale spoke about how much negativity there was about preprints in biology, just a few years ago: “Many people were dismissive”. But reactions to the first ASAPbio meeting saw quick movement. Vale pointed to the report in Science, Preprints for the life sciences. That was just in May 2016.

Vale: “While preprints are making considerable progress, they are not ‘the answer'”. Peer review was a logical next issue to tackle. How can we do it better? What experiments could be tried in peer review?

“Many people on the internet are watching to see what we do and how we behave. I think they are desperately hoping to see some success… Can we work in transparent ways as a community, instead of working behind closed doors?”

 

~~~~

Cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny and on Tumblr.)

 

* The thoughts Hilda Bastian expresses here at Absolutely Maybe are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.

 
