Preprints, peer review, and the eLife experiment

The “journal” eLife (more about the quotation marks shortly) made a splash last week, announcing a major change in their publication process. In a nutshell, eLife will no longer let peer review influence whether they accept or reject a manuscript. Instead, if they send it out for review at all, they’ll publish the manuscript along with its peer reviews. Authors can respond to peer review either by revising their manuscript or by writing a rejoinder – but they needn’t. You should read eLife’s rather breathless editorial (Eisen et al. 2022) to get the full picture.

It’s a major change for eLife, but I think it’s less revolutionary than it’s painted. I won’t attempt a comprehensive assessment here – that would take a dozen posts, and there’s lots of discussion elsewhere. But I can offer three points that I find interesting, and that I haven’t (yet) seen emphasized by others.

1. For quite a while we’ve been hearing that the model of scientific publication that will fix all our journal problems is preprints plus post-publication peer review. This is exactly where eLife is coming from (see three bullet points on the editorial’s second page). The problem is, we’re far enough into the preprinting era in biology to have convincing empirical data: we can’t evaluate this claim, because nobody does the post-publication peer review part. (OK, that’s a slight exaggeration, but only a slight one: the modal number of post-publication peer reviews available on bioRxiv and similar servers is zero. The vast majority of preprints posted don’t get any review.) So eLife’s experiment, like similar efforts from Peer Community In and others, is best understood as representing two things. First, it’s an admission that left unmanaged, post-publication peer review doesn’t happen. Second, it’s a way of building a mechanism to manage it: to ensure that each preprint on a server gets reviewed at least N times (where – for a change – N > 0). And guess what: for all the excitement about doing away with journals, the system to manage post-publication peer review looks a LOT like a journal. It has editors, who send manuscripts out for review; it editorially rejects some manuscripts but isn’t able to provide a very clear explanation of how those rejection decisions are made; it publishes the reviewed manuscripts under a “journal” name; and it costs money to do all this (for eLife, $2000 USD*). When I first saw the announcement, I thought “Oh, this means eLife is no longer a journal; now it’s a preprint server”. But I was mostly wrong: eLife is still mostly a journal – just one with a 0% rejection rate for manuscripts that make it to the peer review stage, and without any requirement that authors respond to the peer reviews they get.

2. Peer review, traditionally, has two functions: manuscript improvement and gatekeeping. The eLife experiment is explicitly doing away with the second one (the editorial’s title is “Scientific Publishing: Peer Review Without Gatekeeping”). This is, at least superficially, a very appealing idea; I’ve argued elsewhere that of the two functions, the manuscript-improvement function is by far the more important. But there’s a caveat, and hence my use of “at least superficially”. We can see the eLife experiment as asking whether you can have the manuscript-improvement function without the threat of the gatekeeping function. After all, eLife will allow authors to respond or revise after peer review, but won’t require them to: all reviewed manuscripts will be published regardless of that author decision. So will authors actually revise their preprints substantively, or merely supply a pro forma rebuttal – or do nothing because they’d rather turn to their next research project? Someone could probably gather data from the (very limited) stock of post-publication reviews on bioRxiv, although that someone won’t be me. Instead: we’ll see.

3. As you’d expect given their new model, the eLife team are very bullish on the value of peer reviews that are open for readers to see – and correspondingly dismissive of the importance of peer reviews that aren’t. They argue that an “outdated model of peer review – in which reviews are used to make accept/reject decisions** and never made publicly available – strips it of most of its value.” As a reader, though, I’d argue pretty much the opposite. You see, if I’m expected to read a manuscript, and its reviews, and an author response, and then weigh all that to figure out whether the authors have heeded the reviewer comments and what I should believe, then I think much of the work that ought to be done by the writer has been downloaded onto me as a reader. That just makes our existing firehose-of-published-information problem many times worse, as work that could have been done once by reviewers, editor, and writer before publication must now get done over and over again by each new reader. So for me as a reader, what “strips [peer review] of much of its value” is actually the policy of accepting and publishing a paper whether or not its authors have engaged with the reviews and used them to improve the paper! Of course, some surely will, even though they’re not required to – but how many? And I won’t know whether this author team has done so without doing all the work that, under the more usual model, the editor has done. (Perhaps imperfectly, of course.)

Now, all this might sound quite critical of the eLife experiment. And it’s true, as you can tell, that I’m not immediately sold on this being the way forward. But: we should be doing this kind of experiment – trying new models for scientific communication, and seeing what works. Perhaps we should especially be doing experiments of which old fogeys like me are skeptical! So kudos to eLife for giving this a whirl, and I look forward to seeing, after five or ten years, how it all plays out.

© Stephen Heard  October 25, 2022

UPDATE: This post from Mark Hanson has some similar and some different takes – well worth a read. I’m especially intrigued by his connection of this to our larger societal concern with the death of expertise.

Image: the new eLife model summarized. © 2022 eLife Sciences Publications Ltd, CC BY 4.0

*Is there a business opportunity here? If all that matters is that posted preprints – wherever they are – get peer reviewed, then you can post a preprint on bioRxiv and for only $1900 I’ll review it and get someone else to review it too – thus accomplishing exactly what eLife says it will accomplish. But I don’t think this is a promising business opportunity. And that tells you that no matter how vociferously eLife argues that “publication” there will no longer connote importance (or even correctness) of a manuscript, they expect scientists to continue behaving as if it did.

**Note that there’s a bit of rhetorical sleight of hand going on here. Remember, accept-or-reject is only one of two functions of peer review. This dismissal of internal peer review ignores the value of the reviews in improving the manuscript. Bit of a straw man, I’d say.



9 thoughts on “Preprints, peer review, and the eLife experiment”

  1. baskin2013

    Wait, what am I missing? The eLife journal will publish anything it sends out for review? But who decides what gets sent out for review? Either it sends out every ms it gets or it gatekeeps. We do have a successful journal that disallows gatekeeping: PLoS One. This journal relies heavily on ms improvement. Seems to do a good job.


  2. Jason Bosch

    One thing I find seems to be missing in these discussions (like for point 3, here) is that we, the readers, are also the reviewers. We can review as we read and say “this paper I’m reading is rubbish.” I’ve done that with papers that had already gone through peer review! Of course, there may be times when we don’t have the skills to see the flaw in a paper but that will happen to reviewed papers as well; I don’t think 2-3 reviewers is going to always cover all the potential flaws a paper may have. To me, if one is an active scientist and capable of being called on to perform a peer-review function then one should be capable of reading an unreviewed paper and having an idea whether it is nonsense or not.


    1. ScientistSeesSquirrel Post author

      I would totally agree about “should”, for papers in one’s own subfield. Outside my own subfield, though, I find it useful to outsource SOME of that judgement to reviewers who are more expert than me. And, of course, even if I am *capable* of reviewing as I read, that doesn’t mean that’s an efficient way for us to do scientific communication! That was really my point in (3) – not that I can’t do it; but that making every reader do it is far less efficient than reviewers doing it.


    2. baskin2013

      I think it is more than just a question of ‘rubbish’ or ‘not rubbish’. When I review a paper, I usually find various small problems. Perhaps a figure has incomprehensible units or a relevant paper is not cited or understood. Taking care of these things makes the paper better. All papers benefit from this. Same story with things like copy editing and graphic design.


  3. Peter Apps

    Unloading the writer’s job onto the reader is something that I am seeing more and more of, both in published papers and in manuscripts that I review. It takes the form of excessive wordiness, long spiels of marginally relevant introduction and discussion, and lots of extraneous references. In other words, thesis / dissertation chapters designed to impress supervisors and external examiners are being copy-pasted as journal articles that take much longer to read than they should do.


  4. Marco Mello

    I totally agree with you. In addition, after the coronapocalypse, people should have learned the value of scientific gatekeeping. Many disastrous decisions were made by governments all over the world based on pseudoscience published in record time. Preprints proved to have a dark side too. Of course academic publishing is not perfect and needs to evolve. Intermediate stages of publishing, like preprints and preregistration, are welcome. But we need to discuss the whole system more broadly, and to invite the media and decision-makers to the table. In summary, the bold statements made in that manifesto went too far.


  5. John Pastor

    Isn’t the point to publish good science, as best as we can? Isn’t that what peer review-revision-publish (or not) is supposed to do?

eLife sounds like watching sausage be made instead of publishing the best science.


  6. Jeremy Fox

    All well said–my thoughts exactly!

    A few further thoughts and questions:

    -re: nobody doing any post-publication review, called it!
    Of course, I was far from the only one to call it, so I assume I will have a lot of company on my victory lap. 🙂

    -In placing accept/reject decisions entirely in the hands of editors, without resort to external peer review, eLife is going back to the system that prevailed before WWII, and didn’t go away entirely until the 1970s. I say this not as praise or criticism of eLife; I just find it interesting.

    -What fraction of mss does eLife reject without external review now? And how will that fraction change under the new editorial policy? Any guesses? When making your guess, remember that editor and author behavior might change in response to the new policy…

    -the new eLife system reminds me a bit of how some journals will sometimes publish a focal paper along with invited commentaries and an author response. There’s a leading statistics journal that does this, if memory serves. But those focal papers usually are selected for commentary because they’re of especially broad interest/especially provocative/etc. And the commentaries are written as, well, commentaries, not peer reviews. The whole package of focal paper + commentaries + response is intended as an interesting, informed, multi-way conversation. I like reading those sorts of conversations. What eLife has in mind sounds kind of like an inferior version of those sorts of conversations.

    -There are already journals that publish the (pre-publication) peer reviews their papers receive alongside the papers. Anyone know of any data on how often those reviews are read (or skimmed, or even glanced at)?


  7. Jeremy Fox

    A half-baked analogy: if you want food, there’s a whole range of options from “grow and prepare it yourself” to “go to a restaurant”. In between are various options available at grocery stores: “buy raw ingredients to prepare yourself”; “buy premade frozen meals and reheat them yourself”, etc. Those options fall on a continuum of how much effort you have to spend obtaining and preparing the food.

    Analogously, if you want to consume interesting+correct scientific papers, there’s a range of options from “do your own science” to “carefully read preprints and evaluate them yourself for interest and correctness” to “skim the papers in high-impact peer reviewed journals”. Doing your own science is like being a farmer in this analogy–it’s the highest-effort option. Carefully reading and evaluating preprints is like buying raw ingredients to prepare yourself. Pretty high effort. Skimming high-impact peer reviewed journals is like going to a restaurant–the lowest effort option. One way to think of eLife is as providing a new option falling somewhere in a previously-unoccupied space in the middle of the “reader effort continuum”.

    I haven’t thought about this analogy very hard, so I’m sure I have left myself wide open for all sorts of trolling and jokes at my expense, which I eagerly anticipate. 🙂


