It’s been hard to escape calls lately for a paradigm shift in scientific publishing (most of them starting with a pronouncement that “publishing is broken”). We’re supposed to abandon pre-publication peer review, and replace it with a system of online preprint posting, open to anybody with no or minimal screening, that allows post-publication “peer review” in the form of a commenting forum. The preprint servers are here already: arXiv has been an important channel for communication in physics and mathematics for years now, and bioRxiv is newly arrived in biology. What’s interesting is the other half of the prescription: the notion that preprint servers obviate the need for pre-publication peer review or for the existence of conventional scientific journals – and we’d be better off without them.
Does this make sense? Well, I’m on record saying that pre-publication peer review isn’t glacial and is nearly always helpful, so it won’t surprise you that I’m not enthusiastic about the preprint-with-post-publication-review model. There are many reasons why – today, privilege.
The packaging of papers into conventional journals, following pre-publication peer review, provides an important but under-recognized service: a signalling system that conveys information about quality and breadth of relevance. I know, for instance, that I’ll be interested in almost any paper in The American Naturalist*. That the paper was judged (by peer reviewers and editors) suitable for that journal tells me two things: that it’s very good, and that it has broad implications beyond its particular topic (so I might want to read it even if it isn’t exactly in my own sub-sub-discipline). Take away that peer-review-provided signalling, and what’s left? A firehose of undifferentiated preprints, thousands of them, all equal candidates for my limited reading time (such as it is). I can’t read them all (nobody can), so I have just two options: identify things to read by keyword alerts (which work only if very narrowly focused**), or identify them by author alerts. In other words, in the absence of other signals, I’ll read papers authored by people who I already know write interesting and important papers.
Who are these people I already know? By definition, they’re relatively senior – they’ve come to my attention because I’ve read their previous papers, which means they’ve published before (probably frequently). As a result, they tend to be authors who have privilege on two levels. First, they’re in established – often tenured – research positions, and usually have been for a while. Second (and statistically, if not causatively, correlated with this), they’re disproportionately (albeit not exclusively) people like me: middle-aged straight white men. And what about new scientists without long publication records – the younger and excitingly more diverse population of grad students, postdocs, and young faculty looking toward tenure decisions?*** They may be left on the outside looking in.
This seems to me a huge irony about proposals to replace pre-publication with post-publication peer review. At first glance, such proposals seem like the ultimate democratization: everyone’s manuscript on an equal footing. My manuscript and yours, a Nobel prize-winner’s and the rankest amateur’s, all available for readers whose comments will bubble the very best to the top. But this democratization will, I worry, turn out to be self-defeating. Its very existence (coupled with the sheer size of our literature) seems to force us to use prioritization signals that restore the very privilege we thought we were stamping out.
I don’t want this. My own early career benefitted enormously from the signalling associated with publication in traditional, pre-publication-reviewed journals. This paper, for instance, didn’t fit any pre-established line of research and came out when nobody had heard of me – but reviewers liked it, and it was widely read and cited because it appeared in Evolution. Only because it appeared in Evolution? Well, perhaps not; but I can’t imagine it having the impact that it did, if it had been cast adrift in a preprint server with only my undistinguished name to mark it. My paper would have been caught in a catch-22: unread until marked by post-publication commenters, but uncommented until it was read. This is one thing our pre-publication system does very nicely: it breaks the catch-22 by assigning “commenters” equally to every new manuscript entering the system, so that their quality can be signalled to readers without depending on authors’ prior fame.
So: should we upload preprints to the new servers? By all means. And we should welcome post-publication review, too – when it supplements, rather than replaces, the pre-publication review that can signal quality without depending on privilege.
UPDATE: Here’s a forcefully written and well-reasoned response from Micah Allen. I don’t agree entirely with his take on this, but you should read it!
© Stephen Heard (sheard@unb.ca) December 14, 2015
Related posts:
- Is publishing, and everything else, “broken”?
- Is peer review glacial, or do we expect too much?
- On the craziness and saintliness of peer reviewers
- When not to read the literature
*Full disclosure: I am an associate editor for The American Naturalist, and have been for 12 years. I’m also a subject editor for the new open-access journal FACETS. I have no involvement with any of the preprint systems.
**A problem that will have to wait for a future post, but in brief, I worry that this will lead to an extreme contraction of our broader fields into narrow silos.
***But aren’t these new scientists the very people pre-publication peer review is supposed to be biased against? It’s easy to find such claims, but data seem to suggest that privilege-bias isn’t a major problem. See here (including the comment thread), for instance, for discussion of gender and outcomes at several journals. For every reviewer who might give a famous person a free pass, there seems to be another eager to take them down. And experiments with double-blind review seem to suggest that while we can improve the peer-review system, the gains typically aren’t big enough to suggest we had really serious problems to begin with.
Spot on. Though I don’t use automated author name filters myself, except informally. If I happen across a paper by an author I recognize when I’m skimming the contents of leading journals, I’m more likely to look at the abstract.
Shameless self-promotion: a couple of old posts of mine on the same topic, the main goal of which is to disabuse people of the idea that post-publication “review” is any more “democratic” than pre-publication review:
https://dynamicecology.wordpress.com/2012/07/16/citation-concentration-filtering-incentives-and-green-beards/
https://dynamicecology.wordpress.com/2013/04/08/selective-journals-vs-social-networks-alternative-ways-of-filtering-the-literature-or-po-tay-to-po-tah-to/
To inject a bit of data into the discussion: an old reader survey of ours indicates that even ecologists who read blogs mostly filter the literature in pretty old-school ways, using newfangled methods like Google Scholar recommendations and social media as a supplement:
https://dynamicecology.wordpress.com/2013/10/29/survey-results-how-do-you-find-papers-to-read-when-you-cant-do-a-search/
Thanks, Jeremy. And for not-Jeremys: read that 2nd link (at least) for a thorough discussion of the need for filters and a take on “social-media filtering” that’s pretty complementary to what I wrote here.
I agree with you: preprints as a supplement to the traditional route.
But I think we are entering the era of mega-journals. As a scientist who has always had the Internet, I rarely look at the table of contents of a single journal (although I follow some RSS feeds from relevant journals). So the journal itself seems of low relevance to me.
Some people have suggested expert-curated lists as a solution: experts in a field create a list of highly important papers. PeerJ gives a good example of that: https://peerj.com/collections/
What’s the difference between an expert curated list and a selective journal? An expert-curated list seems like a selective journal with a one-person editorial board. Which has some serious downsides. No one person can read all that much, for instance. (Though maybe a one-person editorial board also has upsides, at least from some perspectives…)
I agree, but selective journals increase time to publication and reviewer overload. How many times have articles been rejected because they are “not in the scope” or the results “not novel enough” for the journal? With mega-journals, you relax the need for scope/novelty, so articles are published faster, without being reviewed 10 times.
Expert-curated lists also have their downsides, but are they so different from the selection made by editors?
Well, one difference is that curatorial experts have to do much more work to identify interesting work than do editors at selective journals. Editors at selective journals benefit from authorial self-selection. I can imagine solutions to that. But frankly, it all seems a bit like reinventing the wheel to me. Especially since we already have mega-journals that judge papers only on technical soundness, not to mention pre-print servers. Anyone who wants to just get their work out there quickly can now do it. If you don’t want to risk wasting time getting your work rejected, don’t submit to selective journals!
I agree that it’s wasteful to review the same ms many times for many different journals. Sharing of reviews among journals is one solution to that. Besides the various review cascades (e.g., rejected mss from Ecology Letters can get referred to Ecology & Evolution), there’s Axios Review: https://dynamicecology.wordpress.com/2015/03/05/axios-review-is-working-you-should-try-it/
From the perspective of a non-scientist who wants to stay as current in various branches of science as a nonscientist can, good journals publishing peer-reviewed articles are the most valuable resource. I am not the only amateur who subscribes to journals like SCIENCE and NATURE (as well as more specialized ones) for this reason, and if I had the time (and money) there are at least three more journals I used to read in grad school that I’d subscribe to now.
An educated, interested public could be a great help to scientists–but getting us from “interested” to “well-informed” requires helping us find the good stuff (the readable good stuff) and realizing that we don’t have any of the other supports that professional scientists do (networks of people in the same and related fields who will email and say “Hey, you need to read X’s new paper.”). We certainly aren’t going to wade through the floodwaters of everybody’s pre-publication site-of-preference, any more than someone who wants a good readable novel is going to wade through every self-published manuscript to find the diamond-studded good ones. Twitter is now a help, if you subscribe to the right accounts (like this one)–it’s similar to having a professional network–but you have to follow a lot of accounts to cover as broad an area as one issue of NATURE. So I’m all for peer-reviewed journals (and Twitter) as the best way to feed an insatiable curiosity about almost everything.
Great point, Elizabeth, and one I completely missed in writing my post. If it’s hard for a practicing scientist to sample selectively from the firehose, it must be hugely harder for the interested layperson – even more so for the interested layperson with less science background than you have. I suspect most scientists are guilty of the same mistake I made in omitting this angle – we don’t think about our literature as an outlet for what we label “science outreach”. Sure, a fairly small number of non-scientists (I assume) pick up copies of Nature; but I bet they tend to pass on science to others, and I wonder if this avenue of “outreach” is a significant one that I just haven’t been paying attention to?
Thanks very much for commenting – this really adds to my post.
As discussed on Twitter, I see more opportunities than downsides.
I have started to look at journals as just another provider of services in the scientific process – much like sequencing, gene synthesis, or travel agencies. My problem with journals is that we have let them function like gatekeepers of what is or isn’t worth publishing. I have heard a professional editor of a Nature speciality journal say that she loved her work because she gets to tell labs what science they have to do. Really? If you want to do science, get your own funding!
My ideal system? We submit to preprint servers, actively seek feedback from experts who can critically review our work, revise accordingly, and so on – all done openly. At any point in this process, authors can contact journals (or journals can contact authors) to transfer the article, along with its reviews, to the journal (this already happens between journals). Authors decide which journal offers the best services in terms of editorial work, presentation of results, speed, ability to update results, etc. Journals will then be competing to sell you the best services possible, not their JIF.
Ruben – I like that in your “ideal system” everyone understands that a preprint isn’t the end of the process, and that journals have a role in communicating papers (or filtering them, from a reader’s point of view). I also like the idea that comments on the preprint can become the pre-publication peer reviews – although I’m not sure if they could completely replace “traditional” peer reviews, as they would not (I think) necessarily be aligned with the scope, emphasis, quality level, etc. of the journal. But that’s interesting to think about. Thanks for commenting!
Pingback: Friday links: holiday caRd, DEB numbers, does death advance science, and more | Dynamic Ecology
Pingback: Weekend reads: 179 researchers indicted; how to reject a rejection; breaking the law on clinical trial data - Retraction Watch at Retraction Watch
A word from (particle) physics, where in practice the arxiv is the route by which more-or-less everyone accesses new research, and journal publication serves purely as a badge of esteem/quality/could-be-bothered. What you mention is certainly a hypothetical risk, but not one that I recognise in practice.
It depends on how researchers in the field choose to become aware of interesting new research — and the appearance of a central preprint server for bio subjects may change that behaviour over the coming years. After decades of arxiv access and now dominance, I doubt that many of us in particle physics actually look at new paper issues, or even online journal editions, scanning the table of contents to find what is new. Very many researchers, especially theorists, instead pore through each morning’s arxiv email looking for things that interest them; big results start getting referenced in talks — and on social media too, these days; literature searches pop up all sorts of papers depending on title and full-text matches; etc.
Yes, there is probably a slight bias toward papers by “big names” getting read, but I strongly doubt that it is much stronger than the same bias back in the days of journal dominance.
I’m a big fan of the preprint system in terms of what it has done for particle physics working culture, and I agree with what you say about the value of detailed, dedicated review — crowdsourced post-pub review will not give the same level of scrutiny or feedback, and a badge of verified quality is valuable. But that is not a reason to leave the journals unreformed — there is a strong argument for many “journals” to instead become coordinators of peer review only, based on the public e-doc, without any of the cargo-cult “traditional” overheads of retypesetting and physical distribution of unread volumes. Given that the peer review is already done for free, on elimination of the production overheads there is surely potential for this to be organised pro bono within the scientific communities. This evolution of the “journal” quality mark as the metric on which to assign funding would produce huge savings to research budgets, without losing the valuable part that peer review brings.
Pingback: Kerim Kaylan's personal website
Pingback: Research Reading Roundup: The value of pre-publication peer review, FASTR and more | PLOS Blogs Network
“The packaging of papers into conventional journals, […] conveys information about quality and breadth of relevance.”
The number of journals is exploding. Very successful journals like PLOS ONE are not specialized at all (neither are the pre-print repositories), but all journals share *the same pool of reviewers*, namely our peers. Journals claiming for themselves ‘best quality’ have the highest retraction rates. The journal brand is not a proper predictor of quality and relevance anymore, and the trend is moving further away from this idea. I know very few young researchers who read the tables of contents of specialized journals instead of making use of search engines. The journal as a categorizer is obsolete, and I would expect reading habits to change accordingly in the future. I personally don’t even notice the journal until after I click on the link provided by the search engine.
“A firehose of undifferentiated preprints, thousands of them, that are all equal candidates for my limited reading time (such that it exists). I can’t read them all (nobody can)”
I do get large lists of hits almost daily from PubMed. But after choosing the most promising titles, there are usually very few papers left for me to actually read. The list shortens further as I read the abstracts.
Obviously you don’t want to be overwhelmed with bad first-version manuscripts, but there is no reason why the search couldn’t be narrowed further to manuscripts that have already received review and revision – at least for topics where you don’t feel you need to know what’s going on the day it’s posted.
I therefore agree with the fear of chaos as little as I agree that journals do an important filtering job. It’s not like the people asking to move away from journals and pre-publication peer review are incapable of imagining ways to deal with such problems. It’s not like journals provide any services that haven’t been or can’t be easily “democratized” in the information age (most of them already have been; academia just ignored it).
Pingback: The Wild West of Publication Reform Is Now | Neuroconscience
The central point of post-publication peer review is to avoid work being blocked from coming to light by editors or reviewers who, not seeing anything wrong with it, still find it irrelevant or not “good enough” for a specific journal. There are many examples of this. One of them is Lynn Margulis’s proposal of endosymbiosis for the origin of cellular organelles, which was published without review in PNAS after several years of finding unwilling editors and reviewers who mocked her work.
The possible issue of being overwhelmed by publications without the filter of review is already happening, even with review: work rejected from one journal finds a home in another journal further down the ranks. In my view, what the top journals select for is timeliness: a piece of work sufficiently advanced to be interesting, but sufficiently close to current knowledge that its significance can be understood without much effort. Groundbreaking work falls outside this narrow range.
The best publication approach I have seen described is the “Reviewing Entities”, which are like journals except that they do not publish: they merely select from pre-print archives, solicit reviews from experts in the field, and provide a curated list of publications along with summaries and commentary. Yann LeCun proposed them and described them at length: http://yann.lecun.com/ex/pamphlets/publishing-models.html
Albert – thanks for commenting. Yes, I’m quite intrigued by the “Reviewing Entity” model – might be just the complement/hybrid of pre- and post-“publication” review I’m suggesting. Thanks for that link!
Thanks for your reply. Indeed, when I first stumbled upon the “Reviewing Entities” proposal, I thought nothing else made sense anymore. It embodies, as you point out, an intermediate system that might just capture the best of both worlds. Now the question is how to transition the current system to the “Reviewing Entities” system. Being an evolutionary biologist, you must certainly be aware of the many forces at play to prevent the change. Perhaps only an outside force, such as funding agencies with a specific political mandate, could bring about the change. Three funding agencies have already joined forces to transform the situation significantly with the creation of eLife, which, as both reviewer and author, I have found to be extraordinarily compelling compared to the current system. Perhaps eLife, with its overt tolerance of pre-prints, is well-poised for transitioning to a “Reviewing Entity”. In some ways all journals that accept pre-prints as “original manuscripts” are already partly a “Reviewing Entity” in the Yann LeCun sense.
This strikes me as basically a knowledge management issue. Take a step back and view it through a KM lens — support organizations in vast multinational corporations which serve millions of customers on business-critical issues have determined ways to manage the flow of information to those who need it most. I know it well – I’ve been in that space since 2000.
Ultimately, you want the most trusted source you can find, but knowledge grows and evolves at a blistering pace, these days, so cutting out the front-runners (or delaying them) when they could provide useful insights to certain consumers doesn’t necessarily serve the whole. The problem is really that a select few attempt to control the show. Put a disclaimer warning label on the “not yet reviewed” content and release it into the wild. Some folks need that knowledge now.
A preprint is not post-reviewed; it is simply not reviewed. None of your criticisms would apply to an actual post-review process.
Daniel – well, I did make this distinction (perhaps a bit subtly). But you are correct: if every preprint were immediately and multiply post-reviewed, with those reviews done by well-qualified people without conflicts of interest, then my “privilege” criticism would not apply. Empirically, that doesn’t (yet) happen, I think – and if it did, the preprint system would simply be our existing journal system with “online early” access at time of submission rather than time of acceptance, wouldn’t it?
Pingback: Théorie du signal universitaire | Matières Vivantes
Pingback: “Peer Community In”: Beyond the traditional publishing model (guest post) | Scientist Sees Squirrel