Peer review is a dumpster fire, right? At least, that’s what I hear – and there’s a reason for that.
Last month, I got reviews back on my latest paper. Opening that particular email always makes me both excited and depressed, and this one ran true to form: a nicely complimentary opening from the editor and Reviewer 1 – followed by several pages of detailed critiques from Reviewer 2 – and Reviewer 3 – and, believe it or not, Reviewer 4.
Image: Coins by KMR Photography CC BY 2.0
Reviewers, we all tell each other to remember, are unpaid. Sometimes we say it in scandalized tones, as in "Megapublisher X is making unconscionable profits on the back of unpaid reviewers". Other times we say it in laudatory ones, as in "We should be grateful to reviewers for all the help they give us, since they're working for us without pay". I've said versions of the latter many times: for example, in The Scientist's Guide to Writing, in this older post, and more recently and more explicitly in this post. But the thing is, it (mostly) isn't true. We should probably stop saying it.
I recently learned about Peer Community In (PCI), a new system for reviewing and recommending preprints. I’m really intrigued. It’s true that I’m an old fuddy-duddy who’s on record as saying that we often exaggerate the problems with the status quo, and as not liking to think outside the box. And yet there are good reasons to think it might be good to have other ways beyond traditional journals to disseminate science. We should experiment with a variety of new systems, and PCI seems like one well worth exploring. Read on to learn more!
What follows is a guest post by Denis Bourguet (firstname.lastname@example.org), Benoit Facon (email@example.com), Thomas Guillemaud (firstname.lastname@example.org), and Ruth Hufbauer (email@example.com). DB, BF, and TG are the founders of PCI, and RH is a colleague and member of the board of PCI Evol Biol.
We believe that the current system of publishing with academic journals suffers from four crucial problems. First, …
Like most people, I often feel a little impostery. I’m convinced that sooner or later, people will notice that my work isn’t actually all that important, that my papers are somehow flawed, that I don’t really know what I’m talking about when I teach. (People may even figure out that Scientist Sees Squirrel is seldom original, mostly wrong, and only occasionally interesting.)
I was part of some discussion on Twitter recently about imposter syndrome in the particular context of peer reviewing. Some folks worry that they really aren't qualified to review. They worry that they may make the wrong recommendation: either miss a critical flaw or (conversely) see something as a critical flaw that really isn't. As an editor, I've had people whose judgement I respect decline to review on the grounds that they didn't feel confident in their reviewing abilities. Ironically, these are often the early-career scientists who tend to be absolutely terrific reviewers.
For a variety of reasons, I think this fear is generally misplaced.
Warning: I’m grumpy today.
Last week I got a review request from a major open-access journal. It specified a 10-day deadline. I thought that seemed a little quick – but the manuscript looked right up my alley, and I could see the beguiling glint of some available time coming up. So I agreed. But it turns out 10 days meant 10 calendar days, not 10 business days as I'd assumed, and now I'm late* and getting rather testy autogenerated messages from the editorial office about it. This makes me rather testy in return.
So, last week Meghan Duffy and I put up what amounted to point-counterpoint blog posts. I sign most of my reviews, while Meg doesn't sign most of hers; but neither of us is quite sure that's right. As I'd hoped, we got a bunch of good comments in the Replies on each blog. Here are a few things I learned from them:
A few months ago, I wrote a post that prompted a brief Twitter discussion with Meghan Duffy about whether we sign our reviews. I tend to sign mine, and Meg tends not to, but neither of us felt completely sure that our approach was the right one. So, we decided that it would be fun to write parallel posts about our views on signing (or not signing) reviews. Here is Meg's, over at Dynamic Ecology; please read it, as she makes excellent points (all of which I agree with) even while arriving at a different conclusion (and a different default practice) than I do!
A lot has been written about the merits of signed vs. anonymous peer review. There are arguments on both sides (which I don't intend to review comprehensively), but in general I'm firmly convinced that at least the offer of anonymity is important to getting broad reviewer participation and high-quality reviews. But I sign almost all of the reviews I write. This seems odd in at least two ways. First, here I am plugging anonymity, but I don't use it much; and second, if I sign almost all of my reviews, why don't I sign all of them? I'll try to explain; and I'm trying to explain to myself as much as I am to you, because I'm far from convinced that I'm doing the right thing.