Category Archives: peer review

Reviewing with imposter syndrome

Like most people, I often feel a little impostery. I’m convinced that sooner or later, people will notice that my work isn’t actually all that important, that my papers are somehow flawed, that I don’t really know what I’m talking about when I teach.  (People may even figure out that Scientist Sees Squirrel is seldom original, mostly wrong, and only occasionally interesting.)

I was part of some discussion on Twitter recently about imposter syndrome in the particular context of peer reviewing.  Some folks worry that they really aren’t qualified to review.  They worry that they may make the wrong recommendation: either miss a critical flaw or (conversely) see something as a critical flaw that really isn’t.  As an editor, I’ve had people whose judgement I respect decline to review on the grounds that they didn’t feel confident in their reviewing abilities.  Ironically, these are often the early career scientists who tend to be absolutely terrific reviewers.

For a variety of reasons, I think this fear is generally misplaced.

Things that are more important to me than reviewing your manuscript

Warning: I’m grumpy today.

Last week I got a review request from a major open-access journal.  It specified a 10-day deadline.  I thought that seemed a little quick – but the manuscript looked right up my alley, and I could see the beguiling glint of some available time coming up.  So I agreed.  But it turns out 10 days meant 10 calendar days, not 10 business days as I’d assumed, and now I’m late* and getting rather testy autogenerated messages from the editorial office about it.  This makes me rather testy in return.

To sign or not to sign: what the Replies taught me

So, last week Meghan Duffy and I put up what amounted to point-counterpoint blog posts.  I sign most of my reviews, while Meg doesn’t sign most of hers; but neither of us is quite sure that’s right.  As I’d hoped, we got a bunch of good comments in the Replies on each blog.  Here are a few things I learned from them.

Why I sign (most of) my reviews

A few months ago, I wrote a post that prompted a brief Twitter discussion with Meghan Duffy about whether we sign our reviews. I tend to sign mine, and Meg tends not to, but neither of us felt completely sure that our approach was the right one. So, we decided that it would be fun to write parallel posts about our views on signing (or not signing) reviews. Here is Meg’s, over at Dynamic Ecology; please read it, as she makes excellent points (all of which I agree with) even while arriving at a different conclusion (and a different default practice) than I do!

A lot has been written about the merits of signed vs. anonymous peer review.  There are arguments on both sides (which I don’t intend to review comprehensively), but in general I’m firmly convinced that at least the offer of anonymity is important to getting broad reviewer participation and high-quality reviews.  But I sign almost all of the reviews I write.  This seems odd in at least two ways.  First, here I am plugging anonymity, but I don’t use it much; and second, if I sign almost all of my reviews, why don’t I sign all of them?  I’ll try to explain; and I’m trying to explain to myself as much as I am to you, because I’m far from convinced that I’m doing the right thing.

How to handle a useless review

How should you handle a useless review?  I don’t mean one that’s actively idiotic, but a review that’s superficial, misunderstands the manuscript, is positive but lukewarm, or otherwise just doesn’t seem to point to any avenues for improvement. Perhaps it’s this gem:

This study seems competently executed, and most of the writing is pretty good.  A few analyses could benefit from more modern approaches.  However, in the end I’m unconvinced of its importance.*

Let’s start with how not to handle a useless review.

Early career researchers make great peer reviewers. How can we get more of them?

This is a joint post by Steve Heard and Timothée Poisot (who blogs over here).  Steve is an Associate Editor for The American Naturalist and for FACETS, while Timothée is an Associate Editor for PLOS Computational Biology and Methods in Ecology & Evolution.  However, the opinions here are our own, and may or may not be shared by those journals, by other AEs, or by anyone, really.

Working as an (associate) editor can be rewarding, but it’s not always easy – in part because finding reviewers can be a challenge.  Perhaps unsurprisingly, editors often think first of calling on senior scientists; but many of us have learned that this isn’t the only or the best path to securing helpful peer reviews.  In our experience, some of the best reviews come from early career researchers (ECRs).  ECR reviewers tend to complete reviews on time, to offer comprehensive comments reflecting deep familiarity with up-to-date literature, and to be constructive and kind while delivering criticism.  Online survey data confirm that our positive impressions of ECR reviewers are widely shared among editors (who nonetheless underuse ECRs), while other surveys indicate that ECRs are very willing to review, with many even feeling honoured by such requests.  [Both sets of surveys mentioned here were particular to ecology and evolution, although we suspect the results apply more widely.]

So there’s a paradox here: we (editors in general) love ECR reviews, but we underuse them.  Why?

Please don’t “make science transparent” by publishing your reviews

Image: “Transparency”, CC BY-SA HonestReporting.com, flickr/freepress

Note: This is a modestly revised version of my original post, which was not written very clearly. (Yes, I’m aware of the irony.)  It was easy, reading the original version, to think I was primarily objecting to journals publishing peer reviews.  I’m ambivalent about that (and my arguments below apply only weakly to that situation).  It should be clearer now that I’m focusing on authors publishing their peer reviews.  If you’d like to see how my writing led folks astray, I’ve archived the original version here.

We hear a lot about making science more transparent, more open – and that’s a good thing.  That doesn’t mean, though, that every way of making science more transparent should be adopted.  It’s like everything else, really: each step we could take will have benefits and costs, and we can’t ignore real impediments.  I worry that sometimes we lose sight of this.

One place I suspect we’re losing sight of it is in the movement for authors to publish their (received) peer reviews.  (There are also journals that publish peer reviews, such as Nature Communications; I think this is a lot of work with dubious return on investment, but that’s a topic for another day.)  What I often see is the suggestion that whenever I publish a paper, I should post the full history of its peer reviews on GitHub or the equivalent.  This lets readers see for themselves all that went into the making of the sausage.  It’s worth reading a good argument in favour of this, and I’ll point you to Terry McGlynn’s, which I think puts the case as well as it can be put.

I don’t agree, though.  Here’s why I won’t be posting my (received) peer reviews.