So, last week Meghan Duffy and I put up what amounted to point-counterpoint blog posts. I sign most of my reviews, while Meg doesn’t sign most of hers; but neither of us is quite sure that’s right. As I’d hoped, we got a bunch of good comments in the replies on each blog. Here are a few things I learned from them:
A few months ago, I wrote a post that prompted a brief twitter discussion with Meghan Duffy about whether we sign our reviews. I tend to sign mine, and Meg tends not to, but neither of us felt completely sure that our approach was the right one. So, we decided that it would be fun to write parallel posts about our views on signing (or not signing) reviews. Here is Meg’s, over at Dynamic Ecology; please read it, as she makes excellent points (all of which I agree with) even while arriving at a different conclusion (and a different default practice) than I do!
A lot has been written about the merits of signed vs. anonymous peer review. There are arguments on both sides (which I don’t intend to review comprehensively), but in general I’m firmly convinced that at least the offer of anonymity is important to getting broad reviewer participation and high-quality reviews. But I sign almost all of the reviews I write. This seems odd in at least two ways. First, here I am plugging anonymity, but I don’t use it much; and second, if I sign almost all of my reviews, why don’t I sign all of them? I’ll try to explain; and I’m trying to explain to myself as much as I am to you, because I’m far from convinced that I’m doing the right thing.
How should you handle a useless review? I don’t mean one that’s actively idiotic, but a review that’s superficial, misunderstands the manuscript, is positive but lukewarm, or otherwise just doesn’t seem to point to any avenues for improvement. Perhaps it’s this gem:
This study seems competently executed, and most of the writing is pretty good. A few analyses could benefit from more modern approaches. However, in the end I’m unconvinced of its importance.*
Let’s start with how not to handle a useless review.
This is a joint post by Steve Heard and Timothée Poisot (who blogs over here). Steve is an Associate Editor for The American Naturalist and for FACETS, while Timothée is an Associate Editor for PLOS Computational Biology and Methods in Ecology & Evolution. However, the opinions here are our own, and may or may not be shared by those journals, by other AEs, or by anyone, really.
Working as an (associate) editor can be rewarding, but it’s not always easy – in part because finding reviewers can be a challenge. Perhaps unsurprisingly, editors often think first to call on senior scientists; but many of us have learned that this isn’t the only or the best path to securing helpful peer reviews. In our experience, some of the best reviews come from early career researchers (ECRs). ECR reviewers tend to complete reviews on time, to offer comprehensive comments reflecting deep familiarity with up-to-date literature, and to be constructive and kind while delivering criticism. Online survey data confirm that our positive impressions of ECR reviewers are widely shared among editors (who nonetheless underuse ECRs), while other surveys indicate that ECRs are very willing to review, with many even feeling honoured by such requests. [Both sets of surveys mentioned here were particular to ecology and evolution, although we suspect the results apply more widely.]
So there’s a paradox here: we (editors in general) love ECR reviews, but we underuse them. Why?
Image: “Transparency”, CC BY-SA HonestReporting.com, flickr/freepress
Note: This is a modestly revised version of my original post, which was not written very clearly. (Yes, I’m aware of the irony.) It was easy, reading the original version, to think I was primarily objecting to journals publishing peer reviews. I’m ambivalent about that (and my arguments below apply only weakly to that situation). It should be clearer now that I’m focusing on authors publishing their peer reviews. If you’d like to see how my writing led folks astray, I’ve archived the original version here.
We hear a lot about making science more transparent, more open – and that’s a good thing. That doesn’t mean, though, that every way of making science more transparent should be adopted. It’s like everything else, really: each step we could take will have benefits and costs, and we can’t ignore real impediments. I worry that sometimes we lose sight of this.
One place I suspect we’re losing sight of it is in the movement for authors to publish their (received) peer reviews. (There are also journals that publish peer reviews, such as Nature Communications; I think this is a lot of work with dubious return on investment, but that’s a topic for another day.) What I often see is the suggestion that whenever I publish a paper, I should post the full history of its peer reviews on GitHub or the equivalent. This lets readers see for themselves all that went into the making of the sausage. It’s worth reading a good argument in favour of this, and I’ll point you to Terry McGlynn’s, which I think puts the case as well as it can be put.
I don’t agree, though. Here’s why I won’t be posting my (received) peer reviews:
Photo: Journal of Universal Rejection coffee mug (crop), by Tilemahos Efthimiadis via flickr.com, CC BY-SA 2.0.
Peer review gets a lot of grief. It’s one of the things we love to say is “broken”. It takes too long, or at least we think it does. Occasionally a reviewer completely misses the point, goes on an ad hominem attack, or produces some other kind of idiotic review. But for all the flak aimed its way, I’m convinced that peer review – overall – is fantastic; volunteer reviewers and editors have vastly improved nearly every one of my papers.
But there’s one kind of review that really burns my bacon.
Image: Author expectations for “optimal” peer review: Figure 1 from Nguyen et al. (2015) PLoS ONE 10(8):e0132557.
Two things I saw last week motivated today’s post. The first was Amy Parachnowitsch’s interesting blog post, wondering if peer review might sometimes be faster than she’d like: too fast for her to get head-clearing perspective by putting a manuscript away for a while. The second was a paper by Nguyen et al. reporting author opinions of how long peer review should take. Some of those opinions are absolutely astonishing.
I’ll get to my astonishment in a moment, but first: how long should peer review take?