Photo: Journal of Universal Rejection coffee mug (crop), by Tilemahos Efthimiadis via flickr.com, CC BY-SA 2.0.
Peer review gets a lot of grief. It’s one of the things we love to say is “broken”. It takes too long, or at least we think it does. Occasionally a reviewer completely misses the point, goes on an ad hominem attack, or produces some other kind of idiotic review. But for all the flak aimed its way, I’m convinced that peer review – overall – is fantastic; volunteer reviewers and editors have vastly improved nearly every one of my papers.
But there’s one kind of review that really burns my bacon.
Maybe you’ve gotten this review too. It’s the one that points out that there was a better way to do your experiment (or build your model, or make your observations). Not, importantly, that the way you did it was wrong; just that there was a way to gather data (for example) that tested the hypothesis more directly, were less noisy, or had better distributional properties.
Usually, I know there was a better way to do the experiment – after all, there almost always is. Of course, that better way probably would have been more expensive, or more labour-intensive, or more time-consuming. (Sometimes it would have required a smarter scientist, but let’s not think too hard about that.) Very often, I tried that “better” way, and learned that it doesn’t work. One or more of those reasons is generally why I’m not reporting the use of that “better” method in the first place. Sometimes, to be sure, the reviewer’s better way really is better, and something I hadn’t thought of. That’s exciting, because it puts a new arrow in my quiver for my next experiment; but it doesn’t have anything to do with judging the value of the data in hand. So, there might have been a better way – so what? I’d like to see my work judged not on whether there was a hypothetically better way to do it, but instead on whether what I actually did advances our knowledge of nature*.
As I’d love to write in a response letter (usually, but not always, I stop myself before actually doing such things):
You don’t have a choice between my data and perfect data. You have a choice between my data and no data at all.
I’ve had several papers rejected as a result of a there-was-a-better-way review**, and it makes me furious every time. As a result, when I review, I try hard to assess the value of data in hand independently of my brilliant (and usually untested) alternative ideas. If I’m tempted to write “it would be better to do X”, I think twice. If I’m tempted to write “it would have been better to have done X”, I just don’t. Finally, as an editor, I try to recognize there-was-a-better-way reviews and reassure authors that they can disregard them. Please join me.
© Stephen Heard (email@example.com) September 27, 2016
*By which I do not mean that my result is incontrovertibly, reproducibly “correct”. Those who expect every scientific paper to be such have, I think, the rather naïve view that our literature should consist of a big pile of facts. Instead, I mean that my result can add to the edifice we as scientists are collectively building, and in the long run, in combination with other studies (both mine and yours), deepen or broaden our understanding.
**None permanently. I’ve only ever submitted one manuscript that will remain forever unpublished – likely because (as reviewers argued, and I only much later came to agree) it was completely and irreparably wrong.