Last week, I wrote about lists of suggested reviewers (for manuscripts). Most journals require them, although authors sometimes resent it; as an editor, I use them and appreciate them very much. But there’s another list that puzzles some authors: the list of disfavoured reviewers. This is a list of people that you’re requesting not be asked to review your manuscript. As an editor, how do I use that list? And who (if anyone) should you put on yours?
You know the feeling: you’ve spent many hours painstakingly massaging your manuscript into compliance with a journal’s idiosyncratic formatting requirements. You’ve spent another two hours battling its online submission system*. You’re almost there – ready to hit “submit” and go for a well-deserved beer or cinnamon bun – but there’s One More Screen. The system wants your list of five recommended reviewers. Does this really matter? What does an editor do with it?
Well, I can’t speak for every editor (and I hope some others will add their own thoughts in the Replies). But I can tell you what I do with them, and perhaps that can guide you when you get asked for that list.
It’s been a rough couple of weeks for rose-coloured glasses in biology. There’s the unfolding saga of paper retractions in social behaviour; and then there’s cite-my-paper-gate. I don’t have much to say about the former (beyond expressing my admiration for the many scientists who are handling their unintended involvement with grace and integrity). But the latter made me think.
If you didn’t hear about cite-my-paper-gate: someone (yet to be publicly identified) has been busted over all kinds of reviewing and editing malpractice.
Image: Experiment, © Nick Youngson via picpedia.org, CC BY-SA 3.0
I’m often puzzled by the reluctance of scientists to think scientifically and do science. “Wait”, you say, “that’s a bizarre claim – we do science all the time, that’s why we’re called scientists”. Well, yes, and no.
We love doing science on nature – the observations and experiments and theoretical work we deploy in discovering how the universe works. What we don’t seem to love nearly as much is doing science on ourselves.
Image: Gandalf the Gatekeeper, CC 0 via goodfreephotos.com
Peer review is arguably central to what we do as scientists – to a considerable extent it’s what lets us recognize an authentic scientific enterprise. Consider, for instance, the distinction between peer-reviewed publications and hack pieces in predatory journals; or think about how peer-reviewed grant proposals differ from pork-barrel politics. Given this key role, it’s rather surprising to find a great deal of disagreement about what peer review is for, how it works best, or even whether it works at all.
Along these lines, I was very surprised a couple of weeks ago to see a flurry of tweets from some folks who wanted journals to give them a simple thumbs-up or thumbs-down on their manuscripts. No comments, please, and no suggestions for improvement, thanks, just a writ of execution or an ennoblement.
Image: One way? © Andrea Schafthuizen licensed CC 0 via publicdomainpictures.net
Last week I got the first two peer reviews of my new book (of the complete manuscript, that is*). I read them with equal doses of eagerness and trepidation (as one does), and before long something very, very familiar happened: I caught Reviewer 1 and Reviewer 2 offering exactly opposite and completely conflicting suggestions. It was a structural issue: according to Reviewer 1, the book has too many short chapters and I should combine them into fewer, longer ones, while according to Reviewer 2, shorter chapters are a plus because they make the material easier to absorb. So what do I do?
Image: The reviewer-selection screen at one journal I edit for.
Warning: more detail than you may care for.
Every manuscript submitted to a (peer-reviewed) journal needs reviewers, and it’s the editor’s job to choose appropriate ones. How does that happen? Have you wondered? Well, I can’t tell you how it happens in general; but I can tell you how I do it.
Image: Deadline, by geralt CC 0 via pixabay.com.
Warning: I’m a bit grumpy today.
I’m back tilting at one of my favourite windmills today: requests for manuscript reviews with unreasonably short deadlines. I’ve explained elsewhere that one should expect the process of peer review to take a while. Journals would love to compress the process by reducing the time the manuscript spends on the reviewer’s desk – and so they ask for reviews to be returned in 2 weeks, or in 10 days, or less. As a reviewer, I don’t play this game any more: I simply refuse all requests with deadlines shorter than 3 weeks.
I’ve asked a few editors and journal offices why they give such short deadlines, and they give two kinds of answers: one outcome-based, and one process-based.
Image: “Waiting”, Edgar Degas, circa 1882 (pastel on paper). Collection of the Getty Center, Los Angeles. Public domain.
I’m sure it’s happened to you. It’s happened to me. With excitement, you punch the “submit” button, and celebrate your manuscript being off your desk and into peer review. And then you wait. And wait. And wait some more. Sometimes, it feels like you’re waiting forever. When that happens, is it appropriate to e-mail the journal office to ask what’s holding things up? And if so, how long should you wait?
Warning: another grumpy one
I’m seeing it more and more: requests to review manuscripts with ludicrously short deadlines. Sometimes 10 days, sometimes 7, sometimes one week (5 business days). And I see editors on Twitter bragging about a paper they’ve shepherded through the entire review process in 5 days, or a week, or two weeks. I want all this to stop.