Last week, I wrote about lists of suggested reviewers (for manuscripts). Most journals require them, although authors sometimes resent it; as an editor I use them and appreciate them very much. But there’s another list that puzzles some authors: the list of disfavoured reviewers. This is a list of people that you’re requesting not be asked to review your manuscript. As an editor, how do I use that list? And who (if anyone) should you put on yours? Continue reading
You know the feeling: you’ve spent many hours painstakingly massaging your manuscript into compliance with a journal’s idiosyncratic formatting requirements. You’ve spent another two hours battling its online submission system*. You’re almost there – ready to hit “submit” and go for a well-deserved beer or cinnamon bun – but there’s One More Screen. The system wants your list of five recommended reviewers. Does this really matter? What does an editor do with it?
Well, I can’t speak for every editor (and I hope some others will add their own thoughts in the Replies). But I can tell you what I do with them, and perhaps that can guide you when you get asked for that list. Continue reading
This is a guest post by Bastien Castagneyrol. This is an issue I’ve thought about (as have others), and like Bastien, I don’t quite know what action to take. I like Bastien’s climbing metaphor. In a related one, the journey from subscriber-pays paywall to author-pays-open-access crosses a very rugged landscape, with crevasses both obvious and hidden.
Disclosure from Bastien: what follows is not exhaustive and could be much better documented. It reflects my feelings, not my knowledge (although my feelings are partly nurtured by some knowledge). I’m trying here to ask a genuine question.
The climbing metaphor
My academic career is a rocky cliff. Continue reading
Image: The reviewer-selection screen at one journal I edit for.
Warning: more detail than you may care for.
Every manuscript submitted to a (peer-reviewed) journal needs reviewers, and it’s the editor’s job to choose appropriate ones. How does that happen? Have you wondered? Well, I can’t tell you how it happens in general; but I can tell you how I do it. Continue reading
Image: Three choices – out of thousands.
Warning: long post. Grab a snack.
Having lots of options is a wonderful thing – right up until you have to pick one. Have you ever been torn among the two dozen entrées on a restaurant menu? Blanched at the sight of 120 different sedans on a used-car lot? If you have, you might also wonder how on earth you’re going to choose a journal to grace with your latest manuscript. There are, quite literally, thousands of scientific journals out there – probably tens of thousands – and even within a single field there will be hundreds of options. (Scimago lists 352 journals in ecology, for example, but that list is far from comprehensive.)
What follows are some of the things I think you might consider when you choose a journal. Continue reading
Warning: another grumpy one
I’m seeing it more and more: requests to review manuscripts with ludicrously short deadlines. Sometimes 10 days, sometimes 7, sometimes one week (5 business days). And I see editors on Twitter bragging about a paper they’ve shepherded through the entire review process in 5 days, or a week, or two weeks. I want all this to stop. Continue reading
Warning: astonishingly trivial
Three weeks ago I showed you my Journal Life List, and I invented the Journal Diversity Index (J/P, where my P papers have appeared in J different journals). A lot of you liked that and calculated your own JDIs, and I don’t know that we learned anything profound, but it was fun and there’s nothing wrong with that.
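If you’d like to calculate your own JDI, the arithmetic is simple enough to sketch in a few lines of Python. (This is just an illustrative sketch; the journal names below are placeholders, not anyone’s actual publication list.)

```python
def journal_diversity_index(journals_per_paper):
    """Journal Diversity Index: J/P, where the P papers in the list
    appeared in J distinct journals. Input is one journal name per paper."""
    p = len(journals_per_paper)       # P: total papers
    j = len(set(journals_per_paper))  # J: distinct journals
    return j / p

# Hypothetical record: 4 papers across 3 journals gives JDI = 0.75
papers = ["Am Nat", "Ecology", "Am Nat", "Oikos"]
print(journal_diversity_index(papers))
```

A JDI of 1 means every paper landed in a different journal; values near 0 mean a publication record concentrated in a few venues.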
But I can never leave well enough alone. Continue reading
I enjoy watching birds, but I don’t keep a life list. I don’t keep a life list for anything, really, which might surprise people who know how data-nerdy I am. The exception: the journals I’ve published in. I don’t really know why I track this, but for some reason I find it fun. (To be honest, I’m kind of proud of it and I celebrate each new addition, but I can’t tell you why and I have a sneaking suspicion that I shouldn’t*).
So here’s my list as of today: Continue reading
Image: “Transparency”, CC BY-SA HonestReporting.com, flickr/freepress
Note: This is a modestly revised version of my original post, which was not written very clearly. (Yes, I’m aware of the irony.) It was easy, reading the original version, to think I was primarily objecting to journals publishing peer reviews. I’m ambivalent about that (and my arguments below apply only weakly to that situation). It should be clearer now that I’m focusing on authors publishing their peer reviews. If you’d like to see how my writing led folks astray, I’ve archived the original version here.
We hear a lot about making science more transparent, more open – and that’s a good thing. That doesn’t mean, though, that every way of making science more transparent should be adopted. It’s like everything else, really: each step we could take will have benefits and costs, and we can’t ignore real impediments. I worry that sometimes we lose sight of this.
One place I suspect we’re losing sight of it is in the movement for authors to publish their (received) peer reviews. (There are also journals that publish peer reviews, such as Nature Communications; I think this is a lot of work with dubious return on investment, but that’s a topic for another day). What I often see is the suggestion that whenever I publish a paper, I should post the full history of its peer reviews on Github or the equivalent. This lets readers see for themselves all that went into the making of the sausage. It’s worth reading a good argument in favour of this, and I’ll point you to Terry McGlynn’s, which I think puts the case as well as it can be put.
I don’t agree, though. Here’s why I won’t be posting my (received) peer reviews: Continue reading
I’ve seen half a dozen posts and essays arguing that we should stop publicizing, listing, or paying attention to the names of the journals our papers are published in. The argument goes along these lines*. First, we should judge the worth of papers based on their content, not based on where they were published. Second, when filtering papers – deciding which ones to read – we should filter them based on what they’re about (as communicated by their titles and abstracts), not by the journal they’re in.