Image: Three choices – out of thousands.
Warning: long post. Grab a snack.
Having lots of options is a wonderful thing – right up until you have to pick one. Have you ever been torn among the two dozen entrées on a restaurant menu? Blanched at the sight of 120 different sedans on a used-car lot? If you have, you might also wonder how on earth you’re going to choose a journal to grace with your latest manuscript. There are, quite literally, thousands of scientific journals out there – probably tens of thousands – and even within a single field there will be hundreds of options. (Scimago lists 352 journals in ecology, for example, but that list is far from comprehensive.)
What follows are some of the things I think you might consider when choosing a journal. Continue reading
Warning: another grumpy one
I’m seeing it more and more: requests to review manuscripts with ludicrously short deadlines. Sometimes 10 days, sometimes 7, sometimes one week (5 business days). And I see editors on Twitter bragging about a paper they’ve shepherded through the entire review process in 5 days, or a week, or two weeks. I want all this to stop. Continue reading
Warning: astonishingly trivial
Three weeks ago I showed you my Journal Life List, and I invented the Journal Diversity Index (J/P, where my P papers have appeared in J different journals). A lot of you liked that and calculated your own JDIs, and I don’t know that we learned anything profound, but it was fun and there’s nothing wrong with that.
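The JDI is just a ratio, but for the data-nerdy it's the kind of thing you might compute from a publication list. A minimal sketch (the journal names here are invented, purely for illustration):

```python
# Journal Diversity Index (JDI) as defined above: J distinct journals
# divided by P total papers. The journal list is invented for illustration.
papers = [
    "Ecology", "Oikos", "Ecology", "American Naturalist",
    "Oikos", "Ecography",
]

def jdi(journals):
    """Return J/P: number of distinct journals over number of papers."""
    return len(set(journals)) / len(journals)

print(round(jdi(papers), 2))  # 4 distinct journals over 6 papers -> 0.67
```

A JDI of 1.0 would mean every paper appeared in a different journal; lower values mean more repeat visits to the same outlets.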
But I can never leave well enough alone. Continue reading
I enjoy watching birds, but I don’t keep a life list. I don’t keep a life list for anything, really, which might surprise people who know how data-nerdy I am. The exception: the journals I’ve published in. I don’t really know why I track this, but for some reason I find it fun. (To be honest, I’m kind of proud of it and I celebrate each new addition, but I can’t tell you why and I have a sneaking suspicion that I shouldn’t*).
So here’s my list as of today: Continue reading
Image: “Transparency”, CC BY-SA HonestReporting.com, flickr/freepress
Note: This is a modestly revised version of my original post, which was not written very clearly. (Yes, I’m aware of the irony.) It was easy, reading the original version, to think I was primarily objecting to journals publishing peer reviews. I’m ambivalent about that (and my arguments below apply only weakly to that situation). It should be clearer now that I’m focusing on authors publishing their peer reviews. If you’d like to see how my writing led folks astray, I’ve archived the original version here.
We hear a lot about making science more transparent, more open – and that’s a good thing. That doesn’t mean, though, that every way of making science more transparent should be adopted. It’s like everything else, really: each step we could take will have benefits and costs, and we can’t ignore real impediments. I worry that sometimes we lose sight of this.
One place I suspect we’re losing sight of it is in the movement for authors to publish their (received) peer reviews. (There are also journals that publish peer reviews, such as Nature Communications; I think this is a lot of work with dubious return on investment, but that’s a topic for another day). What I often see is the suggestion that whenever I publish a paper, I should post the full history of its peer reviews on GitHub or the equivalent. This lets readers see for themselves all that went into the making of the sausage. It’s worth reading a good argument in favour of this, and I’ll point you to Terry McGlynn’s, which I think puts the case as well as it can be put.
I don’t agree, though. Here’s why I won’t be posting my (received) peer reviews: Continue reading
I’ve seen half a dozen posts and essays arguing that we should stop publicizing, listing, or paying attention to the names of the journals our papers are published in. The argument goes along these lines*. First, we should judge the worth of papers based on their content, not based on where they were published. Second, when filtering papers – deciding which ones to read – we should filter them based on what they’re about (as communicated by their titles and abstracts), not by the journal they’re in.
This argument is, I think, a logical extension of arguments against the impact factor. I think those arguments are overdone, and I think this one is too. Continue reading