Warning: astonishingly trivial
Three weeks ago I showed you my Journal Life List, and I invented the Journal Diversity Index (J/P, where my P papers have appeared in J different journals). A lot of you liked that and calculated your own JDIs, and I don’t know that we learned anything profound, but it was fun and there’s nothing wrong with that.
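For anyone who'd rather not count by hand, here's a minimal sketch of the JDI calculation described above (J distinct journals divided by P papers). The function name and the example journal names are my own illustration, not anything from the original post.

```python
def journal_diversity_index(journals):
    """Compute the Journal Diversity Index, J/P.

    `journals` lists the journal each paper appeared in, one entry
    per paper, so P = len(journals) and J = number of distinct names.
    """
    if not journals:
        raise ValueError("need at least one paper")
    return len(set(journals)) / len(journals)

# Hypothetical record: 4 papers spread across 3 distinct journals
record = ["Am Nat", "Ecology", "Am Nat", "Oikos"]
print(journal_diversity_index(record))  # 3/4 = 0.75
```

A JDI of 1.0 means every paper landed in a different journal; values near zero mean a record concentrated in just a few.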
But I can never leave well enough alone.
I enjoy watching birds, but I don’t keep a life list. I don’t keep a life list for anything, really, which might surprise people who know how data-nerdy I am. The exception: the journals I’ve published in. I don’t really know why I track this, but for some reason I find it fun. (To be honest, I’m kind of proud of it and I celebrate each new addition, but I can’t tell you why and I have a sneaking suspicion that I shouldn’t*).
So here’s my list as of today:
Image: “Transparency”, CC BY-SA HonestReporting.com, flickr/freepress
Note: This is a modestly revised version of my original post, which was not written very clearly. (Yes, I’m aware of the irony.) It was easy, reading the original version, to think I was primarily objecting to journals publishing peer reviews. I’m ambivalent about that (and my arguments below apply only weakly to that situation). It should be clearer now that I’m focusing on authors publishing their peer reviews. If you’d like to see how my writing led folks astray, I’ve archived the original version here.
We hear a lot about making science more transparent, more open – and that’s a good thing. That doesn’t mean, though, that every way of making science more transparent should be adopted. It’s like everything else, really: each step we could take will have benefits and costs, and we can’t ignore real impediments. I worry that sometimes we lose sight of this.
One place I suspect we’re losing sight of it is in the movement for authors to publish their (received) peer reviews. (There are also journals that publish peer reviews, such as Nature Communications; I think this is a lot of work with dubious return on investment, but that’s a topic for another day.) What I often see is the suggestion that whenever I publish a paper, I should post the full history of its peer reviews on GitHub or the equivalent. This lets readers see for themselves all that went into the making of the sausage. It’s worth reading a good argument in favour of this, and I’ll point you to Terry McGlynn’s, which I think puts the case as well as it can be put.
I don’t agree, though. Here’s why I won’t be posting my (received) peer reviews:
I’ve seen half a dozen posts and essays arguing that we should stop publicizing, listing, or paying attention to the names of the journals our papers are published in. The argument goes along these lines*. First, we should judge the worth of papers based on their content, not based on where they were published. Second, when filtering papers – deciding which ones to read – we should filter them based on what they’re about (as communicated by their titles and abstracts), not by the journal they’re in.
This argument is, I think, a logical extension of arguments against the impact factor. Those arguments are overdone, and I think this one is too.