Category Archives: publishing

Let’s stop (usually) with the second round of review

I’m grumpy today about something that hasn’t even happened yet. Yes, that’s probably unreasonable; but I’m grumpy about something that happens too often, and I’m going to make myself feel better by venting just a little. I claim (at least partly because it’s true) that I have a real point to make.

Here’s what I’m grumpy about: second rounds of peer review. Continue reading

Those journals may be “fake”, but I don’t think they’re “predatory”

If your email inbox is like mine, you’ve seen more than a few invitations like the one above. There are thousands of “journals” offering to publish pretty much anything, without peer review or with only the pretence of it. They tend not to bother with such things as copy-editing or secure long-term web hosting either – and why should they? They’re not in business to help drive scientific progress; they’re in business strictly to collect authors’ money (normally in the form of article processing charges, but notice the slick little grift in the teaser email illustrated above).

Journals like this get labelled “predatory”, but I don’t think that’s the right label. Continue reading

Weird things scientists believe: that paying reviewers won’t cost us

Warning: a little ranty.

I’m fascinated by the weird things some scientists believe, in the face of what seems to me common sense and obvious constraints. There are many examples (like the common disdain for “nearly significant”), but the one I’ve chosen to offend people with today is a surprisingly common belief: that we could have journals pay their peer reviewers out of their profit margins without additional cost to authors. I see this claim frequently, most often on Twitter (although I’m not going to link to any particular exemplar, because the claim is too common to make it sensible to dunk on any one individual).

To get one thing out of the way immediately: I’m talking here about the notion that a journal could pay its reviewers. Continue reading

Why are scientific frauds so obvious?

This post was sparked by an interesting e-mail exchange with Jeremy Fox, over at Dynamic Ecology. We’d both come across the same announcement of a (very likely) case of research fraud, and had some similar reactions to it. We both knew there was a blog post in it! We agreed to post at the same time, but not to share draft posts. My prediction: we agree on some parts, not on others; but Jeremy’s post is better.

Behavioural economics got a bit of a black eye last week with the revelation that a major study by some very prominent authors is, virtually certainly, based on fraudulent data. What’s really astonishing, if you read that post (and you should), is that the fraud was so stunningly obvious with even a rather shallow dive into the data. Just to pick one thing, a treatment effect in the paper seems to have been generated by taking one variable and adding to it a random number drawn from a uniform distribution bounded by 0 and 50,000. (Seriously, read the post.) This is such an implausible distribution for a real experimental effect that, once it’s been noticed, it’s about the most flagrant red flag you could imagine.
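To see why that pattern screams “fabricated” once you go looking, here’s a minimal sketch in Python – my own illustration, with made-up numbers, not the study’s actual data or analysis. It just compares the per-observation “effect” you’d get from adding uniform noise to a baseline column against the effect you’d get from a plausible noisy shift:

```python
# Sketch (illustrative only): why a Uniform(0, 50000) "treatment effect" is a red flag.
# A fabricated treatment column built as baseline + uniform noise yields differences
# that are flat between hard cutoffs; a real-ish effect bunches around its mean.
import numpy as np

rng = np.random.default_rng(42)
n = 5000

baseline = rng.normal(60_000, 15_000, n)                   # some plausible baseline variable
fake_treatment = baseline + rng.uniform(0, 50_000, n)       # fabricated: add flat uniform noise
real_treatment = baseline + rng.normal(10_000, 8_000, n)    # plausible: noisy shift upward

for label, treatment in [("fabricated", fake_treatment), ("plausible", real_treatment)]:
    diff = treatment - baseline
    counts, _ = np.histogram(diff, bins=5)
    print(f"{label:10s}  min={diff.min():9.0f}  max={diff.max():9.0f}  histogram counts={counts}")
# The fabricated differences fill the bins almost evenly and stop dead at 0 and 50,000;
# the plausible ones pile up near the middle and tail off at the edges.
```

Nothing about this requires forensic statistics – a histogram of the implied effect is enough, which is exactly the point.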

It’s not just this paper, though. Continue reading

Tricks for reading and correcting proofs

Some parts of a writing project are exhilarating; some parts (at least for me) are grueling; and some are stubbornly perplexing.  One part is important but very, very tedious, and I’m deep in that part now:* checking proofs. Fortunately, there are some tricks to make dealing with proofs easier.

In case you haven’t yet had the pleasure: the “proof” is the all-but-final version of your piece of writing, typeset exactly as it will appear in the journal (or as a published book, or whatever). “Checking” proof means what it sounds like: going through the proof in search of any errors or other problems introduced during the typesetting process – or the (hopefully rare!) errors that have snuck through revision and copy-editing undetected.**

Checking proof is mind-numbingly boring, and it’s hard to do effectively. Continue reading

Why I don’t want to be part of “open peer review”

Warning: header image captures this post pretty well.

Should peer review be open and transparent?  Sounds appealing, doesn’t it?  Who’d want to go on record as saying anything shouldn’t be made more open and transparent? Well, I’ll give it a go, because I’ve recently declined to review two manuscripts that looked interesting, for a reason that’s entirely new to me.* In both cases, the journals specified that by agreeing to review, I was consenting for my reviewer comments, and the authors’ response, to be published as a supplementary file with the paper. Sorry – I’m not having any part of that. Continue reading

Yes, that paper is paywalled. But you can read it anyway.

Last week, I wrote about a fascinating and puzzling (if somewhat dispiriting) paper assessing the value of science-communication training. In an (obviously futile, I know) attempt to counter the scourge that is “I didn’t read the paper but here are my thoughts anyway”, I suggested repeatedly that folks ought to read the paper. And I suppose I should have seen it coming: a veritable deluge of “It’s paywalled, I can’t read it”.

The first half of that objection is true: the paper is “paywalled”. So are a lot of good things in life: Continue reading

Opinion, evidence, and preprints

Perhaps you’ve noticed that scientists, like other humans, can hold very strong opinions about certain things.* Perhaps you’ve also noticed that those opinions are sometimes backed up by voluminous evidence (gravity points down; climate change is real and caused by humans; vaccines are safe and effective) – but that sometimes they are not. Here’s a great example related to preprints.

Preprints are probably the most interesting development in scientific publishing in the last 100 years.** Continue reading

The list of disfavoured reviewers: who should be on yours? And will an editor heed it?

Last week, I wrote about lists of suggested reviewers (for manuscripts).  Most journals require them, although authors sometimes resent it; as an editor I use them and appreciate them very much.  But there’s another list that puzzles some authors: the list of disfavoured reviewers.  This is a list of people that you’re requesting not be asked to review your manuscript.  As an editor, how do I use that list?  And who (if anyone) should you put on yours? Continue reading

Do editors really use those lists of “recommended reviewers”? And who should you suggest?

You know the feeling: you’ve spent many hours painstakingly massaging your manuscript into compliance with a journal’s idiosyncratic formatting requirements. You’ve spent another two hours battling its online submission system*.  You’re almost there – ready to hit “submit” and go for a well-deserved beer or cinnamon bun – but there’s One More Screen.  The system wants your list of five recommended reviewers.  Does this really matter?  What does an editor do with it?

Well, I can’t speak for every editor (and I hope some others will add their own thoughts in the Replies).  But I can tell you what I do with them, and perhaps that can guide you when you get asked for that list. Continue reading