Tag Archives: peer review

Turning our scientific lens on our scientific enterprise: a randomized experiment on double-blinding at Functional Ecology

Image: Experiment, © Nick Youngson via picpedia.org, CC BY-SA 3.0

I’m often puzzled by the reluctance of scientists to think scientifically and do science.  “Wait”, you say, “that’s a bizarre claim – we do science all the time, that’s why we’re called scientists”.  Well, yes, and no.

We love doing science on nature – the observations and experiments and theoretical work we deploy in discovering how the universe works.  What we don’t seem to love nearly as much is doing science on ourselves. Continue reading

Of course you can do significance testing on simulation data!

Warning: wonkish. Also long (but there’s a handy jump).

Over the course of a career, you become accustomed to reviewers raising strange objections to your work.  As sample size builds, though, a few strange objections come up repeatedly – and that’s interesting.  Today: the bizarre notion that one shouldn’t do significance testing with simulation data. Continue reading
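(To make the idea concrete before you click through: here's a minimal, hypothetical sketch in Python, not drawn from the post or from any of my papers, of a perfectly ordinary significance test applied to simulated data. The group names, effect size, and sample sizes are illustrative assumptions only.)

```python
# Minimal illustrative sketch: simulate two groups from a hypothetical model,
# then run a standard two-sample t-test on the simulated output. The simulation
# is simply the data-generating process; the test works exactly as it would on
# field data. All values below are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated "control" and "treatment" outcomes (hypothetical means and spread)
control = rng.normal(loc=10.0, scale=2.0, size=50)
treatment = rng.normal(loc=11.0, scale=2.0, size=50)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```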

Yes, most reviews are submitted at the deadline. No, that doesn’t justify shorter deadlines

Image: Deadline, by geralt CC 0 via pixabay.com.

Warning: I’m a bit grumpy today.

I’m back tilting at one of my favourite windmills today: requests for manuscript reviews with unreasonably short deadlines.  I’ve explained elsewhere that one should expect the process of peer review to take a while.  Journals would love to compress the process by reducing the time the manuscript spends on the reviewer’s desk – and so they ask for reviews to be returned in 2 weeks, or in 10 days, or less.  As a reviewer, I don’t play this game any more: I simply refuse all requests with deadlines shorter than 3 weeks.

I’ve asked a few editors and journal offices why they give such short deadlines, and they give two kinds of answers: one outcome-based, and one process-based. Continue reading

I refuse all review requests with deadlines < 3 weeks. Here’s why, and how.

Warning: another grumpy one

I’m seeing it more and more: requests to review manuscripts with ludicrously short deadlines.  Sometimes 10 days, sometimes 7, sometimes one week (5 business days).  And I see editors on Twitter bragging about a paper they’ve shepherded through the entire review process in 5 days, or a week, or two weeks.  I want all this to stop. Continue reading

The efficiency of the lazy scientist

Photo: Lazy red panda CC 0 via pxhere.com

I’ve just published a paper that had some trouble getting through peer review.  Nothing terribly unusual about that, of course, and the paper is better for its birthing pains.  But one reviewer comment (made independently, actually, by several different reviewers) really bugged me.  It revealed some fuzzy thinking that’s all too common amongst ecologists, having to do with the value of quick-and-dirty methods.  Quick-and-dirty methods deserve more respect.  I’ll explain using my particular paper as an example first, and then provide a more general analysis. Continue reading

Why journals like “reject, but resubmit”

It happened to me again, a few weeks ago: a manuscript I’d had high hopes for came back from the journal with a decision of “reject, but with an invitation to resubmit”.  It’s better than a flat-out reject, to be sure, but disappointing nonetheless.

There’s a widespread belief – almost a conspiracy theory – that journals use “reject, but resubmit” as a device to cheat on their handling time statistics (by which we mostly mean time from submission to first acceptance).  After all, if a manuscript gets “revision”, the clock keeps ticking from the original submission; but “reject, but resubmit” means we can pretend the resubmission is a brand new manuscript and start the clock over.  Clever but deceptive move, right?  Continue reading

Four unconvincing reasons not to crowdfund science

Image: Crowdfunding, US Securities and Exchange Commission (no, really), CC BY-NC-SA 2.0.

Sometimes I hold an opinion that I’m almost certain has to be wrong, but I can’t figure out why. This is one of those times.  I need you to help me.

I’ve been watching the trend to crowdfunded science, and it bothers me.  I completely understand why it happens, and why it’s become much more common. The science funding environment continues to be difficult – indeed, in many places it seems to be getting steadily more difficult, especially for early-career scientists and those doing the most basic/curiosity-driven science.  At the same time, the rise of web-based crowdfunding platforms* has made it relatively easy to reach potential donors (at least in principle, and more about that below). For any given researcher at any given time, surely the science is better with access to crowdfunded support than it would be without.  And several colleagues I like and respect have crowdfunded part of their work.  So why am I so uncomfortable with the model? Continue reading