I don’t usually blog about my own papers, except in some rather meta ways, but last week saw the publication of a paper I’m really, really proud of. And it has some interesting backstory, including its conception right here on Scientist Sees Squirrel.
The paper is called “Site-selection bias and apparent population declines in long-term studies”, and it’s just come out in Conservation Biology. It started, back in August of 2016, with a post called Why Most Studied Populations Should Decline. That post made a very simple point about population monitoring, long-term studies, and inferences about population decline. That point: if ecologists tend to begin long-term studies in places where their study organisms are common (and there are lots of very good reasons why they might), then we should expect long-term studies to frequently show population declines simply as a statistical artifact. That shouldn’t be controversial – it’s just a manifestation of regression to the mean – but it’s been almost entirely unaddressed in the literature.
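To see the artifact concretely, here's a minimal simulation of my own (not taken from the paper, and the numbers are made up): populations fluctuate around a perfectly stable mean, but each "study" begins only at a site where the current count happens to be above average. Later censuses regress toward the mean, so the average study shows a decline even though no population is actually declining.

```python
# Hypothetical illustration of the site-selection artifact:
# populations are stable, but studies start where counts are high,
# so later counts regress to the mean and look like declines.
import random

random.seed(42)

def simulate_study(n_years=10, true_mean=100, sd=20):
    # Mimic a researcher choosing a study site where the species is
    # currently common: redraw until the starting count is above average.
    start = random.gauss(true_mean, sd)
    while start <= true_mean:
        start = random.gauss(true_mean, sd)
    # Subsequent censuses are independent draws around the SAME stable mean.
    later = [random.gauss(true_mean, sd) for _ in range(n_years - 1)]
    counts = [start] + later
    return counts[-1] - counts[0]  # apparent change over the study

changes = [simulate_study() for _ in range(10_000)]
mean_change = sum(changes) / len(changes)
print(f"Mean apparent change per study: {mean_change:.1f}")
# The mean is clearly negative, despite zero true trend at every site.
```

The parameter values and the "redraw until above average" rule are arbitrary choices for illustration; any selection rule that favours currently-high counts produces the same qualitative result.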
A bunch of folks read that blog post. Some were mortally offended.
Image: Gandalf the Gatekeeper, CC 0 via goodfreephotos.com
Peer review is arguably central to what we do as scientists – to a considerable extent it’s what lets us recognize an authentic scientific enterprise. Consider, for instance, the distinction between peer-reviewed publications and hack pieces in predatory journals; or think about how peer-reviewed grant proposals differ from pork-barrel politics. Given this key role, it’s rather surprising to find a great deal of disagreement about what peer review is for, how it works best, or even whether it works at all.
Along these lines, I was very surprised a couple of weeks ago to see a flurry of tweets from some folks who wanted journals to give them a simple thumbs-up or thumbs-down on their manuscripts. No comments, please, and no suggestions for improvement, thanks, just a writ of execution or an ennoblement.
Image: This mobile, hanging in my office, was given to me by my friend Mary Harris when I got tenure. It’s driftwood from the Skunk River in Iowa. I’d just gotten tenure, and it’s made of dead wood – get it?
A rather poorly-executed and very poorly-communicated study made a big splash last week, with the claim that half of all ecologists “drop out” of the field within just 5 years. The many, many flaws in this way of measuring and communicating people’s career trajectories have been thrashed out in other places, so I’ll just note for the record that by the paper’s criteria, I myself have “dropped out” of the field.*
Image: “Waiting”, Edgar Degas, circa 1882 (pastel on paper). Collection of the Getty Center, Los Angeles. Public domain.
I’m sure it’s happened to you. It’s happened to me. With excitement, you punch the “submit” button, and celebrate your manuscript being off your desk and into peer review. And then you wait. And wait. And wait some more. Sometimes, it feels like you’re waiting forever. When that happens, is it appropriate to e-mail the journal office to ask what’s holding things up? And if so, how long should you wait?
(This is a lightly edited version of a post that originally ran in January 2015. But you probably didn’t see it then.)
Here’s a problem you might not have thought of: did you know you can submit and publish a paper with a coauthor who’s deceased, but not with one who’s in a coma and might recover?
A lot of people have never thought of this, and a lot don’t think it’s a problem worth worrying about. Please bear with me, though, because I think it’s a more important problem than most of us realize – but also one that’s easily avoided.
The unavailable-coauthor problem is actually more general than my coma example.
Image: Recycling logo by gustavorezende, released to public domain
Warning: long post. There’s a TL;DR in the Summary at the end.
Is recycling Methods text from an old paper, to use in a new paper that applies the same techniques, efficient writing – or self-plagiarism?
We’ve all had the dilemma. You write two papers that use (at least some of) the same methods. For the first paper, you craft a lovely, succinct, clear explanation of those methods. For the second paper, you’d like to just cut-and-paste the Methods text from the first one. Can you? And should you?
It happened to me again, a few weeks ago: a manuscript I’d had high hopes for came back from the journal with a decision of “reject, but with an invitation to resubmit”. It’s better than a flat-out reject, to be sure, but disappointing nonetheless.
There’s a widespread belief – almost a conspiracy theory – that journals use “reject, but resubmit” as a device to cheat on their handling time statistics (by which we mostly mean time from submission to first acceptance). After all, if a manuscript gets “revision”, the clock keeps ticking from the original submission; but “reject, but resubmit” means we can pretend the resubmission is a brand new manuscript and start the clock over. Clever but deceptive move, right?