It happened to me again, a few weeks ago: a manuscript I’d had high hopes for came back from the journal with a decision of “reject, but with an invitation to resubmit”. It’s better than a flat-out reject, to be sure, but disappointing nonetheless.
There’s a widespread belief – almost a conspiracy theory – that journals use “reject, but resubmit” as a device to cheat on their handling-time statistics (by which we mostly mean time from submission to acceptance). After all, if a manuscript gets “revision”, the clock keeps ticking from the original submission; but “reject, but resubmit” means we can pretend the resubmission is a brand new manuscript and start the clock over. Clever but deceptive move, right?
Image: Crowdfunding, US Securities and Exchange Commission (no, really), CC BY-NC-SA 2.0.
Sometimes I hold an opinion that I’m almost certain has to be wrong, but I can’t figure out why. This is one of those times. I need you to help me.
I’ve been watching the trend to crowdfunded science, and it bothers me. I completely understand why it happens, and why it’s become much more common. The science funding environment continues to be difficult – indeed, in many places it seems to be getting steadily more difficult, especially for early-career scientists and those doing the most basic/curiosity-driven science. At the same time, the rise of web-based crowdfunding platforms* has made it relatively easy to reach potential donors (at least in principle, and more about that below). For any given researcher at any given time, surely the science is better with access to crowdfunded support than it would be without. And several colleagues I like and respect have crowdfunded part of their work. So why am I so uncomfortable with the model?
Image: © (claimed) Terrance Heath, CC BY-NC 2.0
“How good a manuscript,” I’m sometimes asked, “is good enough to submit?” It’s a natural enough question. A manuscript heading for peer review isn’t the finished product. It’s virtually certain that reviewers will ask for changes, often very substantial ones – so why waste time perfecting material that’s going to end up in the wastebasket anyway?
I recently learned about Peer Community In (PCI), a new system for reviewing and recommending preprints. I’m really intrigued. It’s true that I’m an old fuddy-duddy who’s on record as saying that we often exaggerate the problems with the status quo, and as not liking to think outside the box. And yet there are good reasons to explore ways beyond traditional journals to disseminate science. We should experiment with a variety of new systems, and PCI seems like one well worth exploring. Read on to learn more!
What follows is a guest post by Denis Bourguet (email@example.com), Benoit Facon (firstname.lastname@example.org), Thomas Guillemaud (email@example.com), and Ruth Hufbauer (firstname.lastname@example.org). DB, BF, and TG are the founders of PCI, and RH is a colleague and member of the board of PCI Evol Biol.
We believe that the current system of publishing with academic journals suffers from four crucial problems.
Image: Asim Saeed via flickr.com, CC BY 2.0
This is a joint post by Steve Heard and Andrew Hendry (crossposted here on Andrew’s blog).
Another week, another rejection, right? If you’ve been in science long at all, you almost certainly have a bulging file of rejections for grants, manuscripts, fellowships, and even jobs. Here, for example, is Steve’s truly impressive job-rejection history; and here’s a previous analysis of Andrew’s manuscript rejections.
We were part of a recent Twitter exchange that began when Steve tweeted in celebration of submitting a manuscript – to its third different journal:
Like most people, I often feel a little impostery. I’m convinced that sooner or later, people will notice that my work isn’t actually all that important, that my papers are somehow flawed, that I don’t really know what I’m talking about when I teach. (People may even figure out that Scientist Sees Squirrel is seldom original, mostly wrong, and only occasionally interesting.)
I was part of some discussion on Twitter recently about imposter syndrome in the particular context of peer reviewing. Some folks worry that they really aren’t qualified to review. They worry that they may make the wrong recommendation: either miss a critical flaw or (conversely) see something as a critical flaw that really isn’t. As an editor, I’ve had people whose judgement I respect decline to review on the grounds that they didn’t feel confident in their reviewing abilities. Ironically, these are often the early-career scientists who tend to be absolutely terrific reviewers.
For a variety of reasons, I think this fear is generally misplaced.
How should you handle a useless review? I don’t mean one that’s actively idiotic, but a review that’s superficial, misunderstands the manuscript, is positive but lukewarm, or otherwise just doesn’t seem to point to any avenues for improvement. Perhaps it’s this gem:
This study seems competently executed, and most of the writing is pretty good. A few analyses could benefit from more modern approaches. However, in the end I’m unconvinced of its importance.*
Let’s start with how not to handle a useless review.