Category Archives: publishing

Why I don’t want to be part of “open peer review”

Warning: header image captures this post pretty well.

Should peer review be open and transparent?  Sounds appealing, doesn’t it?  Who’d want to go on record as saying anything shouldn’t be made more open and transparent? Well, I’ll give it a go, because I’ve recently declined to review two manuscripts that looked interesting, for a reason that’s entirely new to me.* In both cases, the journals specified that by agreeing to review, I was consenting for my reviewer comments, and the authors’ response, to be published as a supplementary file with the paper. Sorry – I’m not having any part of that. Continue reading

Yes, that paper is paywalled. But you can read it anyway.

Last week, I wrote about a fascinating and puzzling (if somewhat dispiriting) paper assessing the value of science-communication training. In an (obviously futile, I know) attempt to counter the scourge that is “I didn’t read the paper but here are my thoughts anyway”, I suggested repeatedly that folks ought to read the paper. And I suppose I should have seen it coming: a veritable deluge of “It’s paywalled, I can’t read it”.

The first half of that objection is true: the paper is “paywalled”. So are a lot of good things in life: Continue reading

Opinion, evidence, and preprints

Perhaps you’ve noticed that scientists, like other humans, can hold very strong opinions about certain things.* Perhaps you’ve also noticed that those opinions are sometimes backed up by voluminous evidence (gravity points down; climate change is real and caused by humans; vaccines are safe and effective) – but that sometimes they are not. Here’s a great example related to preprints.

Preprints are probably the most interesting development in scientific publishing in the last 100 years.** Continue reading

The list of disfavoured reviewers: who should be on yours? And will an editor heed it?

Last week, I wrote about lists of suggested reviewers (for manuscripts).  Most journals require them, although authors sometimes resent it; as an editor, I use them and appreciate them very much.  But there’s another list that puzzles some authors: the list of disfavoured reviewers.  This is a list of people you’re requesting not be asked to review your manuscript.  As an editor, how do I use that list?  And who (if anyone) should you put on yours? Continue reading

Do editors really use those lists of “recommended reviewers”? And who should you suggest?

You know the feeling: you’ve spent many hours painstakingly massaging your manuscript into compliance with a journal’s idiosyncratic formatting requirements. You’ve spent another two hours battling its online submission system*.  You’re almost there – ready to hit “submit” and go for a well-deserved beer or cinnamon bun – but there’s One More Screen.  The system wants your list of five recommended reviewers.  Does this really matter?  What does an editor do with it?

Well, I can’t speak for every editor (and I hope some others will add their own thoughts in the Replies).  But I can tell you what I do with them, and perhaps that can guide you when you get asked for that list. Continue reading

What should you do when Reviewer #2 says “Cite my papers”?

It’s been a rough couple of weeks for rose-coloured glasses in biology. There’s the unfolding saga of paper retractions in social behaviour; and then there’s cite-my-paper-gate.  I don’t have much to say about the former (beyond expressing my admiration for the many scientists who are handling their unintended involvement with grace and integrity).  But the latter made me think.

If you didn’t hear about cite-my-paper-gate: someone (yet to be publicly identified) has been busted over all kinds of reviewing and editing malpractice. Continue reading

The climbing metaphor, or where should we encourage students to send their papers?

This is a guest post by Bastien Castagneyrol.  This is an issue I’ve thought about (as have others), and like Bastien, I don’t quite know what action to take.  I like Bastien’s climbing metaphor.  In a related one, the journey from subscriber-pays paywall to author-pays-open-access crosses a very rugged landscape, with crevasses both obvious and hidden.

Disclosure from Bastien: what follows is not exhaustive and could be much better documented. It reflects my feelings, not my knowledge (although my feelings are partly nurtured with some knowledge). I’m trying here to ask a really genuine question.

The climbing metaphor

My academic career is a rocky cliff. Continue reading

Turning our scientific lens on our scientific enterprise: a randomized experiment on double-blinding at Functional Ecology

Image: Experiment, © Nick Youngson via picpedia.org, CC BY-SA 3.0

I’m often puzzled by the reluctance of scientists to think scientifically and do science.  “Wait”, you say, “that’s a bizarre claim – we do science all the time, that’s why we’re called scientists”.  Well, yes, and no.

We love doing science on nature – the observations and experiments and theoretical work we deploy in discovering how the universe works.  What we don’t seem to love nearly as much is doing science on ourselves. Continue reading

What copyediting is, and what it isn’t

Image: a snippet of the (excellent) copyedit for my forthcoming book.

Over the last six months, I’ve had several pieces of writing go through the copyediting process: a few papers, and one book.  Over my career, I’ve seen closer to 100 pieces of writing through copyedits.  It’s a stage of publication that was, for a long time, rather mysterious to me, but contrasting two of my recent experiences provides a pretty good illustration of what good copyediting is, and what good copyediting very definitely isn’t. Continue reading

It’s been a while since I’ve been this proud of a paper

I don’t usually blog about my own papers, except in some rather meta ways, but last week saw the publication of a paper I’m really, really proud of.  And it has some interesting backstory, including its conception right here on Scientist Sees Squirrel.

The paper is called “Site-selection bias and apparent population declines in long-term studies”, and it’s just come out in Conservation Biology.  It started, back in August of 2016, with a post called Why Most Studied Populations Should Decline.  That post made a very simple point about population monitoring, long-term studies, and inferences about population decline.  That point: if ecologists tend to begin long-term studies in places where their study organisms are common (and there are lots of very good reasons why they might), then we should expect long-term studies to frequently show population declines simply as a statistical artifact.  That shouldn’t be controversial – it’s just a manifestation of regression to the mean – but it’s been almost entirely unaddressed in the literature.
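The mechanism is easy to see in a toy simulation (a minimal sketch of regression to the mean, not the analysis from the paper; the abundance and noise values below are invented purely for illustration):

```python
import numpy as np

# Toy illustration of site-selection bias (regression to the mean).
# Assumes every site has a stable true abundance, and that yearly
# counts just fluctuate around that mean. Nothing here ever declines.
rng = np.random.default_rng(42)

n_sites = 10_000
true_abundance = rng.normal(100, 10, n_sites)   # stable over time
noise_sd = 20                                   # year-to-year sampling noise

year1 = true_abundance + rng.normal(0, noise_sd, n_sites)
year2 = true_abundance + rng.normal(0, noise_sd, n_sites)

# Ecologists start long-term studies where the organism *looked* common:
selected = year1 > np.percentile(year1, 90)

print(f"Mean change, all sites:      {np.mean(year2 - year1):+.1f}")
print(f"Mean change, selected sites: {np.mean(year2[selected] - year1[selected]):+.1f}")
```

Across all sites the mean change is about zero, but the sites that looked commonest in year 1 (partly by luck) show an apparent decline on average, with no biology required.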

A bunch of folks read that blog post.  Some were mortally offended. Continue reading