
It’s been a while since I’ve been this proud of a paper

I don’t usually blog about my own papers, except in some rather meta ways, but last week saw the publication of a paper I’m really, really proud of.  And it has some interesting backstory, including its conception right here on Scientist Sees Squirrel.

The paper is called “Site-selection bias and apparent population declines in long-term studies”, and it’s just come out in Conservation Biology.  It started, back in August of 2016, with a post called Why Most Studied Populations Should Decline.  That post made a very simple point about population monitoring, long-term studies, and inferences about population decline.  That point: if ecologists tend to begin long-term studies in places where their study organisms are common (and there are lots of very good reasons why they might), then we should expect long-term studies to frequently show population declines simply as a statistical artifact.  That shouldn’t be controversial – it’s just a manifestation of regression to the mean – but it’s been almost entirely unaddressed in the literature.
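If you'd like to see the artifact for yourself, here's a minimal simulation sketch (written for this post; it isn't the paper's actual code, and the numbers are made up for illustration).  No site truly declines, but if we begin monitoring only where first-year counts happen to be high, the monitored sites decline on average:

```python
# Illustrative sketch only (not the paper's code): stationary populations,
# density-biased site selection, apparent declines via regression to the mean.
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_years = 1000, 20

# Sites differ modestly in typical abundance, but every site fluctuates
# around its own fixed mean: there are no true trends anywhere.
site_means = rng.uniform(40, 60, size=n_sites)
counts = rng.poisson(site_means[:, None], size=(n_sites, n_years))

# Biased selection: start long-term studies only where year-1 counts are in
# the top quartile (i.e., where the organism happens to look common).
chosen = counts[:, 0] >= np.quantile(counts[:, 0], 0.75)

# Fit a simple linear trend (counts vs. year) at each chosen site.
years = np.arange(n_years)
slopes = np.array([np.polyfit(years, site, 1)[0] for site in counts[chosen]])

print(f"mean slope across chosen sites: {slopes.mean():.3f}")       # negative
print(f"fraction of sites 'declining':  {(slopes < 0).mean():.2f}")  # > 0.5
```

Run it and the mean fitted slope should come out negative, and well over half the chosen sites should look like they're declining, even though every population is stationary.  That's regression to the mean doing its thing.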

A bunch of folks read that blog post.  Some were mortally offended.  Some protested that it couldn’t possibly be true.  Others protested that it was obviously true, but wasn’t important.  Still others protested that it was true, and important, but everybody already knew about it (this third group was, it seems, blissfully unaware of the existence of the first two).  Finally, a few made excellent constructive comments about it, and suggested further work to turn a thought experiment (blog post) into some actual science (paper).

Among that last (constructive) group were two people who ended up coauthoring the published paper: the terrific Auriel Fournier (now Director of the Illinois Natural History Survey’s Forbes Biological Station), and the equally terrific Easton White (now a postdoc at the University of Vermont).  There were also several who didn’t end up as coauthors but played important roles (more about that below).  Auriel and Easton pointed out to me that we could do two important things to build on my small thought experiment.  First, we could use simulation studies to ask whether the effect is potentially important – whether, given simulated population trends (or the lack of them) and particular patterns of density-biased initial site selection, analyses would overestimate declines enough to matter.  Second, we could use compilations of data from long-term population studies to ask whether there’s evidence of the site-selection effect compromising real studies.  The test takes advantage of a simple prediction: if trends are distorted because researchers initially choose sites with dense populations and avoid those with sparse ones, then trimming the first 10 years (“10” is an arbitrary choice) from the time series should reduce the frequency and magnitude of inferred declines.  With enough long-term population-dynamic datasets, we can test that prediction.
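For the flavour of that second test, here's a rough sketch of the logic.  Again, this is illustrative only – `fit_trend` and `trimming_test` are stand-ins I've made up for this post, and a plain linear slope stands in for whatever trend model a real analysis would use:

```python
# Illustrative sketch of the trimming test (not the paper's actual analysis).
import numpy as np

def fit_trend(series):
    """Slope of a simple linear regression of abundance on year."""
    years = np.arange(len(series))
    return np.polyfit(years, series, 1)[0]

def trimming_test(time_series, trim_years=10):
    """Compare trends inferred from full series vs. series with the first
    `trim_years` years dropped.  If density-biased initial site selection is
    inflating declines, the trimmed series should decline less often and
    less steeply than the full series."""
    usable = [ts for ts in time_series if len(ts) > trim_years + 2]
    full = np.array([fit_trend(ts) for ts in usable])
    trimmed = np.array([fit_trend(ts[trim_years:]) for ts in usable])
    print(f"fraction declining, full:    {(full < 0).mean():.2f}")
    print(f"fraction declining, trimmed: {(trimmed < 0).mean():.2f}")
    print(f"mean slope, full vs. trimmed: "
          f"{full.mean():.3f} vs. {trimmed.mean():.3f}")
```

Feed it a compilation of long-term counts (one array of yearly abundances per study) and compare the two fractions.  The real analysis, of course, has to worry about series length, detectability, and much else – read the paper for that.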

We did both of those things, and the answers are “yes” and “yes”: simulated data show that the problem can be a big one, and there’s strong evidence that the problem occurs in real studies.*  We also did one more thing.  We found what we believe to be the only high-profile paper that mentions the problem (Pechmann et al. 1991) and traced every paper that has ever cited it.  We discovered that of 478 citations, just one cites that paper for its mention of the site-selection problem in inference; over 40 (by comparison) cite it as having inferred population declines.  So: the effects of site-selection bias are real, and important, and largely undiscussed.

I mentioned that I’m pretty proud of this paper.  There are three major reasons.

First, I think the paper is important – perhaps my most important ever.  That’s not because I doubt the existence of population declines – many species are clearly in deep, deep trouble, we’re the cause of it, and we need to take action.**  It’s because we desperately need our inferences about population declines to be accurate.  If we’re to allocate conservation funding – and we must allocate it, as we simply can’t target all species equally – then we need to know which species are in the most trouble.  Our paper identifies an extremely important way that we could be wrong about that, and suggests ways to avoid the problem.

Second, I think the paper provides some evidence that I haven’t wasted the time I spend writing Scientist Sees Squirrel.  I do invest time in this blog (a few hours a week, usually, mostly at times when I’m not otherwise very productive; but still, it’s non-trivial).  So I’m proud to see that the ideas I write about here can mean something to my colleagues – even, at least once, can excite some sharp early-career folk enough to have them help me write a paper.  How cool is that?  (Mind you, Why Most Studied Populations Should Decline isn’t the only post I’m proud of.)

Third, the paper took some serious persistence to get published, and I’m proud that we were able to stick to the Tubthumping Strategy and see it through.  The paper was rejected, with remarkable vigour, by three journals in rapid succession – and in the process received some truly stupid comments from reviewers and editors.  Now, I’m a huge fan of the peer-review process in general, but that doesn’t mean all reviews are always wonderful. In this case, it was completely obvious that reviewers had made up their minds that they hated the manuscript and disagreed with its conclusions before they bothered to actually read it.  They accused us of not citing enough papers (when our point was that an issue was unaddressed in the literature), and of citing irrelevant ones, and of failing to locate the “many” papers they knew dealt with the issue – not one of which they could actually identify. They accused us of not understanding elementary population biology (I teach population biology) and basic statistics (I teach statistics).  They objected vociferously to our having said things that we actually didn’t say, and to our not having said things that we did, very clearly, say. I’ve never seen as clear a case of the Boxer Effect:

            A man [sic] hears what he wants to hear/And disregards the rest.***

Look, maybe in a way this was understandable: we were pointing out an awkward truth about mistakes a subdiscipline is making – and we were doing it from outside the subdiscipline in question.  But I hope I never, ever write the kind of peer review that we got, repeatedly.

We’re grateful to the reviewers and editor at Conservation Biology, the fourth journal we tried, who were willing to consider our responses to comments, and ultimately accepted the paper.  The lesson: if you continue tubthumping long enough, you can publish the good work you believe in.  Another lesson: even truly stupid reviewer comments – like those on our first three submissions – can be used to improve a paper.  The published version is unambiguously better than the first version that went out.

Now, there’s an important way I’m not proud of our new paper, and that’s that there are a couple of people who made important contributions early in the project, but who aren’t part of the final author list.  There were misunderstandings about intended authorship that left me disappointed and upset – with myself as much as anything (I should stress that these misunderstandings were in no way the fault of my earlier-career coauthors).  I’m not going to go into what happened here; it wouldn’t be fair to use this platform to tell just one side of the story.  But let me encourage everyone to do as I say, and – apparently – not as I do: get clear agreements on authorship at the start.  Critically: those clear agreements need to cover not just the ways in which collaborators join the project (that’s easy), but also ways in which one might recognize and adjudicate their having left the project (much harder).

Despite those regrets over coauthorship misunderstandings, and despite the worst peer review experience of my career, the project was enormous fun.  I was kept motivated by my awesome coauthors, by Twitter followers and blog readers, and by discussions with colleagues when I presented the work at conferences and in departmental seminars.  Thanks to every one of you!

Finally: my goal here at Scientist Sees Squirrel is definitely not to pump my own papers; but I’m going to close by suggesting that if you ever read one of my papers, make it this one.  It’s an easy read, we think (we worked hard on that), and we’d love to see its message spread widely.  Read it – even just skim it, if you like – and then pass the link on to a friend or colleague.  Thank you.

© Stephen Heard  June 25, 2019


*We can’t, with our method, identify a site-selection problem in a particular dataset, but we can infer that a sizeable fraction of a set of datasets have site-selection problems.  Want to know more?  Read the paper!

**My deepest fear is that some right-wing, anti-science troll or buffoon will pick up on the study, and misread it to say that “oh, scientists have now declared that all this hand-wringing over endangered species was all based on a big dumb mistake”.  That’s not true and the paper doesn’t say that, but the world is not short on right-wing, anti-science trolls and buffoons who can’t or won’t understand that.

***From The Boxer, lyrics by Paul Simon.  Who should have won the Nobel Prize in Literature that went to Bob Dylan – an opinion that is both utterly irrelevant to this post and utterly resistant to objective argument either for or against.


Reach and impact of science-community blogs in ecology (new paper!)

My latest paper just came out, and it’s unlike anything I’ve done before.  It’s called Bringing Ecology Blogging into the Scientific Fold: Reach and Impact of Science-Community Blogs.  Really, I’d be perfectly happy if you just went and read the paper – but for those who might like a bit of context and backstory, here are a few thoughts.

(1)  It was tons of fun to have, as coauthors, a bunch of terrific bloggers: Amy Parachnowitsch and Terry McGlynn from Small Pond Science, Manu Saunders of Ecology is Not a Dirty Word, Margaret Kosmala of Ecology Bits, Simon Leather of Don’t Forget the Roundabouts, Jeff Ollerton of Jeff Ollerton’s Biodiversity Blog, and Meghan Duffy from Dynamic Ecology.* If you’re reading me but not them, I don’t know what the heck you think you’re doing.

Story behind the paper: Integrating phylogenetic community structure with species distribution models

(Crossposted with edits from the Ecography Blog; original post July 8, 2014)

In July 2014, we (my collaborator Jeremy Lundholm, our joint PhD student Oluwatobi “Tobi” Oke, and I) published a paper in Ecography: “Integrating phylogenetic community structure with species distribution models: an example with plants of rock barrens”.  (And kudos to Holly Abbandonato for first-rate field help).  I wrote the following “story behind the paper” for Ecography’s blog.  I like reading this kind of thing, so you’ll probably see more on the blog in future.

Our paper combines approaches from phylogenetic community ecology and species distribution modeling to understand the assembly of plant communities on rock barrens.  It was enormous fun to be involved with the work, in part because before we started I knew nothing about SDMs and next to nothing about rock barrens.  That we ended up with what I think is a pretty good paper is a testament to the value of collaboration and coauthorship.