Figure: Time series for two populations, each fluctuating in size. At time zero, I start a long-term study, and can choose either of the two populations (open circles). At some other time, I recensus (closed circles). Red arrows show net population change.
On any given day it’s hard not to notice another headline about a population in decline. Amphibians are in decline, songbirds are in decline, bumblebees are in decline, fish stocks are in decline. Nature is under relentless human pressure, both direct and indirect, and before I proceed to make my point today, I need to be very clear that this pressure is real and severe and I don’t doubt for a moment that it’s driving down population sizes of many, many species.
But there’s a very simple but pervasive statistical problem with the data behind population declines. It doesn’t seem to be widely acknowledged, perhaps because it’s a bit difficult to deal with (if you can point me to some literature contradicting this claim, please do; use the Replies). (UPDATE: Thanks to David Steen for pointing out one clear acknowledgement, in Joe Pechmann et al.’s 1991 Science paper on amphibian declines†.) Put most plainly: when we gather time-series data for populations, we should nearly always expect them to decline. Not because researcher activity inflicts harm (although that may sometimes be true), but because of a simple – indeed, trivial – interaction between population dynamics and research logistics.
I’ll explain what I mean in cartoon form. Imagine that I’m going to start a long-term study of some species – let’s call it the lesser purple-snouted crompus. My study could have many research foci, but let’s assume I at least include estimates of population size. Crompus populations (like pretty much every population I know about) fluctuate through time, and local populations are imperfectly correlated. The figure below (or above the post, it’s the same one) shows the two crompus populations available to me, so when I start my study I can choose to work in a population with plenty of crompi* or one in which crompi are rare. For many obvious logistical reasons, I’m far more likely to do the former (starting at the hollow circle in the top population) than the latter (the X’d out circle in the bottom population). So (in my cartoon), I start at what looks like maximum crompus density (as I’d tend to do if I had a choice of many more populations than two), and as a result, almost no matter when I wrap up my study (solid dots), I conclude that there has been net population decline (red arrows).
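If you’d like to see the cartoon in numbers, here’s a minimal simulation (mine, not part of the original argument, and with made-up parameters). It assumes populations that simply fluctuate independently around a common mean – no true decline at all – and a researcher who always starts in the largest available population:

```python
import random

random.seed(42)

def simulate_study(n_pops=10, mean=100.0, sd=20.0):
    """One 'study': at time zero, choose the largest of n_pops fluctuating
    populations; recensus later. Populations fluctuate around a common
    mean with NO true trend (a deliberately simple stand-in)."""
    # Sizes of the candidate populations when the study begins
    start_sizes = [random.gauss(mean, sd) for _ in range(n_pops)]
    chosen = max(start_sizes)  # logistics: we study the biggest population
    # At recensus the chosen population is just another draw from the
    # same stationary distribution -- fluctuation, but no real decline.
    end_size = random.gauss(mean, sd)
    return end_size - chosen   # observed net change

changes = [simulate_study() for _ in range(10_000)]
mean_change = sum(changes) / len(changes)
frac_declines = sum(c < 0 for c in changes) / len(changes)
print(f"mean observed change: {mean_change:.1f}")
print(f"fraction of studies reporting a net decline: {frac_declines:.2f}")
```

Even though every population here is perfectly stable in the long run, the average study reports a substantial decline, simply because starting at the maximum of several draws guarantees regression toward the mean. The specific numbers (ten candidate populations, mean 100, SD 20) are arbitrary; only the direction of the effect matters.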
I’m not claiming this is a brilliant insight; but it is important because it constitutes a logistical bias** pushing our data toward apparent declines. And sure, my crompus example is a cartoon; a number of things may weaken the logistical bias in real studies. I’ll name a few here, but you may think of others. Longer-term studies may last through many population fluctuations; these will be just as likely to find net declines, but more likely to document the ups and downs to put those declines in context. Studies that start in large but not maximal populations will sometimes show net increases. Studies using multiple populations will by definition have to start in more than just the largest one. And a few long-term studies use locations that are chosen independently of current population sizes (for instance, I think the Christmas Bird Count and the Breeding Bird Survey escape the logistical bias, although the Great Backyard Bird Count does not). But I think it’s impossible to argue that the bias isn’t real and fairly pervasive, because the only way to avoid it is to sometimes begin studying crompi in a place where there aren’t any – and this is not a feature of a promising grant proposal.
Because the logistical bias seems like a fairly obvious thing, I’ve been surprised over the years that when I ask questions about it, reactions range from bemusement to hostility. I first asked such questions many years ago, of friends who were studying frogs and were publishing papers on amphibian declines (back when those declines were front-page news). My friends were busily reporting that the ponds in which they’d started their long-term studies had many fewer frogs than they used to; but when I asked how many frogless ponds they started long-term studies in, they seemed to think I was making fun of them. I wasn’t. It just seemed clear to me (as it still seems now) that we can’t estimate long-term population trends unless we sometimes study frogs (or crompi) in places where there aren’t any. (UPDATE: I got this reaction despite the clear mention of the logistical bias in the Pechmann et al. Science paper, which I now realize likely predated even my earliest questions, and certainly predated most of them!)
About now I should probably revisit this point: I have no doubt at all that human pressures have left many populations in serious long-term decline. It’s just that if we want accurate estimates of these declines, we need to account for the logistical bias. We should want those accurate estimates, and we generally say that we do – but I sometimes wonder, and there’s a more general point here. My experience is that in scientific discussions of conservation problems, suggestions we might be overestimating a problem are often not very welcome. A person making such a suggestion might be accused of being a Pollyanna or a corporate shill (I’ve been called both). On the other hand, if we exaggerate a problem a bit, well, many people seem to think that just makes the case for taking action a bit more convincing, and that’s a good thing. I think this conflates conservation the science with conservation the action, and in doing so it just inflicts yet more bias on our data. Many things are amiss in this world; we needn’t paint them as worse than they are – not with the logistical bias, and not with anything else.
All this isn’t in any way an attack on conservation biology. It’s yet another way in which conservation biology as a careful and modern science is a critical foundation for conservation action. Recognizing and acknowledging the logistical bias is easy enough, which is why I’m surprised it isn’t more widely discussed. Fully accounting for it will take some work, but it’s nothing a discipline full of smart people can’t do – or, I hope, something it has already done, even if it isn’t obvious to me.
© Stephen Heard (firstname.lastname@example.org) August 2, 2016
UPDATE: Trevor Branch points out an interesting and important corollary for fisheries management (I’ll paraphrase his tweets here). If new fisheries tend to start on larger populations (and why wouldn’t they?), then there will be a tendency for stocks to be first exploited when they are unusually large (with respect to their long-term population dynamics). In fisheries, it is (apparently) common to use pre-fishing biomass as a benchmark, and base sustainable catches on it. This may lead to quotas being set inappropriately high!
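The corollary is easy to sketch numerically (again a toy of my own, with entirely hypothetical numbers – real stock assessments are far more sophisticated). Suppose a quota is set as a fixed fraction of “pre-fishing biomass,” but that benchmark is recorded when a fishery opens on the largest of several fluctuating stocks:

```python
import random

random.seed(1)

# Hypothetical numbers, purely illustrative
true_mean, sd = 1000.0, 200.0   # stock fluctuates around this biomass
n_candidates = 8                # candidate stocks a new fishery could open on
quota_fraction = 0.1            # quota as a fraction of "pre-fishing biomass"

# A fishery tends to open on the largest stock it sees, so the recorded
# pre-fishing biomass is a maximum of several draws, not a typical value.
benchmarks = [max(random.gauss(true_mean, sd) for _ in range(n_candidates))
              for _ in range(10_000)]
avg_benchmark = sum(benchmarks) / len(benchmarks)

fair_quota = quota_fraction * true_mean     # quota from the true mean biomass
set_quota = quota_fraction * avg_benchmark  # quota from the biased benchmark
print(f"quota from true mean biomass: {fair_quota:.0f}")
print(f"quota from biased benchmark:  {set_quota:.0f}")
```

With these made-up parameters, the benchmark-based quota comes out substantially above the quota you’d set from the stock’s true long-run biomass – the same regression-to-the-mean effect, now inflating a management target rather than a decline estimate.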
*^Hey, I made crompi up, I can decide how to pluralize them.
**^Throughout, I’m using the word bias in its statistical sense, not its pejorative one. A bias is simply something that makes the expected value of an estimate differ in a particular direction from the true underlying value. In this case, the logistical bias makes our estimate of ΔN more negative. The operation of this bias doesn’t make us evil – unless, I’d argue, we’re aware of it and deliberately ignore or hide it.
†^(Footnote added with update). The Pechmann paper reports long-term population abundance data for four amphibians in a single ephemeral pond (Rainbow Bay) in South Carolina. Over the study period, all four species fluctuated dramatically, and both increases and decreases were observed. In discussing the difficulty of distinguishing natural fluctuations from declines, Pechmann et al. state “Large populations may be more likely to be noticed or used by researchers. Anecdotal data may therefore be biased toward observing peak populations that eventually will decline, rather than the reverse.” This is a crystal clear statement of the logistical bias, and it’s interesting (in this light) that the paper does not explain why Rainbow Bay was originally chosen as a study site.