Back in February, I asked “What’s your most overcited paper?” That left an obvious question hanging: what, instead, is your most undercited paper? I’m going to tell you about mine, and I hope you’ll tell me about yours in the Comments. You may be worried that this will be an exercise in which I whine that nobody appreciates my work, but in fact that’s not what I have in mind. Well, not exactly*.
In my “overcited” post, I drew a distinction between a statistically overcited paper and an expectationally overcited one. I’ll draw the same distinction here: a paper is statistically undercited if its citation rate is low relative to other papers I’ve published; it’s expectationally undercited if its citation rate is lower than I’d have expected based on its content.
The simplest way to identify a statistically undercited paper is to regress citation count against years post-publication, and look for large negative residuals**. The figure above shows this regression for my publications, with data from Google Scholar (using Web of Science instead gives a slightly shallower slope but no other important difference). My largest negative residual is for paper A, which is Heard and Semple (1988), The Solidago rigida complex: a multivariate morphometric analysis and chromosome numbers. It’s 27 years old and has been cited just 14 times (including 7 self-citations). But the low citation rate for this paper doesn’t surprise me a bit. It’s a taxonomic revision, and these are undercited as a class; and it’s a revision of a very small group of plants that occur (but aren’t dominant) in North American prairies and are unimportant otherwise. This was perfectly competent and well-motivated work (I claim), but nothing anybody would expect to become a citation classic!
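If you want to try this on your own record, the residual approach takes only a few lines. The sketch below is just an illustration: the paper labels, ages, and citation counts are invented (chosen so that paper “A” mimics an old, rarely cited paper), and a real analysis would pull those numbers from your Google Scholar profile.

```python
# Sketch: spotting a "statistically undercited" paper as the largest
# negative residual from an ordinary least-squares fit of citation
# count against years post-publication.
# NOTE: these (age, citations) data are made up for illustration.

papers = {            # label: (years since publication, citation count)
    "A": (27, 14),    # old but rarely cited -> undercitation candidate
    "B": (7, 6),
    "C": (20, 150),
    "D": (12, 80),
    "E": (3, 10),
}

ages = [a for a, _ in papers.values()]
cites = [c for _, c in papers.values()]
n = len(papers)
mean_age = sum(ages) / n
mean_cites = sum(cites) / n

# OLS slope and intercept for: citations ~ slope * age + intercept
slope = sum((a - mean_age) * (c - mean_cites) for a, c in zip(ages, cites)) \
        / sum((a - mean_age) ** 2 for a in ages)
intercept = mean_cites - slope * mean_age

# Residual = observed citations minus the fitted expectation for that age
residuals = {label: c - (slope * a + intercept)
             for label, (a, c) in papers.items()}

# The statistically most undercited paper sits furthest below the line
most_undercited = min(residuals, key=residuals.get)
print(most_undercited, round(residuals[most_undercited], 1))
```

A fancier version would, as footnote ** suggests, add covariates for subdiscipline and paper type, but the large-negative-residual logic stays the same.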
What about expectational undercitation? That’s more difficult, or at least more subjective, but also much more interesting. Here I nominate Paper B, Heard and Remer (2008), Travel costs, oviposition behaviour, and the dynamics of insect-plant systems***. This paper has had just 6 citations in 7 years (two of them self-citations) – and yet I think it reports some very interesting and quite important results.
In our travel-costs paper, we used simulation models to ask a fairly simple question: could insect herbivores regulate plant populations, imposing density-dependence in mortality or reproduction? We found that, indeed, a couple of simple pieces of biology could lead to density-dependence in attack and thus in plant reproduction. First, if female insects experience costs of travel from one host plant to the next, and those costs scale with distance, then sparser populations experience reduced attack. Second, if females behave adaptively to mitigate those travel costs (by laying more eggs per plant when plants are rare, thus reducing lifetime travel costs but accepting more larval competition), an increasing fraction of plants escape attack as the plant population dwindles. So that’s the answer to our question: yes, insect herbivores have the potential to make plant reproduction density-dependent, and so to regulate populations of their host plants. This should be a big deal: the similar potential for predators to regulate their prey and for parasites and parasitoids to regulate their hosts has been a major theme of population ecology since at least the 1920s (when the Lotka-Volterra predator-prey models were first derived). Since essentially no studies had asked the question about plants, we thought we were filling a pretty big knowledge gap.
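The first mechanism can be caricatured in a few lines of code. To be clear, this is a toy sketch loosely inspired by the verbal description above, not Heard and Remer’s actual model: the one-female-per-plant assumption, the fixed budget in “egg-equivalents,” and the travel cost per unit distance are all invented for illustration. The only point it makes is that when travel costs scale with distance, sparser plant populations receive fewer eggs per plant.

```python
import random

def mean_eggs_per_plant(n_plants, field=100.0, egg_budget=50.0,
                        travel_cost=1.0, eggs_per_visit=5.0, seed=1):
    """Toy model (not the published one): plants are scattered on a line;
    each female has a fixed budget in egg-equivalents, and moving between
    plants deducts travel_cost per unit distance from that budget.
    Assumes one female per host plant.  Returns mean eggs laid per plant."""
    rng = random.Random(seed)
    plants = sorted(rng.uniform(0, field) for _ in range(n_plants))
    total_eggs = 0.0
    for _ in range(n_plants):              # one female per host plant
        i = rng.randrange(n_plants)        # she starts at a random plant
        budget = egg_budget
        while budget > 0:
            laid = min(eggs_per_visit, budget)
            total_eggs += laid
            budget -= laid
            # travel to the nearest neighbouring plant along the line,
            # paying a cost proportional to the distance moved
            left = plants[i] - plants[i - 1] if i > 0 else float("inf")
            right = plants[i + 1] - plants[i] if i < n_plants - 1 else float("inf")
            if left <= right:
                budget -= travel_cost * left
                i -= 1
            else:
                budget -= travel_cost * right
                i += 1
    return total_eggs / n_plants

sparse = mean_eggs_per_plant(8)      # widely spaced plants: long trips
dense = mean_eggs_per_plant(400)     # densely packed plants: cheap trips
```

With these arbitrary parameters, the dense population receives more eggs per plant than the sparse one: per-plant attack rises with plant density, which is exactly the kind of density-dependence that could regulate a plant population. The second, adaptive-behaviour mechanism (more eggs per plant when plants are rare, so more plants escape entirely) would need females that adjust `eggs_per_visit` to plant density, and isn’t sketched here.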
So why has our travel-costs paper elicited a collective shrug from ecologists? I don’t know, but I have a small idea and a big idea about it:
- The small idea: we published our paper in a journal with limited visibility (Theoretical Ecology, volume 1). You often hear people arguing that in this age of electronic searching, journal identity and impact factor don’t matter anymore, and every paper will stand or fall on its own merits. Well, I don’t think we’re there yet.
- The big idea: our paper answered a question that we weren’t supposed to ask. For reasons I’ve never understood, a remarkably large fraction of population ecologists seem to be convinced that insect herbivores simply don’t affect the population dynamics of their hosts. Sure, everyone accepts that insects damage individual plants, but this isn’t supposed to make any difference to plant population size. If you’re skeptical that such an idea could be so widely held: our travel-costs paper was rejected from six different journals (!), and insects-just-don’t-affect-plant-population-dynamics was an explicit criticism over and over again. This is really very odd, for four reasons. First, nobody ever seems to cite any published paper, or even any data, as a basis for believing that insects don’t affect plant population dynamics; it’s just something everybody knows (maybe it’s a zombie idea?). Second, if plant population dynamics aren’t affected by insect herbivory, it makes one wonder why farmers spend billions of dollars on insecticides to protect harvests consisting of seeds. Third, some famous biocontrol success stories make it obvious that at least some plants can be strongly controlled by insect herbivory. And fourth, if you look for data that actually quantify population-level impact of insect herbivores on rare plants (for which the question is really important), you don’t find much – but what you do find suggests dramatic impacts. Our review paper establishing this, by the way, is also undercited.
I think there’s a more general point here. I think every field has questions that we all agree we’re supposed to ask, and questions that we’ve all somehow decided aren’t worth asking. (Perhaps Thomas Kuhn’s scientific revolutions are times when we change our minds about which questions belong in which category – although I’m not arrogant enough to think our travel-costs paper should spark a Kuhnian revolution.) Our question about whether insect herbivores could regulate populations of their hosts just isn’t part of the accepted agenda of population ecology as a field. As a result, there was almost no literature for us to connect our paper to, reviewers didn’t seem to think there was any point applying models to the question, and nobody is discovering our paper because nobody is searching for literature on the topic. It’s hard to change the direction of a field, and it might be harder now than ever with our enormous literature and our emphasis on citation rate as a metric for impact. You can think of this as a bootstraps problem, if you like: a question won’t be recognized as important until there are a bunch of highly cited papers about it, but we won’t write or cite papers about a question until it’s widely recognized as important. This is a good recipe for being stuck.
Now, I realize that this is all verging uncomfortably close to “I’m not crazy, I’m ahead of my time, and nobody recognizes my genius”. So I have to acknowledge the alternative possibilities: that insect herbivory really doesn’t ever matter; or that it does, but our paper just isn’t very good. I don’t believe these alternatives, myself; but if you do, you might not be wrong.
So that’s my most undercited paper. What’s yours, and what do you think made it so? Please tell us about it in the Comments.
© Stephen Heard (firstname.lastname@example.org) May 26, 2015
*Actually, I don’t think my work as a whole is underappreciated. I haven’t published as many papers as some of my peers, but I’m proud of the impact they’ve had, and some have been cited fairly heavily (if you’re curious, here’s my Google Scholar profile). But sprinkled among my well-cited papers are some poorly-cited ones. There are probably a variety of reasons for this, which is a good topic for a future post.
**I suggested in my “overcited” post that a better analysis would take into account factors like research subdiscipline and the type of paper (review, primary paper, etc.). I don’t think that would change the outcome here.
***Yes, I know, it’s paywalled. In the unlikely event you can’t find a PDF copy on the web, shoot me an e-mail and I’ll send you one.