A couple of things have me thinking about review papers lately. First, I’ve just published one and I’m about to submit another. Second, over at EcoEvoEvoEco, Andrew Hendry had some fun figuring out how his citation impact would have been improved had he only ever published review papers rather than primary-science ones.
As Andrew points out, writing reviews brings a lot of career benefits. Among them:
- They tend to be widely read and heavily cited
- They build your reputation as an expert in the subfield you review
- They draw attention to your primary-literature work (presuming your review cites it)
- They support future grant proposals to fill knowledge gaps they identify
So the case for review-writing as a career move is strong. But what about the case for review-writing as a contribution to science? Not all reviews move science forward much. It’s easy* to identify a topic, collect 20 years’ worth of papers touching on that topic, and summarize them – but the result may not be worth publishing.
The publishability question often comes up because many reviews evolve from the first chapters of theses. That’s certainly true for some of mine (for example, Heard 1994 from my own thesis, or Ancheta and Heard 2011 from Justin Ancheta’s). But while every thesis reviews the literature, not every thesis contains a publishable review. I’m not disparaging theses that don’t; this happens because the function of the review chapter in a thesis is not always the same as the function of a review in the literature.
So when is a review paper worth publishing? I think there are two criteria: a weak one and a strong one.
The weak criterion is this: that a particular subfield, or more interestingly a particular question, hasn’t been reviewed for a long time. As an example: my most recently published review (Pureswaran et al. 2016) is on the subject of spruce budworm population dynamics. Eastern spruce budworm is the most important forestry pest in central and Atlantic Canada, and it’s also a canonical example of outbreak population dynamics. It has major outbreaks every 35 years or so (one is in its early stages now), and despite a truly enormous literature, there is no strong consensus about what governs its population cycles. My coauthors and I realized that there wasn’t anything recent laying out the basics of budworm biology and how this informs the three rival models (well, classes of models) that purport to explain the cycles. (There was a much more technical review delving into the details of the modeling, but that one was written for a very different audience than the one we had in mind.) We thought it would be useful to publish a review that would introduce people to the conceptual thinking underlying our current understanding of budworm cycles, how that thinking developed over the last 100 years, and how the most recent work is moving it forward. I think our review meets that goal, but it does so by summarizing current understanding; it doesn’t (I think) lead to any new understanding by itself.
The stronger criterion (as my last sentence hints) is that writing a review is a good idea if you can synthesize literature on a question and, by doing so, come to some important new insight. This is the criterion the broadest and highest-impact journals use when considering proposals for review papers. As I’ve progressed in my career, I’ve put more weight on this criterion and become less interested in the weaker one. As an example, Justin’s review (Ancheta and Heard 2011, mentioned above) considered the impact of insect herbivory on the population dynamics of rare plants. There’s a conventional wisdom, partly in the literature but mostly so deeply entrenched that it doesn’t even make it there, that insect herbivores don’t have significant impacts on plant population growth** (with a few interesting exceptions, like defoliators of coniferous trees, including spruce budworm). Our review establishes pretty clearly that, at least for rare plants, this conventional wisdom is wrong; insect herbivores can have major effects on plant vital rates and, in the few instances where it’s studied, on plant population sizes – right up to local extinctions. The studies showing this, though, are scattered, and it takes a review to see the story they collectively tell. Finding that story wrings a genuinely new answer from the literature.
How does a review meeting the strong criterion come to be? In my experience, the idea usually grows from a nagging feeling of dissatisfaction with the literature on some topic. I’ve read what’s out there, and I can see some kind of pattern, or contradiction, or gap, that the writers of individual papers don’t seem to be picking up on. That was very much the story of Justin’s review; I had read paper after paper showing strong impacts of insect herbivores on host plant populations, but because these papers were scattered in the literature, none ever seemed to point out that this was a generally important thing that contradicted conventional wisdom. And to be fair, none of them by itself supported such a conclusion. Only their sum offers strong evidence that we’ve been looking at herbivory wrong (from a population dynamics point of view). So despite my claim that there are good reasons not to read too much of the literature, some important insights do come from reading a lot of papers and noticing patterns and contradictions and gaps.
I guess I should go read some papers now.
© Stephen Heard (sheard@unb.ca) December 1, 2016
Many of the thoughts in this post I first expressed in an interview I did some time ago with @AurelieLitReview. Thanks to her for spurring my thoughts on this.
*For some definitions of “easy”.
**I blame in part Hairston, Smith and Slobodkin’s 1960 “The World is Green” paper – which almost nobody reads, but everyone takes as establishing that plants aren’t under top-down control by herbivores. A paper Lynne Remer and I wrote exploring stability of herbivore-plant dynamics was repeatedly rejected on the grounds that our models didn’t matter because herbivores didn’t affect plant dynamics – something that was stated, independently by multiple reviewers, completely without evidence. Grrr.
Interesting post – I encourage my students to write their thesis review chapter as a meta-analysis, as I feel that then they do add something to the field.
Agreed, formal meta-analysis certainly makes a review stronger than just a narrative. I don’t think that meta-analysis alone makes a paper meet my criteria, but it helps!
This reminded me that I wrote a bit about writing reviews a while ago https://smallpondscience.com/2014/10/07/writing-a-review-thoughts-from-the-trenches/ and then I was a little embarrassed by how long ago it was. Although two of these are out now, the first review I started has been put on the back burner. Time to dig it out, I’d say!
Interesting post, thanks!
What are your thoughts on writing a primary research paper paired with a subsequent comprehensive review vs. a single larger paper that includes both primary data and an extensive discussion that mimics a comprehensive literature review but is more condensed?
I’ve got a smattering of old data that is not, in and of itself, likely a complete enough “story” for publication in a decent journal, but it should be very impactful when put in the context of hundreds of as-yet unconnected reports in the literature. I think it’s valuable to review those 15 years of papers, since a significant theme is obscured by continued use of non-standard nomenclature. However, since I’ve moved on in my research, it’s unlikely that two papers will ever get done, so I am leaning towards a single paper with a mega-discussion – but that seems a tall order to get published.
Any ideas?
Of course it’s hard to have strong opinions without knowing more specifics, but I like the argument that the data may mean a lot more in the perspective of the review. Of course, the key will be to control the size of the combined MS. Very large papers aren’t just harder to get published; I think they’re harder to get *read*, too. So ruthless editing will probably matter! Fortunately, I think the virtues of “comprehensiveness” in a review are less than people often think (so ruthless editing may be less painful than expected). Good luck!