Scientists don’t agree on all that much, but we agree that it simply isn’t possible to “keep up with the literature”. Our scientific literature is such a torrential firehose that there’s just no way. And if we’re aware of that as readers, you’d think that as writers we’d be taking special pains to be concise. Well, maybe you’d think that. Or maybe you’d think instead that we’d just like everyone else to be concise.
That last sentence was a little tiny rant, I know. It’s brought to you by several manuscripts I’ve seen lately and by their interesting common feature: they seemed to be constructed not as scientific papers, but as Wikipedia articles. They tried to be encyclopedic founts of information on every aspect of a problem, rather than telling a focused story that raises and then answers an important scientific question.* Here are some of the ways our manuscripts catch Wikipedia disease:
- Comprehensive literature reviews in the Introduction. Not just enough to situate the work in the field and demonstrate the knowledge gap they intend to fill – but attempting to summarize and cite every paper that’s relevant, or even tangentially connected, to the work.
- Encyclopedic description of the study system in the Methods. If you did a study on the interaction of a plant with its leaf-chewing beetles, a paragraph outlining the plant’s flowering phenology, pollination biology, and seed-dispersal strategy isn’t something your reader needs.
- Methods presented in the kind of detail required for someone to exactly replicate the work. Yes, I know it’s shocking that I wouldn’t think that’s necessary; but 99.99% of your readers aren’t there to repeat your work. If you’re philosophically committed to the idea of replicable science, put those details in an online supplement where most readers can conveniently ignore them. (And see this piece, about the 450-year history of replicability and authority in science.)
- Variables, samples, and measurements reported in the Methods but never analyzed or discussed. Yes, I know it was a lot of work to catch your fish, so while you had them in hand you measured eighteen morphological variables and seven blood-chemistry ones. But if answering your research question didn’t involve analyzing those data, they don’t belong in your paper.
- Three possible explanations in the Discussion for every single result. Folks often think they’re supposed to “discuss their results”. But that’s not right, or particularly useful. Instead, discuss the ways the results answer the research question. You don’t need to recapitulate everything that happened or every data pattern you noticed – just the ones that weigh for or against the hypothesis you’re testing.
I’m sure you can think of some more common offenders – please use the Replies!
Why does this happen? I think there are (at least) three important drivers.
First, it’s unusual for scientific writing to be taught in depth, or well.** Instead, it’s often taught through undergraduate labs, with dubious advice like “write like what you see in the literature” and “write so the reader could repeat exactly what you did”. And students may think (probably correctly) that there’s more grading risk to leaving information out than there is to putting extra information in.
Second, there’s perfectly normal human psychology: it was an enormous amount of effort to measure that variable, dig up that citation, or run that analysis – so darn it, I’m going to put it in the paper. We all feel that urge!
Third, there’s some understandable confusion, at least for early-career writers, about the purpose of writing. As a graduate student, you’re writing (or, you wrote) for two reasons: to communicate information, and to communicate your knowledge of information. This is especially true of the thesis – which exists partly to communicate newly discovered knowledge, but also to make the case for awarding a credential that recognizes mastery of existing knowledge. Scientific papers don’t have that latter function. Yes, you have to demonstrate that you’re aware of the crucial background literature and the most appropriate approaches to statistical analysis; but only so as to support your approach to the narrow research question – not your standing as an authority in the broader field.
So, scientific writers, please let Wikipedia be Wikipedia; and let papers be papers. Your readers will thank you.
© Stephen Heard September 21, 2021
*^This is actually just one of two ways a manuscript might be too long: a matter of too much content. It’s also possible (and extremely common!) for a manuscript to use more text than needed to communicate a given amount of content. I explore this distinction, with recommendations for each case, in Chapter 20 of The Scientist’s Guide to Writing.
**^You’d probably expect me to have strong feelings on this. You’d probably expect me to link to my book again, or to the syllabus for my own scientific writing course. But that would be terribly gauche, and I’m not going to… oh, OK, since you’re insisting. The book. The syllabus.