Your paper is not a Wikipedia article

Scientists don’t agree on all that much, but we agree that it simply isn’t possible to “keep up with the literature”. Our scientific literature is such a torrential firehose that there’s just no way. And if we’re aware of that as readers, you’d think that as writers we’d be taking special pains to be concise. Well, maybe you’d think that. Or maybe you’d think instead that we’d just like everyone else to be concise.

That last sentence was a little tiny rant, I know. It’s brought to you by several manuscripts I’ve seen lately and by their interesting common feature: they seemed to be constructed not as scientific papers, but as Wikipedia articles. They tried to be encyclopedic founts of information on every aspect of a problem, rather than telling a focused story that raises and then answers an important scientific question.* Here are some of the ways our manuscripts catch Wikipedia disease:

  • Comprehensive literature reviews in the Introduction. Not just enough to situate the work in the field and demonstrate the knowledge gap they intend to fill – but attempting to summarize and cite every paper that’s relevant, or even tangentially connected, to the work.
  • Encyclopedic description of the study system in the Methods. If you did a study on the interaction of a plant with its leaf-chewing beetles, a paragraph outlining the plant’s flowering phenology, pollination biology, and seed-dispersal strategy isn’t something your reader needs.
  • Methods presented in the kind of detail required for someone to exactly replicate the work. Yes, I know it’s shocking that I wouldn’t think that’s necessary; but 99.99% of your readers aren’t there to repeat your work. If you’re philosophically committed to the idea of replicable science, put those details in an online supplement where most readers can conveniently ignore them. (And see this piece, about the 450-year history of replicability and authority in science.)
  • Variables, samples, and measurements reported in the Methods but never analyzed or discussed. Yes, I know it was a lot of work to catch your fish, so while you had them in hand you measured eighteen morphological variables and seven blood-chemistry ones. But if answering your research question didn’t involve analyzing those data, they don’t belong in your paper.
  • Three possible explanations in the Discussion for every single result. Folks often think they’re supposed to “discuss their results”. But that’s not right, or particularly useful. Instead, discuss the ways the results answer the research question. You don’t need to recapitulate everything that happened or every data pattern you noticed – just the ones that weigh for or against the hypothesis you’re testing.

I’m sure you can think of some more common offenders – please use the Replies!

Why does this happen? I think there are (at least) three important drivers.

First, it’s unusual for scientific writing to be taught in depth, or well.** Instead, it’s often taught through undergraduate labs, with dubious advice like “write like what you see in the literature” and “write so the reader could repeat exactly what you did”. And students may think (probably correctly) that there’s more grading risk to leaving information out than there is to putting extra information in.

Second, there’s perfectly normal human psychology: it was an enormous amount of effort to measure that variable, dig up that citation, or run that analysis – so darn it, I’m going to put it in the paper. We all feel that urge!

Third, there’s some understandable confusion, at least for early-career writers, about the purpose of writing. As a graduate student, you’re writing (or you wrote) for two reasons: to communicate information, and to communicate your knowledge of information. This is especially true of the thesis – which exists partly to communicate newly discovered knowledge, but also to make the case for awarding a credential that recognizes mastery of existing knowledge. Scientific papers don’t have that latter function. Yes, you have to demonstrate that you’re aware of the crucial background literature or the most appropriate approaches to statistical analysis; but only so as to support your approach to the narrow research question – not your standing as an authority in the broader field.

So, scientific writers, please let Wikipedia be Wikipedia; and let papers be papers. Your readers will thank you.

© Stephen Heard  September 21, 2021


*^This is actually just one of two ways a manuscript might be too long: a matter of too much content. It’s also possible (and extremely common!) for a manuscript to use more text than needed to communicate a given amount of content. I explore this distinction, with recommendations for each case, in Chapter 20 of The Scientist’s Guide to Writing.

**^You’d probably expect me to have strong feelings on this. You’d probably expect me to link to my book again, or to the syllabus for my own scientific writing course. But that would be terribly gauche, and I’m not going to… oh, OK, since you’re insisting. The book. The syllabus.

 

17 thoughts on “Your paper is not a Wikipedia article”

  1. Peter Apps

    So it’s not just me whose teeth are worn to stumps over this. I put it down to the growing requirement for students to publish their work before they submit their theses or dissertations. They write their chapters in ponderous academic prose to signal their aspiring membership of the academy; they cite every scrap of literature they can get their hands on to demonstrate to the supervisor and examiners that they are diligent in the library; they describe minute details of the method (but usually leave out something critical) to show that they worked hard in the field or the lab; they fill three files of supplementary material with results to show how fruitful their efforts were; then they bury the results in a heap of indiscriminate number crunching because proper scientists use R. Then they demonstrate the breadth of their imagination by dreaming up obscure possible explanations, and their critical faculties by shooting them down. All well and good for a dissertation or thesis that only half a dozen people are ever going to read, and for a student seeking a qualification.

    But then they take that same chapter, and copy and paste it to a journal’s online submission page …….

  2. sleather2012

    “If you did a study on the interaction of a plant with its leaf-chewing beetles, a paragraph outlining the plant’s flowering phenology, pollination biology, and seed-dispersal strategy isn’t something your reader needs.”

    I sort of see where you are going with this, but if I am reviewing a field study/experiment paper, I would want that sort of background stuff so I can assess the validity of the approach.

    1. ScientistSeesSquirrel Post author

      Well, I guess I’d push back, Simon. If the paper is about how the plant interacts with leaf-chewing beetles, what part of the flowering phenology or seed-dispersal strategy is necessary for you to assess “the validity of the approach”? Of course you and I could probably both come up with special cases where it would be – perhaps the leaf-chewing beetles have larvae that depend on pollen. But that information goes in, IMO, if and only if it’s relevant – not out of a need for general background information.

      Now your turn to push back against my pushback 🙂

      1. sleather2012

        I’m not strongly pushing back here, but there are some occasions where knowing something about how the plant interacts with the environment under different circumstances could make a difference. Now, I guess I’m going to have to find a concrete example 🙂

        1. Martin Pareja

          Hi Stephen and Simon
          I think this is a very interesting point you are discussing here. I agree completely with Stephen that only information necessary for the study needs to be included. However, what Simon’s point highlights (as does another comment below by thetweedybiologist) is that “necessary” is different for each person, so (some of) my readers might think something is necessary even though I do not. And as writers, how do we balance that? I see this often in reviews, when more aspects of natural history are requested even though I feel they contribute very little.

          I think the line is not as clear as we would like it to be…

          1. jpschimel

            The essence of all these comments is that in the introduction, you’re framing the problem and setting up the questions. So the question is “Is this information important to doing that?” Sometimes phenology is – so give us just enough to show us why, and how it fits into the narrative. How does this advance MY argument? But I do often see manuscripts where the introduction is mostly textbook material, rather than framing a problem.

            And for teaching resources, I think Stephen’s book is one of the very best available. My own “Writing Science: how to write papers that get cited and proposals that get funded” is one of the very few others. They have different approaches but line up in the core arguments.

            Josh Schimel

        2. Jeff Houlahan

          Simon and Steve, your comments perfectly capture why we have long introductions – because there is so much potential variability in what reviewers will expect, and you are more likely to get your manuscript turned around for leaving something out than for putting too much in. And I don’t think it’s wrong for reviewers to want different things. I skim-read most intros – pausing at the first and last paragraphs. But that reflects my idiosyncratic ‘wants’, not the ‘correct’ amount of info that should be in an intro.

  3. thetweedybiologist

    I think there is also a question of what constitutes appropriate background literature. The author may view the study in one context, the reviewer in another, and, not wanting to get rejected, the author may choose to simply cover all bases.

  4. Jeremy Fox

    I used to agree with you on not reporting the collection of data that aren’t analyzed in the paper. But now I’m not so sure. The choice of data to analyze can be a form of p-hacking (it’s not always, but it can be).

    On the other hand, I’m not sure that making authors report all the data that weren’t analyzed actually does much to prevent this form of p-hacking. I kind of feel like it’s preregistration or bust if you really want to stamp out this form of p-hacking.

  5. Jeremy Fox

    Anecdotally, I find that experienced PIs tend to be much more impatient with long introduction sections than new grad students are. Same for talks – I want speakers to dive right in, not spend the first third (or whatever) of the talk on background material I don’t need. But anecdotally, I don’t feel like grad students are nearly as impatient with talk introductions as I am.

    I wonder if part of this is because students do find literature review useful, and so they don’t think about the other purposes that an introduction can or should serve. Whereas I appreciate papers and talks that have a narrative structure–that tell a story. A list of things previous researchers have learned or done about [topic], followed by a statement of what the author did about [topic], is a bad narrative. Just like how a list of events, in the order in which they occurred, is a bad story.

    Shameless self-promotion:
    https://dynamicecology.wordpress.com/2016/04/19/many-introduction-sections-suck-heres-how-to-ensure-yours-doesnt/

    Other-promotion, to cancel out the self-promotion: http://ecoevoevoeco.blogspot.com/2014/10/how-to-writepresent-science-baby.html

  6. Jennifer

    A year ago, I would have agreed wholeheartedly with all these points. Then, this year we had two papers rejected without review for the same reason – editors came back with “this has already been done and the technique is on the market”. Our articles were proposing a better methodology than currently marketed versions AND included a correction of a serious calculation error present in earlier articles by other groups.

    To counter this reasoning, we decided to add a supplementary data table with a full literature review and analysis showing, first, how most of the earlier articles had great titles but no concrete results, and second, that *all* the earlier articles noted the methodological weakness that our work was addressing with new analytical tools.

    I know that peer review is a highly subjective and imperfect process. But those rejections were so easily demonstrated to be false that I’ve been wondering if this has something to do with journals becoming risk-averse. Maybe the last few years of fake news and faked-data scandals in ecology are changing how editors and reviewers treat new ideas at some journals??

    (and a happy ending to the anecdote – the article is now published with a very long supplementary data section 🙂 )

    1. Marco Mello

      I have a similar impression. Editors of top journals are now profoundly split between the need to publish flashy science and the need to avoid scandals. It makes them more susceptible to conscious or unconscious manipulation by authors and reviewers than they would like to admit. Probably as a consequence, editorial decisions in the past few years look more and more strange and less technical. Maybe the real problem is that we have passed our journals to commercial publishing houses, whose interests don’t have much to do with the advancement of science. I miss the good days of academic journals being managed by academic societies, without the interference of corporations, marketing, media, and the like.

      1. Jennifer

        I agree – there are several journals that I never look at anymore because of these “strange” decisions! And that includes more than a few with sky-high page charges supposedly justified by the quality of the science they publish.

  7. Mats Ittonen

    I do agree with most of Stephen’s post, but, on the other hand, I do often appreciate a little bit of unnecessary information about study organisms. Of course one shouldn’t write about all aspects of their life histories, but just the necessary information is often too little. Understanding the study system a little more broadly than the minimum required for understanding a paper makes the paper easier to read and relate to. And that cute or cool, but completely unnecessary, detail about your study species may be effective in keeping tired readers interested.

