The Discussion of a scientific paper is, I think, the hardest part to write. That’s because every other section has a fairly well-defined purpose and thus a set of standard contents: the Methods communicates how you did the work, the Results shows what you found, and the Introduction sets the work in context while foreshadowing what’s to come in the Discussion. But the Discussion is a challenge because the writer has considerable freedom in both content and organization. What goes in a Discussion? Almost anything, it seems.
Actually, Discussions aren’t quite as free-form as that. There are four very common elements of a Discussion: interpretation of the study’s results (usually with reference to the broader literature); consideration of limitations; broader implications of the work; and the future prospects that arise from the work. Most Discussions include most of these, often (but not always) in that order. Together, they help you “consider your results to claim the strongest interpretation and the broadest importance that you can legitimately argue” (that’s a direct quote from The Scientist’s Guide to Writing).
Some writers (especially early in their careers) have trouble with that second element: consideration of limitations. It’s trouble of an interesting sort. You’ve probably read a Discussion (you’ve probably written one) in which the “consideration of limitations” ends up being a laundry list of all the possible ways things might have gone wrong. Perhaps the sample size was too small; perhaps the experiment didn’t run long enough*; perhaps there were unmeasured but confounding variables; perhaps a reagent was incorrectly mixed or an instrument incorrectly calibrated; perhaps – well, you get the idea. There’s possibility after possibility – some plausible, some requiring demons that are both malevolent and very, very clever – but there’s little explanation of which inferences the limitations might affect, and how little or much. These exercises in self-flagellation invite readers to think that there’s little if any value in the work, and that they’ve therefore wasted their time in reading. This is not the mental state you want in a recent reader of your paper, I bet.
Let’s go back to what I think is the key: that in a Discussion, you “consider your results to claim the strongest interpretation and the broadest importance that you can legitimately argue”. Limitations are indeed an important part of that – that’s why you’re going for the strongest interpretation that you can legitimately argue. Readers will sniff out over-claiming (if reviewers don’t sniff it out first). Your job is to pre-emptively address limitations so as to defuse them, or put appropriate bounds on them, in the mind of the reader – not to shred your own work so that nobody will see its value. And you must believe it has value, because if you didn’t, you wouldn’t be writing it up! Josh Schimel puts this really well in his excellent book Writing Science. He argues that your Discussion shouldn’t say “Yes, but…” – it should say “But, yes”. Deal with the study’s limitations fairly, but do so early in the Discussion and move from limitations into the value of your results – ending not with a “but”, but with a “yes”. Yes, despite the study’s limitations, we’ve learned from the results; yes, despite the study’s limitations, the reader has been rewarded for reading the paper.
Why do (many) early-career writers write this way? Interestingly, I think it’s a good example of how well-meaning attempts to teach scientific writing lead to worse, not better, results. Undergraduates are usually trained in scientific writing via lab reports, and in their first year or two they’re given some recipes for what various sections of papers should contain. The Discussion recipe is tough for instructors to write and tough for students to follow, because Discussions aren’t formulaic, but every recipe I’ve ever seen mentions discussing limitations. So that’s what students do, and they do it with enthusiasm. They shred their own results.
Results-shredding is one of several ways that our attempts to teach writing make things worse, not better. Consider the general advice to “read some scientific papers and write like that”: the result is students modeling the worst features of our literature, like the passive voice, impenetrable thickets of jargon and acronyms, and stiffly formal wording with no trace of the writer’s own voice. More controversially, consider “write your Methods so that a reader could repeat your experiment exactly”, which leads to stultifying text full of endless irrelevant detail. I guess we shouldn’t be shocked. Few of us have any formal training in scientific writing; virtually none of us have any formal training in teaching scientific writing. So we muddle through, doing our best.
Science is hard, and every study has limitations. That’s not the same thing as every study being worthless. Let’s not write as if it were. Let’s not ignore or hide the limits to our inference – but let’s not use our Discussions to shred our own results.
© Stephen Heard May 11, 2021
This post is based in part on material from The Scientist’s Guide to Writing, my guidebook for scientific writers. In particular, this post reflects one place where the upcoming second edition will give better guidance.
*Interestingly, this was by far the most common reason folks offered for not believing the “SciComm teaching doesn’t work” paper I blogged about a couple of weeks ago. If you don’t see a treatment effect in a one-semester experiment, perhaps you’d have seen one after two semesters, or two years, or ten years! Which is true, of course. It’s always true, and therefore uninteresting without some data or model suggesting that it’s particularly likely in this case. I haven’t seen anyone come up with that.