Image: Rube Goldberg design by Stivi10 CC BY-SA 3.0 via wikimedia.org.
There are many reasons for “writing early” – for starting to write up a project before data collection and analysis are complete, or even before they’re started. (I discuss this in some detail in The Scientist’s Guide to Writing.) This is particularly true for the Methods section, which is far easier to write when you’re doing, or even proposing, the work than it is when you’re looking back on the work months or years later. But one use for early writing often surprises my students: early writing as a “plausibility check” for methods I’m trying to decide about using.
Here’s what happens. I’ll be sitting with a student (or sometimes, just with myself) and we’ll be trying to decide on an experimental method, or perhaps on a point of statistical analysis. We’ll wonder, “should we do X?” And I’ll say: “OK, let’s imagine writing a Methods paragraph describing X. How would it feel?”
Image: Reproducible bullseyes, by andresmp via publicdomainfiles.com. Public domain.
You read a lot about reproducibility these days. In psychology, it’s a full-blown crisis – or so people say. They may even be right, I suppose: while it’s tempting to dismiss this as melodrama, in fact a surprising number of well-known results in psychology do actually seem to be irreproducible. In turn, this has given rise to fervent calls for us to do “reproducible science”, which has two main elements. First, we’re asked to publish detailed and meticulous accounts of methodologies and analytical procedures, so that someone else can replicate our experiments (or analyses). Second, we’re asked to actually undertake such replications. Only this way, we’re told, can we be sure that our results are reproducible and therefore robust*.
Being reproducible, though, makes a result robust in only one of two possible senses of the word, and I think it’s the less interesting of the two. What do I mean by that?
Image: Excerpt from Heard et al. 1999, Mechanical abrasion and organic matter processing in an Iowa stream. Hydrobiologia 400:179-186.
Nearly every paper I’ve ever written includes a sentence something like this: “All statistical analyses were conducted in SAS version 8.02 (SAS Institute Inc., Cary, NC).”* But I’m not quite sure why.
Why might any procedural detail get mentioned in the Methods? There are several answers to that, with the most common being:
Image: Once Upon a Time, CC-0 via pixabay.com.
We often tell ourselves that a good Methods section allows someone else to replicate our experiments. I’ve argued, among other places in The Scientist’s Guide to Writing, that we needn’t and shouldn’t expect this function of most Methods sections. Rather, a good Methods section gives readers what they need to ascribe authority to you as a scientist, and to understand the Results you’ll present.
I get frequent pushback against this idea, usually in connection with prominent hand-wringing over the so-called “replication crisis”. But a couple of weeks ago I gave a talk about writing at Saint Mary’s University (the one in Halifax, Nova Scotia) and I got a different and very convincing kind of pushback.
(Image: Robert Boyle’s (1660) vacuum pump, from New Experiments Physico-Mechanical, Touching The Spring of the Air, and its Effects; Made, for the most part, in a New Pneumatical Engine)
Unless you’ve been living under quite a large rock, you’ve heard or read a lot lately about the “reproducibility crisis” in science (here’s a good summary). That our work should be reproducible is certainly a Good Thing in principle, but there are complications where the rubber hits the road. Today, some thoughts on reproducibility, and on what, if anything, it means for the writing of a paper’s Methods section. And I think some historical perspective is both interesting and useful – because the reproducibility “crisis” is 400 years old.
There’s an odd disconnect in the way we think about our Methods sections.