Tag Archives: reproducibility

Reproducibility and robustness

Image: Reproducible bullseyes, by andresmp via publicdomainfiles.com.  Public domain.

You read a lot about reproducibility these days.  In psychology, it’s a full-blown crisis – or so people say.  They may even be right, I suppose: while it’s tempting to dismiss this as melodrama, in fact a surprising number of well-known results in psychology do actually seem to be irreproducible.  In turn, this has given rise to fervent calls for us to do “reproducible science”, which has two main elements. First, we’re asked to publish detailed and meticulous accounts of methodologies and analytical procedures, so that someone else can replicate our experiments (or analyses).  Second, we’re asked to actually undertake such replications.  Only this way, we’re told, can we be sure that our results are reproducible and therefore robust*.

Being reproducible, though, makes a result robust in only one of two possible senses of the word, and I think it’s the less interesting of the two.  What do I mean by that?

Writing the Methods section as narrative

Image: Once Upon a Time, CC-0 via pixabay.com

We often tell ourselves that a good Methods section allows someone else to replicate our experiments.  I’ve argued, among other places in The Scientist’s Guide to Writing, that we needn’t and shouldn’t expect most Methods sections to serve this function.  Rather, a good Methods section gives readers what they need to ascribe authority to you as a scientist, and to understand the Results you’ll present.

I get frequent pushback against this idea, usually in connection with prominent hand-wringing over the so-called “replication crisis”. But a couple of weeks ago I gave a talk about writing at Saint Mary’s University (the one in Halifax, Nova Scotia) and I got a different and very convincing kind of pushback.

Our literature isn’t a big pile of facts

Image: Pile of journals.

I wrote recently about the reproducibility “crisis” and its connection to the history of our Methods, and some discussion of that post prompted me to think about another angle on reproducibility. That angle: is our literature a big pile of facts? And why might we think so, and why does it matter?

John Ioannidis (2005) famously claimed that “Most Published Research Findings Are False”. This paper has been cited 2600 times and is still frequently quoted, tweeted, and blogged.* Ioannidis’ paper made important points about the way natural variation, statistical inference, and publication culture interact to mean that we can’t assume every statistically significant result indicates a truth about nature. But I think it’s also encouraged us as scientists to over-react, and to mislead ourselves into a fundamental misunderstanding of our scientific process. To see what I mean, start with Ioannidis’ very interesting opening sentence:

“Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment.”

I say this is an “interesting” sentence because I think it raises an important question about us and our understanding of the scientific process. That question: why should we experience “confusion and disappointment” when a published study isn’t backed up by further evidence? Surely this is an odd reaction, and one that only makes sense if we think that everything in a journal is a Fact, and that our literature is a big pile of such Facts – of things we know to be true, and things we know to be false.

Reproducibility, your Methods section, and 400 years of angst

(Image: Robert Boyle’s (1660) vacuum pump, from New Experiments Physico-Mechanical, Touching The Spring of the Air, and its Effects; Made, for the most part, in a New Pneumatical Engine)

Unless you’ve been living under quite a large rock, you’ve heard or read a lot lately about the “reproducibility crisis” in science (here’s a good summary). That our work should be reproducible is certainly a Good Thing in principle, but there are complications where the rubber hits the road. Today, some thoughts on reproducibility, and on what, if anything, it means for the writing of a paper’s Methods section.  And I think some historical perspective is both interesting and useful – because the reproducibility “crisis” is 400 years old.

There’s an odd disconnect in the way we think about our Methods sections.