Category Archives: scientific method

Reproducibility and robustness

Image: Reproducible bullseyes, by andresmp via publicdomainfiles.com.  Public domain.

You read a lot about reproducibility these days.  In psychology, it’s a full-blown crisis – or so people say.  They may even be right, I suppose: while it’s tempting to dismiss this as melodrama, a surprising number of well-known results in psychology really do seem to be irreproducible.  In turn, this has given rise to fervent calls for us to do “reproducible science”, which has two main elements. First, we’re asked to publish detailed and meticulous accounts of our methodologies and analytical procedures, so that someone else can replicate our experiments (or analyses).  Second, we’re asked to actually undertake such replications.  Only this way, we’re told, can we be sure that our results are reproducible and therefore robust*.

Being reproducible, though, makes a result robust in only one of two possible senses of the word, and I think it’s the less interesting of the two.  What do I mean by that?  Continue reading

Sheep, lupines, pattern, and process

Photo: Lupines below Öræfajökull, and sheep grazing at Sandfell, Iceland (S. Heard)

Last summer, we were driving around southern Iceland, admiring the fields of lupines (beautiful, even though they’re invasive) and the gamboling sheep (also invasive, at least to the extent they’re allowed to graze freely). Before long, we noticed an interesting pattern: we saw dense fields of lupines, without sheep; and we saw thousands upon thousands of sheep, in fields without lupines – but we drove for days without ever seeing sheep and lupines together.

Being a nerdy scientist, I came up with a hypothesis to explain this pattern: Continue reading

Our literature isn’t a big pile of facts

Image: pile of journals.

I wrote recently about the reproducibility “crisis” and its connection to the history of our Methods, and some discussion of that post prompted me to think about another angle on reproducibility. That angle: is our literature a big pile of facts? And why might we think so, and why does it matter?

John Ioannidis (2005) famously claimed that “Most Published Research Findings Are False”. This paper has been cited 2600 times and is still frequently quoted, tweeted, and blogged.* Ioannidis’ paper made important points about the way natural variation, statistical inference, and publication culture interact, with the result that we can’t assume every statistically significant result indicates a truth about nature. But I think it’s also encouraged us as scientists to overreact, and to mislead ourselves into a fundamental misunderstanding of our scientific process. To see what I mean, start with Ioannidis’ very interesting opening sentence:

“Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment.”

I say this is an “interesting” sentence because I think it raises an important question about us and our understanding of the scientific process. That question: why should we experience “confusion and disappointment” when a published study isn’t backed up by further evidence? Surely this is an odd reaction, and one that makes sense only if we think that everything in a journal is a Fact, and that our literature is a big pile of such Facts – of things we know to be true, and things we know to be false. Continue reading
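
An aside on the statistics behind Ioannidis’ claim: the argument turns on simple arithmetic about base rates. Here’s a minimal back-of-envelope sketch in Python, assuming (purely for illustration – these numbers are mine, not Ioannidis’) that 10% of tested hypotheses are true, that studies have 80% power, and that α = 0.05:

```python
# Back-of-envelope version of the base-rate argument.
# All three numbers below are illustrative assumptions, not values from the 2005 paper.
prior_true = 0.10  # assumed fraction of tested hypotheses that are actually true
power = 0.80       # assumed probability a study detects a true effect (1 - beta)
alpha = 0.05       # conventional false-positive rate

true_positives = prior_true * power         # true effects that test significant
false_positives = (1 - prior_true) * alpha  # null effects that test significant anyway

# Positive predictive value: of all significant findings, what fraction are true?
ppv = true_positives / (true_positives + false_positives)
print(f"Fraction of significant findings that are true: {ppv:.2f}")  # ~0.64
```

Under these hypothetical numbers, roughly a third of statistically significant findings would be false positives – and that’s before publication bias makes things worse.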