Image: The Beach Boys (2012 reunion), © Louise Palanker via flickr.com CC BY-SA 2.0
It came on the radio again the other day: “Kokomo”*. It’s a fundamentally and phenomenally stupid song, and yet it’s so perfectly executed that you can’t help singing along a little, even knowing that you’ll hate yourself for it later. Even knowing that you’re hating yourself right now while you’re still singing, but you still can’t stop. That such a stupid, stupid song can still grab you and not let go, and can still blight the airwaves 30 years after its release, is a testament to the songwriting craftsmanship of its authors** and to the performance craftsmanship of the Beach Boys. It’s just astonishing how good “Kokomo” can be, while simultaneously being so very, very bad***.
So what is science’s Kokomo? What scientific idea is fundamentally stupid, yet persists (or persisted for a very long time) anyway because it’s been argued with craftsmanship and polish enough to persuade?
Every now and again, a paper is published that’s so peculiar, or so apparently irrelevant to any important question, that it attracts derision rather than citation. Perhaps it picks up a Golden Fleece Award or, more fun, an Ig Nobel Prize; or perhaps it just gets roundly mocked on Twitter*. Much more often than every now and again, a paper gets published that just doesn’t seem to connect to anything, and rather than being derided it’s simply ignored.
Perhaps you think this kind of thing is a recent phenomenon.
Image: Reproducible bullseyes, by andresmp via publicdomainfiles.com. Public domain.
You read a lot about reproducibility these days. In psychology, it’s a full-blown crisis – or so people say. They may even be right, I suppose: while it’s tempting to dismiss this as melodrama, in fact a surprising number of well-known results in psychology do actually seem to be irreproducible. In turn, this has given rise to fervent calls for us to do “reproducible science”, which has two main elements. First, we’re asked to publish detailed and meticulous accounts of methodologies and analytical procedures, so that someone else can replicate our experiments (or analyses). Second, we’re asked to actually undertake such replications. Only this way, we’re told, can we be sure that our results are reproducible and therefore robust*.
Being reproducible, though, makes a result robust in only one of two possible senses of the word, and I think it’s the less interesting of the two. What do I mean by that?
Photo: Lupines below Öræfajökull, and sheep grazing at Sandfell, Iceland (S. Heard)
Last summer, we were driving around southern Iceland, admiring the fields of lupines (beautiful, even though they’re invasive) and the gamboling sheep (also invasive, at least to the extent they’re allowed to graze free). Before long, we noticed an interesting pattern: we saw dense fields of lupines, without sheep; and we saw thousands upon thousands of sheep, in fields without lupines – but we drove for days without ever seeing sheep and lupines together.
Being a nerd scientist, I came up with a hypothesis to explain this pattern.
I wrote recently about the reproducibility “crisis” and its connection to the history of our Methods, and some discussion of that post prompted me to think about another angle on reproducibility. That angle: is our literature a big pile of facts? And why might we think so, and why does it matter?
John Ioannidis (2005) famously claimed that “Most Published Research Findings Are False”. This paper has been cited 2,600 times and is still frequently quoted, tweeted, and blogged.* Ioannidis’ paper made important points about the way natural variation, statistical inference, and publication culture interact to mean that we can’t assume every statistically significant result indicates a truth about nature. But I think it’s also encouraged us as scientists to over-react, and to mislead ourselves into a fundamental misunderstanding of our scientific process. To see what I mean, start with Ioannidis’ very interesting opening sentence:
“Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment.”
I say this is an “interesting” sentence because I think it raises an important question about us and our understanding of the scientific process. That question: why should we experience “confusion and disappointment” when a published study isn’t backed up by further evidence? Surely this is an odd reaction, and one that only makes sense if we think that everything in a journal is a Fact, and that our literature is a big pile of such Facts – of things we know to be true, and things we know to be false.