Category Archives: scientific method

Maybe it’s time to stop teaching “the scientific method” in 1st year biology

Recently, my department held a search for a new instructor to oversee our 1st-year labs. An important part of our search process is a “teaching talk”, in which we pretend (poorly) to be students, and the candidates give a lecture they might deliver in one of their assigned courses. We set the topic (so it’s the same for all candidates), and this time, we asked them to deliver a lecture for 1st-year biology on “the scientific method”.

We were lucky to interview three wonderful candidates (I’d have been happy with any of them), and I think they did the best job possible with that lecture topic. But the experience crystallized something that’s been bothering me for many years. I’m becoming convinced that even the best job possible of teaching “the scientific method” to first year biology students simply isn’t worth doing. Or, to be a bit more forceful: it probably does more harm than good. I know, that’s nothing short of heresy. Continue reading

Covid-19, mystery novels, and how science works

This is a guest post from Emma Despland.  Her first pandemic-themed guest post is here; this week, she asks what the pandemic can teach the public about science, and teach us about public understanding of science.

There is considerable frustration about the uncertainty surrounding the Covid-19 pandemic: how serious it is, and what we should do.

Do we need to wear masks? What kind of mask? If you’ve had Covid-19, are you immune? For how long? Do I need to disinfect my groceries? Is it safe to go jogging outside? One model suggests that you need to be 10 m away from someone who is running to avoid getting hit by their possibly contaminated microdroplets, whereas other experts think this long-distance transmission is unlikely.

Fictional representations of science show too many Eureka moments. Continue reading

What is science’s “Kokomo”?

Image: The Beach Boys (2012 reunion), © Louise Palanker via flickr.com CC BY-SA 2.0

It came on the radio again the other day: “Kokomo”*.  It’s a fundamentally and phenomenally stupid song, and yet it’s so perfectly executed that you can’t help singing along a little, even knowing that you’ll hate yourself for it later.  Even knowing that you’re hating yourself right now while you’re still singing, but you still can’t stop.  That such a stupid, stupid song can still grab you and not let go, and can still blight the airwaves 30 years after its release, is a testament to the songwriting craftsmanship of its authors**  and to the performance craftsmanship of the Beach Boys.  It’s just astonishing how good “Kokomo” can be, while simultaneously being so very, very bad***.

So what is science’s Kokomo?  What scientific idea is fundamentally stupid, yet persists (or persisted for a very long time) anyway because it’s been argued with craftsmanship and polish enough to persuade? Continue reading

Robert Boyle’s Monstrous Head

Every now and again, a paper is published that’s so peculiar, or so apparently irrelevant to any important question, that it attracts derision rather than citation.  Perhaps it picks up a Golden Fleece Award or, more fun, an Ig Nobel Prize; or perhaps it just gets roundly mocked on Twitter*.  Much more often than every now and again, a paper gets published that just doesn’t seem to connect to anything, and rather than being derided it’s simply ignored.

Perhaps you think this kind of thing is a recent phenomenon.  Continue reading

Reproducibility and robustness

Image: Reproducible bullseyes, by andresmp via publicdomainfiles.com.  Public domain.

You read a lot about reproducibility these days.  In psychology, it’s a full-blown crisis – or so people say.  They may even be right, I suppose: while it’s tempting to dismiss this as melodrama, in fact a surprising number of well-known results in psychology do actually seem to be irreproducible.  In turn, this has given rise to fervent calls for us to do “reproducible science”, which has two main elements. First, we’re asked to publish detailed and meticulous accounts of methodologies and analytical procedures, so that someone else can replicate our experiments (or analyses).  Second, we’re asked to actually undertake such replications.  Only this way, we’re told, can we be sure that our results are reproducible and therefore robust*.
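For computational work, at least, that first element needn’t be mysterious.  Here’s a minimal sketch of the idea in Python; the data and the analysis are made up for illustration, but the point stands: fix the random seed and record the software version, and anyone can rerun the script and check that they get exactly the same numbers.

```python
# Minimal sketch of a reproducible analysis script.
# The data and analysis here are invented for illustration; the point is
# that a fixed seed plus a recorded environment makes reruns identical.
import sys
import random

random.seed(42)  # fixed seed: the bootstrap below gives the same CI on every rerun

# Hypothetical measurements; in real work these would come from an archived data file.
masses = [3.1, 2.9, 3.4, 3.0, 2.7, 3.3, 3.2, 2.8]

# Bootstrap a 95% confidence interval for the mean (10,000 resamples with replacement).
boot_means = sorted(
    sum(random.choices(masses, k=len(masses))) / len(masses)
    for _ in range(10_000)
)
ci_low, ci_high = boot_means[249], boot_means[9749]  # 2.5th and 97.5th percentiles

print(f"Python {sys.version.split()[0]}")  # record the environment it ran under
print(f"mean = {sum(masses) / len(masses):.3f}; 95% CI = ({ci_low:.3f}, {ci_high:.3f})")
```

Archive a script like that (and the data) alongside the paper, and the analysis half of “reproducible” is mostly taken care of; replicating the experiments themselves is, of course, a much harder problem.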

Being reproducible, though, makes a result robust in only one of two possible senses of the word, and I think it’s the less interesting of the two.  What do I mean by that?  Continue reading

Sheep, lupines, pattern, and process

Photo: Lupines below Öræfajökull, and sheep grazing at Sandfell, Iceland (S. Heard)

Last summer, we were driving around southern Iceland, admiring the fields of lupines (beautiful, even though they’re invasive) and the gamboling sheep (also invasive, at least to the extent they’re allowed to graze free). Before long, we noticed an interesting pattern: we saw dense fields of lupines, without sheep; and we saw thousands upon thousands of sheep, in fields without lupines – but we drove for days without ever seeing sheep and lupines together.

Being a scientist (and therefore a nerd), I came up with a hypothesis to explain this pattern: Continue reading

Our literature isn’t a big pile of facts

I wrote recently about the reproducibility “crisis” and its connection to the history of our Methods, and some discussion of that post prompted me to think about another angle on reproducibility. That angle: is our literature a big pile of facts? And why might we think so, and why does it matter?

John Ioannidis (2005) famously claimed that “Most Published Research Findings Are False”. This paper has been cited 2600 times and is still frequently quoted, tweeted, and blogged.* Ioannidis’ paper made important points about the way natural variation, statistical inference, and publication culture interact to mean that we can’t assume every statistically significant result indicates a truth about nature (there’s a quick calculation at the end of this post that makes the arithmetic concrete). But I think it’s also encouraged us as scientists to over-react, and to mislead ourselves into a fundamental misunderstanding of our scientific process. To see what I mean, start with Ioannidis’ very interesting opening sentence:

“Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment.”

I say this is an “interesting” sentence because I think it raises an important question about us and our understanding of the scientific process. That question: why should we experience “confusion and disappointment” when a published study isn’t backed up by further evidence? Surely this is an odd reaction, and one that only makes sense if we think that everything in a journal is a Fact, and that our literature is a big pile of such Facts – of things we know to be true, and things we know to be false. Continue reading
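And here’s the quick calculation promised above: the statistical core of Ioannidis’ argument fits in a few lines of arithmetic.  A minimal sketch in Python follows; the prior, power, and significance threshold are illustrative assumptions of mine, not estimates from his paper.

```python
# Back-of-the-envelope version of the Ioannidis (2005) argument.
# All three numbers below are illustrative assumptions, not his estimates.
alpha = 0.05  # significance threshold: false-positive rate when an effect is absent
power = 0.80  # probability of a significant result when an effect is real
prior = 0.10  # assumed fraction of tested hypotheses that are actually true

true_pos = prior * power          # real effects that come out statistically significant
false_pos = (1 - prior) * alpha   # null effects that come out significant anyway

# Of all significant (and hence publishable) results, what fraction are real?
ppv = true_pos / (true_pos + false_pos)
print(f"Fraction of significant results that are true: {ppv:.2f}")  # 0.64
```

Under those assumptions, roughly a third of statistically significant results are false before publication bias makes anything worse, and the fraction climbs quickly as the prior shrinks.  That’s the sense in which “most published research findings” might be false; it isn’t a claim that any particular published result is.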