Photo: Mushroom arrays on the forest floor in a “play” experiment (S. Heard).
Much of science is a craft: doing it well involves the application of practiced skills, which can be honed (if never completely mastered) by anyone with time and experience. Running an experiment, for example, draws on powerful experimental design, meticulous repetition and recordkeeping, appropriate statistical analysis, and clear writing to report the results – all things we can become objectively better at with practice.
But there’s creativity in science too, and it lies in the source of our ideas. This part of science is more mysterious. If running an experiment is craft, coming up with the idea for one is art. If testing a hypothesis is craft, formulating an important one is art. If solving a model is craft, specifying one that captures interesting biology is art. Where do our ideas come from? I suspect most of us haven’t a clue. If someone asks me how I run an experiment, I can tell them; but if someone asks how I think of an experiment to run, I’m usually stumped*.
We may not know how our creativity works, but we seem pretty sure that it should precede our craft. That is, we tell students that it’s pointless at best, and quite likely a waste of time and resources, to collect data without knowing what novel question they will answer, or to make observations that aren’t motivated by an idea and a hypothesis**. I tell myself that I believe this.
And yet… might there be room in science not just for creativity, but for play? What I mean by “play” is an experiment (or an observation, or a model) that one conducts without an identified purpose in mind – without knowing (yet) why it matters, or what hypothesis it’s testing. You’d never get a grant to support such an experiment, and you’d never admit to your supervisor, your colleagues, or your students that you were doing it. I allowed myself play once, though, and it worked out pretty well.
I ran my play experiment when I was still a PhD student. I had become (somewhat vaguely) interested in flies that breed in decaying mushrooms***. These tend to be pretty diverse, despite little obvious resource partitioning and intense local competition in some mushrooms (while others go unexploited). One possibility is that competitors can coexist in the system via some kind of spatial mechanism. In particular, larval aggregation (large numbers of larvae on some mushrooms, and very few on others) could, theoretically at least, allow competitors to coexist without resource partitioning.
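The aggregation mechanism here can be illustrated with a toy sketch (my own, not the model from the papers discussed later): draw larval counts per mushroom from a negative binomial distribution with the same mean but different levels of clumping. Strong aggregation piles larvae onto a few mushrooms and leaves many unexploited – those empty mushrooms are potential refuges for an inferior competitor. The parameter values and function names below are illustrative assumptions, not anything from the original study.

```python
# Toy illustration of larval aggregation (not the study's actual model):
# counts per mushroom drawn from a negative binomial with identical means.
# Small k means strong aggregation (clumped); large k approaches an even spread.
import numpy as np

def empty_fraction(mean_larvae, k, n_mushrooms=100_000, seed=0):
    """Fraction of mushrooms left unexploited at aggregation level k."""
    rng = np.random.default_rng(seed)
    # numpy's parameterization: mean = k * (1 - p) / p, so solve for p.
    p = k / (k + mean_larvae)
    counts = rng.negative_binomial(k, p, size=n_mushrooms)
    return float(np.mean(counts == 0))

same_mean = 5.0  # identical average number of larvae per mushroom
strong = empty_fraction(same_mean, k=0.3)   # strongly aggregated
weak = empty_fraction(same_mean, k=10.0)    # weakly aggregated
print(f"empty mushrooms, strong aggregation: {strong:.2f}")
print(f"empty mushrooms, weak aggregation:   {weak:.2f}")
```

With the same average density, the strongly aggregated case leaves a large fraction of mushrooms empty while the weakly aggregated case leaves almost none – which is the intuition behind aggregation-mediated coexistence.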
I thought the mushroom-fly system was interesting, and I started tinkering – in between bits of my actual PhD research. I bought some ordinary white mushrooms at the grocery store, left a few out in the forest, and found that Drosophila females would locate and lay eggs on them. Then, for some reason, I decided to scale this up. I put out arrays of 30 mushrooms, with three different spacings: 5 cm between neighbouring mushrooms (photo, right), 30 cm (photo, left), or 150 cm. After a few days, I brought them all back to the lab, dissected them, and counted the Drosophila larvae in each one.
Why did I run an experiment with a spacing treatment? I didn’t know; I just wanted to see what would happen. That sounds lame even to me, but once I had the data in hand, something interesting did happen. In my closely spaced arrays, every mushroom had a few larvae (little aggregation); but in my distantly spaced ones, many mushrooms were unexploited and a few were teeming with larvae (strong aggregation). I hadn’t expected such an effect (or anything else, really), but it made me very excited – because it suggested the possibility that competitors might coexist more easily when resources were scarce (my distant arrays) than when they were abundant (my close arrays). If you aren’t an ecologist, trust me – this is just as counterintuitive as it sounds!
After repeating the experiment in two more years (with similar results), I published it here (it’s paywalled, but I’ll send you a PDF if you like). Even better, this counterintuitive notion of competition relaxing when resources are scarce was just begging for theoretical exploration, which I did (with my first grad student!) here, here, and here. Collectively, those papers have been cited 110 times – not rock-star numbers, but not bad for something that started in play. I’m especially pleased with the last one, in which we extended the aggregation idea to consider ways herbivore-plant systems might be stabilized. I think that’s an important idea (even if the paper – my “most undercited paper” – hasn’t yet gotten the attention I think it deserves).
I’m not going to spend the rest of my career running experiments just for play, and I don’t think you should either. But my experience suggests that a bit of scientific play, once in a while, might lead us to some unexpected and otherwise undiscovered insight. My play, at least, was worthwhile.
I can’t be the only person who’s ever run an experiment (or made an observation, or tinkered with a model) without knowing why. Is anyone brave enough to tell their story in the Replies?
© Stephen Heard (email@example.com) September 14, 2015
- On my most undercited paper
- That paper was hugely improved by peer review
- Are side projects self-indulgent?
*Well, not completely. I can say that a remarkably large fraction of my new ideas come out of the shower. For whatever reason, I think well and creatively there, and a SCUBA notebook is nearly as important as the soap.
**Although see this excellent post by Manu Saunders on the value of happenstance observation in ecology. While I agree entirely with her, I still tell students that they need to start with an idea, a question, and a hypothesis. Happenstance data are a bonus, not a cornerstone strategy for research.
***Ecologists are a peculiar breed. There, I saved you saying it.
Great post, and I’m all in favour of this approach to science. Worth noting that the discovery of graphene was apparently made under similar circumstances – messing around in a lab during a regular Friday “play day” – and won its discoverers a Nobel prize.
Several of my papers resulted from similar “playing”, which I’ve sometimes referred to as “poke-it-with-a-stick-and-see-what-happens” ecology. The one I’ll share with you is the time I tried to germinate some half-eaten seeds of the plant I studied for my PhD, Lotus corniculatus. These had been partially consumed by a weevil seed predator and in many cases had more than one third of the seed mass missing. To alleviate the tedium of counting and weighing hundreds of seeds, I decided to sow some of the damaged ones on wet filter paper just to see what would happen.
To my surprise, the damaged seeds germinated faster than the undamaged seeds, and after about 100 days the damaged seedlings were the same size as the undamaged seedlings. I’d love to see this tested in a field experiment but have never got round to it. Interestingly, I think this has been under-cited too. Here’s the reference and a link to the PDF:
Ollerton, J. & Lack, A.J. (1996) Partial predispersal seed predation in Lotus corniculatus L. (Fabaceae). Seed Science Research 6: 65-69
Click to access OllertonPredispersal.pdf