Many scientists (most?) have side projects; but when we talk about them, we often minimize them in an offhand way – as if we’re just slightly embarrassed to have taken them on. It’s considered somehow virtuous to focus with laserlike intensity on your core research, and a little bit sinful to let yourself be distracted by unrelated side projects.
If pursuing side projects isn’t virtuous, it must be because they waste effort that might otherwise go to your core research. And if they’re “wasting” effort, that suggests that time spent on side projects has a lower return than time spent on core research. Pursuing side projects, then, is self-indulgent: something you do even though you know your lifetime contribution to Science would be higher if you could somehow resist the temptation. I think this belief is pretty widespread (my experience at tenure review suggests so); but is it accurate?
I wondered about that. I’ve had a lot of side projects through my career (to go along with a lot of field-hopping), and I’ve had some oddly conflicted feelings about them. On the one hand, I’ve felt defensive, thinking of side projects as treats I give myself, even though they aren’t as important as my core research. On the other hand, I’ve seen several side projects have substantial impact, and have sometimes even wondered (a bit sadly) if my side projects have had more impact than my core research. These rather contradictory thoughts cry out for data, and so I finally crunched some.
To get a rough measure of impact, I took Google Scholar citation data* for all my publications, and regressed citation counts against years post-publication (the same data and analysis from posts identifying my most overcited and undercited papers). Residuals from this regression indicate the relative impact of my papers, corrected for their age. The result? My side projects have actually had slightly more impact than my core research (graph above), although the difference isn’t significant**.
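For readers who'd like to see the mechanics, here's a minimal sketch (in Python, with entirely made-up numbers and labels) of the kind of analysis described above: regress citation counts on years since publication, treat the residuals as age-corrected impact, and compare residuals between "core" and "side" papers. The variable names and data are hypothetical; this isn't my actual dataset or script.

```python
# Sketch of the age-corrected citation analysis described in the post.
# All numbers and group labels below are invented for illustration.
import numpy as np
from scipy.stats import f_oneway

# Hypothetical publication records: years since publication, citation counts,
# and whether each paper came from "core" research or a "side" project.
years = np.array([2, 5, 8, 12, 15, 20, 3, 7, 10, 18])
cites = np.array([4, 22, 35, 60, 48, 90, 15, 40, 55, 120])
group = np.array(["core"] * 6 + ["side"] * 4)

# Ordinary least-squares fit: citations ~ years post-publication.
slope, intercept = np.polyfit(years, cites, deg=1)
predicted = intercept + slope * years

# Residuals = age-corrected impact (positive means more cited than expected
# for a paper of that age).
residuals = cites - predicted

for g in ("core", "side"):
    print(f"{g}: mean residual = {residuals[group == g].mean():.1f}")

# One-way ANOVA on the residuals, analogous to the comparison in footnote **.
F, p = f_oneway(residuals[group == "core"], residuals[group == "side"])
print(f"ANOVA: F = {F:.2f}, P = {p:.2f}")
```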
I could have stopped there, content with the knowledge that my side projects haven’t been self-indulgent wastes of time. But I wondered why. Some ideas:
First, in my core research I sometimes publish less-than-riveting results because I need them as part of a larger scheme. I think of these as “cinder-block” papers; they’re needed for the structure I’m building from paper to paper, but they aren’t lovely by themselves. I don’t expect these to be cited much (although occasionally one surprises me). In side projects, it’s easier to cherry-pick and there’s less need for cinder blocks. However, I don’t think this is the whole story, because dropping all papers with citations < 10 doesn’t change the pattern much.
Second, I might just be lucky. One of my side projects (on what phylogenetic tree shape reveals about ecological controls on diversification rates) hit something of a nerve. It began with a paper I published as a grad student that happened – entirely by chance – to come out just as interest in phylogenetic tree shape was heating up. Later, I moved to UBC as a postdoc and – again entirely by chance – shared an office with Arne Mooers, who was also interested in tree shape, and who’s a superb researcher. Because of that collaboration, I published more tree-shape papers, and they were much better papers than I could have written on my own. I think luck is a big part of the explanation for my side-project impact.
Finally, and distressingly, it might be that I’m really bad at picking core research areas. I’ve argued elsewhere that scientists (especially poorly-funded ones like me) ought to be pretty good at prioritizing their research, to get the most impact bang for their limited buck. But if my core research has no more impact than my side projects, it suggests I’m actually not that good at prioritizing – otherwise, my side projects would be my core. And perhaps I should have made tree shape my core research; my career citation impact might have been higher. Fortunately or unfortunately, though, I wasn’t aware that things would turn out this way; and even with perfect foreknowledge, I don’t think I’d have let citation impact entirely dictate my research direction.***
What’s the take-home message here? I’m not completely sure, but these data seem like a useful reminder that it’s hard to forecast the impact of planned research. We know this at a different scale, of course: it’s why we resist foolish efforts by governments to steer research funding entirely toward short-term commercializable work. In the end, my side-project success – or my core-research mediocrity, if you prefer (which I don’t) – may just be a good illustration of the stochastic nature of scientific progress. Which is one reason it’s so much fun to watch.
© Stephen Heard (firstname.lastname@example.org) August 25, 2015
*Yes, I know Google Scholar overcounts. Web of Science undercounts (and pretty badly). Fortunately, I’ve yet to find a comparative pattern that doesn’t hold up across the two databases.
**Red lines are means; ANOVA F(1,51) = 1.46, P = 0.23 (or F(1,49) = 0.04, P = 0.84 omitting the two apparent outliers among side projects). Excluded are publications on which I played a minor role, for instance as a stats-only collaborator.
***Sometimes I do research just because I want to know the answer. Similarly, sometimes I write blog posts just because I want to: my posts on Wonderful Scientific Names don’t get many page views, but I really enjoy writing them, so there will be more.