Last month I posted “Why grant funding should be spread thinly”. In a nutshell, I provided a simple mathematical model that I think supports the argument for an agency’s awarding many smaller grants rather than just a few very large ones. The discussion in the Comments section of that post was lively, no doubt because as scientists we’re heavily invested in the way society supports, or doesn’t support, our work. Our grants give us the tools we need to do the science we’re passionate about, and that passion comes out when we talk about granting policy.
My earlier post left some loose ends. Some of those I left deliberately (because only so many complications would fit in a single post); others involve issues that came up in the comments. Here I’ll tug on a few of those loose ends. If you’ve read the earlier post, this should all make sense to you; but I won’t explain the model from scratch again, so refer back if you’re confused.
First, two things I should have been clearer about.
- I think the model is best used to think about the allocation of funding among applicants within a single field. As a society, we also have to make decisions about the allocation of funding between fields: how much to spend on the space program, how much on psychology, and how much on pure math. I think these allocations are to a substantial degree a political and cultural issue, not a scientific one. It may well be true that, for the cost of one space shuttle launch, we could have funded a discovery program to name every species on Earth – but whether we should have done so is not a question my graphical model helps much with.
- I probably should have titled the post “Why grant funding should be spread equally”, not “Why grant funding should be spread thinly”. I thought it would be obvious that besides allocating available funding, society can also decide to increase (or decrease) total funding. Nothing about my argument requires that every grant be small; the issue is only, given the available total funding, whether some grants should be smaller than others. Of course, even though many large grants might be best, practically speaking, that may not be a choice we have.
Now, a few elaborations on my graphical model.
- The fact that each scientist wants to maximize their productivity generates the downward slopes in the graph, and thus the insight that too much inequality reduces overall productivity. But what if scientists aren’t good at this maximization? There are a couple of possibilities.
– All scientists might be equally poor maximizers. If so, the curves would be nearly flat. This would work against my conclusion, making x larger and favouring more inequality in funding.
– Better scientists might be better maximizers. Perhaps scientist A is not just better at doing science, but also better at predicting which projects have the best payoff. If so, then scientist A’s curve will be steeper (higher Y-intercept, lower asymptote), making x larger and again favouring more inequality in funding.
– Conversely, lesser scientists might be better maximizers. Scientist A, with a large grant, may not need to worry quite so much about picking projects with the highest payoff. Scientist B, given a smaller grant and forced to scrimp and save to pursue even one or two projects, may be ruthless in pursuing only the most promising. If so, Scientist B’s curve will be steeper, and x will be smaller. This possibility, then, strengthens the argument for evenly-spread funding.
These possibilities push the optimum strategy in opposite directions. Which will be strongest? This is an empirical question, but my own experience* suggests a lot of weight on the last possibility: scientists who can only do one thing tend to pick something really, really good.
- Chris Lane, among others, argues that there is a non-zero level of funding below which he can’t do any science at all. That is, the marginal value of a grant dollar is zero until some threshold grant size is reached, and the curves I illustrated don’t start until that threshold (image below). This may be true. What effect does it have in the model? Well, productivity thresholds likely vary between fields, but this doesn’t matter if the model is (as I suggest) used for allocation within fields. Within fields, if the threshold is the same for everyone, the only difference it makes is that there’s a minimum grant size below which it isn’t worth making an award (NSERC has this already). Then, rather than spreading the funding infinitely thinly, you spread it thinly enough that every funded scientist has a grant just past the productivity threshold. Now, if different scientists within a field have different thresholds, things get more complicated, but the likely effect is to favour funding scientists with low thresholds over those with higher ones**. I think this is independent of the issue of concentrating or spreading funding, but I’m not sure.
- The model assumes scientists’ productivity curves are measured at a point in time. My 8-year-old son read my post and pointed out that scientists learn; and if you don’t fund Scientist B, then he or she won’t be able to practice the craft of research and improve***. That is, we might want to fund Scientist B in part because doing so will help them raise their curve, becoming in the future the more productive Scientist A. (There is a substantial economics literature on fledgling-industry protection, which is analogous, although my son probably hasn’t read it.) To oversimplify, I’d argue that scientist B is more likely to be an early-career researcher, and funding them is partly an investment in future grant cycles. NSERC may have this backward, since first-applicant success in NSERC Discovery competitions is 20% lower than established-researcher success. However, the effect of learning on optimal inequality depends on how much better established researchers are on average, and on how quickly early-career researchers improve with and without funding – and these are empirical questions.
- The model assumes there are no synergistic effects to increasing the number of funded scientists. I think, in fact, there are. Having more scientists working in the field recruits more viewpoints, which should lead to more collaboration and more coauthorship. This should lead to more insight on difficult problems and to more creative ways of doing science. In the model, such an effect could be accommodated by raising the 2nd-best scientist’s curve by a small constant, raising the 3rd-best scientist’s curve by a slightly larger constant, and so on. This reduces the difference between curves, reduces x, and favours (again) spreading funding thinly.
- The model is static, in the sense that it assumes that the shape of the scientists’ curves is independent of the way funding is awarded. It’s possible that the model should really be dynamic, with policy decision by the funding agency based on the shape of the curves feeding back to change the shape of the curves. In particular, this effect might be important for the pursuit of “risky” projects (thanks to Dennis Eckmeier for raising this possibility). If funding is extremely unequal and based strongly on metrics of past performance, scientists (regardless of funding level) are likely to avoid “risky” science and prioritize studies that are sure to yield publications – even if those aren’t high in interest or importance. On the other hand, once a funding agency commits to less unequal funding, the cost of doing risky science drops (because a project that doesn’t pan out will have less impact on future funding). This might reduce measured productivity in the short term (because some risky projects fail), but should greatly increase it in the long term (if you believe that many major advances come from risky science). Note that if this effect manifests itself as all the curves moving down (in the short term) and up (in the long term), it might not change the actual value of x. However, it would arguably favour the agency acting as if x were smaller, in order to incentivize risk-taking.
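If you’d like to see the equal-vs-concentrated tradeoff, and the threshold effect, made concrete, here’s a toy numerical sketch. It’s mine, not from the original post: I assume a saturating productivity curve of the form p(g) = peak·(1 − e^(−g/scale)), and the peaks, scale, threshold, and budget values are all invented for illustration.

```python
import math

# Toy sketch of the graphical model. The original post doesn't specify a
# functional form; here I assume productivity saturates with grant size,
#     p(g) = peak * (1 - exp(-g / SCALE)),
# with "better" scientists having higher peaks. All numbers are invented.

SCALE = 100.0

def productivity(grant, peak):
    """Diminishing-returns productivity for a grant of the given size."""
    return peak * (1.0 - math.exp(-grant / SCALE))

peaks = [10.0, 9.0, 8.0, 7.0, 6.0]  # five hypothetical scientists, best first
budget = 500.0

def total_output(grants):
    return sum(productivity(g, p) for g, p in zip(grants, peaks))

equal = [budget / len(peaks)] * len(peaks)    # spread evenly: 100 each
concentrated = [budget, 0.0, 0.0, 0.0, 0.0]   # one big grant to the best

print(f"equal:        {total_output(equal):.1f}")         # ~25.3
print(f"concentrated: {total_output(concentrated):.1f}")  # ~9.9

# Now add a productivity threshold: below some minimum grant size,
# no science happens at all.
THRESHOLD = 50.0

def productivity_thr(grant, peak):
    """As above, but zero at or below a minimum viable grant size."""
    if grant <= THRESHOLD:
        return 0.0
    return peak * (1.0 - math.exp(-(grant - THRESHOLD) / SCALE))

peaks10 = [float(10 - i) for i in range(10)]  # ten scientists, peaks 10 down to 1
too_thin = [budget / 10] * 10                 # 50 each: everyone at the threshold
past_thr = [budget / 5] * 5 + [0.0] * 5       # 100 each to the top five

thin_total = sum(productivity_thr(g, p) for g, p in zip(too_thin, peaks10))
thr_total = sum(productivity_thr(g, p) for g, p in zip(past_thr, peaks10))
print(f"too thin:       {thin_total:.1f}")  # 0.0
print(f"past threshold: {thr_total:.1f}")   # ~15.7
```

With curves this concave, equal grants handily beat one big grant; and once a threshold is added, spreading “infinitely thinly” wastes the whole budget, while spreading just past the threshold does not. How strongly these conclusions hold depends entirely on the assumed curve shapes, which is exactly the empirical question the post raises.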
All this makes it obvious (if it wasn’t already) that my graphical model is a simplification. This shouldn’t surprise you – most if not all of the models we use to do science are simplifications, and they still have value, so the same should be true about the models we use to think about doing science. My model’s utility isn’t that it gets every detail right. It’s that it helps me think more clearly about the choices we’re making in allocating funding, about the assumptions we’re making as we do so, and about which choices might be optimal (and how we could know). We won’t all agree on the policy we’d like our funding agencies to follow; but discussing policy without the careful thought that comes from modeling is a recipe for plenty of heat but not much light.
© Stephen Heard (firstname.lastname@example.org) June 4, 2015
*Ah, anecdata. I hope someone will point out in the Comments that we shouldn’t pay much attention to what “my experience suggests”; rather, we should try to measure the curves empirically.
**I predict that someone will get in a lather over this statement in the Comments, almost certainly without having read this footnote, and likely without offering any analysis. (Mind you, if you have reasoned analysis and are still in a lather, by all means bring it on!)
***I think my son is brilliant. Of course, every parent thinks their child is brilliant. But this insight makes me think I might actually be right.