Last month I posted “Why grant funding should be spread thinly”. In a nutshell, I provided a simple mathematical model that I think supports the argument for an agency’s awarding many smaller grants rather than just a few very large ones. The discussion in the Comments section of that post was lively, no doubt because as scientists we’re heavily invested in the way society supports, or doesn’t support, our work. Our grants give us the tools we need to do the science we’re passionate about, and that passion comes out when we talk about granting policy.
My earlier post left some loose ends. Some of those I left deliberately (because only so many complications would fit in a single post); others involve issues that came up in the comments. Here I’ll tug on a few of those loose ends. If you’ve read the earlier post, this should all make sense to you; but I won’t explain the model from scratch again, so refer back if you’re confused.
First, two things I should have been clearer about.
- I think the model is best used to think about the allocation of funding among applicants within a single field. As a society, we also have to make decisions about allocation of funding between fields: how much to spend on the space program, how much on psychology, and how much on pure math. I think these allocations are to a substantial degree a political and cultural issue, not a scientific one. It may well be true that, for the cost of one space shuttle launch, we could have funded a discovery program to name every species on Earth – but whether we should have done so is not a question my graphical model helps much with.
- I probably should have titled the post “Why grant funding should be spread equally”, not “Why grant funding should be spread thinly”. I thought it would be obvious that besides allocating available funding, society can also decide to increase (or decrease) total funding. Nothing about my argument requires that every grant be small; the issue is only whether, given the total funding available, some grants should be smaller than others. Of course, even if many large grants might be best, that may not, practically speaking, be a choice we have.
Now, a few elaborations on my graphical model.
- The fact that each scientist wants to maximize their productivity generates the downward slopes in the graph, and thus the insight that too much inequality reduces overall productivity. But what if scientists aren’t good at this maximization? There are a couple of possibilities.
– All scientists might be equally poor maximizers. If so, the curves would be nearly flat. This would work against my conclusion, making x larger and favouring more inequality in funding.
– Better scientists might be better maximizers. Perhaps scientist A is not just better at doing science, but also better at predicting which projects have the best payoff. If so, then scientist A’s curve will be steeper (higher Y-intercept, lower asymptote), making x larger and again favouring more inequality in funding.
– Conversely, lesser scientists might be better maximizers. Scientist A, with a large grant, may not need to worry quite so much about picking projects with the highest payoff. Scientist B, given a smaller grant and forced to scrimp and save to pursue even one or two projects, may be ruthless in pursuing only the most promising. If so, Scientist B’s curve will be steeper, and x will be smaller. This possibility, then, strengthens the argument for evenly-spread funding.
These possibilities push the optimum strategy in opposite directions. Which will be strongest? This is an empirical question, but my own experience* suggests a lot of weight on the last possibility: scientists who can only do one thing tend to pick something really, really good.
- Chris Lane, among others, argues that there is a non-zero level of funding below which he can’t do any science at all. That is, the marginal value of the first dollar is zero, and the curves I illustrated don’t start until some threshold grant size is reached (image below). This may be true. What effect does it have in the model? Well, productivity thresholds likely vary between fields, but this doesn’t matter if the model is (as I suggest) being used for allocation within fields. Within fields, if the threshold is the same for everyone, the only difference it makes is that there’s a minimum grant size below which it isn’t worth making an award (NSERC has this already). Then, rather than spreading the funding infinitely thinly, you spread it thinly enough that every (funded) scientist has a grant just past the productivity threshold. Now, if different scientists within a field have different thresholds, things get complicated, but it’s likely to favour funding of scientists with low thresholds over those with higher ones**. I think this is independent of the issue of concentrating or spreading funding, but I’m not sure.
- The model assumes scientists’ productivity curves are measured at a point in time. My 8-year-old son read my post and pointed out that scientists learn; and if you don’t fund Scientist B, then he or she won’t be able to practice the craft of research and improve***. That is, we might want to fund Scientist B in part because doing so will help them raise their curve, becoming in the future the more productive Scientist A. (There is a substantial economics literature on fledgling-industry protection, which is analogous, although my son probably hasn’t read it.) To oversimplify, I’d argue that scientist B is more likely to be an early-career researcher, and funding them is partly an investment in future grant cycles. NSERC may have this backward, since first-applicant success in NSERC Discovery competitions is 20% lower than established-researcher success. However, the effect of learning on optimal inequality depends on how much better established researchers are on average, and on how quickly early-career researchers improve with and without funding – and these are empirical questions.
- The model assumes there are no synergistic effects to increasing the number of funded scientists. I think, in fact, there are. Having more scientists working in the field recruits more viewpoints, which should lead to more collaboration and more coauthorship. This should lead to more insight on difficult problems and to more creative ways of doing science. In the model, such an effect could be accommodated by raising the 2nd-best scientist’s curve by a small constant, raising the 3rd-best scientist’s curve by a slightly larger constant, and so on. This reduces the difference between curves, reduces x, and favours (again) spreading funding thinly.
- The model is static, in the sense that it assumes that the shape of the scientists’ curves is independent of the way funding is awarded. It’s possible that the model should really be dynamic, with the funding agency’s policy decisions (themselves based on the shape of the curves) feeding back to change the shape of the curves. In particular, this effect might be important for the pursuit of “risky” projects (thanks to Dennis Eckmeier for raising this possibility). If funding is extremely unequal and based strongly on metrics of past performance, scientists (regardless of funding level) are likely to avoid “risky” science and prioritize studies that are sure to yield publications – even if those aren’t high in interest or importance. On the other hand, once a funding agency commits to less unequal funding, the cost of doing risky science drops (because a project that doesn’t pan out will have less impact on future funding). This might reduce measured productivity in the short term (because some risky projects fail), but should greatly increase it in the long term (if you believe that many major advances come from risky science). Note that if this effect manifests itself as all the curves moving down (in the short term) and up (in the long term), it might not change the actual value of x. However, it would arguably favour the agency acting as if x were smaller, in order to incentivize risk-taking.
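To make the elaborations above concrete, here’s a toy numerical version of the graphical model. This is my own sketch, and every curve shape and number in it is invented for illustration – it is not the model from the original post. It assumes each scientist’s output saturates with grant size (equivalently, marginal productivity declines, as in the post’s graph), optionally with a minimum viable grant below which output is zero, and it grid-searches allocations of a fixed budget:

```python
import math

def output(grant, peak=10.0, scale=30.0, threshold=0.0):
    """Toy productivity curve: zero below a minimum viable grant size,
    then diminishing returns (declining marginal value) above it."""
    if grant <= threshold:
        return 0.0
    return peak * (1.0 - math.exp(-(grant - threshold) / scale))

def best_split(total, peak_a, peak_b, steps=1000):
    """Grid-search the share of `total` given to scientist A that
    maximizes the two scientists' joint output."""
    best = max(range(steps + 1),
               key=lambda i: output(total * i / steps, peak_a)
                             + output(total * (1 - i / steps), peak_b))
    return best / steps

# Identical scientists: an even split is optimal.
print(best_split(100.0, peak_a=10.0, peak_b=10.0))   # 0.5

# Scientist A more productive at every grant size: the optimum tilts
# toward A – but A still shouldn't get everything.
print(best_split(100.0, peak_a=15.0, peak_b=10.0))   # roughly 0.56

# With a funding threshold, spreading a fixed budget over more scientists
# helps only until each grant falls to the threshold, where output crashes.
budget = 300.0
for n in (2, 5, 10, 20):
    per = budget / n
    print(f"{n:2d} grants of {per:5.1f}: total output "
          f"{n * output(per, threshold=15.0):6.2f}")
```

In this toy, ten grants of 30 beat five grants of 60, but twenty grants of 15 produce nothing – which matches the point above: spread funding thinly, but only down to just past the productivity threshold.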
All this makes it obvious (if it wasn’t already) that my graphical model is a simplification. This shouldn’t surprise you – most if not all of the models we use to do science are simplifications, and they still have value, so the same should be true about the models we use to think about doing science. My model’s utility isn’t that it gets every detail right. It’s that it helps me think more clearly about the choices we’re making in allocating funding, about the assumptions we’re making as we do so, and about which choices might be optimal (and how we could know). We won’t all agree on the policy we’d like our funding agencies to follow; but discussing policy without the careful thought that comes from modeling is a recipe for plenty of heat but not much light.
© Stephen Heard (email@example.com) June 4, 2015
*Ah, anecdata. I hope someone will point out in the Comments that we shouldn’t pay much attention to what “my experience suggests”; rather, we should try to measure the curves empirically.
**I predict that someone will get in a lather over this statement in the Comments, almost certainly without having read this footnote, and likely without offering any analysis. (Mind you, if you have reasoned analysis and are still in a lather, by all means bring it on!)
***I think my son is brilliant. Of course, every parent thinks their child is brilliant. But this insight makes me think I might actually be right.
I certainly agree that we need to open the door wider to get people in the system early. The first grant should be the easiest to get, because we don’t know what a person can do until they have the money to do it. In the US we give large start-up packages for this exact reason, but I don’t know that that’s the most efficient system.
I see three problems with your model, and I’ll remain unlathered.
1) The inflexibility of the system is very unattractive. As your son pointed out, things change over time. What a scientist does in their first 5-10 years may lead them in very different directions. What if an ecologist wants to incorporate molecular work into their next grant? Can they afford to do it at the Official Ecology Grant Level? This sounds to me like standardized testing in middle school that forces one down a designated career path. Rather than allowing people to take on risk, as you suggest, you’re forcing people to only conceive of science that can be done at a certain financial threshold. Chances are, once that threshold is set, it’ll remain the same for far longer than anyone intends. The result will be that the money in 2025 will have 70% of the purchasing power it had in 2015 (See: NIH R01 modular budget limit).
2) What about labs that do more than one thing? My lab, for instance, has both a biodiversity side (low cost) and a genomics side (high cost). I did this specifically to open up multiple avenues for funding, which has worked out. But in your system I would have to drop one of the two, or do them both sub-optimally. Is that good for science? Am I spread too thin? Which science doesn’t need to be done?
3) The third and final is a carry-over from the last post: Where do all your PhDs go? This is one of the biggest issues, and no one has even made an attempt to answer it. Since you won’t have money in your budget for a tech or postdoc, what’s the plan for all those students? Is an international postdoc now going to be de rigueur? Are you only going to hire internationally, or are we going back to handing fresh PhDs a lab?
My main point in the last post and now is that the long-term consequences of going to a spread-it-thin model are not reflected in your graph, because you will lose valuable personnel and purchasing power over time, while pigeonholing your scientists into thinking only about what they can afford to do on a relative shoestring. Your graph only shows what happens in year one, not year 10.
Chris – thanks for the comments. I was hoping you’d weigh in again.
I think your last point is indeed an important one. I would actually prefer a different approach altogether: funding of postdocs (and grad students) via fellowships rather than research grants. I agree with you that I won’t ever be able to pay a postdoc from my NSERC Discovery grant. I would rather see grad students and postdocs as personnel whose training is invested in by science funders, rather than as employees who are funded from grants to individual PIs. (We currently have a hybrid model with some of each, of course).
I’ve never been a fan of letting a funding agency determine who I hire. Just because someone didn’t have straight As or incredible GRE scores doesn’t mean they won’t be successful at research. Those grants also heavily prioritize by school reputation (both at the grad and postdoc level), at least based on the numbers at NSF. And we also need money to fund all these people. We can’t spread funding to more PIs AND massively expand NSERC’s grad and postdoc programs (which were underfunded even when I was there), can we?
But we still have only ~15% of PhDs getting faculty positions. One of the strengths I found in the Canadian system while I was there was the support for non-PI scientists. It was a legit career path. Your system would eliminate that.
My apologies. I have just been researching a related topic, and came across this blog. I am not sure if you closed the topic. My query is slightly different: has anyone found work on how to optimize how large a grant should be to “maximize” the total output? Specifically, I am tangentially involved in the process of awarding graduate students in earth sciences small student grants; typically $1000-$2000 per student. If a society has $10,000 to award, is there some way of guessing / estimating if the most bang for the buck is with 20 $500 grants, vs 10 $1000, or 5 $2000 grants, etc? Obviously the total yield of output (‘amount’ of science done, degrees awarded) is 0 for some silly large number of awards, and small for just 1-2 recipients. And there are issues about what one might measure as impact – but degrees awarded, or time to degree, + abstracts presented or papers published, would be some metrics. Any insights are welcome.
Jim – I’m unaware of formal studies of grants on the scale you’re talking about. However, there have been some analyses of NIH grants – here, for instance http://www.nature.com/news/2010/101116/full/468356a.html via Zen Faulkes, and in a couple of studies linked to in the first post of this series. They tend to support my logic that more, smaller grants are better, but you are right (this is Chris Lane’s objection treated above) that there is a too-small size for which output likely crashes.
Thanks! I agree that there is some cliff – where there would be a fall off of impact – especially on a cost-benefit perspective. But how can we estimate where that cliff is? Most students wouldn’t bother filling out a 1 page form for $50 if their total project costs are $10,000, say. I am trying to find someone to help me at least provide some estimates. I am wary of super quantified optimisation models; I am trying to figure out what a rough estimate of impact vs # grants would be. A wild card to keep in the equation is that the larger number of grants allows for the odd genius idea to get through, or a transformative experience to occur for a student who then goes on to do great stuff, and uses the money to move up a big step from some modest background. Very hard to predict, but is of course a desired outcome. ( ps – just bought your book. )
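One way to put rough numbers on Jim’s question is to extend the toy model with a fixed per-award “hassle” cost (the application form, administration, and overhead that make a $50 grant not worth anyone’s time). This sketch is purely illustrative – the curve and the hassle figure are invented, not estimated from any data:

```python
import math

def student_output(grant, peak=1.0, scale=800.0, hassle=100.0):
    """Toy productivity of one student award: zero until the award covers
    a fixed application/administration cost, then diminishing returns."""
    if grant <= hassle:
        return 0.0
    return peak * (1.0 - math.exp(-(grant - hassle) / scale))

pot = 10_000  # Jim's example: a society with $10,000 to award
for n in (5, 10, 20, 50, 200):
    per = pot / n
    print(f"{n:3d} awards of ${per:7.0f}: "
          f"total output {n * student_output(per):6.2f}")
```

With these made-up parameters the total output rises from 5 awards of $2000 up to a peak around 20 awards of $500, falls off at 50 awards of $200, and hits zero at 200 awards of $50 – exactly the “cliff” shape Jim describes. The exercise wouldn’t tell you where the real cliff is, but it shows the two numbers you’d need to estimate empirically: the fixed cost per award and how quickly a student’s output saturates with money.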
My preference leans to many-small, but I agree, one can run that out to a ridiculous level. It’s distressing that there are so few good data on this – we spend billions on research (well, maybe not your program!), we should probably spend millions on studying how to do it efficiently!