How should a granting agency distribute the funds at its disposal? Different agencies have different answers to that question. The NSF (USA), for example, has traditionally awarded operating grants to rather few applicants, with each successful applicant getting quite a lot of money. NSERC (Canada), on the other hand, has traditionally awarded operating grants to most applicants, but with each successful applicant getting less money (a recent snapshot and some discussion here). NSERC has been moving slowly but steadily in the direction of the NSF model, with lower funding percentages, larger grants for top-ranked applications, and new categories of super-grants intended to recognize “excellence” (e.g., Vanier graduate scholarships, Banting postdoctoral fellowships, Canada Excellence Research Chairs program). Scientists have widely decried NSERC’s shift (for example, here) and NSF’s practice (for example, here and here) – but are they right? How should an agency like NSERC optimally distribute its funds?
I think it’s helpful to approach this with a graphical model. I’m going to base mine on just two simple assumptions, each of which I think is hard to question:
1. Each individual scientist wants to maximize the amount of science produced from their grant.
2. The granting agency wants to maximize the total amount of science done for its budget.
Now, let’s allow that the granting agency is probably right that some scientists are “better” than others – they will produce more science for a given grant. The question of optimal distribution, then, boils down to whether the agency should try to steer its money to the better scientists, while denying funding (or giving much smaller grants) to the less productive ones – or whether it should just give a bit of money to everyone. NSF works very hard at doing the former, and NSERC increasingly follows along – but the graphical model suggests that this is badly misguided. Here it is:
What I’ve done here is plot the marginal return on grant funding (that is, amount of science produced per additional dollar) vs. grant size – for a “better” scientist (A, in red) and a “lesser” one (B, in blue*). I’ll get to the two asterisks in a bit.
The critical feature of this graph is the shape of each curve, which follows directly from assumption #1. As a scientist, I have a bunch of research projects I could undertake. If you give me enough money for just one of them, I’ll do the most promising one; if you give me a bigger grant, I’ll add in my next most promising project, and so on. As long as some possible expenditures produce more science than others, and as long as I spend to maximize my output, my curve must have negative slope; but it will asymptote at or above the X axis (and so it must be concave up). Everything else follows from this shape. By the way, I’m ducking the question of precisely how we define an amount of “science”; we don’t all agree on that, which will matter later.
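To make that shape concrete, here’s a minimal sketch in Python (the project list, costs, and “science” outputs are all invented for illustration) showing how funding projects in order of science-per-dollar automatically produces a declining marginal-return curve:

```python
# Hypothetical projects one scientist could undertake; the costs and "science"
# outputs are invented purely for illustration.
projects = [
    {"cost": 20_000, "science": 10.0},
    {"cost": 30_000, "science": 9.0},
    {"cost": 25_000, "science": 5.0},
    {"cost": 40_000, "science": 4.0},
]

# Assumption #1: a scientist maximizing output funds projects in order of
# science per dollar, so the marginal return can only fall as the grant grows.
projects.sort(key=lambda p: p["science"] / p["cost"], reverse=True)

grant = 0
for p in projects:
    rate = p["science"] / p["cost"] * 10_000   # science per $10k in this tranche
    print(f"dollars {grant:>7,} to {grant + p['cost']:>7,}: {rate:.1f} science per $10k")
    grant += p["cost"]
```

However many projects you list and whatever numbers you plug in, sorting them this way can only give a curve that falls (or stays flat) as the grant gets bigger.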
I’ve drawn Scientist A as “better” than Scientist B: for any given grant size, Scientist A is more productive. But the crucial insight is this: the agency isn’t actually interested in Scientist A’s productivity, or Scientist B’s, but rather in the total productivity of all the scientists it funds (assumption #2).
Which brings us to the two asterisks. The blue asterisk is the lesser scientist’s productivity with the first grant dollar awarded. The red asterisk is the better scientist’s productivity with the (x+1)st dollar awarded, where x is the threshold grant size at which the better scientist’s marginal return first drops below the lesser scientist’s first-dollar return. It is inevitable that this happens for some value of x, and x is the grant size beyond which the agency is wasting its money by increasing the award to the better scientist, and should instead reserve that money to fund a lesser one.
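To put a toy number on x, here’s a sketch that assumes exponential marginal-return curves shaped like the ones in the figure; the functional form and every parameter are my own invented stand-ins, not estimates of anything real:

```python
import math

# Invented marginal-return curves: declining and concave-up, as in the figure.
def marginal_return_A(g):          # the "better" scientist (red curve)
    return 1.5 * math.exp(-g / 60_000)

def marginal_return_B(g):          # the "lesser" scientist (blue curve)
    return 1.0 * math.exp(-g / 60_000)

blue_asterisk = marginal_return_B(0)   # B's productivity with the first dollar

# x is the smallest grant at which A's next dollar produces less science than
# B's first dollar would.
x = next(g for g in range(0, 500_000, 100)
         if marginal_return_A(g) < blue_asterisk)
print(f"Beyond roughly ${x:,}, the next dollar buys more science from B")
```

With these made-up curves the threshold comes out around $24,000; the point isn’t the number, just that some finite threshold exists as long as the red curve keeps falling.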
Whether x is large or small is an empirical question – that is, how much money should the better scientist get before the lesser one gets any? (The question of whether two grants should be the same size, given that both are awarded, would get an equivalent analysis.)
It seems improbable that x is very large. For x to be large, the variance among scientists in productivity has to be much larger than the (guaranteed) downslope of the marginal-return curve. (In the figure, Scientist A is about half again as productive, when equally resourced, as Scientist B.) I won’t dispute that there’s some variance, but my experience with faculty hiring strongly suggests that there really aren’t a whole lot of useless stiffs getting hired these days. If x is not very large, then we get the most total science by spreading the funding out equally (or nearly so).
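As a quick check on that claim, here’s what the same invented curves from the sketch above imply for a fixed budget split between A and B; total science from a grant is just the area under the marginal-return curve up to that grant size:

```python
import math

S = 60_000   # decay scale of the invented exponential curves used above

def total_science(peak, grant):
    # Area under a marginal-return curve peak * exp(-g / S) from 0 up to the grant.
    return peak * S * (1 - math.exp(-grant / S))

budget = 100_000
splits = {"all to A": (1.0, 0.0), "75/25 to A": (0.75, 0.25), "equal": (0.5, 0.5)}
for name, (share_a, share_b) in splits.items():
    total = total_science(1.5, share_a * budget) + total_science(1.0, share_b * budget)
    print(f"{name:>10}: {total:,.0f} units of science")
```

With these particular made-up numbers, the equal split yields roughly 16% more total science than giving the whole budget to Scientist A, and it comes within a couple of percent of the best possible split.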
Now, so far I’ve imagined that we can predict the future productivity of scientists with perfect accuracy. But of course we can’t, and this substantially strengthens the argument for spreading the funding out. In the model, uncertainty about who’s the better scientist moves the A and B curves closer together (vertically). To see this, imagine what would happen if we classified scientists as “better” or “lesser” at random: the expected productivity would be the same for the two groups. Individual scientists might still be intrinsically better or lesser, of course, but the productivity of those we identified as “better” or “lesser” would be the same**. There is plenty of uncertainty in grant reviewing, of course, as there is with other attempts to measure quality and productivity of scientists – especially their future productivity. In fact, we can’t even agree on how best to define “amount of science produced” (Number of papers? Number of papers weighted by impact factor? H-index? Number of grad students trained?). Such disagreements have the same effect as uncertainty in a measurement we (hypothetically) agree on. In either case, by bringing the curves closer together, uncertainty reduces the size of x and thus reduces the optimum inequality in grant size.
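One way to see the convergence numerically, again with the invented curves from earlier: suppose reviewers mislabel scientists with some probability p. Then the expected curve for the group labeled “better” is a mixture of the two true curves, and the gap between the labeled groups shrinks toward zero as p approaches a coin flip. This is just an illustrative sketch, not a model of any real review process:

```python
import math

# Same invented curves as in the earlier sketches.
def marginal_return_A(g):
    return 1.5 * math.exp(-g / 60_000)

def marginal_return_B(g):
    return 1.0 * math.exp(-g / 60_000)

# With misclassification probability p, the expected marginal return of the
# group reviewers *label* "better" is a mixture of the two true curves.
def labeled_better(g, p):
    return (1 - p) * marginal_return_A(g) + p * marginal_return_B(g)

def labeled_lesser(g, p):
    return (1 - p) * marginal_return_B(g) + p * marginal_return_A(g)

for p in (0.0, 0.25, 0.5):
    gap = labeled_better(0, p) - labeled_lesser(0, p)
    print(f"p = {p:.2f}: first-dollar gap between labeled groups = {gap:.2f}")
# The gap shrinks as p grows, so the threshold x shrinks too: more uncertainty
# means less optimal inequality in grant size.
```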
All this suggests we could probably spend a lot less effort worrying about identifying the very best scientists and showering them with money – and we’d get a better total return. Doing so would come with a bonus: our efforts to distinguish better scientists from lesser ones cost money, and the more certain we want to be in our distinctions, the higher these administrative costs. (It may even cost more to make these distinctions than it would to just give every applicant a grant!). Reducing administrative spending by worrying less about distinguishing quality would let us increase the total funding pool, meaning we could fund more scientists and give bigger average grants – two ways to get more total science. This is, in fact, more or less what NSERC used to do.
So why do funding agencies persist in awarding large grants to a few people? Presumably, they’re convinced that the value of x is so large, and their ability to judge future productivity is so good, as to obviate the arguments I’ve made here. This claim seems very bold, and until I’m shown data to back it up, I think NSERC is headed in the wrong direction (and when it gets there, it will find the NSF waiting for it).
© Stephen Heard (email@example.com) May 12, 2015
UPDATE: Jeremy Fox points me to an old blog post on Dynamic Ecology, where he develops a very similar mathematical model (although from a slightly different perspective). It’s well worth reading his post too, and I’m hoping he’ll say more about this in the Comments.
*Of course scientist B is blue. Wouldn’t you feel blue if you found out you were being used as an example of a lesser scientist?
**The argument here does not depend on grant reviewing being a complete crapshoot, and I’m sure it isn’t. But any amount of uncertainty moves the curves together, and most of us agree that uncertainty is large.