Why grant funding should be spread thinly

How should a granting agency distribute the funds at its disposal? Different agencies have different answers to that question. The NSF (USA), for example, has traditionally awarded operating grants to rather few applicants, with each successful applicant getting quite a lot of money. NSERC (Canada), on the other hand, has traditionally awarded operating grants to most applicants, but with each successful applicant getting less money (a recent snapshot and some discussion here). NSERC has been moving slowly but steadily in the direction of the NSF model, with lower funding percentages, larger grants for top-ranked applications, and new categories of super-grants intended to recognize “excellence” (e.g., Vanier graduate scholarships, Banting postdoctoral scholarships, Canada Excellence Research Chairs program). Scientists have widely decried NSERC’s shift (for example, here) and NSF’s practice (for example, here and here) – but are they right? How should an agency like NSERC optimally distribute its funds?

I think it’s helpful to approach this with a graphical model. I’m going to base mine on just two simple assumptions, each of which I think is hard to question:

1. Each individual scientist wants to maximize the amount of science produced from their grant.

2. The granting agency wants to maximize the total amount of science done for its budget.

Now, let’s allow that the granting agency is probably right that some scientists are “better” than others – they will produce more science for a given grant. The question of optimal distribution, then, boils down to whether the agency should try to steer its money to the better scientists, while denying funding (or giving much smaller grants) to the less productive ones – or whether it should just give a bit of money to everyone. NSF works very hard at doing the former, and NSERC increasingly follows along – but the graphical model suggests that this is badly misguided. Here it is:

[Figure: marginal return on grant funding vs. grant size, for Scientist A (red) and Scientist B (blue)]

What I’ve done here is plot the marginal return on grant funding (that is, amount of science produced per additional dollar) vs. grant size – for a “better” scientist (A, in red) and a “lesser” one (B, in blue*). I’ll get to the two asterisks in a bit.

The critical feature of this graph is the shape of each curve, which follows directly from assumption #1. As a scientist, I have a bunch of research projects I could undertake. If you give me enough money for just one of them, I'll do the most promising one; if you give me a bigger grant, I'll add in my next most promising project, and so on. As long as some possible expenditures produce more science than others, and as long as I spend to maximize my output, my curve must have negative slope; but it will asymptote at or above the X axis (and so it must be concave up). Everything else follows from this shape. By the way, I'm ducking the question of precisely how we define an amount of "science"; we don't all agree on that, which will matter later.

I’ve drawn Scientist A as “better” than Scientist B: for any given grant size, Scientist A is more productive. But the crucial insight is this: the agency isn’t actually interested in Scientist A’s productivity, or Scientist B’s, but rather the total productivity of all funded scientists (Assumption 2).

Which brings us to the two asterisks. The blue asterisk is the lesser scientist's productivity with the first grant dollar awarded. The red asterisk is the better scientist's productivity with the x+1st dollar awarded – where x is the threshold grant size at which the red asterisk first falls below the blue one. Because the better scientist's marginal returns keep declining, this is bound to happen for some value of x, and beyond that grant size the agency is wasting its money by increasing the better scientist's award; it should instead reserve that money to fund a lesser one.
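To make the crossing point concrete, here's a minimal numerical sketch. The hyperbolic curve shape, the $50,000 dollar scale, and the productivity values (A half again as productive as B, matching the figure) are all illustrative assumptions, not estimates from data:

```python
# Illustrative only: curve shape, dollar scale, and productivity values are
# assumptions chosen to mimic the figure, not measurements.

def marginal_return(grant, productivity, scale=50_000.0):
    """Science produced per additional dollar at a given grant size.
    Decreasing and concave up, as assumption #1 requires: the most
    promising projects get funded first."""
    return productivity / (1.0 + grant / scale)

def threshold_x(prod_a=1.5, prod_b=1.0, step=1_000.0):
    """Find the grant size x at which scientist A's marginal return no
    longer exceeds scientist B's return on her *first* dollar. Past x,
    the agency gets more science by funding B than by enlarging A's grant."""
    b_first_dollar = marginal_return(0.0, prod_b)
    g = 0.0
    while marginal_return(g, prod_a) > b_first_dollar:
        g += step
    return g

print(f"threshold x = ${threshold_x():,.0f}")
```

With these made-up curves the crossing comes at $25,000 – but the point is not the number, it's that some finite x always exists whenever returns diminish.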

It's an empirical question whether x is large or small – that is, how much money should the better scientist get before the lesser one gets any? (The question of whether two grants should be the same size, given that both are awarded, would get an equivalent analysis).

It seems improbable that x is very large. For x to be large, variance among scientists in productivity has to be much larger than the (guaranteed) downslope in the funding utility curve. (In the figure, Scientist A is about half again as productive, when equally resourced, as Scientist B). I won’t dispute that there’s some variance, but my experience with faculty hiring strongly suggests that there really aren’t a whole lot of useless stiffs getting hired these days. If x is not very large, then we get the most total science by spreading the funding out equally (or nearly so).

Now, so far I've imagined that we can predict the future productivity of scientists with perfect accuracy. But of course we can't, and this substantially strengthens the argument for spreading the funding out. In the model, uncertainty about who's the better scientist moves the A and B curves closer together (vertically). To see this, imagine what would happen if we classified scientists as "better" or "lesser" at random: the expected productivity would be the same for the two groups. Individual scientists might still be intrinsically better or lesser, of course, but the expected productivity of those we labelled "better" or "lesser" would be the same**. There is plenty of uncertainty in grant reviewing, of course, as there is with other attempts to measure quality and productivity of scientists – especially their future productivity. In fact, we can't even agree on how best to define "amount of science produced" (Number of papers? Number of papers weighted by impact factor? H-index? Number of grad students trained?). Such disagreements have the same effect as uncertainty in a measurement we (hypothetically) agree on. In either case, by bringing the curves closer together, uncertainty reduces the size of x and thus reduces the optimum inequality in grant size.
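A small extension of the same sketch shows the effect of uncertainty. Suppose reviewers mislabel scientists with probability p; the expected productivity of each labelled group then moves toward the overall mean, and the threshold x shrinks toward zero. The productivities (1.5 vs. 1.0) and the $50,000 scale are again assumptions for illustration:

```python
# Illustrative only: productivities and dollar scale are assumed; p_error
# is a hypothetical mislabelling rate for the reviewers.

def expected_productivities(prod_a, prod_b, p_error):
    """Expected productivity of the groups labelled 'better' and 'lesser'
    when labels are wrong with probability p_error."""
    labelled_better = (1 - p_error) * prod_a + p_error * prod_b
    labelled_lesser = (1 - p_error) * prod_b + p_error * prod_a
    return labelled_better, labelled_lesser

def threshold_x(prod_a, prod_b, scale=50_000.0):
    """Closed form of the crossing point for m(g) = prod / (1 + g/scale):
    solve prod_a / (1 + x/scale) = prod_b for x."""
    return max(0.0, scale * (prod_a / prod_b - 1.0))

for p in (0.0, 0.2, 0.4, 0.5):
    a, b = expected_productivities(1.5, 1.0, p)
    print(f"p_error = {p:.1f}: threshold x = ${threshold_x(a, b):,.0f}")
```

At p = 0 the threshold matches the earlier figure; at p = 0.5 (labels are coin flips) the expected curves coincide and x falls to zero – the limiting case described above.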

 All this suggests we could probably spend a lot less effort worrying about identifying the very best scientists and showering them with money – and we’d get a better total return. Doing so would come with a bonus: our efforts to distinguish better scientists from lesser ones cost money, and the more certain we want to be in our distinctions, the higher these administrative costs. (It may even cost more to make these distinctions than it would to just give every applicant a grant!). Reducing administrative spending by worrying less about distinguishing quality would let us increase the total funding pool, meaning we could fund more scientists and give bigger average grants – two ways to get more total science. This is, in fact, more or less what NSERC used to do.

So why do funding agencies persist in awarding large grants to a few people? Presumably, they’re convinced that the value of x is so large, and their ability to judge future productivity is so good, as to obviate the arguments I’ve made here. This claim seems very bold, and until I’m shown data to back it up, I think NSERC is headed in the wrong direction (and when it gets there, it will find the NSF waiting for it).

© Stephen Heard (sheard@unb.ca) May 12, 2015

UPDATE: Jeremy Fox points me to an old blog post on Dynamic Ecology, where he develops a very similar mathematical model (although from a slightly different perspective).  It’s well worth reading his post too, and I’m hoping he’ll say more about this in the Comments.

*Of course scientist B is blue. Wouldn’t you feel blue if you found out you were being used as an example of a lesser scientist?

**The argument here does not depend on grant reviewing being a complete crapshoot, and I’m sure it isn’t.  But any amount of uncertainty moves the curves together, and most of us agree that uncertainty is large.


28 thoughts on “Why grant funding should be spread thinly”

  1. Chris Lane

    Steve, I do think there’s an optimal efficiency and there’s decent data from NIH in the US that suggests somewhere in the neighborhood of 2-3 R01s is close. At $250k/yr each in direct costs, that’s a far cry from what you’re talking about here. But that’s for Bio-med. It would be different for chemistry. It would be different for physics. It’s different in every sub-field within those disciplines, just as it is very different for you and me.

    My problem with proposals like what you’ve laid out here is that they, by default, rank “cheap” science equal with all other types of science. You can’t get the same answers from a forest ecology study as a Bio-med study that requires the use of many animals. Will you scrap the space program? How about particle physics? Do we not need those answers as much as we need to know how many beetles are out there?

    And who is paying for people? I don’t know if there’s been some special fund to pay for postdocs or technicians since I left, but they didn’t work for free when I was in Canada. Without funds for them you are forcing any PhD who stays on an academic track out of the country. Is that the best use of your science resources? Training people up and forcing them to leave when they are most competent to be productive?

    We have had these discussions in the context of NSF a billion times and it is ALWAYS people who can do their science at the low end of the cost spectrum who think spreading funding out is the best way to go. Trust me, I’m all in for more stability, but I would also rather be put out of my misery than be on life support at $20k/yr. That would simply not be enough to effectively train even one person in my lab (no meetings, barely any summer pay, we would have to ditch access to the computing cluster, making all tasks take 50x longer….).

    This “efficiency” argument comes up almost monthly, and unless you have actual data across multiple fields demonstrating where each field’s sweet spot is, it’s a lazy and tired exercise.


    1. ScientistSeesSquirrel Post author

      Thanks, Chris – these are good comments (particularly with respect to HQP). I agree that my analysis is a bit simple (although I’ll claim that’s not the same as lazy!). I’d not be unhappy with a model broken down by fields – say, we decide on how much we’ll spend on physics, and how much on systematics, and how much on ecology, then use a model like mine within each. Would that solve some of your objections?


  2. Jeremy Fox

    Nice post Stephen.

    I have an old post looking at what I think is the same funding allocation model you consider, except that I didn’t go so far as to consider uncertainty:


    It’s interesting that the same model can be used to ask different questions, thereby perhaps leaving readers with different “gestalt impressions” of our posts even though we’re actually not disagreeing at all. You use the model to ask whether it’s a good idea to just give all the money to the most productive scientists. I use the model to ask whether it’s a good idea to give everyone the same amount of money. In both cases the answer’s “no”. But we’re looking at the same model, and so in both cases the optimal funding allocation (absent uncertainty) is the one that equalizes every PI’s per-dollar productivity (however productivity is measured). So, if an extra, infinitesimally small amount of funding magically became available, it wouldn’t matter which PI you gave it to.

    As far as I know, there are no data on how the productivity of individual PI’s scales with their funding, unfortunately. So in practice I think both our posts are necessarily hypothetical exercises.

    One issue both our posts gloss over is whether there’s some science that’s really worth doing, but that no PI could afford to do in a world in which all PIs are funded on an NSERC DG-type model. That is, both our posts treat “science” as a single thing that we want as much of as possible. That’s surely false; “science” is really some indeterminate number of only partially-commensurate things. So in reality, any funding agency should (and does!) have various programs funding various sorts of stuff–operating grants, major equipment purchases, separate programs dedicated to particular sorts of work like genome sequencing or ship-based research…The optimal allocation of funding among those various programs is a totally intractable allocation problem. You just have to rely on the professional judgement of the funding agency administrators.

    You note in passing the possibility of just giving everyone a grant, presumably on the grounds that the optimal allocation of funding among PIs is sufficiently close to equal, and that it costs more money than it’s worth to review grants and weed out the rare terrible ones. I used to think this radical idea might be right, but I’ve changed my mind. In large part because of data from NSF DEB indicating that, when you make it easier to apply for funding, *lots* of people who previously didn’t apply will start applying (https://dynamicecology.wordpress.com/2015/04/30/we-asked-nsf-answered-per-pi-success-rates-at-the-nsf-deb/). So if NSERC were to just give a grant to any qualified applicant (say, anyone with a PhD, holding at least an adjunct appointment at a Canadian academic institution, who’s published at least one refereed paper), they’d get *far* more applications than they currently do, which would have the perverse effect of dramatically reducing average award sizes. (And that’s before universities started adding lots of adjuncts and encouraging them to publish single papers so as to be able to get more NSERC funding and the associated overhead, which would definitely happen.)


    1. Zen Faulkes (@DoctorZen)

      “There are no data on how the productivity of individual PI’s scales with their funding.”

      Pretty sure there is for NIH. Let’s see… Ah! I can see why one might miss it. It’s in some obscure journal called Nature.

      Study says middle sized labs do best: http://www.nature.com/news/2010/101116/full/468356a.html

      I think the article is based on this blog post: https://loop.nigms.nih.gov/2010/09/measuring-the-scientific-output-and-impact-of-nigms-grants/


      1. Jeremy Fox

        Hi Zen,

        Thanks for the link, but I’d already seen that. I didn’t cite it because those data don’t address my question, which you seem to have misunderstood. What I was asking about was data on how the productivity of *specific* individual PIs scales with their funding. Not data on the average productivity of all PIs holding a given level of funding. Those are two different things. See this post at the obscure Dynamic Ecology blog for a discussion of the difference:


        Sorry for fighting snark with snark, I know you probably meant it light-heartedly. But after the argument earlier in the thread, I’m not in the mood for this sort of thing.


  3. Chris Lane

    There’s no question that the suggested limits would cripple many fields, entirely, in biology alone. Go find out what animal costs are at your institution and ask to see how many animals/cages people typically use for one experiment. If you are unfamiliar with those costs, the numbers will shock you. I don’t even want to tell you what we paid in sequencing costs last year, let alone the analysis support and software. Just paying for people…

    It’s also institutionally specific. For instance, our student support does not include summer pay, meaning that comes out of my grants. Our postdocs get a pretty wide range of benefits that are not found in other places, which also come out of my grants. Overhead rates vary (in the US). The constraints at my institution are different from another, further compounding the issues with a simple model.

    I used the word “lazy” because you’re only looking at the problem through the lens you know. As I have watched this argument play out dozens of times, I can promise you that not everyone shares your costs or perspective and what is efficient for one group of people is wildly inadequate for another. No one person working at the PI level has the range of experience to judge what is “right” for another field, though many opine.

    If there were an easy solution this wouldn’t be the massive issue it continues to be. There is such an enormous range of research and associated costs that a “one size fits all” model is immediately doomed to fail.


    1. Jeremy Fox

      Chris, you’re being very unfair and your strong language is totally out of place. The post is admittedly and deliberately narrow in its focus on operating grants at a single agency. It’s fine to raise broader issues, like differences among fields funded by different programs or agencies and differences in the funding structure of different countries. I raised the issue of differences among fields in my comment as well. But saying or implying that Stephen or his post is lazy for failing to raise those broad issues is *way* out of line. Plus, he’s already granted your point. But instead of replying to his comment productively and continuing what could be a productive conversation, you’re just repeating yourself and implying that his post was lazy and ignorant, which it wasn’t.

      You say you’ve had lots of conversations about this issue. Well, with respect, those past conversations seem to be a liability rather than an asset. They apparently are coloring your ability to read this post and the comments and engage productively. The problem here isn’t Stephen’s narrowness of perspective–it’s your insistence on putting words into his mouth.

      By the way, if you’d bothered to read the post carefully and click the links, you’d find that Stephen linked to a study of NIH, which argues that NIH allocates its money inefficiently by giving too much to a few star researchers. You prefer to talk about biomedical work? Fine–how about a comment that engages in detail with that NIH study? I’ve read that study, and I find it quite persuasive though not completely compelling. But I’m not a biomedical researcher and so I’d be very interested to hear your informed perspective on that study.

      What I’m not interested in is someone saying or implying, falsely, that we Canadian ecologists have no idea how much lab animals cost, or that we think it’d be a good idea to limit every scientist in every country in the world to an operating grant the size of a typical NSERC Discovery Grant, or that it’s lazy and ignorant not to talk about ALL funding allocation issues in one little blog post.

      Nobody here thinks there are easy answers to the broad issues you raise, or that one size fits all. So if you want to strawman, take it elsewhere.


  4. Chris Lane

    Thanks for the high-handed tone-scolding Jeremy, I’m sorry to offend you so easily by simply having a discussion. Amazingly, I have read the NIH study, which is why I cited the 2-3 R01s in my first comment, as being possibly optimal for that work. No one is arguing here that funding a few people with truck-loads of money is the way forward. But stabilizing a lab with enough money to pay the people we’re supposed to be training does help.

    NSERC funds well beyond just ecology. It may not fund Bio-med, but it certainly funds molecular biology, physics, chemistry and engineering. That’s a pretty massive swath of diverse fields. Some of them are more expensive than others. Ergo, spreading funding thinly across them all will have far bigger consequences for some fields than for others.

    No field should be setting the bar for another. Sometimes science is expensive. People are expensive and some projects require more of them. Without working in other fields I cannot accurately predict how many people and how much money it will take to move that science down the road at an acceptable pace. Even within evolutionary bio there is a pretty massive range of what different labs require, depending on what they are doing. Making a hard limit on funds restricts scientists to proposing only projects that can be done for short money. For some that will be fine. Others will have to overhaul their approach or go elsewhere.

    There’s no question (and NSF has published the data) that the best proposals do not result in the biggest advances. We also leave a lot of good science on the cutting floor. Finding the right balance between funding the most people while allowing science to actually get done is a challenge every federal funding agency wrestles with constantly. The “spread it thin” opinion is a popular one* that even comes up at panels when the Bio director is in for their visit. At least at NSF, they feel they are spreading it as thin as it will go without impairing the science. Arguing that it should be spread even thinner is saying that YOU can do it for cheaper, so everyone probably can. That leaves me uncomfortable, no matter what agency or country we’re talking about.

    *And yes, it is always brought up by people who have low cost labs.


    1. Jeremy Fox

      Re: my tone, sorry. I get kind of upset with people who call informed, thoughtful colleagues of mine “lazy”. I too would like to have a discussion. That’s why I’m not calling you lazy. You clearly know and have thought a lot about these issues, and you clearly don’t think one size fits all. So I’m not sure why you seem to think otherwise of Stephen and me. But whatever. I’m happy to move on and talk about substantive issues, and I hope you’ll do the same.

      In the spirit of moving the conversation forward: sounds like we’re all in agreement that it’s inefficient to give a bunch of project-based grants to a few stars. Unfortunately, at least at NSF (well, NSF DEB, I don’t know about other NSF divisions), reallocating operating grant funding away from those stars wouldn’t free up all that much money, because so few people currently hold multiple operating grants. At NSERC, as a post Stephen links to notes, there might be significant money to be freed up by allocating funding away from “stars”, but it would involve reallocating funding away from other NSERC programs to the DG program. Since the purpose of at least some of those other programs is to attract top people, rather than to pay the operating costs of research, it’s not entirely clear to me what the optimal allocation is among those various programs. My instinct is that it’s suboptimal to spend as much as NSERC does on programs to attract a small number of top people, but I couldn’t prove it with data. As we all agree, it’s not at all clear how to allocate funding between apples and oranges (different fields, different purposes within fields, etc.).

      One operating grant model that might be worth trying, that as far as I know doesn’t exist anywhere, is a hybrid of an NSERC DG-type system and an NSF/NIH-style project-based system. Andrew Hendry once suggested this to me. If memory serves, I think he suggested it because in his own work he found that there were lines of research that he couldn’t afford to pursue on his DG, but that didn’t fit within any other program at NSERC (e.g., because they weren’t industrial collaborations or didn’t involve collaborative networks of researchers). So he suggested that NSERC should lop, say, 10% off the DG program, and put that money into an NSF/NIH style program of project-based grants, worth (say) 100-200K/year for 3-5 years. Success rates for those project-based grants would surely be very low, quite possibly 1% or even less. But that would be ok because most people would still have their NSERC DGs. Depending on the field, one could imagine tweaking the parameters of this model so as to vary the success rate and average grant size for the DG-type program, and the success rate and average grant size for the NSF/NIH-type program.*

      Another model worth considering in the US is one in which some grad students who are currently paid for by NSF/NIH project-based grants are replaced by full-time career researchers (technicians, “research associates”, lab managers, etc.). The idea being that, if you have fewer trainees, and if more of those trainees can go on to viable careers in science as technicians/research associates/lab managers, etc., you can still produce the same amount of science, since one full-time technician/research associate would hopefully be as productive as a couple of grad students. Depending on the numbers, you could maybe even produce the same amount of science with somewhat smaller grants on average, and so perhaps somewhat higher success rates*. And as a side benefit, you’d also avoid having lots of people go on to post-grad school careers that often only make modest or no use of the specialized training they received. Technician/research associate/etc. positions wouldn’t be tenured, but they’d be sufficiently long-term, and there’d be a sufficient number of them relative to the supply of people who want them, that it would be a viable career path. But it’s not clear to me if, say, NIH and/or NSF could force adoption of this model on its own just by changing its granting policies (e.g., by capping how many students it will support on an R01 grant). Possibly, adopting that sort of model would require more wholesale changes in how academic research is organized and paid for, and that obviously requires coordination among many actors and so is much harder to bring about.

      It’s also worth thinking about needed adjustments on the part of researchers as opposed to, or in addition to, adjustments by the government funding agencies. Certainly in the US, it’s now very difficult, maybe effectively impossible in some fields, to sustain one’s lab in the long term by getting a succession of NSF and/or NIH grants. How should researchers respond? One obvious response, which some people already pursue, is to diversify and look for different pots of money from different sources. But in some fields and subfields, there aren’t many other pots out there. Another response is to run a “boom and bust” research program–one that’s capable of ramping up quickly and cranking out a lot of good science on the rare occasions when one happens to get a big operating grant, and that winds down to near-zero when that grant ends. I’d be interested to hear ideas on what it would take to make that boom-and-bust model viable. What has to be true about the way your research program operates–the equipment and people you need, the sort of question you’re working on, your employer’s expectations of you, etc.–to make that boom and bust model viable?

      *Although it’s actually quite difficult to change funding allocation policies so as to change success rates, because people and grant applications tend to follow the money and the high success rates. For instance, doubling NIH funding a while back didn’t do anything for success rates in the long term because it just attracted more people and applications. Even policy changes that aren’t intended to affect success rates can do so, as when NSF DEB brought in a preproposal system, which attracted many additional applicants who hadn’t previously been applying to DEB (https://dynamicecology.wordpress.com/2015/04/30/we-asked-nsf-answered-per-pi-success-rates-at-the-nsf-deb/).


  5. Jeff Houlahan

    Hi Steve, an interesting post and discussion – the argument Chris puts forward against the simple approach you use is that some disciplines are just more expensive. I think it’s a reasonable point but unless a convincing case can be made that expensive science gives a greater return than cheap science it rests on a shaky foundation. I suspect that expensive science can often make a compelling case… but scientists doing it should be obligated to make that case. And I get that ‘greater return’ is a tricky question but if it’s unanswerable then I’m not sure how anybody can argue against doing lots of cheap science over a little bit of expensive science. best, Jeff H.


  6. Chris Lane

    I have often advocated for a system at NSF whereby people can specifically apply for small awards. Program Officers will tell you there is no minimum and that you can apply for small awards through the normal channels, but the preproposal system (no budgets) has effectively made this impossible. When a small project is compared at the preproposal stage to a larger one, it appears insignificant.

    However, I’m starting to think this isn’t a great idea for the reasons listed above: that low hanging fruit attracts a lot of bears. The numbers on the new applications in the preproposal system are rather stunning and make it clear that there are many unfunded PIs out there who are willing to try if you lower the activation barrier. I think an R03 equivalent in the NSF system would be quickly swamped out and success rates would not improve. Perhaps if you could only apply for one at a time and only carry one, it might sort itself after a bit.

    With regards to having to justify more expensive science, that happens every day. Methods and budget justification are part of every NSF proposal. At NIH, if you want to ask for more than the modular budget ($250k/yr) you have to justify it, but that’s a different scale. There are just certain things you can’t do without animal fees. There are certain costs associated with genomics. I’m sure the same is true in physics and engineering, but that’s already part of every application. Are we now going to say “justify any dollars over $40k/yr”? That’s just a penalty for doing science different from those in this conversation. An alternative would be for someone else to suggest limiting ecology proposals to $40k/yr to free up money for other science, which probably sounds less appealing to this audience.

    As far as the “boom or bust” model goes, it’s terribly inefficient. Under those conditions, you train people up and they all leave before the new crop arrives. That means you are losing institutional knowledge, and then the PI must train EVERY person in the lab on everything (techniques plus ordering, where to get certain supplies, who to talk to for this issue, etc.), rather than trusting a postdoc, tech or senior grad student to help out. It’s an unnecessary waste of PI time, in addition to all the things you don’t know that your in-the-lab personnel could have passed on. I see that type of operation as one that would get me out of the game far sooner.


  7. Jeff Houlahan

    Hi Chris, “budget justification” on NSERC grants (it may be different on NSF grants but I suspect not) is small ‘j’ justification. That is, do you need the equipment, services and personnel that you’ve asked for to do what you propose to do, and is it priced properly? I’m talking about justification in the sense of ‘how does humankind benefit from your work?’ and more particularly ‘how much does humankind benefit from your work?’ And I realise that these are very difficult questions but when some disciplines get billions and others get millions and some PIs get tens of millions and others get tens of thousands it seems like a question that has to be answered. I’m not arguing that ecology should get more money – as a discipline I think we have some work to do if we want to convince people that we deserve a lot more money – and I don’t think that this is strictly about ecologists taking a position that serves their interests. On the other hand, the onus to demonstrate tangible benefits probably falls more on the ‘big ask’ disciplines than it would on ecology. Just because it takes a lot of money to do Arctic work doesn’t mean you should get it, just because it takes a lot of money to do genomic work doesn’t mean you should get it, just because it takes a lot of money to measure what happens when two atoms collide at high speeds doesn’t mean you should get it. Doesn’t mean you shouldn’t get it either.
    However, the original question was ‘Is it better to give lots of people a little money or a few people lots of money?’ – Steve has made a logical argument that makes some sense and there has been empirical work done suggesting that Steve’s analysis isn’t wrong. Steve cited one paper and Fortin and Currie 2013 have presented some data on this (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0065263). Is it convincing – well, I’m getting there, but I do work in one of those disciplines that can do something with a little. Your arguments against spreading the money thinly are,

    1. Some disciplines have a larger activation energy – they need more just to produce something. This suggests that the logical and empirical assessments would need to be discipline-specific. Fair enough. And you made the point that NIH has looked at this and the optimal number is big by ecology standards. Again, fair enough, but do the numbers you gave us suggest, by biomedical standards, a little to lots of people or a lot to a few people?

    2. With small grants you can’t hire post-docs and technicians. There is no doubt that it takes a decent-sized grant to consistently hire post-docs and technicians, but is that hindering science in Canada? Well, it gets back to the same question – do we get more and better science if we fund a few labs a lot (in part so they can hire post-docs and technicians) or lots of labs a little? What little empirical evidence we have so far suggests the answer is no.

    3. You and the folks you talk to at NSF only hear this from people with small grants. This kind of anecdotal evidence is not that compelling, but further: the fact that it’s mostly the rich who are against the rich paying more taxes, and mostly the poor who are for it, says nothing about whether it’s a good idea for the rich to pay more taxes.

    4. You couldn’t run your lab on 20K per year. Again, just not that relevant. If I can do some research on 20K a year and you can’t, that says something good about my research. If your activation energy is 40K and mine is 20K, you have to demonstrate at least 2x the bang for your buck (however ‘bang’ gets measured). Again, the little evidence we have so far doesn’t support your position.
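    To make that 2x threshold concrete with some made-up numbers (a toy calculation, not data from either side of the discussion):

```python
# Toy model: a fixed pot, and each funded lab produces some amount of
# 'bang' once its activation energy is met. All numbers are invented
# purely to illustrate the 2x break-even point.
budget = 200_000

cheap_grant, cheap_bang = 20_000, 1.0          # ten labs like mine
expensive_grant, expensive_bang = 40_000, 2.1  # five labs like yours

cheap_total = (budget // cheap_grant) * cheap_bang              # 10 labs
expensive_total = (budget // expensive_grant) * expensive_bang  # 5 labs

# The expensive labs come out ahead only because each produces MORE
# than 2x the bang; at exactly 2x the two allocations tie.
print(cheap_total, expensive_total)
```

    At exactly twice the cost, a lab has to deliver more than twice the output per grant before concentrating the pot pays off.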

    This is an open question, in my opinion, and a discussion worth having. Best, Jeff H.


  8. Chris Lane

    “You couldn’t run your lab on 20K per year. Again, just not that relevant. If I can do some research on 20 K a year and you can’t, that says something good about my research. “

    This is a fundamentally flawed assertion and emblematic of why this discussion always goes off the rails. Really? Your science is good because it costs less? So scientific value is based on some sort of frugality olympics? Under that metric, pretty much all health-related research should be shut down. Alrighty then….

    And how are we measuring “bang”? Papers? H-index? Likes on RateMyProfessor? Every single proposal I write devotes a significant section to the benefits of the work, but there’s no possible way to actually measure that without years of reflection after the fact. So I think we already justify this as best we can before the work is done, and it’s all in the eye of the beholder anyway. And if you measure “bang” based on what someone has already done, then you massively handicap junior people at the evaluation stage. So unless you have a specific way you want people to justify larger budgets, I would argue this is already happening.

    And we still haven’t figured out where all your PhDs are going to go. With no money for postdocs in this new system, either you’re headed back to hiring right out of grad school or committing to sending every PhD who wants a faculty job out of the country. Since the postdoc years are some of the most productive, I’m not sure how that boosts Canadian science.

    The “Spread it thin” model is short-sighted and ignores the logistics of the current training pipeline AND the fate of PhDs who want to stay in research in non-PI roles. “Industry” doesn’t want all your PhDs, I can assure you.


    1. dr24hours

      Cheap research is not necessarily of greater value, but there is obviously value to be found in cheap research. Jeff said that it said something good about his “research”, not his “science”. If being cheap means one is doing bad science, then, yeah, that’s a huge problem.

      But some very good science is cheap, and I think “This high quality research is also inexpensive,” is a compliment. (i.e., “Something good” in Jeff’s terms.)

      Chris’s point that the “spread it thin” model is shortsighted is, I think, nevertheless correct. I think it’s clear that what we want is high value research, and that’s about the ratio, not the cost. A mix of expensive and inexpensive research – all conducted with rigorous methods – seems like the obvious path to me.


    2. ScientistSeesSquirrel Post author

      Chris – uncertainty about how to measure “bang” was addressed in my original post; mathematically, it favours reducing variance in award sizes. And I think most of us are discussing funding within fields, not between – I agree with you that optimizing between fields is a much harder problem. It may even be largely a political/social issue rather than a scientific one!
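      As a toy illustration of why spreading wins under diminishing returns (the square-root returns curve and all the numbers here are my own assumptions, not NSERC data):

```python
import math

def science(grant, quality=1.0):
    # Diminishing returns: output grows with the square root of grant
    # size (an illustrative concave curve, not an empirical estimate).
    return quality * math.sqrt(grant)

budget = 1_000_000   # arbitrary units
n = 100              # applicants; suppose 10 are twice as productive

# Concentrate: fund only the 10 "best", each at budget/10
concentrated = 10 * science(budget / 10, quality=2.0)

# Spread thinly: fund all 100 equally (average quality 1.1)
avg_quality = (10 * 2.0 + 90 * 1.0) / n
spread = n * science(budget / n, quality=avg_quality)

print(round(concentrated), round(spread))  # spreading produces more total science
```

      With any concave returns curve, the last dollar added to a big grant buys less science than the first dollar of a small one, so equalizing award sizes raises the total even when some scientists really are better.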


  9. Tal

    Great post – I wrote something complementary here. If you accept the basic idea that there should be at least some bias towards smaller grants, the question still arises as to how best to allocate funds in a way that’s consistent with that ideal but doesn’t seem too heavy-handed (e.g., by imposing hard caps). The approach I suggest is to have different tiers, where all program requirements are identical but PIs have to make an explicit decision about the risk/reward tradeoff.


