Is everything “broken”?

Photo: Chair (cropped), Zen Sutherland via CC BY-NC-SA 2.0

No, it isn’t, of course, but you’d sure think it is if you chat around the water cooler, pay attention to Twitter, or read blogs or Nature News.  Publishing is broken. Tenure is broken. Peer review is broken. Academia is broken. Reassuringly (I guess), at FiveThirtyEight Christie Aschwanden recently posted a long essay arguing that science isn’t broken. It’s an excellent and persuasive read, but the fact that it exists at all is pretty good evidence that a lot of people think science is broken. It’s not just science, either: Google will return lots of hits for “politics is broken”, “health care is broken”, “the music industry is broken”, and many more. What a broken world, we tell each other, we’re living in!

Why is our discourse so rich in “X is broken”? I think there are two simple reasons, and they’re the same two reasons that bad news dominates the lay media (Nick Kristof makes this point well here). First, it’s easy to write a piece about how horrible something is. Examples of fraudulent papers, deadwood faculty members, and delayed peer reviews are easy to find, because we all love to pass them on when we find them, and moral outrage is easy to muster (after all, these things are indeed bad when they occur). Second, we love to hear or read stories about how horrible something is. Speaking for myself, anyway, a story about a fraudulent paper, a deadwood faculty member, or a delayed peer review leaves me feeling good about myself that I don’t do those things, and deliciously scandalized that other people do. That’s why blog posts (for instance) that decry an injustice or point out a systems failure rack up thousands of page views. It just isn’t as much fun to write, or read, about how the way we do things works pretty well, most of the time. Most papers aren’t fraudulent, most tenured faculty work hard, most peer reviews are on time and helpful. But what’s the fun in that?

Now, I’m not a complete Pollyanna. Science has had enormous success, and it’s great fun to do – but plenty of things about it can still be improved. Science isn’t broken, but parts of it are dented, chipped, or only roughly hewn, and we shouldn’t ignore that. We are far from finished diversifying the community of scientists and removing biases (conscious and unconscious) that affect how we see each other. There are legitimate debates around whether or not we’re overproducing and underpaying graduate students and postdocs. There are problems with our funding models for science, especially for so-called “basic” or “curiosity-driven” research. These issues, and more, deserve our serious attention.

So I’m not suggesting that you shouldn’t read the next pronouncement that “Thing X is Broken”. It’s always useful to have a problem (real or perceived) on your radar. But when you do read that next pronouncement, ask yourself three questions. First, is this really a problem at all? Second, if so, how big a problem is it? And third, and most important, do we really need to toss out X and start over, or is there something small I can do to help solve the problem? Perhaps you can take on an extra peer review, or submit your next one more promptly. Perhaps you can mentor an extra underrepresented student.  Perhaps you can give a radio interview about the importance of research funding. Despairing that things are “broken” makes us unlikely to take these actions – but I think it’s precisely such incremental but important steps, when they’re taken by each of us and by all of us, that pull science forward.

© Stephen Heard (October 13, 2015)

23 thoughts on “Is everything ‘broken’?”

  1. jeffollerton

    Well said, “x is broken” is such a lazy cliche. In the UK I’m fed up of hearing “the NHS is broken” – it’s not, it’s just under a lot of pressure and 90% plus of the time works perfectly well. Ditto science, publishing, teaching, etc. etc.

    Perhaps we need to start a new meme: “broken is broken” 🙂


  2. tdepellegrin

    Terrific piece! The invention or dramatization of problems (and their effects) is the new rhetoric of novices. I think the “sky is falling” approach to diatribes is successful in gathering people with whom the message resonates, but who also don’t necessarily have all the facts. One of the effects of this presentation is that it assumes that X is broken and needs to be completely trashed/revamped/stopped – there’s often no or little room for compromise or incremental change. Incidentally, I just saw an ad on TV for waffles: “the toaster is broken” – now there’s a reason to use the word properly!


  3. Jeff Houlahan

    Hi Steve, here’s a train I can’t hop on board – I’m not sure about science but I think ecology is broken. I’ve been doing this for 15 years and I’m not sure what I know about how the world works that I didn’t know when I started. And I’m convinced that this is true for many ecologists. The kinds of assertions that most of us are willing to stand by are close to trivial – big areas have more species than small areas; plants will grow faster where there is lots of sunlight, rain and nutrients; species abundances will decline if we destroy their habitat. The vast majority of ecological theories make predictions that are (1) wrong (when tested against nature), (2) vague, or (3) untested. We may have to accept (1) but we can fix (2) and (3). Best, Jeff H


    1. ScientistSeesSquirrel Post author

      Thanks for commenting, Jeff! I think I’d argue that ecology is very hard, but not that it is “broken”. I agree with you to some extent that novel general insight is hard to come by in ecology, but I’ve always put that down to system-level complexity. I would say we’ve made a lot of progress in some areas (for instance, epidemiology, speciation ecology) and less in others (control of regional diversity). But there’d be some long blog posts in that, and they might be more in the wheelhouse of someone like Brian McGill! (Unless you’d like to write one??)


      1. Jeff Houlahan

        Hi Steve, I would say the system-level complexity argument may explain why we have to accept predictions that are poor but I don’t think it explains (2) and (3). Every time we test a null hypothesis (…and we do that a lot in ecology) we test a vague model – that’s fine for a discipline in its infancy but not for one that’s in its second century. Every time we develop a model (theoretical or statistical) and don’t take it out into the world (…and we do that a lot in ecology) we leave it essentially untested.
        I think a mature discipline working on a hard problem would have a general consensus about what the best models are – we just would accept that they don’t do a great job. I don’t have a sense that there is much general consensus in ecology. Two fundamental ecological patterns that we would like to explain are abundance and diversity – what’s the prevailing ‘best’ model that explains diversity? If you put 10 ecologists in a room could they agree on the key variables in the model let alone the functional relationships or parameters? What about abundance? What are the key variables that influence abundance? My guess is that a large number of ecologists would like to answer those questions with ‘it depends’. This is almost always another way of saying ‘I don’t know’.
        I think ecology hasn’t recognised the need to fix (2) and (3) and actually find out if the world’s so complex that we can’t understand it very well. Right now, I just don’t think we’re very sure what we know.
        So, it may not be broken but I’m pretty sure it’s cracked.
        (I’m looking up the definition of curmudgeon as I hit the send button.)


  4. Jeremy Fox

    Part of what’s going on here is that pretty much everybody (including me) is crap at thinking about opportunity costs and tradeoffs. Time, money, and other finite resources spent fixing problem X are finite resources not spent fixing problem Y. And fixing problem X often creates or increases problem Y.

    But everybody has their own pet causes, and doesn’t know or care much about others. Which causes no end of frustration to anyone who’s ever had to allocate a budget among different entities (national governments, university deans, etc.). When an entity justifies a request for more money by talking about all the great things they could do with the money, whoever holds the purse strings should (and usually will) respond “So, who should I cut to free up funding for you?” To which no good answer is generally forthcoming. Same holds true when entities try to argue against budget cuts, of course.

    This issue crops up a lot on Dynamic Ecology (warning: shameless self promotion ahead). So much so that I worry I sound like a broken record every time I raise it in a new context. (That one’s particularly good, I think, because it’s a case where there really is no good excuse for failing to think about opportunity costs. It’s not a “guns or butter” case so much as a “salted or unsalted butter” case.)


    1. Jeremy Fox

      Following up on myself (further shameless self-promotion ahead): that last post includes a little poll, asking what ecologists should learn more of, *and* what they should learn less of to free up time for what they should learn more of.

      The poll results are here:

      The most popular choices of what to cut were topics that few ecologists learn in the first place (philosophy of science, economics), and topics that most science students (ecologists or not) are required to take at some point (chemistry, physics). The other popular choice of what to cut was the cop out “it depends” (which I admit to chickening out and choosing myself). Which I think nicely illustrates my claim that people are not good at facing up to opportunity costs and trade-offs. Forced to specify something to cut from ecology curricula in order to free up time for something more important, most poll respondents either refused to choose (“it depends”), suggested cutting something that can’t be cut because it’s not there in the first place, or suggested cutting the breadth of students’ general science training.

      In retrospect, I perhaps should’ve forced the issue by only listing ecology-specific topics, plus maybe a few other key topics like statistics, and not provided an “it depends” option. But the cynic in me suspects that many people simply would’ve refused to answer.


  5. jeffollerton

    Reading between the lines of Jeff H.’s comments, there seems to be an implied argument that the only correct way to do ecology is via formal models and hypothesis testing, with which I have to disagree. It’s a way of approaching science that we’ve imported from older sciences that are less affected by stochastic processes (at least at a macro level), i.e. physics. That approach of “develop a formal model and use it to make predictions that we test” is useful for some aspects of ecological research, but not all of it, or even the majority of it, I’d argue.

    Hypotheses can be informal and based on observation rather than models; and there are times when exploratory ecology, simply asking questions about a system, is the appropriate approach. The latter is particularly important in areas where we need rapid results that can (hopefully) be translated into action via policy changes or funding allocation. An example might be understanding the types of habitats where high species diversity is being maintained within an otherwise depauperate environment (e.g. intensive farmland) and how those habitats might be linked together, and the effect of habitat enhancements and interventions on this distribution. Do we really need (or could we actually develop) formal models for such ecological research?


  6. Pingback: Recommended reads #62 | Small Pond Science

  7. Pingback: The dangers of twitter | Small Pond Science

  8. Charles

    I’m sorry to disagree with some of the previous comments, but a lot of these areas are in fact broken. Just two quick examples. On PubMed I searched for entries under publication type indicating a retraction or announcement of retraction. I sorted the results by journal. I found 0 entries for the New England Journal of Medicine. So, in its entire history since 1812 (I believe), they’ve not retracted a single article. This includes the original research they published related to Vioxx. NEJM did not even provide an ‘expression of concern’ for that article until they were notified it would be used as an exhibit in the litigation regarding that drug. I used this example because most of these ‘brokens’ revolve around publications and getting published. There’s no acceptable reason for not retracting the article. NEJM claims to be the “gold standard”. So, why allow the article to stand and not officially retract it?
    The other example: compare the years around 1915 and Einstein’s major publications and their implications. Several years went by before the scientific community accepted his work, and acceptance didn’t come until the work had been validated and reproduced elsewhere. These days, researchers, journals, and institutions send out press releases immediately: a test that predicts when you’ll get married, anti-aging in a pill, or some other spectacular claim – and seldom, if ever, do the findings turn out to be robust. But they get clicks and downloads and interviews and so forth, increasing name recognition, H-indexes, etc.
    Almost all of us need money, we need to buy food, pay the mortgage, the rent, etc. But too much of science right now is business and too little of it is knowledge. This is not a minor perturbation because knowledge is the primary aim of science. If that isn’t the case, then it’s broken.


  9. Pingback: Praise | Scientist Sees Squirrel

  10. Pingback: Kind mentors, and the other kind | Scientist Sees Squirrel

  11. Pingback: Post-publication peer review and the problem of privilege | Scientist Sees Squirrel

  12. Pingback: “Open Science” is not one thing | Small Pond Science

  13. Pingback: How to handle an idiotic review | Scientist Sees Squirrel

  14. Pingback: Fisher’s geometric model of software updates | Scientist Sees Squirrel

  15. Pingback: The one kind of review that really gets my goat | Scientist Sees Squirrel

  16. Pingback: Please don’t “make science transparent” by publishing your reviews | Scientist Sees Squirrel

  17. Pingback: To sign or not to sign: what the Replies taught me | Scientist Sees Squirrel
