Pizza dough, knowledge, and the problem of authority

I make pizza dough often, and I discovered recently that my pizza has lessons to teach about epistemology, reproducibility, and our practices in scientific writing.  That seems like a big load for some pizza dough to carry, so let me explain.

I follow a recipe* (at least more or less).  That puts me squarely in the medieval epistemological tradition of “scholasticism”, which emphasized the acquisition of knowledge by reading earlier texts – especially, texts written by authorities.  What did Aristotle say, a medieval scholar might ask (or what did Julia Child or Betty Crocker say, I might ask myself)?

Science claims to no longer adhere to this tradition, having gone through an upheaval roughly in the early 17th century towards a new attitude – “empiricism” – that emphasizes instead the acquisition of knowledge from direct observation and experiment.  The motto of the Royal Society of London, which was founded during this upheaval, is still “Nullius in verba”, or “Take nobody’s word for it”.

I was talking about pizza, wasn’t I?  Having realized I’d been a pizza scholasticist, I did what any good empiricist would do: I ran an experiment.  My pizza dough authority told me I needed to add yeast – so I tried leaving it out, to see for myself if yeast rises dough.

It does.  My yeastless dough was terrible.

OK, I’ll admit that my experiment wasn’t as deliberate as I just pretended; actually, I got distracted (squirrel!). But planned experiment or accidental one, it yielded data that confirmed three things.  First, it confirmed that I’m occasionally an idiot.  Second, it confirmed centuries of microbiological and culinary knowledge about leavening agents.  Third, it confirmed that pure empiricism is silly.  We can’t redo every experiment endlessly – every cook can’t have to run my yeast experiment, and every scientist can’t have to re-run Pasteur’s flask experiment.  We have to rely on previous texts for knowledge; otherwise knowledge can’t accumulate.  That’s why we have a scientific literature (and cookbooks).

So, if we need to draw knowledge from a literature, the texts that make up that literature need to have authority.  How does a text gain authority?  Why do we find it acceptable to draw knowledge from a paper in Cell or Ecology or Physical Review B, but not from an article in Wikipedia?**  That question, of course, is an entire subdiscipline, not a blog post.  So my point today is to observe that a lot of the features of our scientific writing and publishing are understandable as ways in which we seek to give our texts authority.  (And the reproducibility crisis is us collectively saying “oh crap, we may be overestimating that authority.”)

What features of our process are attempts to give our texts authority?  Well, two obvious ones are peer review and the use of inferential statistics (various abuses of which make big contributions to the reproducibility crisis).  But I’m more interested in somewhat less obvious ones.  How about these?

  • The fact that our papers have Methods sections. I know, you’ve never thought twice about that; but papers didn’t always include Methods at all, and it’s relatively recent that they’ve routinely had an explicit heading.  A Methods section connects a text to an original observation or experiment, fusing the scholasticist and empiricist approaches. Once, these Methods sections included a remarkable level of first-person detail, in an attempt to give work authority through “virtual witnessing”.  That particular rhetorical trick is no longer in vogue, but rhetoric survives. Some Methods are needed for a reader to understand the Results; and lots of Methods are needed for the rare reader who will repeat the work.  But quite a bit of what we do when we write is use the Methods to show that we understand the techniques we’re applying, and that we know how to use them appropriately – and that’s a claim to authority.
  • Citations.  In weighing our own findings against those of others, we look for authority through agreement of our results with those of others. More subtly: by citing the “right” papers, we show that we are familiar with the literature, and this, like using appropriate methods, is a claim to authority.
  • Academic affiliations. A paper’s byline almost always has footnotes indicating the affiliations of the authors.  Why? In part, so you can contact them; but more, because it recognizes the professional credential of being affiliated with a research institution (be it a university, a government department, or a respected NGO).  It says “This is the intellectual product of someone who belongs to the community of science”.  This doesn’t, of course, entirely work as a marker of authority.  While the contribution of amateurs to science is a shadow of what it once was, unaffiliated folk can still produce good science; and conversely, Michael Behe’s affiliation with Lehigh University should not give his absurd views on evolution any authority at all. Nonetheless, the predictive power of an affiliation is non-zero, and the professionalization of science (which arguably passed an inflection point in the early 1900s) can be seen in part as an answer to the problem of authority.
  • Perhaps you think authority from academic affiliation is a bad idea. Well, you ain’t seen nothing yet. The adoption of the passive voice in scientific writing, in the mid-20th century, was a rhetorical trick: an attempt to pretend to objectivity and thus scientific authority by removing the subjectivity of the human actor from the writing. Not, notably, from the actual science, where it would matter (if it were possible!), but only from the writing.  The passive voice, then, was an attempt to communicate a claim of authority. Ugh.

All over our writing, if you look closely, you can see the fingerprints of an awkward fact: that, as a community, we decry argument-from-authority while simultaneously realizing that without argument-from-authority science can’t progress. Some attempts at getting this right are clear mistakes (old-style “virtual witnessing”; passive voice).  Others are reasonable ways in which we assess expertise (Methods sections).  What’s interesting is how that categorization has shifted over the centuries.  Would you make a wager that we have it right, here in the 2020s?

© Stephen Heard  January 28, 2020

Image: Cookbook advertisement, from Burpee’s Farm Annual (1891).  Note the academic affiliation that acts as a marker of authority.

*^I’ve scribbled a few changes in the margin, actually, but the basic recipe is from the 1986 edition of the Betty Crocker Cookbook.  (That sounds desperately uncool, but that edition of Betty Crocker is very good for the basics.)  I like a thick, doughy crust; if you’re a fan of crispy thin-crust pizza, this recipe might not be for you.  Here’s my version:

1 T yeast
1-1/3 c warm water
3-1/4 c all-purpose flour
3 T vegetable oil
1-1/2 tsp sugar
1-1/2 tsp salt
1-1/2 T dried onion flakes
1 tsp dried oregano

Combine onion, oregano, water, and oil.  Add remaining ingredients and knead.  Press out (one recipe covers an 11×17” cookie sheet to make a thick crust), top with pizza sauce, then let rise 30 min before adding remaining toppings and cheese.  Bake 15-20 min at 425°F.

**^It’s traditional to dunk on Wikipedia – as an academic I’m almost required to.  It’s amazing, though, how often it’s useful.  The key is that it’s mostly useful as a first toehold into a subject, not as an end.  It doesn’t hold authority, but it can point towards it.


11 thoughts on “Pizza dough, knowledge, and the problem of authority”

  1. Pavel Dodonov

    A good point about Wikipedia, which I think I saw in a TED talk, is that for someone to take time to write on a subject, thon* must have great interest, time, and (if it’s not something subject to unsupported passionate views, such as politics or religion) knowledge on it. Otherwise why waste thons time? So I do find Wikipedia reliable on most subjects, for a first notion!

    * See what I did here?


  2. Peter Apps

    Ponderous sentence constructions, long words, incomprehensible acronyms, and modelling as a substitute for measurement and critical thought are less honourable ways of trying to sound authoritative.


  3. Jeff Houlahan

    Steve, I think calls to authority are almost always a mistake – in my opinion, peer review and inferential statistics don’t come close to providing the kind of filter required to be reasonably sure that there is ‘true’ knowledge contained in a manuscript. And the fact that we document our methods, cite other papers and have academic affiliation seems even more fragile as grounds for accepting transferred knowledge. If these were adequate filters for knowledge wouldn’t all disciplines that use those filters be equally confident about what they know? Psychology, ecology, physiology, chemistry, physics all use these filters but have dramatically different certainty about what they know. These don’t and shouldn’t provide much of a scaffold for scientific progress.
    I absolutely think science can progress without calls to authority and it is, roughly, through the kind of replication that you consider unlikely or impossible. The reason we are sure that we have a very good handle on aerodynamics is because every time a plane takes off and lands we carry out the experiment. The reason we are so sure yeast is required for pizza or bread is because the experiment has been done thousands and maybe tens of thousands of times – you aren’t the first to not put yeast in dough. One reason we are sure we understand human anatomy is because every time a surgery is performed we are doing the experiment (e.g. is the gall bladder where we think it is?).
    And where are we much less certain about what we know? – all the disciplines where the experiment is rarely repeated – psychology and ecology strike me as a couple of good examples. We know we have ‘true’ knowledge when the experiment is repeated and the prediction borne out, over and over again.


    1. ScientistSeesSquirrel Post author

      Jeff, I’m going to disagree with you pretty hard here, and the reason is very simple. You are almost right here: “The reason we are so sure yeast is required…is because the experiment has been done thousands of times”. Exactly – but that just sidesteps the question. How do you know it’s been done thousands of times? Because you read someone’s account of doing it. And how do you know that account is to be believed? Because unless you accept that there are markers of authority – and in rejecting both inferential statistics and peer review, you seem to be pretty far in the extreme in saying that no such markers exist – then you are stuck giving equal credence to every account. Which will certainly make a certain US President very happy!

      This is a real philosophical problem that you can’t just sidestep as you have here. It’s also, in a descriptive rather than prescriptive sense, a powerful explanatory paradigm for many of our writing practices! You can certainly argue that some markers of authority shouldn’t be taken as such (as I argue for the passive voice, for example). But if you really believe that there are no such markers at all, then you will not be able to contribute to science; you’ll be too busy redoing the no-yeast-pizza-dough experiment because you don’t believe ANYTHING in the literature (or cookbook)!


      1. Jeff Houlahan

        Steve, a single example that’s been peer-reviewed and uses inferential statistics is very unconvincing to me. It’s all about the repetition. So, I don’t give equal credence to all knowledge claims – I give more credence to knowledge claims that have been better tested. And by better tested I mean tested more often. Being tested more often is almost completely unrelated to peer review or inferential statistics. So, I’m not convinced that peer review and inferential statistics are necessary to make reliable knowledge claims, but I know with near certainty that they aren’t sufficient.
        This isn’t an argument for eliminating peer review or inferential statistics – it’s simply saying that we have placed far too much faith in them as effective ways of filtering real knowledge claims from false ones. And the reproducibility crisis is one of the most damning pieces of evidence that we have placed too much faith in these filters. I think we need to be more, not less, skeptical of the roles that peer review and inferential statistics play in allowing us to know what we know. Repeated “severe testing” (I like Deborah Mayo’s phrase) is how we learn what we know. Peer review and inferential statistics may help with the severe part (although I’m not certain) but they don’t help with the ‘repeated’ part.


