(Image: Robert Boyle’s (1660) vacuum pump, from New Experiments Physico-Mechanical, Touching The Spring of the Air, and its Effects; Made, for the most part, in a New Pneumatical Engine)
Unless you’ve been living under quite a large rock, you’ve heard or read a lot lately about the “reproducibility crisis” in science (here’s a good summary). That our work should be reproducible is certainly a Good Thing in principle, but there are complications where the rubber hits the road. Today, some thoughts on reproducibility, and on what, if anything, it means for the writing of a paper’s Methods section. And I think some historical perspective is both interesting and useful – because the reproducibility “crisis” is 400 years old.
There’s an odd disconnect in the way we think about our Methods sections. Most books on scientific writing (for instance, Katz or Day and Gastel) say the Methods should give enough detail for readers to repeat your work and reproduce your results. And when I poll seminar audiences, 75-80% agree that reproducibility is the primary function of the Methods section. However, studies of the way scientists actually write (e.g., Swales 1990, Gross et al. 2002) find that few published papers come close to this level of detail. We tell each other one thing, but we do something quite different. And it turns out this question of what the Methods section is for, and therefore what ought to go in it, has been causing angst among scientists for as long as we’ve been writing science. It’s part of a larger and still unsettled question about how scientific knowledge gains authority.
Scientists working at the birth of modern scientific communication, in the 17th century, belonged to the intellectual tradition of the European Renaissance. Renaissance thinkers rejected the older Medieval (or Scholastic) emphasis on learning from earlier texts, believing instead that learning should come from empirical observation. The problem was that as science progressed, this put scientists in an increasingly awkward position: it became more and more obvious that further progress could only come from one scientist building on results reported by others. So how could those reports earn authority?
The famous natural philosopher Robert Boyle grappled with this question in the middle of the 1600s, and his answer had three elements (Shapin 1984). First, Boyle gave exhaustive detail of equipment, material, and procedures, so that readers could (at least in principle) reproduce his experiments. Second, he argued for “communal witnessing”: if results were to have authority, experiments should be witnessed – so Boyle conducted many of his key experiments in public, and published the names and qualifications of witnessing scientists along with his results. Third, Boyle described in exhaustive detail not just his methods, but his experiments’ circumstances and settings, his false starts and failures, and much else. For example, to accompany his reports of experiments using his famous vacuum pump, he provided an illustration (above) of the pump. Not, importantly, of a vacuum pump, but of the vacuum pump he used, complete with irregularities, dents, and dings. The point of all this description was to make readers feel as if they had been there – to recruit readers as “virtual witnesses” – and this is why 17th- and 18th-century scientific texts often have a charmingly narrative feel. My favourite example is Pierre-Louis de Maupertuis’ (1737) account of an Arctic expedition to measure Earth’s shape. He spends many pages relating the excitement and hardships of his travels: among other things, the midnight sun in Finland, the assaults of biting flies, techniques for defence against kicking reindeer, and cold that left only his brandy unfrozen to drink.
The thing is, none of Boyle’s three answers to the authority problem really worked. Boyle himself conceded that his experiments were rarely repeated. And of course, if every study were repeated just once, the gross rate of scientific accomplishment would presumably be halved. Communal witnessing was cumbersome even when science was a hobby of a few gentlemen of leisure, and became hopelessly inefficient as our enterprise grew. Finally, virtual witnessing was a rhetorical device, not a logical one.
Around the middle of the 19th century, the professionalization of science led to a new kind of authority. Work began to be considered reliable not because it was replicated, witnessed, or detailed, but because it was done by someone belonging to a community of established and credentialed scientists. The historian Steven Turner suggests that science had developed “a deeply-rooted ideology of honesty and accuracy that helped ensure…trust” (pers. comm.). In the 20th century, this professionalism was supplemented by peer review, and the function of the Methods began to include convincing experts that an author was using appropriate methods that made the results plausible. Both reviewers and readers made, and make, these plausibility judgements, nearly always without actually attempting to replicate the work.
Where do we stand today? In modern science, replicability and witnessing both survive, but I think their role lies largely in testing extraordinary claims like cold fusion or the supposed “memory” of hyperdiluted water. That professionalism is the major grounds for authority explains why scientific fraud is always shocking, and why it’s often slow to be discovered (Diederik Stapel, for instance, falsified data for at least 55 psychology papers before being caught). It also explains why, no matter how much we tell each other our science should be reproducible, we rarely reproduce it.
If scientific results aren’t routinely verified by repetition, how are they verified? Many never are (you can think of this as our collective shrug about their importance). But when verification comes, it comes because a study’s results prove consistent with those of other, different studies, and because other scientists are able to build further understanding on top of them. That fraud remains relatively rare, and seldom distorts our understanding for long, suggests that the rarity of repetition isn’t actually a major handicap to the progress of science.
All this suggests that to claim a paper’s Methods section is about reproducibility is to misunderstand both the history and the process of science. A Methods section is really about establishing the credibility of your approach, and thus giving readers a reason to believe your findings. (In addition, the Methods tell readers what they need to know about your procedures if they are to understand the Results.) Since the vast majority of your readers will never try to reproduce your work, filling the Methods section with the detail they would need to do so is unnecessary – and worse, it abuses your readers’ limited time and patience.
Does this mean we shouldn’t encourage reproducibility? Of course not; calls for greater reproducibility have led to some very good things, such as the increased archiving of raw data and posting of source code. And by all means report methods in great detail – but place that detail in an online supplement where it can be conveniently ignored. After all, most readers are better served by ignoring it.
And don’t be stressed if, in fact, your work is never precisely reproduced. It’s still science, and that’s been true for 400 years.
© Stephen Heard (sheard@unb.ca) Feb 27 2015
This post is based on material from The Scientist’s Guide to Writing, my guidebook for scientific writers. You can learn more about it here.
Since it is entirely possible to reproduce results without having to slavishly copy the methods, it should not be necessary to specify every tiny step. I would argue that reproducing the results using a different method advances a field further than just doing exactly the same thing over again.
There has to be an assumption that anyone trying to do similar work has the necessary technical skills; otherwise, every description of a dilution series would have to include a method for filling a volumetric flask to the line. Indeed, methods sections that include procedural details that should be part of the standard skill set, or that make absolutely no difference to the final result, are a dead giveaway that the researchers did not really understand what they were up to.
Good point – agreed!
I disagree. Sure, there are oftentimes many ways to reach the same desired product, but science-based businesses depend on getting things done with the greatest efficiency. It would be far better to find reproducible literature preps for a chemical synthesis than to recreate one from scratch. Unfortunately, literature preps are too often poorly written, and sometimes, I suspect, deceptively written. I’ve lost count of the number of papers where the literature prep turned out to be total hogwash. I work in industry as a professional biochemist, and incorrect methods sections have cost my company time and money.
Robert Boyle’s extremely detailed descriptions are over the top, but there is no reason to write a literature prep that can’t be reproduced by a person with the requisite knowledge and skill.
I agree that it is very frustrating when a literature method fails to deliver. But I am arguing that specifying every tiny step is unnecessary, not that critical information should be omitted. I assume that you do not expect a literature method to mimic the step-by-step detail and elaborate cross-referencing to basic procedures that is typical of in-house protocols and SOPs?
And if you do find a literature synthesis that works as written, how does your copying it advance the science of organic synthesis?
Your whole book is this awesome, right? It’s not like one of those movies where all the good bits are in the trailer, is it? 🙂
Well, you’ll just have to buy a ticket to find out, won’t you? (grin)
Timely piece, and one that I will cite when I get around to writing my post about poor methods and materials sections in some journals. I look forward to the book, too!
Fascinating post. You will have noticed a contemporary shift: many journals now place the methods section after the discussion, instead of after the introduction. I think this illustrates that the methods section is being treated more as an appendix, like the reference section. While this seems to work fine for the issue of reproducibility, it tends to cause trouble for the issue of understanding. Sometimes the methods section is needed for the reader to understand what the heck was done, and in cases like that the methods section needs to go first. I think journals should offer authors a choice. It would also be interesting to have a split section, with conceptual methods up front and details behind. I have never seen that (though I have dumped some key stuff into the first paragraph of the results when dealing with a methods-as-appendix format).
I disagree with the premise of your post. Perhaps experiments are never reproduced exactly, but having a detailed methods section helps people follow up on your work and figure out why their results are different from yours. There are tons of seemingly trivial details that may influence the outcome of a study in dramatic ways. I, for one, hate papers with methods written as an afterthought with the assumption that the reader will automatically know “the right way” to do an experiment. As for your statement that methods “abuse the reader’s time and patience” – the reader can just skip the fragments they are not interested in. That’s why this section has subsections on specific methods.
Pawel – I take your point, but I think there is a real risk in presenting too much detail. While a reader can certainly skip fragments they are not interested in, I think it’s the writer’s job to define the story to be told, and to present it clearly to the reader. But which reader? If the reader you write for is the very rare one who wants to replicate, you force the vast majority of your readers to essentially write “their” version of your Methods for you. Sure, a reader can do that by skipping fragments – but they may also just give up in frustration, and skip your over-detailed paper entirely!
Note that I’m not saying one can’t include such detail. Just please put it in an online supplement, or something of that sort.
The casual reader usually skips the Methods section entirely, so it’s not a problem for them. The more careful reader will want to know the nitty-gritty of your methods – they are as important to them as the results or discussion. It really takes a minute or two to scan through a detailed methods section to extract the detail you want. It takes hours or days to figure out how an experiment was done if the methods are not detailed enough. The problem with the supplement is that supplements are treated as “bastard children” by the authors – they are poorly formatted and sloppily written, and often ignored by the busy reviewer, so in the end you cannot extract the methods from them anyway. Most journals will force you to put the extra methodology in the supplement, anyway, due to size restrictions, but I think it’s a bad move.
I agree whole-heartedly, Pawel! I have never attempted to reproduce an experiment, but often wish to use a particular technique or protocol from a published paper. Almost invariably (and presumably for space reasons), the ‘methods’ provide only the sketchiest outline, almost guaranteeing that it takes quite a number of false starts to reinvent the wheel and figure out how to get decent results. I haven’t yet run into a case of fraud, but when you can’t get a published ‘protocol’ to work for some unknown reason, the distinction becomes kind of vague. It’s been a running gag here that this effect is deliberate on the part of the author(s), in an attempt to keep anyone else from overtaking them in their particular field of study, although I think that may be pushing it a bit. I should say that the obvious solution is simply to get in touch with the author(s) and get the protocol from the horse’s mouth. Sometimes that even works.
I sometimes find methods sections hard to follow as well. It’s hard to feel confident in something, or to reproduce it, if the wording isn’t clear in the first place. I wish more methods sections included figures to make the setup immediately understandable.