I sometimes get very upset with folks who hold strong opinions without data underneath them. I will, however, admit that when it comes to font choice, I am one of those people. In particular, I have strong opinions about how bizarre it is when people choose sans-serif fonts for writing documents.* Every time one of my students sends me a thesis chapter in Calibri, I grimace, grumble, and change the font – but I also find myself wondering why this choice has become so common when it’s just clearly wrong (tongue partly in cheek there, but only partly). I was pleased, therefore, to find a completely fascinating recent paper on people’s preference for, and performance reading, different fonts. There’s a lot of meat in it, but I’ll dip in today just to the part that grabbed me first: implications for serifs.
Here’s the conventional wisdom: sans-serif fonts are easier to read at a glance, when there are only a few words. This explains their use on highway signs and the like. Serif fonts help draw a reader’s eye along a line of text, and are therefore easier to read in long blocks of text. This explains why virtually every professionally published book uses a serif font. (Serifs, for anyone who is less of a font nerd than me, are those little horizontal bits at the bases and tops of letter strokes. But if you’re looking for them here you’ll come up empty. WordPress uses a sans-serif font by default, and I’m sure there’s a way I can fix that, and I would be less annoyed if I knew what it was.**)
So I know why I believe one shouldn’t use sans-serif fonts for long reading tasks. But to be honest, I don’t know whether I’m actually right to believe it. There’s literature on this – it’s just a rabbit hole that for some reason I haven’t gone down.
Which brings me to Wallace et al. (2022). They asked a simple question that turns out to have a very-far-from-simple answer: do people read faster and understand better with text in particular fonts, and if so, are the fonts they prefer the ones that let them perform best? They addressed this with standardized reading-and-comprehension tasks that varied fonts and tracked performance. If you’re a font nerd, you should definitely read the paper, because there are lots of moving parts (like careful standardization of font sizes, which is not nearly as easy as you might think). But here are two key results – each of them completely fascinating.
First: people’s font preferences don’t correlate at all with their reading speed or comprehension (Wallace et al.’s Figure 10). People have strong opinions about what fonts they like – and often, about what fonts they find easier to read. And about the second half of that, they are generally wrong. I could poke fun at that, but it raises an obvious possibility: maybe I’m wrong about serifs! So, on to point 2…
Second, serifs do seem to make a difference – but not as much as I would have expected, and only for some readers. Now, the experiment wasn’t well designed to ask about serifs in particular. There were 16 different fonts, but only 3 had serifs – the three I’ve marked with stars in this version of Wallace et al.’s Figure 11:
Fonts are arranged from top to bottom by reading-speed performance for older (over 35) readers. So the three serif fonts included the very best font, two middling fonts, and no bad fonts. BUT: it’s clear that other things explain much more variance than serifs do; and for younger (under 35) readers there’s not much hint that fonts make any difference at all!
Now, before you suggest we can ditch serifs completely, it’s worth pointing out at least three reasons why the study design would be expected to underestimate their value. First, all reading was done on screen, while serif fonts are traditionally associated with reading on paper. Second, while the authors don’t specify the line spacing used in presenting texts, from the samples in Figure 6 it looks generous (see footnote** again). And third, the text samples presented were quite short (300-500 words), while any advantage of serifs should accumulate in longer passages. Still: my beliefs have been, if not upended, at least jostled a bit.
It’s really interesting that younger readers show no effect of font on performance. My first thought was that these are people whose reading postdates sans-serif fonts becoming word-processor defaults, and so we’re seeing an effect of familiarity. But if that were the case, we’d expect Calibri (that ubiquitous modern default) to show high performance – and it doesn’t. I think I can spin this result to satisfy whatever your own generational prejudice might be. Perhaps younger readers have acquired the skill of reading a wider variety of fonts (while older readers haven’t). Or perhaps younger readers have failed to acquire the skill of taking advantage of serif fonts to ease reading. Take your pick – but feel free to sniff haughtily about whichever hypothesis you choose.
So what have we learned? Well, we have yet more evidence that just because someone has a strong opinion about something, it doesn’t mean they have any data, or that they’re right. I’m always fascinated when the “someone” in question is a scientist, given how we’re supposed to feel about evidence. I’m a little bit queasy at the realization that in this particular case, the “someone” in question may be me.
© Stephen Heard May 24, 2022
Image: serif/sans-serif/nerd, own work CC BY 4.0; reading data, Figure 11 from Wallace et al., © 2022 Association for Computing Machinery, stars my additions, reproduced here as fair use.
*^I blame Apple for this, specifically via its choice of a sans-serif default word-processing font. This was surely an attempt to make PC-using folks feel, or at least look, unhip with their stodgy last-century Times New Roman. The spread of sans-serif fonts across other word-processing platforms – Calibri is now the default in Microsoft Word no matter where you use it – suggests that this attempt worked. Yes, I’m grumpy about this. Why do you ask?
**^At least I can compensate with generous line spacing, which should have a similar effect of easing eye movement along lines of text.