Image: “It was a dark and stormy night…”, from Edward George Bulwer-Lytton’s novel Paul Clifford (1830). Check out similarly wretched prose at the Bulwer-Lytton Fiction Contest.
I’ve been prepping recently for two different writing workshops: one on my home campus, and another halfway across the continent at the University of Wyoming. A funny thing happens when you write a book about scientific writing: people infer from that authorship that you know things about writing, and even that you’re good at it. I’ve come to accept the first half of that, although not the second.
I’m certainly a better writer than I once was. (Writing The Scientist’s Guide to Writing helped me improve quite a bit; I can only hope that reading it has a similarly salubrious effect.) There’s nothing unusual about my improvement: all of us learn to write better as we practice the craft. And that means we get to look back and cringe at the offenses we’ve committed in the past. For the early-career writer, though, the improvement ahead isn’t always obvious. I distinctly remember thinking with some despair that every faculty member in my PhD department was a dramatically better writer than I was. Well, sure – each of them had had years to learn from their mistakes (and all the other ways one learns to write). I had only just started to make mine! I see that now, with 20/20 hindsight, but I didn’t see it then. In that spirit, then – and in the knowledge that people really, really like blog posts about the stupid things I’ve done – what’s the worst writing sin I’ve ever committed?
I’d love it if my worst writing sin were in one of my first published papers. That would suggest that I improved smoothly and rapidly, with adeptness (if not excellence) achieved relatively early in my career. Oops. My worst writing sin wasn’t committed when I was a grad student or a postdoc. It was committed when I was already a year or so into a faculty job (in a collaboration with one of my first grad students, Lynne Remer, who should not be blamed in any way for the horrors unleashed).
Here it is. It’s this table (from Heard and Remer 1997, American Naturalist 150:744-770):
You don’t have to know much about the science in this table to see why it’s so awful*. All you have to know is that we’re interested in how the entries (coexistence times) depend on the row and column labels (clutch sizes of two competing species). We’re interested in that – but we’d have to work unreasonably hard to extract any actual information about it. What on earth was I thinking?
This atrocity of a table is actually a good illustration of what tables do well, and what they do poorly.
- Tables are very good at enabling lookup of exact numbers (as in a table of molecular weights, P values, or similar). Too bad that’s not what I was after with this one!
- Tables are rather poor for conveying trends or patterns. They ask a reader to compare quantities by reading some numbers and deciding which are larger. That’s not a hugely difficult task for two or three numbers, but it’s very inefficient for more (compared to, say, discerning the slope of a line on a graph). So for example, it’s important to our paper’s story that the numbers along the main diagonal increase to the lower right – but we sure made it hard for anyone to see that.
- Tables are much better for comparing entries up and down columns than they are for comparing entries across rows. They’re even worse for comparisons across rows when the numbers to be compared are interrupted by other numbers (the ones in parentheses, which are standard errors). It’s hard to see how I could have concealed the horizontal trends more effectively.
- Since tables rely on actual reading of a number, they’re vulnerable to the inclusion of too much precision. Why three significant digits in my entries? The third only makes it harder to compare entries. (It could have been worse: I’m sure I had all these numbers to four digits, and we’ve all seen statistical tables listing eight.)
- Tables resist colour- and pattern-coding of distinctions. It’s very easy in a figure to show solid vs. hatched lines, or red vs. blue dots. In a table, the equivalent toolbox is very limited. Is it easy to tell the underlined entries from the regular ones? I must have thought so, at the time; but I was wrong.
This doesn’t mean you should never use tables in scientific writing. It just means I shouldn’t have used this one. As with any other decision you make in writing, the key is asking two questions: What do I want to communicate here? How can I make that easiest for a reader to see? I think I’d answered the first question. I certainly hadn’t answered the second. Not well, anyway.
So, there you have it: my worst writing atrocity (so far, at least). Is anyone willing to share theirs?
© Stephen Heard November 5, 2019
*But if you care: it’s from a paper I’m proud of (and I hope Lynne is too; she certainly should be). Lynne and I asked how competition between two insects using a system of ephemeral resource patches might be affected by female clutch-laying behaviour (grouping eggs into a few large clutches vs. many small ones). We were able to show that clutch laying is strongly stabilizing, and – most intriguingly – that for plausible models of adaptive female behaviour, competitors can coexist most easily not when resources are abundant, but when resources are scarce. This is still one of the neatest things I’ve ever discovered!