Yes, good writing matters: empirical evidence!

I’ve devoted a lot of time and effort, over the last decade or so, to writing about good writing. There’s The Scientist’s Guide to Writing, of course; there’s our recent preprint on the construction of good titles; there are dozens of posts here on Scientist Sees Squirrel; and I can neither confirm nor deny rumours of another currently-super-top-secret book project. And this doesn’t even count the innumerable hours I spend toiling to improve my own writing, and to mentor my students towards improving their own.

Does any of this matter?

It seems obvious that it should, doesn’t it? Surely people would rather read well-written papers than cryptic or tedious ones? Surely, then, those well-written papers would have more impact on the progress of science (and the progress of their writers’ careers)? And of course I’m not the only one preaching, and working for, good writing – there are many other books, blogs, podcasts, and other resources all devoted to helping people write better.

Surprisingly, though, what literature there is on the matter is decidedly mixed. A number of folks have attempted to measure the quality of writing for a bunch of papers and associate it with their citation impacts. But these efforts haven’t given a clear story: sometimes “better” writing is associated with higher impact, sometimes with lower impact, and sometimes there’s no pattern at all.* The problem is, this literature is entirely observational. Yes, you can infer process from pattern; but you have to do so carefully. Good writing is likely to be confounded with many other things, so raw correlations of writing quality with citation record are probably not that helpful.**

Wouldn’t it be great, then, if we could do a randomized, controlled experiment in which we compare papers with better vs. worse writing that are otherwise identical?  Yes, it would – and that’s what a new preprint from Jan Feld, Corinna Lines, and Libby Ross (an economist and two professional editors) does. It’s a clever study; I’ll summarize, but you should read the whole thing.

Feld et al. started with 30 papers written by economics PhD students. They then had professional editors revise the writing (but not the content) of each, so they had original and writing-improved versions. They checked that the edited versions really were “better written” by having a panel of writing experts judge them (each expert judged 5 original and 5 edited papers, but none saw both versions of a single paper and none knew about the editing intervention).*** Sure enough, the edited versions were judged better written; that is, the experimental treatment “worked”.

But what did economics readers think? Feld et al. recruited a second panel, this time of disciplinary experts rather than writing experts, and asked them to judge the academic quality of the papers. Again, nobody saw both versions of one paper, and nobody knew about the editing intervention. And here’s the payoff: they rated the better-written papers as superior academically (admittedly, not by a lot – about 0.4 points on an 11-point scale).**** Better writing helped sell the academic content of the papers.

Yes, good writing matters – and I can sigh with relief, having not wasted the time I’ve invested in trying to write better, and to help other scientists write better, too. You’ve probably made – you’re probably making – similar investments. If so, you can sigh right along with me.

© Stephen Heard  April 12, 2022

Image: Not good writing. “It was a dark and stormy night…”, from Edward George Bulwer-Lytton’s novel Paul Clifford (1830).  Check out similarly wretched prose at the Bulwer-Lytton Fiction Contest.


*^There’s a brief summary of this literature in the Introduction to Feld et al. 2022 (about which, more in a moment).

**^We faced exactly this problem in our recent study of humour in paper titles – we discovered that people give funnier titles to the papers that they themselves subsequently cite less – that is, their less important ones. As a result, the raw correlation of title humour with citation impact is negative; but after correcting for the confound, the actual effect of humour on citation impact is positive. You should read our preprint!
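
(If it helps to see the confounding logic at work, here’s a minimal simulation sketch – mine, not from either preprint, with entirely made-up numbers – of how an unobserved confound can flip the sign of a raw correlation:

```python
# Toy simulation (illustrative only): "importance" is an unobserved
# confound that makes papers both less funny-titled and more cited.
import numpy as np

rng = np.random.default_rng(42)
n = 5000

importance = rng.normal(size=n)                  # unobserved paper importance
humour = -0.8 * importance + rng.normal(size=n)  # less important papers get funnier titles
citations = 2.0 * importance + 0.3 * humour + rng.normal(size=n)  # humour's direct effect is +0.3

# Raw correlation of humour with citations: negative, thanks to the confound.
print(np.corrcoef(humour, citations)[0, 1])

# Regressing citations on humour while adjusting for importance
# recovers the positive direct effect (~0.3).
X = np.column_stack([np.ones(n), humour, importance])
beta, *_ = np.linalg.lstsq(X, citations, rcond=None)
print(beta[1])
```

The real analyses are more involved, of course, but the sign-flip logic is the same.)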

***^Yes, this had human-subjects research approval. You knew that was a thing, right?

****^Is it a perfect study? Is any study perfect? I have a few minor quibbles. Both the writing-quality and academic-quality judgements were done quickly – in about 5 minutes per paper. I think that’s reasonable for writing quality; but it’s pretty superficial for academic quality. Feld et al. suggest that’s the kind of time someone might put into deciding whether to accept a paper for a conference, or whether to desk-reject it at a journal. I hope this is wrong. And there was some heterogeneity in the kinds of papers involved (micro- vs. macro-economics, empirical vs. theoretical) that fit a bit awkwardly into the way papers were grouped for scoring. But overall, this seems to me a strong paper, and I believe the results. Hopefully not just because I want to.

6 thoughts on “Yes, good writing matters: empirical evidence!”

  1. Jason Bosch

    I would expect that writing quality would have to be pretty bad before it affects citations. If the paper is on a topic that is important to my research, I will read it. And if it’s badly written, I will grind my teeth but still push through and read it. If I can’t stand the writing, I might not focus enough to really understand it (and I can think of a paper I recently read like that).

    To me, the importance of better writing is not that it will be better cited or even communicate the science better. It’s just that it will be more pleasant and not ruin my day. A badly written paper will drain my energy whereas a well-written paper might leave me feeling excited. That’s worthwhile even if it’s not necessarily a benefit to the paper itself.

    1. ScientistSeesSquirrel Post author

      While I totally agree that better writing makes my life more pleasant, the empirical data seem to conflict with your “expectation”! Which is of course why we have empirical data. Consider the study I’m commenting on here (granted, this considers perceived quality and probability of publication, not citation rate directly, although it’s hard to be cited if you aren’t published); or consider our recent preprint on the effects of title humour on citation rates (https://scientistseessquirrel.wordpress.com/2022/03/22/do-funny-titles-increase-or-decrease-the-impact-of-scientific-papers-new-preprint/).

      More generally, it’s an interesting phenomenon that folks (not just you, I mean!!) are happy to let intuition guide them on points about writing, while they would presumably never do that on points about statistics or environmental impact. (OK, now that I’ve written that, I realize that “presumably” is doing a LOT of very hard work in that sentence…)

      1. Jason Bosch

        I’m a bit confused now. I’m happy to go along with the empirical evidence but the examples you bring up in your reply are not about citation rates and writing quality. Perceived quality may affect citation rates but, as per my comment, I would expect the effect to be minor. The title work is interesting but humour in a title does not necessarily mean better writing in the paper. In fact, my comment was based on what you wrote in the post about how there is no consensus on writing quality and citations: ‘Sometimes “better” writing is associated with higher impact, sometimes with lesser impact, and sometimes there’s no pattern at all.’

        Like

        Reply
        1. ScientistSeesSquirrel Post author

          You’re quite right, title humour isn’t about “quality”; I meant that more as another example of how aspects of writing do, empirically, affect citation. Confusing, sorry! But the sentence you quote from the post has to do with the non-experimental literature, and given the likelihood of strong confounds, my point (and that of the preprint authors) is that the controlled experiment should reveal effects that those observational studies may not have been able to detect.
          But given that the preprint authors demonstrate an effect on (perceived) probability of acceptance for publication, I’m not sure how you can NOT think that would generate a citation effect. I think it takes some mental gymnastics to believe that unpublished papers will be cited as much as published ones 🙂

          1. Jason Bosch

            That’s not quite an accurate description of my views. I have no problem with the better writing being seen as improved quality, but the experimental approach is not directly measuring citations. The improved quality may lead to increased citations, but that isn’t what has been shown yet.

            I don’t think that the better writing would lead to an increased citation effect for two reasons.
            1) Unpublished papers will obviously never get cited but it would be wrong to assume that a lower probability of acceptance means a paper doesn’t get published (or even a lower publication rate). I have never heard of someone submitting a paper, getting a rejection and then just giving up on publishing that paper. I don’t have any evidence but I suspect if someone has decided to submit a paper they will just keep submitting to journals until it is eventually accepted.
            2) If it is published, the citations should reflect the relevance of the scientific content, not the quality of the writing. If the paper is poorly written but the science is relevant and important, it will be cited. Even if there were a poorly-written and a well-written paper on the same topic, I would assume most authors would cite both.

            (It’s entirely possible those papers would be submitted in less popular or worse journals and that would affect readership and citations but, as you point out, 0.4 points on an 11-point scale is not dramatic.)

