Negative-news bias and “the disaster that is peer review”

Peer review is a dumpster fire, right?  At least, that’s what I hear – and there’s a reason for that.

Last month, I got reviews back on my latest paper.  Opening that particular email always makes me both excited and depressed, and this one ran true to form: a nicely complimentary opening from the editor and Reviewer 1 – followed by several pages of detailed critiques from Reviewer 2 – and Reviewer 3 – and, believe it or not, Reviewer 4. 

As usual, there were comments I immediately agreed with, and comments I immediately took umbrage at.  As I always do, I set the reviews aside for a couple of days so I could reconsider the latter kind.  But when I pulled them back out and got to work, and as my “Response to Reviews” document* grew, I felt the umbrage rising again. And I gave in to temptation.

In particular, I gave in after reading a comment from Reviewer 3, who said they were “concerned” that our hypothesis might not hold.  In high dudgeon, I tweeted:

People liked this tweet.  They really, really liked it.  Seeing the Twitter “likes” accumulate, at first I felt validated.  People agreed with me! That reviewer comment really was dumb!**  But as the likes built up, along with replies and retweets using phrases like “the disaster that is peer review”, I started to feel a little guilty.  I try to stay pretty positive on Twitter, and here I’d given into the urge to complain – and that complaint was rapidly turning into one of my most popular tweets ever.

But then it occurred to me that maybe the blame wasn’t entirely mine.  I mean, I did the tweeting; but I didn’t do the liking.  Why was this cranky tweet so popular?

Well, quite possibly, because people like crankiness, and they like thinking things are dumpster fires.  This negative-news bias is a well-known phenomenon, in psychology and in business: bad news sells newspapers***.  But I had inadvertently created an opportunity to measure the bias, as it applies to peer review.  I just needed a control and a fair comparison.  So after exactly 48 hours, I took a screenshot of my tweet’s likes and retweets.  And exactly one week later, I tweeted admiringly about the same peer review:

I matched the timing (Sunday afternoons) and I did the best I could to match the original in structure and wording – leaving the only difference the tone.  After the same 48-hour interval, here are my results: Bad-news tweet: 641 likes and 74 retweets.  Good-news tweet: 122 likes, and just 7 retweets.

Twitter has spoken, and it isn’t pretty: we celebrate the idea that peer review is the ravings of idiots, and shrug at the suggestion that it might actually be helpful. (Granted, I have a sample size of two, but I’m an ecologist, and we rather like “two”.  There’s a nice little paper in it for anyone who wants to pick up the torch and do this properly.)

So it’s hardly surprising that peer review gets a bad reputation.  We give it that reputation by our selective attention to the times it gets things wrong.  Of course it gets things wrong: it’s done by humans, and humans – all humans – are fallible.  But it also, far more often, gets things right.  We just don’t trumpet those from the rooftops.  Who wants to stand at the water cooler telling everyone about how Reviewer 2 was so professional and polite and helpful?

So how do we counter the negative-news distortion of peer review (and everything else)?  Well, we need to remind ourselves, frequently, that negative-news bias exists.  We shouldn’t ignore bad news; bad news is real, and the world’s imperfections are important.  But we also shouldn’t wallow in those imperfections, and we should recognize the good that exists alongside the bad.  A lot of good, actually; far more than you hear about.  And we ought to be countering our negative-news bias by consciously deciding to share good news stories just as much as we do bad – or perhaps, sharing them even more.

This matters.  If we tell each other things are terrible, before long we’ll believe it – and not only that, we’ll have made it true.  I’m sorry I was a part of that, and I’m here to call myself out.  Yes, my reviewer said one dumb thing; but my paper will be greatly improved by their effort regardless.  Why did I think it was so clever to shine my spotlight on the bad?

 © Stephen Heard  October 16, 2017

*The “response to reviews” document is extremely important – and often mysterious to early-career writers.  As an editor, I can tell you that a well-crafted response to reviews can have me mostly convinced to accept a revision even before I’ve read it.  I cover the response to reviews at length in Chapter 24 of The Scientist’s Guide to Writing.

**It is dumb. In fact, as an objection to a piece of science, it’s so jaw-droppingly idiotic that the reviewer obviously had to mean something else.  I just couldn’t quite tell what, and I vented rather than working hard to figure it out.

***For my readers under age 40: “newspapers” are called that because they were once printed on paper.  And vendors “sold” them; or in other words, people paid money for them.  How quaint!  The world has changed – although bad news is still with us.

11 thoughts on “Negative-news bias and ‘the disaster that is peer review’”

  1. amlees

The reason I hit like on your first tweet (and I did it just now, before I finished reading the blog) is because it is a comment about the nature of science. Also, there is a touch of irony, and it’s a witty tweet.

    I read the second tweet while looking for the first tweet and I was merely confused by the concept of your not understanding the words ‘exotic’ and ‘invasive’. I have only read a few of your blog posts, but I had you pegged as someone who would know exactly what those words meant. The state of confusion this left me in meant that the ‘pro-review’ message was lost on me, so I didn’t hit like.


    1. ScientistSeesSquirrel Post author

      Yeah, I can certainly see your point – I had some trouble, and was only partly successful, in constructing a parallel positive tweet. (By the way, you’re right, I know exactly what those words mean, but I still messed them up in the manuscript!) That’s why I suggest in passing that there’s a nice science-studies research project in doing the comparison properly. I guess it won’t be me though…


  2. Manu Saunders

    Agree with amlees, I liked your first tweet at the time because of the science aspect rather than the peer review issue. But this post got me thinking – I tend to avoid liking other people’s tweets whingeing about reviewers, even if I agree, and I tend to avoid whingeing myself, for the reason that I actually love peer review and think it is one of the most valuable and rewarding collaborative processes in science. I think the tired old ‘peer review is broken’ argument is damaging to science, as it encourages newer researchers to think that peer review is a waste of time, and it potentially sends a confused message to the broader public – how do we expect people to trust peer-reviewed science when we constantly tell them the system is broken and reviewers don’t know what they’re talking about? Individuals make it frustrating, but the model of the system itself is sound. A couple of times I have been so frustrated, like you, that I resorted to tweeting about it – but I didn’t get anywhere near as many likes & engagements as you did and I kind of regretted it. This is something I’ve noticed with my tweets generally, whether I’m talking about agriculture, pollinators or peer review, I usually get much less engagement on ‘negative’ tweets than I do on positive or neutral tweets. Which is good, because it makes me focus more on being positive! 🙂


    1. ScientistSeesSquirrel Post author

That your anecdata conflict with my anecdata definitely makes it seem like someone should do a proper study! Not that negative-news bias isn’t a real thing – it is, there’s tons of literature – but I agree with you that its existence, and measurement, in the peer-review context is an important question.


  3. Pavel Dodonov

    Agreeing with amlees and Manu – I think your positive control wasn’t really a control, as confusing the meaning of two words can be a simple oversight, whereas being concerned about a hypothesis being wrong is really, really weird. Well, unless the concern is that the hypothesis is logically invalid – any chance the reviewer meant this? A better positive control would be a reviewer noticing that the result is likely a statistical artifact or that the conclusion is not really supported by the results…
    Still, I hear a lot more complaints about reviewers than praise for them (though I read slightly more praise than complaints). I wonder how one could perform a study on it…


    1. ScientistSeesSquirrel Post author

      Yes, I’d like a better control too. I mean, in the field, confusing those two words is a huge boneheaded error – but that wouldn’t necessarily be obvious to people outside invasion ecology. So that’s why, as you noticed, I suggest one could/should do a better study. But not me, and not for a single blog post! Social scientists, especially in the field of science studies, would be well equipped to do such a thing.


