Warning: header image captures this post pretty well.
Should peer review be open and transparent? Sounds appealing, doesn’t it? Who’d want to go on record as saying anything shouldn’t be made more open and transparent? Well, I’ll give it a go, because I’ve recently declined to review two manuscripts that looked interesting, for a reason that’s entirely new to me.* In both cases, the journals specified that by agreeing to review, I was consenting for my reviewer comments, and the authors’ response, to be published as a supplementary file with the paper. Sorry – I’m not having any part of that.
I guess I should start here: it’s not that I want the freedom to be nasty and unfair in private, anonymous reviews. Since I sign almost all of my reviews, if I need a reason not to be an ass while reviewing, I already have it.
So why is it that I’m willing for authors and editors to see my (signed) review, but unwilling to have other scientists see it (even unsigned)? In short, I can’t figure out what problem it solves, and I can think of new problems it introduces. And I have the same (negative) reaction to it as a reviewer, as an author, and as a reader.
First, as a reviewer. One major theme in my guidebook for scientific writers is that how you write a document depends on its audience – who they are, what they already know, and what you want them to learn. When I write a review, my audience is the authors and (to a lesser extent) the editor. That means I’m writing for an audience with lots of knowledge about the MS and its subject, and what I want them to learn is how to make it better. Adding in readers of the published paper as an audience changes everything! If I write the same review I normally would, those readers won’t understand it easily, if at all. (As a trivial example, consider the part where I write “what you say at line 173 seems to contradict line 67” – but you’ll be able to think of less trivial examples without much trouble.) Of course, I could write my review as an entirely different document that gives the “readers of the paper” audience what they need. However, that would be a much bigger job, and if I consent to do that, then I’ll agree to review only about 1/3 of the manuscripts I currently do. That’s a problem, of course – not because I’m personally indispensable (I’m not!), but because our publishing system is already drawing the available well of reviewers pretty close to dry. And if I put in that extra effort, what improvement in the progress of science will result? What evidence is there that readers of papers are hungering to read yet more supplementary material? What bad things have been happening in science that will be fixed by a handful of readers (and I suspect ‘handful’ is rather generous) being able to follow up their reading of a paper with readings of the original submitted version, then of the peer reviews, then of the authors’ response? And if these kinds of archives are really important, why don’t we ask for, and publish, every version of the paper as it goes through self- and friendly revision too? And every version that was previously submitted and revised at other journals?**
Second, as an author. If the reviews are published and so is my Response, the same multiple-audiences issue rears its head. I’ll have a choice: either write the Response as usual for editors and reviewers, and have it not be very useful to broader readers; or write for its new audience at the cost of effort I’d rather put into another paper. Plus, I’m grateful for the way reviewers help me find and improve weaknesses in my paper. Why would I want readers to see the pre-improvement version? If you’re like me, you can think of an embarrassing blunder or two a reviewer has saved you from. Thanks, I’ll keep my trash bin out of my Zoom background….
Third, as a reader. Honestly, as a reader I don’t care – or rather, I don’t care directly, or much. I can’t imagine a case where I’m likely to dig into the supplements to read the reviews, and the response, and the original version – except perhaps to indulge in a little shameful schadenfreude. But I am wary of the trend to more and more and more supplemental material. It inevitably either consumes more of our limited publishing resources, or dilutes those resources (including attention paid by authors, editors, typesetters, and so on). Without conviction that it makes my overall reading experience any better, I’m left assuming that it will make it, in some admittedly fuzzy and modest way, worse.
Because I sort of enjoy arguing with myself, let me close with my best case in favour of open peer review. That case is that unlike most of the other writing forms we practice, peer review comes without an easily available corpus of examples. I can tell my students to find examples of papers they find useful and readable, and to emulate them. I can’t tell them that about peer reviews (or the Response to Reviews). Open peer review would, I admit, solve that problem – although it’s peculiar that I’ve never once heard an advocate offer this as an argument. Perhaps they realize that although this would solve the problem, it would be a spectacularly inefficient way to solve it.***
It’s your chance now: use the Replies to convince me I’m wrong. Until someone succeeds, though, journals that require reviewers to consent to publication of reviews won’t be getting reviews from me. Which is OK, really – there are a ton of journals after me (just as there are after you, I’m sure). Filling my available reviewing hours isn’t ever going to be a problem.
© Stephen Heard June 29, 2021
*I decline a lot more because the journals request unreasonably fast turnaround from their reviewers. As you’ll know if you’ve been hanging around here, I routinely refuse any review request with a deadline shorter than three weeks.
**Some of you may (correctly) diagnose this last bit as a slippery-slope argument. Oddly, labeling an argument “slippery slope” is widely taken as a reason to disregard it. Really, though, slippery-slope arguments are some of the most important ones. That’s because the slipperiness of the slope indicates that it isn’t easy to come up with a defensible answer. Some of our most important decisions involve how and where to stop near, or on, a slippery slope.
***There are, by the way, some excellent guides to writing peer reviews. There’s some coverage in The Scientist’s Guide to Writing, but admittedly not much. This guide from the British Ecological Society is particularly good.
I’m with you on this one, Steve. I’m also thinking about how this model of “open” could cause disproportionate discomfort or anxiety among people writing in an additional language, or those who are brand new to the practice of scientific publishing.
That’s a really good point, and as an editor I really value comments from early-career folk – who often give the very best reviews. If this would make newer folk less likely to review, that’s very bad. And as you point out, those writing in English as an additional language are already carrying a burden – will we lose them too?
I have once reviewed a manuscript knowing that my report would become public. I thought about it for a while before agreeing. I felt that there may be only a few people (outside the authors and editors) who will ever read my report. So, I didn’t even think about the potential extra readers. There can be comments such as: “Lines 190-198: consider splitting this very long sentence into two or three separate sentences.” Or: “Lines 150, 201, 213: add the scientific names of these species.”
I agree as reviewer, as reader, and as author! I must also add another argument: as I am not a native English speaker, it takes some additional effort for me to write a scientific paper, and I need corrections to be made (by colleagues or otherwise). I cannot imagine investing money, or other people’s time, to make a review perfectly flawless and grammatically correct (of course, I am doing my best to make it understandable for the authors) …
Totally agree, Steve – I too refuse to review for those types of journal. I’m actually not a fan of signing reviews anyway, as I feel that it can be misused, and can also make less senior scientists reluctant to say what they actually feel about a paper by a senior figure of the establishment.
You state that you are “Often wrong” but you probably don’t believe it. In this case, you are wrong. Most people do not read the fine print of contracts, but imagine if companies were not forced to publish it. It takes only one person to read it to find some important, hidden detail.
And what do you gain from signing your review? Say your review is wrong and the editor uses it to reject a paper. Now the author knows you wrote it, but the author is powerless. With open reviews, authors gain power, and that cannot be a bad thing.
So, you’ve identified what you see as a benefit, but you haven’t addressed what I’ve identified as costs. So even if I were to stipulate to your claim of benefit, you won’t have convinced me! Unless benefit > cost, the existence of benefit is irrelevant, isn’t it?
Your costs are trivial. You wrote a review for authors and now somebody who is not an author may read it. What is the cost? A whole science called history is based on people reading things that were not written for them. I think most people think having access to ancient documents outweighs the cost.
Authors are powerless? Not if they can refute the reviewer’s comments – and if they can’t, then they deserve to get rejected.
That is not how it works, at least in my field. If your field has no conflicts of interest, motivated biases, or paradigm prisons, good for you.
Your argument that open reviews give power to authors when the reviewer is wrong seems predicated on the idea that if the reviewer’s comments are open, there will be greater transparency in this process. But the transparency presumably only occurs when the paper is accepted and the review published, so there really doesn’t seem to be any advantage when the paper is rejected because of a poor review. It seems to me unlikely that the mere possibility that others will see a review would cause that review to be “less wrong”.
There are a great many issues with the peer review process. Published reviews may address some of the symptoms, but they don’t address any of the causal factors.
While not in favour of it, I can think of one benefit: it might make editors less likely to send manuscripts for review to people who do not have the required knowledge, especially if authors are forthright in their replies to petty nitpicking and personal axe-grinding.
As an editor, it was not uncommon for me to have a reviewer misunderstand a paper and its significance, particularly for papers breaking new ground. In such cases, it’s usually best to ignore the review and focus on the remaining reviews (or seek more reviews). Publishing it would force the reviewer and allies to defend their positions and create pointless friction between factions. We have enough friction between factions already.
PS. I do like your update to Disraeli: We don’t show our trash bins in our Zoom backgrounds.
I generally find myself nodding along in enthusiastic agreement with your posts, Stephen, but I can’t shout at the clouds along with you on this one, I’m afraid. I spent a great deal of time a few years back embroiled (with colleagues at Liverpool) in a case where peer review had failed dramatically – see this and this. Open peer review, via the PubPeer site, played an essential role in this debate. There are many similar examples of where traditional peer review has failed dramatically.
“Honestly, as a reader I don’t care – or rather, I don’t care directly, or much. I can’t imagine a case where I’m likely to dig into the supplements to read the reviews, and the response, and the original version – except perhaps to indulge in a little shameful schadenfreude.”
Making reviewers’ reports available online contributes to making the peer review process more rigorous. If reviewers know that their reports are going to be publicly available it might make this type of fraud rather less likely to slip through the net: https://cen.acs.org/articles/92/web/2014/11/University-Utah-Concludes-Investigation-Controversial.html. (The reviewers were asleep at the wheel in this particular instance.)
“And if I put in that extra effort, what improvement in the progress of science will result? What evidence is there that readers of papers are hungering to read yet more supplementary material?”
A great deal of improvement will result — see above. Moreover, it’s not really any extra effort at all. The report does not need to be written for a broad audience; the reader understands that the report is part of expert review and should read it through that lens.
Thanks for the thoughtful pushback, Philip – exactly what I hoped someone would give. So I suppose it comes down to how often the kind of situation you raise comes up. It’s certainly not zero; but is it enough for benefit to outweigh cost? That might not be answerable in detail… As to your last comment, sure, I can simply upload as written, I guess, but that dramatically reduces the benefit. If readers cheerfully accept that they aren’t meant to actually understand the document, it seems odd that we would expect them to draw important conclusions from it.
Thanks for commenting – I hope readers dig in to what you say!
I agree that the available well of reviewers is now at a pretty low level. As authors of peer reviewed papers we are morally obligated to review, of course. An interesting analysis about how to calculate our “reviewer debt” is in
https://retractionwatch.com/2017/01/26/time-defend-peer-review-banal-metricisation-obligation/
However, finding committed reviewers is a really difficult task. As an associate editor at two journals, I get the two required acceptances after about five tries (once, after 17 tries). This, after carefully looking for suitable reviewers and inviting them (anonymously, through the journal’s platform). The high amount of time and energy spent on the review process has made me wonder more than once why I am doing that instead of writing up my own results. So, good luck to that journal. I wouldn’t accept reviewing under even more pressure than the pressure we have now.
Here’s a cost of doing this sort of Open Review: all of these documents have to be stored in digital form somewhere for a long time (I haven’t read anything anywhere about how long, however). The energy costs of digital storage of everything from Twitter to our august scientific papers are skyrocketing. I’ve seen estimates that these costs will soon be one of the, if not THE, major energy demands of our society. Is it worth the extra CO2?
I read something recently about new guidelines for self-plagiarism (mostly about reusing the descriptions of methods in several papers by the same author(s)). I mention this only to point out that this is yet another solution, like Open Review, for a very, very small problem we never worried about but that people now want us to worry about. We used to worry about big things, like interesting hypotheses and experimental design. Now we spend more time worrying about increasingly trivial problems.
See, Steve – you aren’t the only old man yelling at clouds.
Lots of clouds up there, John, happy to have you yelling beside me 🙂
I think the best argument in favor of publishing anonymous peer review reports and responses is that it shows due diligence was done in the process. I have surfed some open reviews (PeerJ or F1000, I think) that were trivial, and some in eLife that were substantive. That gave a sense of rigor, or the lack of it. I just agreed to PLOS One publishing the packet on my article. The downside? My critics/rivals/disgruntled family members might dig it up and re-thump me with it? I suppose, but being able to demonstrate that the article was vetted by knowledgeable reviewers might count for something as well.
I think we first need to ask *why* open peer-review exists in the first place before evaluating whether it is good or not. I’m unconvinced that open review exists to promote scientific transparency; I suspect it is instead a natural response to the changing publishing landscape.
Peer-review is imperfect, so we can distinguish between four editorial decisions: the rejection of a poor submission, the acceptance of a good submission, the rejection of a good submission (Type I error), and the acceptance of a poor submission (Type II error). While the goal of peer-review has always been to minimise total error, mistakes are inevitable, and journals have had to prioritise one type of error over the other.
My hypothesis is that academic publishing has traditionally preferred Type I errors because prestige and reputation underpinned their subscription-based business models. The negative consequences of rejecting a good submission were much smaller than accepting a poor paper. Nowadays, with the shift to open access, publishers carry the financial cost of Type I error because rejecting a paper costs the same as accepting it, but without generating any revenue. Therefore, publishers have an incentive to minimise Type I error, while risking an increase in Type II errors.
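To make that asymmetry concrete, here is the hypothesis as a toy payoff sketch (a minimal illustration in Python; the APC and reputation figures are numbers I invented for this example, not drawn from any real journal):

```python
# Toy sketch of the incentive shift hypothesised above.
# All figures are invented for illustration only.

APC = 2000.0               # hypothetical article-processing charge per accepted paper
REPUTATION_COST = 10000.0  # hypothetical cost of publishing a poor paper (Type II)

def subscription_payoff(accept: bool, paper_is_good: bool) -> float:
    # Subscription revenue doesn't hinge on any single decision, so only
    # reputation is at stake: Type II errors are the costly ones, and
    # rejecting a good paper (Type I) is essentially free.
    return -REPUTATION_COST if (accept and not paper_is_good) else 0.0

def open_access_payoff(accept: bool, paper_is_good: bool) -> float:
    # Under author-pays open access, every rejection forgoes the APC,
    # so Type I errors now carry a direct financial cost as well.
    revenue = APC if accept else 0.0
    reputation = -REPUTATION_COST if (accept and not paper_is_good) else 0.0
    return revenue + reputation

# Rejecting a good paper pays 0 either way, but under open access the
# journal also walks away from the APC it would have earned by accepting.
print(subscription_payoff(accept=False, paper_is_good=True),
      subscription_payoff(accept=True, paper_is_good=True))   # 0.0 0.0
print(open_access_payoff(accept=False, paper_is_good=True),
      open_access_payoff(accept=True, paper_is_good=True))    # 0.0 2000.0
```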
Journal reputations will still take a knock if poor papers are accepted, but open peer-review is a way of transferring the reputational damage caused by Type II errors from the journal to the reviewers. For example, if a paper is criticised after publication, open peer-review allows journals to say, “this paper passed peer-review; here is the evidence. We trust the judgement of our expert peer-reviewers, and any errors are not due to procedural mistakes on our part, but to the fallibility of expert peer-reviewers”.
So, I’m quite indifferent to open peer-review because I think it is a logical consequence of the shift from Type I to Type II publishing errors. Of course, my strongest preference is that poor papers are not accepted in the first place, but if I come across a dodgy paper in the literature and wonder how it was accepted, I’d appreciate the opportunity to see the peer-review reports.
This is a super interesting way to think about all this. Your hypothesis is intriguing and plausible! It’s also a nice illustration of mismatched costs and benefits – in your scenario, the benefits accrue to the journal, but the costs fall on the reviewers (and authors). I suppose you could argue that’s in some ways true of the effort of doing the review in the first place, although I usually conceptualize that as a golden-rule thing: I want my papers reviewed, so I review yours. That conceptualization doesn’t work (for me) for the extra cost of writing for open review.
Thanks for the interesting perspective!
I think it’s important to distinguish between publishing reviews and publishing reviewers’ names with those reviews. Journals of the European Geosciences Union (EGU) publish submitted manuscripts upon submission with limited editorial review. Reviews are published, but reviewers can choose to be identified or not. This means that reviewers don’t have to polish the grammar and presentation of their review (one of the major concerns expressed above). Moreover, authors’ responses to reviews are published alongside each review.
I agree that relatively few people will read the reviews or responses. But think what an advantage this system could be to students or, in fact, to a reader looking at a paper outside their area of expertise! And when authors don’t seriously address meaningful critiques, it’s not just one editor and the reviewer who can determine that the authors missed something.
I agree with tsdibble’s post above, distinguishing the “value” in named reviews vs. public reviews. I’m not so bothered about being able to see a reviewer’s name, but I do think there is potentially great value in being able to see the review and how the paper changed across revisions.
I’m a statistician who primarily teaches upper level undergrad classes and occasionally graduate level classes, and does outreach to other departments on research methods and statistical analysis. I have students and faculty coming to me often looking for advice on their research, and asking how to respond to requests from reviewers regarding their statistical analyses. I have seen a lot of reviewer feedback and requests that I disagree with, and that I think has made papers worse. When I talk to researchers about norms over data analysis in their fields, I hear stories about changes they made to papers that disturb me. Sometimes it’s just bad advice; oftentimes it’s reviewers and editors asking that information be removed from the manuscript because “it isn’t relevant”. Almost always it’s something I would, as a reader, want to know about. Some examples:
– Authors perform multiple analyses and report them; reviewer says “just report the one that worked” or “just report your final model”, because no one wants to read all that extra stuff.
– Authors report exact p-values, reviewer tells them to change these to “p>0.05” or “p<0.05" or "p<0.01".
– Authors report effect size estimates that are non-significant, reviewer tells them they shouldn't report the estimate if it isn't significant and they should instead report it simply as "not significant".
– Authors perform a t-test with n > 30; reviewer says that if n < 30 they have to use a non-parametric method.
– Authors perform a Bayesian analysis, reviewer says people in this field don’t do that, change it to frequentist.
Some of these are worse than others. The request to leave out all analyses but the “final” one is in my opinion a request to perpetuate a culture of p-hacking and selective reporting, and can make it appear that the authors engaged in these practices when they really didn’t (e.g. if I see p = 0.032, p = 0.046, and p = 0.027, I get suspicious… if I see these plus three more p > 0.05, things make more sense). The request to leave out details (exact p-values, exact effect size estimates) also hinders the ability of future meta-analysts to make good use of the results. The “n = 30” thing is just goofy, but it pops up often because unfortunately textbooks still include it as the “rule of thumb” for deciding whether the central limit theorem has “made it normal”.
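To make the selective-reporting point concrete, here is a minimal simulation sketch (Python with numpy and scipy; the number of experiments, group sizes, and seed are arbitrary choices for illustration, not from any real paper):

```python
import numpy as np
from scipy import stats

# Minimal sketch: 20 experiments in which the true effect is exactly zero.
# Under the null, roughly 1 in 20 tests will still come out "significant"
# at alpha = 0.05 by chance alone.
rng = np.random.default_rng(1)
n_experiments, n_per_group = 20, 25

p_values = []
for _ in range(n_experiments):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_group)  # no true effect
    p_values.append(stats.ttest_ind(control, treatment).pvalue)

significant = [p for p in p_values if p < 0.05]
print("all p-values:", [round(p, 3) for p in p_values])
print('what "just report the one that worked" leaves in:',
      [round(p, 3) for p in significant])
# Reporting only the "hits" makes a null effect look like a finding;
# reporting all twenty makes their chance nature obvious.
```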
And then, from the other side of things, sometimes I’ll see statistical results that raise a red flag, and it would be nice to know if the questions I have in my mind were raised by reviewers and answered by the authors.
All of this stuff is pretty understandable, given the lack of available reviewers and collaborators with statistical expertise. Sometimes bad analyses get published. Sometimes good analyses get turned into bad analyses because the reviewers had misconceptions (which they were probably taught in school) about how statistical inference works.
The details of how statistical analyses were originally performed and then revised can be very useful information, especially after the “replication crises” of the past decade and the renewed recognition of the importance of statistical decision making. Open review offers this.
And, again, the reviewers’ names don’t need to be on it.
These points about statistics are actually quite interesting and I agree with them. It makes sense to be able to see how the presentation of the results changed during peer review, as well as to see when analyses were changed due to suggestions by reviewers and how this may have affected the results and the results’ reliability. For example, it might happen that a study was designed with a specific analysis in mind, with the sampling design most adequate for that analysis, but reviewers request that an additional analysis, for which the data are not really appropriate, be performed.
Oh yes, I currently have an MS in review where data were extracted from camera-trap videos of a single instance of a scent-marking site, from a species not previously known to use marking sites. I point out that camera traps are selective, not random; that N = 1; that any sub-classes among the data are not comparable, for a host of other reasons; and that for most of the classes the sample sizes are perilously small. And so I present actual hard data that establish what goes on at this newly discovered behaviour. Editors’ and reviewers’ response? You must do statistical modelling to test hypotheses. The current fashion in wildlife biology has become to gather huge piles of data and knock them into a predefined shape with the Maslow’s hammer of “models” pulled out of the R toolbox, a procedure that has replaced properly designed data collection and critical thinking.
Peter, this is unfortunate and I see it in many fields. I think statistics education deserves a lot of the blame, but it is also the case that each discipline establishes its own norms.
Your reviewers seem to have internalized the idea that all scientific hypotheses must correspond to a statistical hypothesis. This is a ridiculous claim when said aloud, but I don’t think many people ever stop to give it consideration (There is also its sibling: all statistical hypotheses must correspond to scientific hypotheses, equally preposterous and widely taken for granted). We are just so used to seeing p-values attached to everything, that we start to believe all statistical summaries must have a p-value attached, otherwise they “don’t count” or “aren’t real” or “were due to chance”. No thought is given as to whether the default null hypothesis is of scientific interest in the first place, whether we learn something by rejecting it, or whether the statistical model being proposed does a good job of approximating the real-world data generating mechanisms. Default statistical practices attain a kind of mysticism, where everyone must use them even when no one can give a coherent explanation as to why they must be used.
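As a quick sketch of the “do we learn something by rejecting it?” point: with a large enough sample, the default nil null gets rejected even when the true effect is scientifically negligible (the effect size and sample size below are arbitrary values picked for illustration):

```python
import numpy as np
from scipy import stats

# A true difference of 0.02 standard deviations: real, but almost
# certainly too small to matter scientifically in most contexts.
rng = np.random.default_rng(0)
n = 1_000_000
control = rng.normal(0.00, 1.0, n)
treatment = rng.normal(0.02, 1.0, n)

result = stats.ttest_ind(control, treatment)
print(f"p = {result.pvalue:.2e}")  # essentially always "significant"
print(f"observed difference = {treatment.mean() - control.mean():.3f} SD")
# The tiny p-value tells us the nil null is false (which we never
# seriously doubted), not that the effect is of scientific interest.
```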
That pretty much captures my thoughts. I get the impression that “modelling” (which is nearly always nothing more than post hoc curve fitting) and hypotheses are supposed to be the hallmarks of the hard sciences that wildlife researchers are trying to imitate, while huge areas of wildlife science are still at the unexpected-discovery stage, with the equivalent of alpha particles bouncing back from gold foil.
As an author, if a reviewer has a question/concern that can be addressed through further explanation, this is a sign that I should also rephrase the paper so other people don’t have the same question. On the flip side, as a reviewer, it bothers me when authors feel they can just respond to all of my comments in the response letter without making substantive changes. Good editors can police this.
Unexpected discoveries often do not arise from a formal hypothesis-testing process, especially in field sciences where we are still in a first, exploratory phase of research. But reviewers seem to expect hypotheses to be tested and models used to extrapolate from single or rare observation points!
There is value in reviews being archived for future historical research, and for legal purposes, but this does not mean they need to be made public immediately. As a reviewer I would be happy for a journal to keep my reviews as historical resources. The energy cost of keeping static digital files may be a lot less than that of files maintained as part of an instant, online-access system.
Interesting discussions you’ve kicked off! It seems many see little downside to the review history being made available as SI, and some benefit in understanding how the reviewers pressed the authors for changes (or not). For instance, I would love to see the review history for the “arsenic life” article in Science. Compared to the post-publication backlash, were the reviewers appropriately skeptical of worldview-changing results? Did the authors have persuasive rebuttals? Or did the reviewers just miss it?
Since the people who would bother to look at reviews and responses would likely be very close to the topic, the argument that anonymous reviews would need to be written for a different audience doesn’t follow. Regarding the server load to host this stuff: these are kB-sized simple text files, much smaller than common SI materials such as extra figures, experimental photos, video, and more.
Another point to consider here is that authors accumulate a lot more review history than that associated with the journal that actually publishes an article.
The most critical and significant reviews may be those that were obtained by journals that rejected an early version of a paper. To really capture the history of a particular published article, authors would need to be able (and willing) to submit all the review history, at their own discretion, with permissions in place from the journals and reviewers who have contributed to the full process. Such comprehensive review history would not belong to any particular journal, and might be better archived by a dedicated repository that is independent of the publishers.
There are complicated systems emerging for cross-journal exchange of review history while a paper is being submitted to a succession of journals, but having a few recognised, specialised repositories might be more effective in the long term for the historical study of manuscripts and research.
I’m coming to this a bit late, Steve, and am not looking to make a general argument about whether open peer reviews are a good or bad thing. But I have been involved in an open peer review once and it was, by far, my most enjoyable experience as a reviewer. It was for a ‘meta-science’ paper by Bob Holt and Sam Scheiner in Frontiers in Ecology and Evolution, and it involved a first review that the authors then responded to, followed by a chance to respond to the authors’ response (I forget if Bob and Sam got the last word or not). It evolved into a scientific discussion grounded in very different opinions about how science should be done. Once the process began I no longer felt like the hurdle that authors have to get over to add to their cv, and instead was part of a discussion with two very smart scientists – there are a few things more fun than that, but not many. And I was writing directly to the authors (y’know – dance like there’s nobody watching, sing like there’s nobody listening) so I didn’t feel like I had to write differently than I normally did. Now, I might have felt more free to state my thoughts than they did theirs, because there was almost no cost to stating my opinions – I wasn’t trying to get anything published. But at this stage in their careers I don’t imagine Bob or Sam were holding back trying to appease a reviewer. So for me, the cost was essentially the same as a traditional review – we often end up writing a second review for revised papers, so even doing two reviews wasn’t a big deal. And for all reviews, I would be interested in hearing the authors’ response to my comments – I’m rarely certain I’m right. Here, I got a direct response, and not a response just designed to appease me and the editor, because I don’t think satisfying me was a condition for acceptance.
This kind of review was both fun and satisfying…for me. Pretty sure that’s not a good enough reason to adopt it generally. I actually heard from Bob after the process (we’ve never met) just to let me know that he enjoyed it as well. I’m not sure what Sam’s take was or what Brian Dennis (the other reviewer) thought.
It helped that Bob and Sam were taking on a ‘big’ idea – something narrower in scope may have led to a more pedestrian discussion. It probably also helped that it hit a sweet spot for me because it was about a topic I was deeply interested in. It also certainly helped that, though I disagreed with much of what Bob and Sam wrote, their responses were always considered, even-handed and generous. It may be that I got lucky and had a better experience than most. But if the offer arrives again – it might be worth a try.
Jeff, that certainly sounds like the best possible experience for open review. (And Bob and Sam are both terrific). But I’m curious: what parts of this exchange needed the *publication* of the reviews and response to happen? Because I’ve had similar back-and-forth with authors as part of regular “closed” (!) review. (This is a different question from whether other folks read, and benefited from reading, the exchange – you focused here on how this was rewarding for YOU.)
I guess I haven’t had the same kind of back and forth in closed reviews. I’ve re-reviewed papers where authors have responded to my comments… but it has always felt like serial monologues rather than a conversation – they were always talking to the editor (in my opinion), hoping to convince them that their response was adequate to warrant publication. This felt more like a real discussion. As for why this exchange would need publication – I’m not sure. I suspect that the folks at Frontiers think that these exchanges will manage to be entertaining and/or illuminating, but I don’t know what the evidence is for either. My position is that I didn’t find your arguments against compelling enough for me to avoid doing this. That said, I don’t know if there is much value, if any, to the audience. The things you think of as costs weren’t costs for me, and I enjoyed the process – so I would do it again. But any assertion that this is somehow a big ‘gain’ for science strikes me as dubious.
I disagree with the position, but it’s fair to say I hadn’t considered the view that, if reviews are published, reviewers would feel they were writing for the reader. I don’t think reviewers should write for the reader, though I can see how they might feel pressured to (particularly if signed).
I’m in favour for a couple of reasons, one more personal, one more professional. On the professional side, I think that it increases the transparency of the process. If a flawed paper is published, you can see the input that led to an editor deciding to publish it. More generally you can look at how papers across the field are reviewed and try to answer questions about whether common standards are being applied. When a concern is raised that some papers are given preferential treatment you can look at the process rather than just the product.
On a personal note, I’ve published work where the conclusion was diluted in response to reviewers’ demands. The original manuscript was better in many respects, and the reviewers’ changes had a negative impact on follow-up work. Alongside that manuscript, the historical record of why it was changed would be nice to point to.