Something a bit different today: this post is mostly just a link to a piece I’ve just published on jobs.ac.uk. There, I ask why early-career folks might get involved in peer reviewing, given that they aren’t paid to review (unlike many, if not most, more senior academics, for whom reviewing is part of the service component of the job). There are clear benefits to reviewing (which you can read about in the piece I linked to above*) but I don’t think one of them is giving you something you can list to good effect on your CV. Which raises the question: what is Publons for? That’s something I don’t pick apart in my why-review post. And I don’t have an answer, but rather, some half-formed thoughts.
When I review a paper these days, I’m routinely asked if I’d like to receive Publons credit for it. This puzzles me, because I can’t figure out what problem this is a solution to. Here’s (some of) what Publons’ web site says it can offer me:
Your verified peer review and journal editing history, powered by partnerships with thousands of scholarly journals.
Publons CV summarising your scholarly impact as an author, editor and peer reviewer.
Well, I already have a CV that summarizes my scholarly impact as an author, editor, and peer reviewer. (The peer review part of that doesn’t carry significant weight in any CV-assessment situation I’ve ever been involved in, but it’s there anyway.) So what good is having a Publons-“verified” record of this?
The only answer I can come up with is that Publons wants us not to take the peer-review record on someone’s CV at face value – rather, someone should “verify” that record. So: why do we want “verified” records of some things, but not others? Why do we request official transcripts for grades, but take someone’s word for their list of publications? Actually, it’s stronger than that – I’ve been involved with several exercises (both hiring and grant adjudication) in which I wasn’t allowed to do anything other than take someone’s word for it. That is, checking a CV claim, or acting on any knowledge not already represented on the CV, was explicitly forbidden. You do hear of occasional CV-falsification scandals, so people clearly do, sometimes, pad their academic CVs. Weirdly, Publons seems to offer a service to prevent this for precisely the part of a CV that’s the least important. So what am I missing?
Anyway, just some half-formed thoughts on what is, or needs to be, “verified” on a CV. But my real point was just to suggest you read about benefits to peer review, over here.
© Stephen Heard November 30, 2021
Image: A little chunk of the peer review section of my CV. My long-form one, I mean – I do understand that hardly anyone cares about this bit. Which is sort of my point.
*Why is it over there on jobs.ac.uk rather than right here on Scientist Sees Squirrel? Well, it’s pretty simple: they paid me for it. No, I don’t think everything we do as scientists needs to be paid; but I’m not such a purist as to refuse to be paid for anything. And it’s free for you to read there, just as every post is here.
There are some parts of the world where outright falsification of employment histories, qualifications, and experience is standard practice, and job advertisements routinely contain a phrase stating that applicants automatically give permission for claims to be checked. Given that qualifications come with certificates, publication records can be verified on Google Scholar, and previous employers can be contacted for verification of claims, anonymous refereeing was the last refuge of the scoundrel.
But are there parts of the world where refereeing is a currency of significance in hiring decisions (etc)? I can understand that it might help to fake a record of publications, but what would faking a record of peer reviewing get you??
Look, everyone, we all have to do a better job reviewing. It has little to do with boosting your career, although it may do that. It has everything to do with having and keeping a vibrant scientific community. I have been on the editorial boards of a number of prominent journals over the years. It is not unusual for me to have to solicit eight or more potential reviewers before I get a review. The same is true for many NSF (and maybe NRC) programs. That means that approximately 1 in 7-9 people are making decisions on what gets funded and what gets published. On a few occasions, I have reviewed a proposal for NSF, then a few years later I was asked to review a paper coming out of that (funded) research – that is giving little old me too much power over what makes it out there. And people grouse about an Old Boys Club!
You don’t have to review everything, but the default answer should be Yes. Many times, the reason why you are being asked to review is because one or more of your own papers are cited in the manuscript – don’t you want to see what the authors are saying about your work? If you want a number, review two papers for every paper you submit each year (since two people are going to review your manuscripts).
And many thanks for reviewing. I really appreciate thoughtful reviews – it makes my job as editor a lot more fun.
If my default answer was “yes” I would be reviewing 30-50 papers a year.
Is this one of those posts where you’re bending over backwards to be generous? To think of a good reason why someone would do/want X, or invite others to suggest such a reason? When in fact, you secretly suspect that the real reason people do/want X is not good?
You don’t have to answer that if you don’t want to. 🙂
Also: I am genuinely curious about uptake of Publons. How many people participate? How do the participants compare to those who’ve elected not to participate, on various dimensions? I honestly have no idea.
That one I cannot answer. But maybe, with the onus of proof for all claims being on the applicant, a link to Publons or something similar is easier and quicker than copies of all the thank-you e-mails from editors?
Except that, as the post notes, there rarely/never seems to be any onus of proof on applicants. I mean, I guess maybe some people use Publons because they’re under the mistaken impression that they might be asked to prove how much reviewing they’ve done? Seems unlikely to me, but maybe? But that’s pure speculation on my part. And I’d rather stay away from speculating about other people’s motives.
Personally, I don’t use Publons because I’ve never felt any need. I just do my fair share of reviewing (more, actually, if “my fair share” means “two reviews for every paper I submit”), summarize my reviewing in a pretty typical way on my cv (just a list of the journals I’ve reviewed for), and assume that everyone who reads my cv will trust that I’m not lying about the reviewing I’ve done (which they always have, as far as I know).
You are in the USA, in other parts of the world asking for proof of everything is routine.
Fair enough. On that hypothesis, most Publons users are based outside the US and Canada. I’d be curious to know if that’s true.
Hi Jeremy, you can check out the top reviewers on Publons here: https://publons.com/awards/peer-review/2019/
You can filter by field of study and country. Do you recognise any of the names in ecology? Or even the names of the top ecology reviewers from Canada? I’ll let you draw your own conclusions…
Your hypothesis that most Publons users are outside US and Canada seems to be true, at least for the top 1% of reviewers.
Late to this comment thread, but two things. First, if there are places in the world where prospective employers ask for proof of reviewing record, then the mystery is indeed solved. Or rather it’s shifted, onto why employers would want that sort of proof (is reviewing really something that influences hiring decisions??). Second: to your original question, Jeremy – no, I don’t suspect the real reason people use Publons is “not good”. But that’s only because I can’t think of a malevolent way to use Publons! So I guess it’s a failure of my imagination, not of my cynicism 🙂
Aha, some data! Thank you for this Falko. Indeed, it does seem that the most active Publons reviewers are almost all from outside the US and Canada.
And you’re right, I don’t recognize any of the names from ecology…
Thanks for this link. Fascinating and disturbing. Quantity is the metric. [“The top 1% of reviewers in each of the 22 Essential Science Indicators (ESI) research fields. Rankings are calculated by number of verified pre-publication reviews performed and added to Publons between 1 September 2018 and 1 September 2019.”]. So it says the counts are for a 12 month period, but by the counts of a few records I perused, that seems hard to believe. A few from within their ESI research field “environment and ecology”:
Andy Baker, a high flyer with an H-index of 61, papers on topics from caves to oceanography, and 220 Publons reviews, 191 of which were in ‘Water Research’, for which he is an editorial board member.
Aldo Muro has a prodigious 227 reviews, 120 of which were for the MDPI journal Sustainability; the remainder look to be in other MDPI titles. His Publons record lists an H-index of 1, and that one paper was in the NEJM and cited 438 times. That was curious enough to click on, and it listed 7 authors, none of whom was Aldo Muro. However, Publons allows users to claim papers, which he appears to have done.
Skipping to Yunquan Zhang, a prodigious young author (his photo suggests he’s under 30) who is still only a university lecturer: he has an H-index of 23, with his top-cited articles being several mass-author pieces, where the author list goes on for pages. He had 173 verified reviews for the period, distributed among several journals.
200+ reviews in a year? How can this be? Maybe the middle of the alphabet is different, but I’m not that motivated; plus, I have a gold-dammed review to finish.
The seeds of doubt are sown. (Get that Steve? Calling them out in passive voice. Or maybe that’s passive-aggressive voice.)
Chris, as with so many things in academia, the problem doesn’t lie with Publons, but with the researchers who game the system to boost their own metrics (even when it is unclear that these metrics are meaningful in the first place).
I agree with you that these hyper-reviewers can’t possibly be doing a thorough job. But to Publons’ credit, you can use it to check out which journals are using the same shallow pool of reviewers over and over again (https://publons.com/journal/?order_by=reviews). The top reviewers in some of these journals are reviewing a paper per week!
Also, I made a mistake with my previous reply to Jeremy. It turns out, USA is the most represented country on Publons (https://publons.com/country/?order_by=top_reviewers).
Re Publons is not the problem, it’s “the researchers who game the system to boost their own metrics.” Citation gathering is one possible bad motive for hyper-reviewers. Certainly editors are in a position to manipulate citations, but so are reviewers. The outing of Artemi Cerda is a good read (https://scholarlykitchen.sspnet.org/2017/03/09/citation-cartel-or-editor-gone-rogue/). Otherwise, why would someone be a hyper-reviewer? They like feeling self-important? Just solid citizens?
And Falko, you make a good point – Publons adds transparency, at least for those self-selected participants. Otherwise, only editors/publishers have access to data on these reviewing patterns.
As has been mentioned in previous comments, in some places (such as Brazil), people don’t trust your CV without documentation. Basically everything that goes on your CV has to be documented somehow; otherwise it’s not counted. When applying for my position at the university, if I remember correctly, having reviewed manuscripts was worth quite little, but more than zero; so for each manuscript I reviewed, I printed the thank-you email as proof. I think that Publons can make it easier: instead of having to prove every single review, we can just include the Publons record (I think I did this in some reports of my activities after being hired by the university).
In addition, Publons helps me keep track of my reviews – it’s easier to just forward the email to Publons than to manually record it somewhere.
Finally, I think it can encourage people to accept reviewing manuscripts, as some people may like seeing and being able to share the record. It may be seen as a form of recognition for the work we perform reviewing manuscripts.
This is the first I’ve heard of someone on a grant or hiring panel explicitly not being allowed to check further into CV claims. What was the reason? Super secrecy over the fact that someone was job hunting?
At least two different circumstances (one grant, one university hiring). In both cases, as I understand it, it’s about a level playing field – wanting deliberations to be done the same way for every candidate, based on the application and not on special information committee members might have, because that might favour better-connected candidates, etc. And I understand this logic; but at least once, I knew that a candidate’s claim was false, and was told I wasn’t allowed to share that information with the committee!
Rather like a legal jury where jurors are allowed neither to do their own research nor to rely on their own knowledge. Yuck.
I’m an AE for a journal that uses the ScholarOne Manuscript Central manuscript management system. At the search-for-reviewers step, it provides a list of suggested names, saying it’s powered by Publons and Web of Science. All three products are owned by Clarivate. The suggested reviewers are usually highly relevant (some of them, anyway). They don’t say how it works. I assume it’s driven by a cited-references-in-common matching and ranking algorithm from WoS. I’m not sure how Publons factors in; maybe it up-ranks authors who are known to do reviews. For the manuscripts I handle, I’d guess between a third and half of reviewers click the box if they want credit in Publons, so it seems unlikely to be driving the reviewer suggestions.
Interesting; I haven’t noticed this as an AE (either “my” journals don’t use it, or I just haven’t noticed). Now it makes me wonder whether a possible reviewer would be up-ranked for having done lots of reviews (tends to agree) or down-ranked (probably too busy to take on one more)!
My experience as a handling editor is that the reviewer suggestions in ScholarOne Manuscript Central tend to be hotshots in the field. I never use these suggestions, because they generally include only the same few names, regardless of the content of the submitted manuscript. I suspect the algorithm matches the keywords from the submitted manuscript to those of previously published work (so researchers with many papers are more likely to have keywords in common).
For all reviewers already in the system (not just the recommended reviewers), Manuscript Central also lists when they last reviewed for our journal specifically. So I avoid inviting reviewers who have already reviewed a manuscript for the journal in recent months.
Peer reviews are a crucial factor for US Green Card applicants in the EB-1A and EB-1B (outstanding researcher) categories. Publons is an important way of showing evidence of your peer reviews, in addition to emails from journal editors, etc.