Image: Aad et al. 2015, Phys Rev Letters 114:191803 (short excerpt from author list)
Perhaps you’ve noticed that authorship lists are getting longer. If you haven’t, Aad et al. (2015, Phys Rev Letters 114:191803) is an interesting read – especially the last 25 pages, which are taken up by a list of its 5,154 coauthors. This is “mega-authorship”, and it’s attracted a lot of attention. Last week, even the Wall Street Journal noticed Aad et al., suggesting all kinds of reasons that mega-authorship is a problem for science. For example, the WSJ assures us, “scientists say that mass authorship makes it harder to tell who did what and who deserves the real credit for a breakthrough—or blame for misconduct”. As mega-authored papers have become more common (in the last few years, dozens of 1,000-author papers have appeared, mostly in particle physics), there’s been similar handwringing from other outlets, both lay and scientific.
Actually, neither the trend to increasing coauthorship nor the handwringing over it is particularly new. Coauthorship was rare in science for a long time: between 1655 (the birth of the first scientific journal) and 1800, coauthorship rates were less than 1% in biology (a little higher in astronomy, even lower in mathematics)*. Coauthorship began to increase gradually through the 19th century and then more quickly through the 20th; by 2000, 90% of all scientific papers were coauthored**. Average numbers of coauthors have risen too, and in parallel, so have expressions of concern over individual authors’ contributions to the work. Still: 5,154 authors! The Aad et al. paper (along with its 1,000+ author brethren) seems like a beast of an entirely new colour.
Does mega-authorship threaten our concept of authorship in science? It would be easy, and fun, to write with a scandalized tone about how mega-authorship corrupts all that is good and decent about scientific publishing. But does it really matter? I think both yes and (mostly) no.
It’s certainly clear that mega-authorship involves a very different concept of authorship than most of us are used to. For example, here are the International Committee of Medical Journal Editors’ criteria for authorship, and I think these would sit comfortably with most of us:
- “Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
- Drafting the work or revising it critically for important intellectual content; AND
- Final approval of the version to be published; AND
- Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.”
It’s pretty clear that the 5,154 authors of Aad et al. can’t possibly meet these criteria. Even having each author approve the final version would be so cumbersome it surely didn’t happen. As for drafting the paper: even if there were some logistically feasible way to have so many authors actually write together, the paper has more authors than it has words.
Actually, I didn’t need Holmesian deductions to conclude that Aad et al. aren’t using a conventional definition of authorship. It’s widely known*** that at least two groups in experimental particle physics operate under the policy that every scientist or engineer working on a particular detector is an author on every paper arising from that detector’s data. (Two such detectors at the Large Hadron Collider were used in the Aad et al. paper, so the author list is the union of the “ATLAS collaboration” and the “CMS collaboration”.) The result of this authorship policy, of course, is lots of “authorships” for everyone: for the easily searchable George Aad, for instance, over 400 since 2008.
It’s clear, then, that authorship practices in experimental particle physics bear little resemblance to those in most other fields of science. That would matter a great deal, if it caught people by surprise. After all, we use authorship to assess each other all the time: for hiring, for tenure, for grant adjudication. Might we get the idea that scientists in ecology (say) are unproductive dilettantes next to particle physicists or genome sequencers? I hope not. The risk is that uncritical paper-counters leafing through stacks of CVs will make unfair assessments. This risk seems low when authorship variation is so blindingly obvious: nobody is going to be fooled by mega-authorship (and mega-authors aren’t trying to fool anyone). The real authorship problems are the small sins (like PIs who insist on being on every paper coming out of their labs), and the more subtle variation between subdisciplines (like the different connotations of first and last authorship in ecology vs. molecular biology). Assessment committees worth their salt work hard to discover and consider this sort of fine-grained variation, but it’s true that not every assessment committee is worth its salt. Good Chairs and Deans, with broad perspective on science, have an important role to play here.
And what of the WSJ’s concern for “who deserves the real credit for a breakthrough—or blame for misconduct”? This seems straightforward to me: participants in mega-authorship are agreeing to dilution of credit; but if misconduct occurs, all are likely to be tarred with the same brush. This certainly has costs to authors (which they presumably weigh against the benefits of appearing on so many mega-authored papers), but it’s not clear to me how it damages the progress of science.
I’m sure I’ll never be a member of a 5,154-author team (and I’ll never publish 400 papers in 7 years, either). I’ll have to be content with my smaller number of fewer-authored papers (even the occasional solo-authored one). I may even think of mega-authorship, from where I sit as an ecologist, as vaguely silly. But I won’t panic when, inevitably, the first 10,000-author paper appears.
I may chuckle a little bit, though.
© Stephen Heard (firstname.lastname@example.org) August 18, 2015
UPDATE: Thanks to Chris Buddle for drawing my attention to this PeerJ paper suggesting a link between mega-authorship and fraud (NOT, I think, with any specific reference to Aad et al or the LHC groups). It’s a worthwhile read for a different perspective, and I’m going to think more about the fraud angle.
UPDATE (2): Hat tip to Gregor Kalinkat for pointing out this conference paper (International Conference on Scientometrics and Informetrics). The authors provide some evidence that per-author rates of publication have grown, but average numbers of coauthors have grown somewhat faster, such that “fractional productivity” (count of papers divided by number of coauthors) has actually dropped. This could suggest that large-scale coauthorship (and especially mega-authorship) actually has costs in overall progress, perhaps due to the logistical overhead of maintaining huge collaborations. However, there are many other possibilities, including decreasing rates of basic-science funding in much of the West and shifting criteria for awarding coauthorship. It’s an intriguing result, though.
- More weird and wonderful papers from the early days of the Royal Society
- Why (more limited!) coauthorship is great fun
- Problems arising when coauthors aren’t available to approve submissions (and how to avoid them)
*^The very first “coauthored” paper seems to be this: “An Extract of a Letter Containing Some Observations, Made in the Ordering of Silk-Worms, Communicated by That Known Vertuoso, Mr. Dudley Palmer, from the Ingenuous Mr. Edward Digges” (Philosophical Transactions of the Royal Society 1:26-27, and isn’t that a great title?). But it’s a little hard to tell, because in the 1600s conventions for authorship – and even for listing authorship in print – hadn’t yet coalesced. Palmer’s role seems to have been only to forward to the Royal Society (as a member) a letter from his cousin Digges (a non-member). This wouldn’t, of course, merit coauthorship today. One hopes.
**^Statistics from Glänzel and Schubert (2005) Analyzing scientific networks through co-authorship, in Moed et al. (eds) Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems. Kluwer, New York, NY, pp. 257-276. My book, The Scientist’s Guide to Writing, includes a longer discussion of coauthorship.
***^By which I mean, it’s even in Wikipedia.
I like this post.
I think you’re right that kiloauthored papers are not a huge problem in the great scheme of things, but it’s a trend worth watching because there are lots of potential problems. As different research fields make the transition to larger numbers of authors, there will be growing pains and culture shocks. There’s the potential for big disputes over authorship before a field / collaboration team arrives at standards the community knows and understands. It would not be surprising if those performing evaluations (senior professors, administrators) were the slowest to catch on to what those big author lists mean.
I still think that ultimately, many fields are going to end up with credits that look more like movies (many contributors, with well specified roles) than books (one person responsible for pretty much everything).
As a side note: I think PIs automatically listing themselves among co-authors is more than a small sin and may have a negative effect on science. (Note that I’m Not Saying That People In Any Field Are More Likely To Exhibit Such An Attitude!) Of course I lack a proper analysis (although I’m not sure what type of analysis would need to be done), but I think it’s quite clear that being marginally involved in too many projects (because low effort thresholds to become a co-author reward such an attitude) inevitably dilutes one’s contribution to each paper. Because PI-ship (normally) correlates with experience and knowledge, what we get is that the most valuable minds give the least input.
How would the quality of publications evolve if people’s attitudes on co-authorship were more stringent?
Pingback: Link Round-Up: Academic Journals and the Publishing Industry
Thanks for the post. In the last two years, I have been working on the foundation of a new methodology called Crowd-Authoring, which enables an international crowd of academics to co-author a manuscript in an organised way. I have just published an article about this methodology in the Journal of Information Development (Impact Factor 0.491). This is the first published methodology on mass authorship. Here is a link to the post: http://idv.sagepub.com/content/early/2015/12/07/0266666915622044.abstract
There is also a website for this methodology: https://crowdauthoring.wordpress.com/
From what I understand about how the CMS and ATLAS projects are run within CERN, all members of the project could probably tick the authorship boxes / criteria that you list. Project members from PhD students to PIs take stints in the control rooms running the detectors, hence literally collecting the data.
They (certainly ATLAS) run everything in a hierarchical fashion, getting approval of analyses, papers, etc. from across the project. This requires a lot of coordination, but I get the impression this is something CERN has been doing well for some time. OK, so all 5,000 probably didn’t physically write the paper, but the analysis, data, and results will have been signed off by everyone using CERN’s system.
Pingback: Can a thesis chapter be coauthored? | Scientist Sees Squirrel
Pingback: What does it mean to “take responsibility for” a paper? | Scientist Sees Squirrel
Pingback: The scientific wisdom of Chief Inspector Armand Gamache | Scientist Sees Squirrel
Pingback: The case of the disappearing author | Scientist Sees Squirrel