Like everybody else, I’m fascinated by the fast-moving world of ChatGPT and its kin. Today: what do these tools mean for authorship, in a world where scientists are already using them to help write their papers?
Tools like ChatGPT are often referred to as “artificial intelligence”, but that’s a terrible label for them – it misleads people, who are then surprised when ChatGPT output is dumb, or wrong, or just plain fabricated.* The better term is “large language model”, because that’s what ChatGPT is: a model of the English language (and a large one) that produces plausible and mostly grammatically sound text. And that, used properly, could be extremely helpful for a writer.
Imagine a writer who speaks English as an additional language using ChatGPT to polish a rough draft. Imagine a native-English writer who knows they have weaknesses in writing using ChatGPT to correct grammatical errors, improve logical flow, paraphrase old blocks of Methods text, or simplify the complex and turgid sentences we seem to love to write.** These things should, actually, be easy to imagine, because they’re happening already. If you aren’t doing it, I guarantee that someone you know is.
What’s interesting about this is that we seem to be tying ourselves in knots trying to figure out what this means for authorship – and I’m confused about why we’re doing that. So: is ChatGPT an author, an acknowledgement, a Method, or just a tool? Here’s what I think – please argue with me in the Replies.
ChatGPT is not an author. Sure, it’s already been listed as one a few times, but that doesn’t make any sense. In scientific writing, “authorship” doesn’t mean “agent who/that assembled words into a text”. It’s much more complicated and much more interesting than that.*** But, setting aside some of the complications, it’s clear that authorship carries responsibilities as well as rights, and that includes the responsibility of standing behind what’s written.**** ChatGPT can’t do that, legally or practically, so it’s not an author. (Here’s a succinct statement along these lines from the Committee on Publication Ethics.)
ChatGPT is not an acknowledgement. OK, I don’t think I’ve ever in my career questioned what someone has written in their Acknowledgements section, and if someone really wants to acknowledge ChatGPT it’s probably harmless. After all, I don’t mind if they acknowledge their cat, and I once acknowledged the owner of a street food cart. But what I mean is that ChatGPT isn’t a “required” acknowledgement the way a funding source or permitting agency is, or a colleague who provided material or commented on a draft. What would be the point of acknowledging ChatGPT?
ChatGPT isn’t a (reportable) Method. Some journals are requiring that authors disclose their use of ChatGPT in the Methods section (or similar); and the Committee on Publication Ethics takes this position too. I’m completely stumped by this – how can it possibly make sense? How does it help a reader to know that ChatGPT was used in writing the text? If the text is good, as a reader I don’t care what kind of help got it that way. If the text is bad – or worse, factually misleading – as a reader I’ll blame the authors, and knowing the particular technological reason it’s bad (for example, that ChatGPT invented citations) doesn’t change that. The details that are needed in a Methods section are those – and only those – that change the way a reader might think about the results of your work.***** You don’t need to tell the reader that you took your field notes with a #2 pencil; and you don’t need to tell the reader that you revised the order of the sentences in your Introduction with the help of ChatGPT – or with the help of your friend Ramón, for that matter.
ChatGPT is a tool. If ChatGPT isn’t an author, and isn’t an acknowledgement, and isn’t a (reportable) method, what’s left seems obvious: ChatGPT is a tool. It’s a tool one can use to improve one’s writing (or, of course, use poorly to make one’s writing worse). It’s a tool just like a dictionary, a thesaurus, the grammar checker in Word, the online Corpus of Contemporary American English, the Hemingway app, or any number of other reference materials and software tools someone might use to help them write. You’d never list the dictionary as a coauthor, acknowledge Word’s grammar checker, or disclose your use of the Hemingway app in your Methods – right? So why on earth would ChatGPT be treated differently?
So ChatGPT is a writing tool, like many others; it isn’t an author, or an acknowledgement, or a method. But here’s the thing. This seems completely obvious to me, but other folks are having conniptions about it. And sometimes it’s the ideas that seem most obvious that ought to be questioned most carefully. So: if I’m missing something, what is it? Please have at it in the Replies.
© Stephen Heard May 23, 2023. Written without help from ChatGPT, or Ramón. But I did use a thesaurus once.
Image: ChatGPT interface, © Focal Foto CC BY-NC 2.0 via Flickr.com
*For example, citations that don’t exist. This gets called “hallucination”, but that’s another terrible label. When ChatGPT invents a nonexistent citation it isn’t doing anything wrong; it’s doing exactly what it was designed to do. That is, ChatGPT is designed to produce plausible text that mimics human text patterns. It isn’t designed to know anything factual, or to find anything it doesn’t know, or to care whether the plausible text it produces is correct or incorrect. For the first few weeks it was amusing to see people discover, one after another and with horror each time, that ChatGPT does exactly what it’s designed to do and not anything else. It’s beginning to get old, though.
**These are all things ChatGPT can do, and in many cases can do well. They all involve packaging content into text, not producing content. Now, because ChatGPT starts with a corpus of writing that’s already out there, one risk is that it may reproduce the bad habits that are baked into our literature already. A writer using it well will know that, and aspire to do better – but that’s no different from any other way in which a writer can either model the existing literature, or aspire to improve on it.
***See, for example, this typical set of authorship guidelines, from the International Committee of Medical Journal Editors; and see this post about “mega-authorship” that clearly has nothing to do with those guidelines! For a broader exploration of what coauthorship means (and how to manage it), see chapter 27 of The Scientist’s Guide to Writing.
****Or disavowing it later, if necessary, as has happened recently with the admirable responses of early-career coauthors of a couple of senior fraudsters in ecology and evolution.
*****This is discussed in depth in Chapter 11 of The Scientist’s Guide to Writing. But what about reproducibility, you might ask? You can be conveniently annoyed, on that subject, by this post.
“correct grammatical errors, improve logical flow, paraphrase old blocks of Methods text, or simplify the complex and turgid sentences we seem to love to write.” A consummation devoutly to be wished, having commented on and corrected five drafts of a paper written by a co-author who doesn’t appear to have heard of the word “clarity”.
I think ChatGPT can be more than a writing tool. For instance, should it be listed as an author when it writes code for data processing and analysis? I think the answer should be context-dependent.
The argument for ChatGPT going into methods, from where I’m sitting, is for the same reason I cite ggplot2 in my papers: in order to provide credit to those making software you’re using for research. It doesn’t change how people understand my results, but it is the right thing to do.
Now, I feel less strongly about this with ChatGPT than with ggplot2 (and less strongly with ggplot2 than with packages written by a small team of academics), because citation credit matters less to a private company, but I still think it makes some sense.
That’s a good point. Citing ggplot in Methods makes no sense at all as a Method – the software you used to make a plot can’t possibly shape a reader’s thinking about your results – but giving credit is a good idea. One could argue that citation credit is especially important for people who wrote code as volunteers (ggplot) and less important for those who wrote it as their jobs (ChatGPT), although I know others will disagree with that!
I agree about giving credit* to something like ggplot. That’s a group of people providing their time and money to make your life easier. ChatGPT uses the internet for training (i.e. using other people’s work) so I don’t see how one would be crediting the knowledge creators by citing or crediting ChatGPT. I like Emily Bender’s term “Stochastic parrot.”
* of course, it would already be apparent in the code submitted with one’s “reproducible” paper, right? 🙂
> of course, it would already be apparent in the code submitted with one’s “reproducible” paper, right?
Not to Google Scholar 😉 – citations are still pretty much the only way to acknowledge software that “counts” for academic purposes. Numbers of users and download stats don’t mean a lot; it’s almost impossible to track non-citation mentions of software in publications, and a lot of the code published in archives or as supplemental materials might as well not exist as far as search engines and indexers are concerned.
Not to rant too much in response to a light-hearted comment, just making a plea for software citations for anyone else who reads these comments in the future 😄
M – I hadn’t really thought about the tracking effect on something like Google Scholar. I’ve never looked for a library through publication searches. The downloads/stats/reviews are the first thing I check when deciding to use a new library, but I see your point.
I don’t think there could be any hard and fast rules about this. If a library like ggplot significantly improved the ability to communicate my work, then I would mention it (and the citation) in the acknowledgements section. If a library could have potentially affected my results (e.g. a particular version of a stats library), then I would cite it in the paper’s methods. Obviously, just my opinion.
This all seems very subjective. Do any journals provide guidance on citing software and versions? Inclusion of something like a ChatGPT citation may be journal dependent or at least dependent on how it was used.
* Yes, it was light-hearted – it was my own plea for people to include their code so their work can be reproduced 🙂
By that logic, should we cite MS Word/Excel or our reference manager of choice, since they help with the research in a similar manner as ChatGPT?