Like everyone else, I’ve been watching the rise of “generative AI” with both interest and trepidation. (“Generative AI” is software that creates “new” text (ChatGPT) or images (DALL-E) from a user prompt – I’ll explain the quotes on “new” later.) Now, I know only a smattering about how generative AI works, so don’t expect technical insights here. But I’ve noticed an interesting gap between what I think these systems are doing and how people are reacting to them.
My interest in generative AI, especially text generators, is easily explained and probably obvious. Since I was in high school I’ve watched software get very slowly better at imitating the kind of writing humans do with great effort, and the kind of conversational interaction that humans do without a second thought.* The latest round is, superficially, really impressive: it can chatter pleasantly about nothing much, write a poem, program in R,** write an essay about Canadian history, explain linkage disequilibrium, and more. Or at least, it often looks like it can.