April 5, 2023

What’s in a name? AI, LLMs, Chatbots and what we hope our words will accomplish

There’s a lot of debate in academic circles about what to call ChatGPT, the new Bing, Bard, and the set of new technologies that operate on similar principles.

“AI” or “Generative AI” are the terms preferred by industry. These terms are rightly criticized by many scholars in the fields of communications, science and technology studies, and even computer science, not only as inaccurate (there is nothing “intelligent” about these systems in the way we usually think about intelligence; they have no understanding of the text they produce) but also as crass marketing ploys. These critics say these terms perpetuate a cynical “AI hype cycle” that’s simply intended to drive attention and investment capital to the companies that make these technologies, and to deflect scrutiny from their harms.

But I worry that the use of the term “AI hype cycle” isn’t doing the work its proponents think it is. My understanding is that those proponents hope the term will help people see through the breathless predictions of corporate interests and focus instead on the harms these interests are perpetrating in the present. I worry, however, that it reads, to the casual observer, as “nothing to see here.”

Surely that’s not the outcome we want.

“Chatbot” is another contender for the term of choice. And indeed it describes the kinds of interactions that most people are having with these technologies at the moment. But as Ted Underwood has said, “chatbot” doesn’t capture the range of applications that these technologies enable beyond chat through their APIs. They’re clearly more than the chatbots we used 20 years ago on AIM.

“LLM” (“Large Language Model”) seems to be the term preferred by academics (according to Simon Willison’s Mastodon poll), because it’s more accurate. “LLM” surely does a better job of describing how these technologies actually work: by statistically predicting the next most likely word in a sequence, not by suggesting there’s any kind of deeper understanding at work.
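To make “statistically predicting the next most likely word” concrete, here’s a minimal sketch in Python. It’s a toy, not a real LLM: a bigram frequency table (and an invented ten-word corpus) stands in for a neural network with billions of parameters trained on subword tokens. But the core objective is the same: given the words so far, emit the statistically most probable continuation.

```python
from collections import Counter, defaultdict

# A toy stand-in for a language model: bigram counts over a tiny corpus.
corpus = "the cat sat on the mat and the cat ran".split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "Generate" text by repeatedly appending the most likely next word.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # prints: the cat sat on the
```

Run it and you get “the cat sat on the”: a statistically plausible continuation with no understanding anywhere in sight, which is exactly the point the term “LLM” is trying to make.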

But “LLM” has a serious problem. It’s an acronym, and therefore, it’s jargon. And therefore, it’s boring.

That leads me to my title’s second question: “What do we want our words to do?” Is the purpose of our words to asymptotically approach truth above and against all other considerations? Or do we want to draw people’s attention to that truth, even if the words we use are not quite as precise?

I have very mixed feelings about this. LLM is more accurate. But it’s also easy (for ordinary people, decision makers, regulators, politicians) to ignore. That is, it’s boring. I understand not wanting to amplify Silicon Valley’s hype, but we also don’t want to downplay the likely consequences of this technology. There has to be a way to tell people that something will be transformative (quite probably for the worse), and command the attention it demands, without being a cheerleader for it.

For example, the term “World Wide Web” was both pure hype and totally inaccurate when it was coined by Tim Berners-Lee in 1989. But after the Internet had been ignored by pretty much the entire world for 20 years, “the Web” surely captured the popular imagination and brought the Internet to public attention.

Conversely, the terms “SARS-CoV-2” and “COVID-19” were both sober and accurate. But I worry that the clinical nature of those terms enabled people who were already predisposed to look the other way to do so more easily. Calling it “Pangolin Flu” or “Raccoon Dog Virus” would have been less accurate, but either term would have caught people’s attention. “Bird Flu,” “Swine Flu,” and “Zika Virus,” which have killed very few people in this country, get a ton of attention relative to their impact. Surely terminology wasn’t the root cause of our society’s lazy response to the pandemic. But I don’t think it helped.

Now, I am NOT suggesting we develop intentionally misleading terminology just to get people’s attention. We can leave that to Silicon Valley. But I am suggesting that we think about what we want our words to do. Do we value accuracy to the exclusion of all other considerations? In that case, an acronym of some sort may be in order. Or do we also want people to pay attention to what we’re saying?

I don’t have a good suggestion for a specific term that accomplishes both goals (accuracy and attention), but I think we need one. And if we can’t come up with one that does both jobs, and the public conversation settles on “generative AI” or some other term coined by the industry, I don’t think we should spend too much time banging our heads against it or trying to push alternatives. We’ll be better served by getting our many valid and urgent criticisms of “generative AI” and its industry actually heard than by sticking earnestly with a term that lets people ignore us.

If there’s anything we should have learned from the past 25 years of internet history, it’s that academics “calling bullshit” is not a plan for dealing with unwanted technology outcomes.

Academics have two very deeply held and interrelated attachments. One is to accuracy and the truth. The other is to jargon. The one is good. The other can cause trouble. I hope in this case our attachment to the first doesn’t lead us to adopt a jargon-y language that enables people already predisposed to ignore the harms of the tech industry to do so more comfortably.
