OPINION | BRENDA LOOPER: On hallucination


I'm sensing a theme here. It's still early days, but with two words of the year announced so far, artificial intelligence is the through line.

It's almost like we're a little concerned here with the rapid growth of chatbots. I have yet to use one, but I have played around with AI photo editing. Considering the strange imagery that resulted from a prompt for cats as founding fathers (it was for the blog, I swear), I'm not too worried about that just yet.

You might remember at the beginning of the month that Collins English Dictionary named "AI" as its word of the year. Well, now Cambridge Dictionary has named a new sense of "hallucinate" as its word of the year for 2023. In its announcement, it wrote: "This year has seen a surge in interest in generative artificial intelligence (AI) tools like ChatGPT, Bard and Grok, with public attention shifting towards the limitations of AI and whether they can be overcome.

"AI tools, especially those using large language models (LLMs), have proven capable of generating plausible prose, but they often do so using false, misleading or made-up 'facts.' They 'hallucinate' in a confident and sometimes believable manner."

The dictionary updated its definition of "hallucinate" to account for the new meaning (because, remember, language evolves), then named it its word of the year.

The traditional sense of the word is "to seem to see, hear, feel, or smell something that does not exist, usually because of a health condition or because you have taken a drug." Now the definition has been broadened to include: "When an artificial intelligence (a computer system that has some of the qualities that the human brain has, such as the ability to produce language in a way that seems human) hallucinates, it produces false information."

Cambridge further explained, "AI hallucinations, also known as confabulations, sometimes appear nonsensical. But they can also seem entirely plausible--even while being factually inaccurate or ultimately illogical. AI hallucinations have already had real-world impacts. A U.S. law firm used ChatGPT for legal research, which led to fictitious cases being cited in court. In Google's own promotional video for Bard, the AI tool made a factual error about the James Webb Space Telescope."

Cambridge Dictionary publishing manager Wendalyn Nichols said: "The fact that AIs can 'hallucinate' reminds us that humans still need to bring their critical thinking skills to the use of these tools. AIs are fantastic at churning through huge amounts of data to extract specific information and consolidate it. But the more original you ask them to be, the likelier they are to go astray.

"At their best, large language models can only be as reliable as their training data. Human expertise is arguably more important--and sought after--than ever, to create the authoritative and up-to-date information that LLMs can be trained on."

So humans are still needed. AI, at least for now, still needs a set of training wheels, so HAL and Skynet are still a ways off.

The shift in definition, Cambridge wrote, shows the growing tendency to attribute human characteristics or behavior to technology, just as many of us do with animals. Dr. Henry Shevlin, an AI ethicist at the University of Cambridge, told the dictionary: "The widespread use of the term 'hallucinate' to refer to mistakes by systems like ChatGPT provides a fascinating snapshot of how we're thinking about and anthropomorphising AI. Inaccurate or misleading information has long been with us, of course, whether in the form of rumours, propaganda, or 'fake news.'

"Whereas these are normally thought of as human products, 'hallucinate' is an evocative verb implying an agent experiencing a disconnect from reality. This linguistic choice reflects a subtle yet profound shift in perception: the AI, not the user, is the one 'hallucinating.' While this doesn't suggest a widespread belief in AI sentience, it underscores our readiness to ascribe human-like attributes to AI.

"As this decade progresses, I expect our psychological vocabulary will be further extended to encompass the strange abilities of the new intelligences we're creating."

OK, strike what I said earlier. I'm getting a little worried.

Words unrelated to AI also saw spikes in lookups thanks to news events, Cambridge wrote. The Titan submersible tragedy drove lookups for "implosion" (the act of falling inward with force; sudden and complete failure). The trial of French robber Rédoine Faïd for his second jailbreak (this one by helicopter in 2018) led to a spike in lookups of "ennui" (intense boredom) after he cited it as the motive for his escape. "Grifter" (someone who gets money dishonestly through trickery) saw a similar spike after public figures including Prince Harry and Meghan Markle were thus labeled.

A reminder for readers: The Lake Superior State University Banished Words List is due to be released at the end of the year. I'd like to know your guesses for the list, and what words and phrases you would love to bid adieu to this year. Email me at the address below.


Assistant Editor Brenda Looper is editor of the Voices page. Email her at blooper@adgnewsroom.com. Read her blog at blooper0223.wordpress.com.
