Spoiler Alert: It's All a Hallucination

Why the problem of perception is so important for understanding generative AI.

David Priest
Amazon Employee
Published Feb 20, 2024
Hallucination: “a plausible but false or misleading response generated by an artificial intelligence algorithm”
-Merriam-Webster
“It's important to note that hallucinations are distinct from illusions, which are misinterpretations of real external stimuli.”
-ChatGPT
Generative artificial intelligence, largely in the form of large language models, has spent the past year and a half breaking the internet. Excitement for and fear of its capabilities are reshaping industries, inspiring countless “expert” opinions, and even driving the action of popular summer blockbusters. The centuries-old question of whether a computer can fool us humans into believing it’s conscious has been answered with a resounding “yes.”
But unlike the AI so many ‘80s cyberpunk novelists imagined, the generative AI of today is not all-knowing. In fact, it’s surprisingly fallible. When you ask ChatGPT, say, a question about anthropology, it may not only produce errors; it may offer quotes from scholars who never said such things at all. In our typical habit of personifying our technology, we have decided to call such errors “hallucinations” — as though Clippy finally grew up and started dropping acid.
Much ado has been made about hallucinations — and rightly so. We shouldn’t blindly trust large language models. But we’re missing the bigger picture: to the LLM, it’s all a hallucination.

What Is Intelligence, Anyway?

When we talk about the degree to which AIs “think,” things can get philosophical fast, and people tend to stumble into one of a few pitfalls. The first is overestimating the power of artificial intelligence and dismissing its weaknesses.
Part of the problem here is that, when Alan Turing first asked whether a machine could imitate a human, fooling people became one of the primary measures of the “success” of an artificial intelligence — and it turns out that’s not a particularly effective measurement for what we might call “intelligence.” In fact, humans are embarrassingly easy to fool. When researchers started to hold competitions in the early 1990s to build artificial intelligences that could trick people into believing they were humans, one method for success was to introduce typos into responses — because, the human judges reasoned, surely computers wouldn’t make such mistakes! At the time, critics pointed out that programs were fooling judges not with artificial intelligence, but with artificial stupidity.
These days, we’ve developed more complicated ways to fool ourselves. Stochasticity, or artificially imposed randomness, is perhaps the most important of them.
But wait, how is randomness fooling people?
Think about it like this: if I ask you what your favorite food is today (sushi), you may give me a different answer than if I ask you the same question tomorrow (burgers). Even if you give the same type of food (“I love burgers”), you’ll almost certainly phrase your answer differently (“burgers are the best”). Machines don’t typically behave this way: give them an input, and they’ll give a predictable, repeatable output.
Stochasticity is meant in part to “fix” this problem, making machines vary their outputs to resemble human unpredictability. And stochasticity is fooling people just like typos did in the early ‘90s.
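To make that concrete, here’s a minimal sketch of how that imposed randomness typically works in text generation: the model scores candidate next words, and a random draw (often scaled by a “temperature” setting) decides which one appears. The words and numbers below are invented for illustration; they aren’t taken from any real model.

```python
import math
import random

# Toy scores for the word after "My favorite food is" -- made-up numbers
# for illustration, not from any real model.
logits = {"sushi": 2.1, "burgers": 1.9, "pizza": 1.4, "salad": 0.3}

def next_word(logits, temperature=1.0):
    """Pick the next word. temperature=0 is deterministic (the same answer
    every time); higher temperatures add the human-seeming variation
    described above."""
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax with temperature, then a weighted random draw.
    weights = {w: math.exp(s / temperature) for w, s in logits.items()}
    return random.choices(list(weights), weights=weights.values(), k=1)[0]

print(next_word(logits, temperature=0))    # always "sushi"
print(next_word(logits, temperature=0.8))  # varies from run to run
```

Nothing in that random draw resembles a preference or a mood; it just makes the output less repeatable, which is enough to read as human.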
Even the smartest people are surprisingly credulous, perhaps because on some level, we want the fantasies we’ve worked toward to be finally realized. Think of the Google engineer who made headlines in 2022 for claiming that one model he’d tested had become sentient, mostly, it turned out, because the model claimed to have become sentient.
Another pitfall for many AI apologists: underestimating the capacities of humans. Artificial intelligence may not perfectly imitate humans, goes the argument, but humans aren’t so special anyway. The structure of a large language model isn’t so different from the structure of a human brain — albeit on a much smaller scale and composed of different materials. It’s only a matter of time before our technology catches up and AI performance overtakes that of humans. But this argument has trouble accounting for the pesky idiosyncrasies that define humans — like values, beliefs, sensations, moods, and desires — and that don’t seem to be attributable simply to scale or material. (For the curious, this is sometimes called the Hard Problem of Consciousness.)
Some people minimize these human distinctions and dismiss the hard problem. Bertrand Russell, a Nobel Prize-winning writer, mathematician, and philosopher, wrote way back in 1935 that human memory is simply “a form of habit, and habit is a characteristic of nervous tissue, though it may occur elsewhere, for example in a roll of paper which rolls itself up again if it is unwound.” Even reason, Russell argued, is largely habitual. He imagined in another essay posing a math problem to two children and receiving two different answers. “The one, we say, ‘knows’ what six times nine is, the other does not. But all that we can observe is a certain language-habit. The one child has acquired the habit of saying ‘six times nine is fifty-four’; the other has not.”
Incidentally, acquiring the language-habit of stating facts correctly is precisely what large language models do (and it’s part of why they’re surprisingly bad at math). But is it what humans do? Russell’s argument is hardly compelling to me, since I spend every morning teaching math to my six- and seven-year-old boys. Sure, they’re memorizing facts to perform speed drills; but they’re also learning mechanics that they can apply to math problems they’ve never encountered before. These are two different processes.
I would even argue that Russell himself, and his unique contributions to the philosophy of consciousness, are evidence of our inventiveness beyond mere “language-habits.”
Look: human consciousness is admittedly difficult to observe or define (although thinkers ranging as widely as Daniel Dennett and Marilynne Robinson have written compellingly about it), but if we simply take that difficulty as permission to reduce consciousness to its functional or material constituents — likening it to a piece of paper, for example — then we vastly underestimate human capacities, even as, like Russell, we demonstrate their unique value.
But let’s get more practical. The stochasticity I mentioned earlier — that artificially imposed unpredictability that makes AI responses appear more human-like — is often achieved using random number generators. That is to say, we’ve taken the unpredictable bits of human consciousness that make it so strange and innovative — the features we have struggled for millennia to parse — and declared that flipped coins and rolled dice accomplish essentially the same effect. To anyone living a life of even modest self-reflection, that should feel like an immediately insufficient stand-in.
When we discuss artificial intelligence, we should avoid both of these common pitfalls: overestimating its capacities because of our own credulity, and underestimating our own because of our tendency to flatten the mysterious elements of human consciousness in pursuit of comprehensibility.
What does that leave us with? This might seem obvious, but generative artificial intelligence is fundamentally different from human intelligence. And our terminology of “hallucinations,” while limited, turns out to be useful for illuminating what is perhaps the most important difference.

Lucy in the Sky

A hundred and forty years ago, Edwin Abbott Abbott wrote a book called Flatland, in which a sphere tries to convince a square who inhabits a two-dimensional universe of the existence of a third dimension. Today, it’s not so difficult to imagine ourselves as spheres trying to help squares (that is, large language models) understand a world they cannot perceive. After we explain our dimension to them (by exposing them to vast datasets of written language), we find their grasp of it serviceable, but not particularly reliable.
Why? Because most of what LLMs “know” is second-hand.
But let’s be more specific: LLMs treat words as referents, while humans understand words as referential. When a machine “thinks” of an apple (such as it does), it literally thinks of the word apple, and all of its verbal associations. When humans consider an apple, we may think of apples in literature, paintings, or movies (don’t trust the witch, Snow White!) — but we also recall sense-memories, emotional associations, tastes and opinions, and plenty of experiences with actual apples.
So when we write about apples, of course humans will produce different content than an LLM.
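As a toy illustration of that difference (again, with invented values rather than any real model’s vocabulary or weights): inside an LLM, “apple” is just an integer index into a table of learned numbers, and those numbers encode only which other words tend to appear nearby.

```python
# Toy vocabulary and embedding table -- made-up values for illustration.
# To the model, "apple" is token 0 plus a short list of numbers learned from
# word co-occurrence; nothing here points at a real, edible apple.
vocab = {"apple": 0, "pie": 1, "orchard": 2, "snow": 3, "white": 4}

embeddings = [
    [0.12, -0.48, 0.91],   # "apple"
    [0.10, -0.45, 0.88],   # "pie"     (close to "apple": they co-occur often)
    [0.14, -0.50, 0.86],   # "orchard"
    [-0.70, 0.22, 0.05],   # "snow"
    [-0.68, 0.25, 0.02],   # "white"
]

token_id = vocab["apple"]
print(token_id, embeddings[token_id])  # the model's entire "idea" of an apple
```

Every association the model has with an apple lives in relationships like these between words; the taste, the crunch, and the memory of picking one are simply not in the data structure.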
Another way of thinking about this problem is as one of translation: while humans largely derive language from the reality we inhabit (when we discover a new plant or animal, for instance, we first name it), LLMs derive their reality from our language. Just as a translation of a translation begins to lose meaning in literature, or a recording of a recording begins to lose fidelity, LLMs’ summaries of a reality they’ve never perceived will likely never truly resonate with anyone who’s experienced that reality.
And so we return to the idea of hallucination: content generated by LLMs that is inaccurate or even nonsensical. The idea that such errors are lapses in performance is true on a superficial level. But it gestures toward a larger truth we must grasp if we are to understand the large language model itself — that until we solve its perception problem, everything it produces is hallucinatory, an expression of a reality it cannot itself apprehend.
If you're curious to learn more about the big ideas of generative AI, subscribe to my newsletter, Into the VectorVerse. Or if you want to join the conversation, check out the generative AI space, where tons of people are grappling with the practical use cases of this emergent technology.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
