AI Can’t Hallucinate – Any More Than It Can Dream
We’ve all heard the warnings: OH NO the AI is hallucinating! It’s making things up. It lies with confidence. With the explosion of AI use, hallucinations are a popular topic, one that gets a lot of clicks. (I hope so, anyway, which is why I’m stuffing this first paragraph with the word hallucinate.)
The entire concept of AI ‘hallucinations’ is just wrong. These aren’t hallucinations, because in order to hallucinate, you have to be able to think. Calling these issues ‘hallucinations’ humanizes the software. It turns a statistical error into a cognitive glitch. That’s not what an AI ‘hallucination’ is.
When you see an unexpected output from an LLM, what you’re seeing is a pattern-matching failure. Calling it a ‘hallucination’ is marketing spin.
Even so, the problem with the warnings about ‘hallucinations’ is that they miss the point. AI doesn’t hallucinate because it’s broken. It ‘hallucinates’ because it doesn’t reason — at least not the way we do.
Paradoxically, hallucinations are what make LLMs such a powerful tool for marketers
Because AI LLMs don’t reason the way we do, they have an unparalleled ability to create combinations of words that describe a concept or an object from unlimited points of view. Why? An AI doesn’t have a point of view! What it has is a set of rules for assembling tokens that represent words. Sometimes this results in combinations of words that don’t belong together, also known as errors, or, according to some, ‘hallucinations’.
Much like the challenge of returning a Dead Parrot, you often have to keep saying the same thing in different ways until you and the person you’re trying to communicate with come to a common understanding. BTW, nailing the feet of a dead parrot to its perch won’t bring it back to life.
A good teacher (and marketer) needs to be able to explain things from many different points of view.

Warning: Marketing Sausage Making
Before I get into why AI LLMs make mistakes, I’m going to share a bit of marketing sausage making. If you’re an A->Z type you can skip to the next section.
I once worked as a technical instructor and my mentor challenged me to answer, “What makes a good teacher?” The answer is that a good teacher can explain the same concept from at least six different points of view. Why? Because every student (or potential buyer) interprets what they experience through their own personal perspective.
I’ve written before about how people consume information differently: linearly and non-linearly. But that’s just one type of perspective that you, as an effective marketer, should consider. I think part of the reason we’re stuck reacting to and trying to ‘fix’ the AI hallucination issue is that the concept keeps being presented to us in the same way.
Again, like so many things in life, this isn’t bad. Having consistent messaging is important, but I’d argue that gaining alignment in understanding is far more valuable than the ability to memorize and spout a 100-word elevator pitch verbatim. (Yes, I still have the emotional scars from that experience.)
So, in this post I’m going to describe why AI LLMs ‘hallucinate’ and, more importantly, why you should take advantage of it.
Perspective 1: An AI LLM is like a Blender
Our brains are wired to make sense of the world. They take in messy signals and interpret them based on logic, memory, emotion, context: our perspective. And since we all have lived different lives – our perspectives are different.
AI doesn’t do that. It’s more like a blender: toss in the Internet, chop it all up into text snippets, give them labels, and add an inference engine. Now it’ll mix it into a coherent-sounding sentence.
The LLM is assembling words based on past attempts to group symbols in ways that its trainers have told it work. It’s been ‘taught’ that assembling word #e234a52 with word #9b024c2 is a success. It doesn’t know the meaning of the words; it has no idea what it just made.
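If you want to see this word-assembly in action, here’s a drastically simplified sketch in Python. A real LLM uses a trained neural network over billions of tokens; this toy ‘bigram model’ just counts which word follows which in a tiny made-up corpus, then chains statistically plausible next words together. The corpus and function names here are my own invention for illustration, not anything from a real system.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which.
# Real LLMs use transformers over learned token embeddings, but the
# core loop -- pick a likely next token, append, repeat -- is the same idea.
corpus = ("the parrot is dead . the parrot is resting . "
          "the norwegian blue is a parrot . the parrot is blue .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=6, seed=0):
    """Chain n plausible next words after `start`; no meaning involved."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(rng.choice(follows[out[-1]]))  # pick any observed follower
    return " ".join(out)

print(generate("the"))
```

Run it with different seeds and you’ll get word chains like ‘the parrot is a parrot’: every adjacent pair is statistically ‘correct’, but the whole has no grounding in meaning. That’s the mechanism behind a ‘hallucination’.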
Perspective 2: An AI LLM is a Puzzle Solver
I’ve taken a few road trips over the last several years. Anyone else had breakfast at a Cracker Barrel? Do you recall those triangle-shaped puzzles with the golf tees that they have on every table? The goal is to jump the tees until you only have one left. I think I may have solved it once in hundreds of tries. Did you know that there is more than one solution to these puzzles? No matter which solution path you follow, if you end up with only one tee, your solution is correct.
An LLM is doing the same thing: it’s trying to solve a puzzle. It ingests the puzzle in the form of the prompt you gave it. Like the peg puzzle, there are multiple ways to solve it. Sometimes the AI responds with words that fit together but do not represent an accurate perspective of the real world: a pattern-matching error, a ‘hallucination’.
This doesn’t mean that the LLM’s response is not a solution to the puzzle you gave it. Instead, it’s a nonsensical perspective.
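The multiple-solutions point is easy to verify in code. Below is a small Python sketch of that triangle peg puzzle (the hole numbering and move generation are my own illustration): a depth-first search that stops after finding two distinct winning jump sequences, both of which end with a single tee.

```python
# Triangular peg solitaire: 15 holes numbered 0 (top) through 14, row by row.
# A move jumps a peg over an adjacent peg into an empty hole beyond it.
def hole(r, c):
    return r * (r + 1) // 2 + c

moves = []
for r in range(5):
    for c in range(r + 1):
        if c + 2 <= r:                       # jump right along a row
            moves.append((hole(r, c), hole(r, c + 1), hole(r, c + 2)))
        if r + 2 <= 4:                       # jump down-left and down-right
            moves.append((hole(r, c), hole(r + 1, c), hole(r + 2, c)))
            moves.append((hole(r, c), hole(r + 1, c + 1), hole(r + 2, c + 2)))
moves += [(t, o, f) for (f, o, t) in moves]  # every jump also works in reverse

def solve(board, path, solutions, limit=2):
    """Depth-first search; record winning jump sequences until `limit` found."""
    if len(solutions) >= limit:
        return
    if sum(board) == 1:                      # one peg left: a win
        solutions.append(tuple(path))
        return
    for f, o, t in moves:
        if board[f] and board[o] and not board[t]:
            board[f] = board[o] = 0; board[t] = 1
            path.append((f, o, t))
            solve(board, path, solutions, limit)
            path.pop()
            board[f] = board[o] = 1; board[t] = 0

board = [1] * 15
board[0] = 0                                 # classic start: top hole empty
solutions = []
solve(board, [], solutions)
print(len(solutions), "distinct winning sequences found")
```

Let the search run to exhaustion instead of stopping at two and it finds many more winning sequences: all of them ‘correct’, none of them the same. An LLM’s answer space works the same way, except that some of its ‘winning’ word sequences don’t match reality.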
Perspective 3: LLMs are Perpetual Word Machines with No Brakes
Did you have one of those Play-Doh shape makers as a kid? You took your different colors of Play-Doh, smashed them together, pressed the lever, and shapes came out. I was always trying to make different patterns, like a star with a blue middle and yellow arms. What I usually got was a star that was mostly green with some blue and yellow in various places. I think it was because I am an over-enthusiastic mixer.
An AI LLM is like the Play-Doh Fun Factory. For raw material, instead of different colors of clay, we’ve stuffed it with a ton of data from the Internet. Instead of different shape molds, we’ve given it rules about the structure of language and images. You, as the user, add your prompt into the mixing chamber, push the lever, and a response comes out.

The response is a mix of the prompt, the rules, and the stuff from the Internet. It usually looks a lot like what we wanted. But here’s the rub: when you push on the lever, you’re going to get an output. It may be that the LLM Fun Factory doesn’t have enough raw materials to produce what you’ve asked for, but it’s going to give you an output just the same. That output is going to be in the shape that you asked for. As far as the AI is concerned, it did what you asked and, in this respect, IT’S NOT WRONG.
Hallucinating for Fun and Profit
I hope these different perspectives have helped you develop a better understanding of how AI LLMs function.
The irony is, while LLMs can’t think like humans, they’re amazing at communicating like them. They can rephrase, reframe, and retell an idea from countless angles. That’s their superpower. If you know what you want to say, AI can help you say it in a dozen different ways, each one tailored to a different audience. Don’t forget: if you ask an LLM for a response, it must provide an output. It has no choice.
Ultimately, when you’re working with an LLM, there’s no getting around it: you have to provide the thinking.

Complaining that an LLM is hallucinating is sort of like complaining about dead parrots. The LLM doesn’t care that it’s nailed to the perch. But the marketing people who sell those LLMs will be pleased that you’re talking about it as if it were alive.
Sign up here to have these posts delivered to your inbox.