What they are, and why they happen
It seems like a new generative AI tool pops up everywhere you turn. ChatGPT remains popular, Google Bard keeps getting updates, DALL-E 2 sits at the top of the image generation game, and so on. These artificial intelligence apps are user-friendly. You can log in on your Android phone right now and use them to bring up all kinds of information in almost any format you want.
That’s been a headache for teachers trying to catch students using chatbots to write essays. Professionals are experimenting with AI-written emails, product listings, and thought pieces, with mixed results. This experimentation has surfaced a common and difficult problem that every AI user needs to know about: hallucinations.
AI hallucinations sound like a cheap plot device in a sci-fi show, but these falsehoods are a real problem in AI systems and have consequences for anyone relying on AI. Here’s what you need to know about them.
What’s an AI hallucination?
An AI hallucination is when a generative AI model responds to a prompt with information that is incorrect or doesn’t exist in the real world. For example, if you ask an AI, “How many tulips are in the world?” and it responds, “There are three tulips for every living person in the world,” that’s a hallucination.
That statistic isn’t right. Nobody ever measured or studied it in the real world; the AI came up with it on its own. That happens all the time with data, statistics, and numbers that AI cites. Chatbots have even invented fake court cases that landed lawyers in trouble and cited the wrong medical research to support a medical theory.
For image-based AIs, a hallucination may look like an image generator that doesn’t understand how human hands work or mistakes a tiger for a house cat in a prompt. However, hallucinations are more commonly associated with conversational AI.
What causes AI hallucinations?
Generative AI is built on LLMs, or large language models: machine learning models trained on enormous amounts of human-generated data (mostly content found online). They parse vast amounts of language (or images, depending on their specialty) and break it down into tokens and statistical patterns they can use to assemble human-sounding sentences. It’s like an advanced form of imitation, and AIs need lots of careful training before they can sound human. Eventually, conversational AI tools can deliver information in natural language, but that doesn’t mean what they say is accurate.
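To make that “advanced imitation” idea concrete, here’s a minimal sketch using the small, freely available GPT-2 model through the Hugging Face transformers library (an illustrative choice, not the model behind any particular chatbot). It shows the model ranking which token is most likely to come next; nothing in the process checks whether the continuation is actually true.

```python
# Minimal sketch: a language model predicts likely next tokens from patterns
# in its training data. Model choice and prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "There are three tulips for every"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every possible next token

# Print the five tokens the model considers most likely to come next.
# The model only knows which words tend to follow which -- it has no
# notion of whether the resulting sentence is factually correct.
top5 = torch.topk(logits[0, -1], k=5)
for score, token_id in zip(top5.values, top5.indices):
    print(repr(tokenizer.decode(token_id)), float(score))
```

The takeaway: the model is optimizing for a plausible-sounding continuation, which is exactly why a fluent but false answer can come out the other end.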
Conversational AI systems, like OpenAI’s ChatGPT and other AI chatbots, are designed to respond to prompts. They have to say something. Between imitating human language and piecing together what they learned from online data, they sometimes make things up to answer a prompt naturally. AI-powered bots aim to sound like humans, not necessarily to get everything right, so incorrect information can slip in. When AIs hallucinate, the causes include:
- Poor or lacking AI training, including a poor selection of training data or low-quality training data.
- “Overfitting,” or training an AI too narrowly on a limited dataset, so that it can’t generalize to new prompts and spits out unreliable information.
- Repeating misinformation that’s been spread online by humans, including malicious disinformation.
- Human biases or emotions that AI tools pick up and imitate at the wrong times.
- Using a model so complex, with so much information and so many options for generating content, that even well-trained AIs struggle to separate what’s real from what’s not.
Do all AIs have hallucinations?
All AIs can make mistakes. Hallucinations are usually a specific issue with generative AI, meaning AI designed to answer prompts, and none of those models are perfect. Every one of them has been caught hallucinating at least occasionally. Consumer-facing AI tends to have built-in checks to prevent hallucinations, but nothing’s foolproof. At some point, you’ll run into inaccurate information.
How do I recognize an AI hallucination?
Part of the problem is that advanced chatbots and similar AI can sound convincing. They state hallucinations as confidently as anything else because, to the model, made-up details look like any other data. That leaves the proofreading to you. Double-check whenever a generative AI makes a claim, especially one involving numbers, dates, people, or events. Watch for contradictory statements within an answer or figures that don’t seem right for the prompt.
Don’t trust the bots to provide reliable information without looking closer. There are examples of ChatGPT confusing fictional characters and other false information you may not expect. Always practice your own fact-checking.
Can I prevent hallucinations when using AI?
You can’t eliminate hallucinations entirely, but you can make them less frequent and get better answers. Practice habits like these:
- Lower the “temperature” of the AI. This setting controls how randomized the AI’s output can be, and lowering it may limit inaccuracies (it also cuts down on the creativity of the answers). See the sketch after this list for how that looks in practice.
- Avoid using slang terms when creating a prompt.
- Don’t ask open-ended questions. Ask for specific information.
- Fill prompts with helpful info that the AI can use to narrow down its response.
- Frame the question by asking the AI to imitate a professional in the field, like a teacher or banker (it sounds strange, but it often helps).
- Tell the AI what you don’t want included if it keeps adding irrelevant information. You can also ask it for citations that you can check yourself.
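If you use a chatbot through its API rather than a web app, several of these habits translate directly into the request you send. Here’s a hedged sketch using the OpenAI Python SDK as one example; the model name, prompt, and system message are illustrative assumptions, and other providers expose a similar temperature setting.

```python
# Sketch of lowering temperature, framing the AI as a professional, asking a
# specific question, and requesting a checkable citation. Model name and
# prompt wording are illustrative assumptions, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; use whatever model you have access to
    messages=[
        # Frame the question: ask the model to act like a professional in the field.
        {"role": "system", "content": "You are a careful history teacher. "
                                      "If you are not sure of a fact, say so."},
        # Ask for specific information instead of an open-ended question,
        # and request a citation you can verify yourself.
        {"role": "user", "content": "In what year was Abraham Lincoln born? "
                                    "Give the date and one source I can check."},
    ],
    temperature=0.2,  # lower temperature = less randomness in word choices
)

print(response.choices[0].message.content)
```

None of this guarantees a correct answer, but a low temperature plus a narrow, well-framed question gives the model fewer chances to wander into invented details.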
Are AI hallucinations dangerous?
AI hallucinations are dangerous in two ways. First, they can provide the wrong information when you want to get it right. That can be funny, like when a student submits an AI-created essay that claims Abraham Lincoln developed the Lincoln automobile. It can also be irritating, like when an AI creates a recipe for you but says to use two tablespoons of sulfuric acid instead of baking soda or fakes a line on your resume. At worst, people following incorrect instructions may injure themselves or get in trouble.
On a deeper level, AI hallucinations can create big trouble with misinformation. What happens if you ask an AI about statistics relating to a hot-button topic like immigration or vaccines, and the AI hallucinates numbers? People have killed and died for those subjects based on information they found online. When users believe a hallucination, people can get hurt. That’s one reason for the focus on getting rid of hallucinations.
Can developers fix AI hallucinations?
They’re working on it. GPT-4 hallucinates less often than GPT-3.5, and so on. Developers can use plenty of tricks to build parameters and guardrails that keep AI from hallucinating. That can require additional training, retraining, or processing power, so there are costs involved. Ideally, generative AI keeps getting more refined, and hallucinations become steadily rarer.
Beware the common AI hallucination
Now you know the ins and outs of AI hallucinations and why you can’t trust generative AI with specifics. Keep that in mind when asking AI for answers or reports on a topic, and you’ll never be surprised by this flaw. In the meantime, stop by our articles on how Google wants your photos for AI and how an LLM works.