Here's the thing nobody tells you when you first start using AI: it will confidently tell you things that are completely wrong.
Not maliciously. Not even reluctantly. With full, articulate, well-structured confidence. It sounds exactly like something that is true. Except it isn't.
This is called hallucination, and understanding it changes how you use these tools — for the better.
🌀 What is hallucination, actually?
Remember from the last post: AI generates text by predicting what word comes next, based on patterns in its training data. It doesn't "know" facts the way you know facts. It knows what facts look like.
So when it doesn't have a clear answer, it doesn't say "I don't know." It generates something that looks like the right kind of answer — a plausible pattern. A book title that might exist. A statistic that might be real. A quote that sounds like something someone might have said.
The result can be impressively wrong.
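If it helps to see the mechanism as code, here's a deliberately tiny sketch. This is not how a real model works internally (the table, words, and probabilities are all invented for illustration), but it shows the key property: the machinery only ever picks a plausible next word, and nothing in it checks whether the finished sentence is true.

```python
import random

# A toy "language model": a lookup table of learned continuations.
# (Made-up data; a real model learns billions of such patterns.)
continuations = {
    ("the", "capital", "of", "france", "is"):
        [("paris", 0.95), ("lyon", 0.05)],
    ("the", "study", "was", "published", "in"):
        [("nature", 0.4), ("science", 0.3), ("2019", 0.3)],
}

def next_word(context):
    # Pick a continuation, weighted by how often it followed this context.
    words, weights = zip(*continuations[context])
    return random.choices(words, weights=weights)[0]

# Always produces something plausible-looking; there is no
# "I don't know" branch, and no step that verifies the claim.
print(next_word(("the", "study", "was", "published", "in")))
```

Notice that the sketch can never decline to answer. Scaled up enormously, that's the shape of the problem.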
✅ Where it's generally trustworthy
✓ Drafting and writing
If you're generating text that you'll review before using, hallucination matters a lot less. You're the editor.
✓ Explaining concepts
How does compound interest work? What's a SWOT analysis? For well-established concepts like these, its explanations are generally clear and reliable.
✓ Brainstorming ideas
It's a brilliant ideation tool. Ideas don't have to be factually accurate — they just have to be interesting starting points.
✓ Summarising your own content
If you paste in a document, it summarises what's actually there. No invention needed.
⚠️ Where to be careful
✗ Specific facts and figures
Statistics, dates, names, prices. Always verify these against the original source before using them.
✗ Legal and medical advice
It will give confident-sounding answers that may be wrong or outdated. Consult a professional.
✗ Recent events
AI has a training cutoff date: it doesn't know anything that happened after a certain point. Ask about something more recent and it may answer anyway, confidently and incorrectly.
✗ Citations and sources
It will cite papers, books, and articles that sound real but don't exist. Never trust a citation without verifying it.
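If you handle academic references often, one cheap programmatic check is possible: most real papers have a DOI, and Crossref's public API (https://api.crossref.org) returns a 404 for DOIs it has never seen. A minimal sketch follows; the second DOI below is invented for the example, and a failed lookup is a red flag to investigate, not final proof either way.

```python
import urllib.request
import urllib.error

def doi_exists(doi: str) -> bool:
    """Return True if Crossref's public registry knows this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # Crossref answers 404 for DOIs it doesn't recognise.
        return False

print(doi_exists("10.1038/nature14539"))     # a well-known real paper
print(doi_exists("10.9999/not.a.real.doi"))  # invented for this example
```

It won't catch everything (books and older articles may lack DOIs), but it filters out the most blatant inventions quickly.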
🔍 The practical rule
Think about the consequence if it's wrong. Low consequence, like brainstorming names for a project? Use it freely. High consequence, like a fact in a public document, medical information, or legal advice? Verify everything independently.
Another useful habit: ask it to flag uncertainty. Asking "Tell me where you're less confident" or "What should I double-check?" often prompts the model to be more candid about its limitations.
The people who get burned by AI are usually the ones who didn't read what it produced before sending or publishing it. The people who get the most value treat the output as a very capable first draft — and apply their own judgement before it goes anywhere important.