AI, Hallucination & Truth: The Intersection

We stand at the cusp of a profound shift in how humans seek knowledge. Search engines, once the gateways to the world’s information, are gradually losing their charm. Typing queries into Google will soon feel as quaint as flipping through a phonebook. Why? Because people are no longer searching — they’re asking.

With the rise of AI assistants like Siri, Alexa, and now generative AI chatbots, the way we interact with information is being reshaped. Conversations with machines are replacing keyword searches, and this trend is only gaining momentum. But beneath this seemingly convenient revolution lies a question few are asking: What will happen to truth?

Convenience or Complacency
It’s human nature to seek shortcuts. If asking an AI a question is easier than doing the research ourselves, why wouldn’t we choose the path of least resistance? The problem isn’t the ease of access but the quality of the answers. Generative AI systems don’t “know” facts the way humans do. They predict plausible responses based on statistical patterns in the data they’ve been trained on, and that data is often incomplete, outdated or biased.
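To see why, here is a deliberately tiny sketch of the underlying idea in Python: a toy n-gram “model” that predicts the next word purely from frequency counts in its training text. The corpus and the factual error in it are invented for illustration; real systems are incomparably larger, but the principle, prediction rather than verification, is the same.

```python
# A toy "language model": it learns which word tends to follow a given
# context, then predicts the most frequent continuation. Nothing in it
# represents truth; it only represents frequency. The corpus below is
# invented, and the factual error in it is deliberate.
from collections import Counter, defaultdict

corpus = [
    "the capital of australia is sydney",    # wrong, but repeated often
    "the capital of australia is sydney",    # (a stand-in for biased data)
    "the capital of australia is canberra",  # right, but seen only once
]

# Count which word follows each context window (a simple n-gram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 1):
        context = tuple(words[max(0, i - 3):i + 1])
        follows[context][words[i + 1]] += 1

def predict_next(prompt: str) -> str:
    """Return the statistically likeliest next word, not the true one."""
    words = tuple(prompt.split())
    candidates = follows.get(words[-4:])
    return candidates.most_common(1)[0][0] if candidates else "<no prediction>"

print(predict_next("the capital of australia is"))  # prints: sydney
```

The toy model confidently completes the sentence with “sydney”, simply because that was the most common continuation in its data. At no point does anything in the pipeline ask whether the answer is true.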

In AI-speak, when a system produces an answer that sounds convincing but is factually incorrect, the result is called a hallucination. Now imagine this: a future where young minds rely almost entirely on AI for learning, absorbing not just facts but hallucinated fragments of reality presented with the same confidence as unquestionable truth.

The Generation Raised by Hallucinations
Every generation is shaped by the tools it uses to understand the world. Books, teachers, films, the internet — each has left its mark on human thought. But if AI becomes the primary medium of knowledge, the risk isn’t just misinformation; it’s systemic distortion.

If children learn from hallucinating AIs, they will carry forward an understanding of the world built on warped foundations. As these children grow up, some will become teachers, scientists, policymakers, CEOs and even presidents. Their decisions, beliefs and values will be shaped by these AI-fed realities.

The most unsettling part? When an entire generation shares the same hallucinated truths, they will no longer feel like distortions. They will be accepted — unquestioned, unchallenged and treated as absolute reality.

Conversations in a Hallucinated World
This leads to a deeper and more troubling thought: what will conversations look like in such a world?

When people base their worldviews on distorted knowledge, meaningful dialogue becomes nearly impossible. Debates won’t revolve around ideas, but around conflicting ‘realities.’ Each side will be armed not with facts, but with AI-generated interpretations of facts. The gap between perception and reality will widen, and conversations will turn into echo chambers of artificially constructed belief systems.

The very fabric of intellectual growth — curiosity, skepticism, and inquiry — could unravel. After all, why question anything when an AI has already answered?

Can We Teach Machines to Be Honest?
The future isn’t set in stone. Developers are working hard to reduce AI hallucinations, but the question remains: can machines ever fully grasp truth, or only simulate it?
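One common mitigation is worth sketching, if only to show what it does and doesn’t solve. In retrieval-grounded systems, the model answers only from documents it can actually cite, and abstains otherwise. The Python below is a loose, hypothetical stand-in for that idea; the document store and the keyword matching are invented placeholders, not a real retrieval pipeline.

```python
# A loose sketch of "grounded" answering: reply only from stored source
# text, and abstain when nothing supports an answer. The document store
# and the keyword matching are hypothetical placeholders, not a real
# retrieval-augmented generation system.
DOCUMENTS = {
    "canberra": "Canberra is the capital of Australia.",
    "hallucination": "In AI, a hallucination is a confident but false output.",
}

def grounded_answer(question: str) -> str:
    """Answer only when a stored source covers the topic; otherwise abstain."""
    for keyword, source in DOCUMENTS.items():
        if keyword in question.lower():
            return f"{source} [source: stored document '{keyword}']"
    # Abstaining is the honest fallback: better "I don't know" than a fluent guess.
    return "I don't know."

print(grounded_answer("Is Canberra the capital of Australia?"))
print(grounded_answer("Who designed the Sydney Opera House?"))  # abstains
```

Grounding trades fluency for honesty: the system says less, but what it says is anchored to something checkable. Whether that amounts to grasping truth or merely quoting it is exactly the open question.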

And more importantly — can humans retain the habit of questioning in an age where machines seem to know it all?

Maybe the answer isn’t better AI. Maybe the answer is raising better humans: people who know that what is presented as reality is not always the truth. Truth is rarely delivered fully formed, but rather discovered through patience, doubt and critical thinking.

The age of AI is a double-edged sword. On one side lies the promise of knowledge at our fingertips. On the other, the quiet danger of a world where hallucinated reality is mistaken for the ultimate truth. The future won’t just test the intelligence of machines — it will test the wisdom of humanity.

So the next time your AI assistant answers a question, ask yourself: Is this the truth — or just a convincing illusion?


