Let's start with a scenario. You ask an AI chatbot whether a medication is safe to take with alcohol. It answers confidently, without hesitation.

You follow its advice. But what if it was wrong? What if the AI simply made up an answer because it was designed to always sound sure of itself — even when it isn't?

This isn't science fiction. It's a problem that researchers, ethicists, and security experts are actively grappling with.