How to Reduce AI Chatbot Hallucinations

Some mistakes are inevitable. But there are ways to phrase your questions that make a chatbot less likely to make stuff up.

You can’t stop an AI chatbot from sometimes hallucinating—giving misleading or mistaken answers to a prompt, or even making things up. But there are some things you can do to limit the amount of faulty information a chatbot gives you in response to your request.

AI hallucinations arise from a couple of things, says Matt Kropp, chief technology officer at BCG X, a unit of Boston Consulting Group. One is that the data on which an AI chatbot was trained contained conflicting, incorrect or incomplete information about the subject you’re asking about. You can’t do anything about that. The second is that “you haven’t specified enough of what you want,” Kropp says—and that is something you can address.  
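That second cause applies whether you're typing into a chat window or calling a model from code. Below is a minimal sketch, assuming the OpenAI Python client and an API key in the environment; the model name, the company and both prompts are illustrative, not from the article. It contrasts an under-specified request with one that pins down the subject, the time frame, the format and explicit permission to say "I don't know."

```python
# A minimal sketch, assuming the OpenAI Python client ("pip install openai")
# and an OPENAI_API_KEY set in the environment. The model name, company and
# prompts below are illustrative.
from openai import OpenAI

client = OpenAI()

# Under-specified: leaves the model to guess at the company, period and sources.
vague_prompt = "Tell me about the company's earnings."

# Well-specified: names the subject and time frame, fixes the format, and
# explicitly allows "I don't know" instead of a guess.
specific_prompt = (
    "Summarize Acme Corp's Q2 2024 earnings in three bullet points. "
    "Only include figures you are confident about; if you are unsure of a "
    "number, say so rather than estimating."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model works the same way
    messages=[{"role": "user", "content": specific_prompt}],
)
print(response.choices[0].message.content)
```

The point isn't the particular library: any chatbot, in a browser or behind an API, tends to guess less when the request spells out what you want and gives it an acceptable way to decline.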
