Mitigating the Risk of AI Hallucinations

Written by
Amrish Singh

Generative AI has one well-documented problem: hallucination. When responding to prompts, large language models sometimes make up information, and when asked whether that information is correct, they may invent sources to back up the fabricated claim. For insurance companies, this is a serious concern. However, it’s possible to overcome the AI hallucination challenge with the right platform.

How AI Hallucinations Have Caused Problems

CNN lists several examples of AI hallucinations. One occurred when Google unveiled a demo of Bard: asked about new discoveries from the James Webb Space Telescope, the chatbot incorrectly said the telescope took the first pictures of a planet outside our solar system. Although this might seem like a trivial error, CNN says shares in Google’s parent company dropped by 7.7% after the hallucination, reducing its market value by $100 billion.

In another widely reported incident, CNN says a lawyer used ChatGPT to research a brief for a case. The chatbot provided fabricated judicial decisions, and when the lawyer asked whether one of the cases was real, the AI confirmed it was and supplied fake citations.

CNET has also encountered problems with hallucinations, according to CNN. The news outlet reportedly used an AI tool to write multiple articles and then had to issue corrections due to mistakes in those articles, some of which were “substantial.”

Hallucinations have even led to litigation. According to The Verge, a radio host filed a lawsuit against OpenAI (the company behind ChatGPT) after the chatbot incorrectly stated that he had been accused of embezzling funds.

Implications for the Insurance Industry

In the insurance industry, incorrect information is a serious challenge. Risk selection, risk pricing, and claims decisions all hinge on the availability of accurate data.

Hallucinations can also harm policyholders. If a customer calls to ask about their coverage and an AI chatbot incorrectly tells them that their homeowners insurance covers flood damage, the consequences could be serious when a flood claim is later denied.

Mitigating the AI Hallucination Risk

The good news is it’s possible to mitigate hallucination risks. 

TechTarget identifies several factors that can lead to hallucinations. One major issue is data quality: if the source content contains incorrect information, the model may repeat or build on it. Bias introduced during training can also lead to hallucinations, as can unclear, inconsistent, or contradictory prompts.

Once you understand the primary causes of hallucinations, it’s possible to devise ways to mitigate the risk. An AI expert who spoke to TechCrunch explains that hallucinations can be reduced through careful training and deployment of LLMs, specifically by curating a high-quality knowledge base that the LLM draws on.

IBM also emphasizes the importance of using high-quality training data. Other steps, such as defining the purpose of the AI model and limiting responses, can also help prevent hallucinations.
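
To make these recommendations concrete, the sketch below shows one common way to ground an LLM’s answers in a curated knowledge base and to limit responses when no supporting material exists. It is a minimal, hypothetical Python example, not any particular vendor’s implementation; the knowledge base contents, the retrieve function, and the prompt template are illustrative assumptions.

```python
# Hypothetical sketch: grounding an LLM answer in a curated knowledge base
# and refusing to answer when no supporting passage is found.
# Names (Passage, KNOWLEDGE_BASE, retrieve, build_prompt) are illustrative only.

from dataclasses import dataclass


@dataclass
class Passage:
    source: str  # where the text came from, for citation
    text: str    # the vetted content itself


# A small, vetted knowledge base. In practice this would be a reviewed
# document store, not a hard-coded list.
KNOWLEDGE_BASE = [
    Passage("HO-3 policy form",
            "Standard homeowners policies exclude flood damage; "
            "flood coverage requires a separate policy."),
    Passage("Claims manual",
            "Water damage from a burst pipe is generally covered "
            "under the dwelling coverage of a homeowners policy."),
]


def retrieve(question: str, k: int = 2) -> list[Passage]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    terms = set(question.lower().split())
    scored = [(len(terms & set(p.text.lower().split())), p) for p in KNOWLEDGE_BASE]
    return [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0][:k]


def build_prompt(question: str) -> str | None:
    """Return a grounded prompt, or None if nothing in the knowledge base applies."""
    passages = retrieve(question)
    if not passages:
        return None  # limit responses: don't let the model guess
    context = "\n".join(f"[{p.source}] {p.text}" for p in passages)
    return (
        "Answer using ONLY the passages below. If they do not contain the answer, "
        "reply exactly: 'I don't have that information.'\n\n"
        f"{context}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    prompt = build_prompt("Does homeowners insurance cover flood damage?")
    print(prompt or "No supporting material found; escalate to a human agent.")
```

The key design choice is the refusal path: when retrieval finds nothing relevant, the system declines to answer rather than letting the model guess.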

How Liberate Reduces the Risk of Hallucination 

Liberate’s AI-powered platform provides an LLM agent that automates underwriting, claims handling, and other insurance processes. To ensure the platform is successful, we’ve taken many steps to reduce the chance of AI hallucinations. 

We use an agent hierarchy that includes a master, intake, fraud assessor, and policy verification agent (in addition to a cybersecurity agent that addresses another critical AI issue: data safety). The platform also lets agents plug their reasoning capabilities into typed workflows, establishing logical processes and reducing random outputs.
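
As a rough illustration of how typed workflows constrain an agent hierarchy, the sketch below shows a master agent routing a claim through intake, fraud assessment, and policy verification steps, with each hand-off expressed as a typed object rather than free-form text. This is a hypothetical Python sketch, not Liberate’s actual implementation; the agent names, fields, and routing logic are assumptions made for illustration.

```python
# Hypothetical illustration of an agent hierarchy with typed workflow steps.
# This is NOT Liberate's implementation; agent names, fields, and heuristics
# are assumptions used only to show how typed hand-offs constrain each step.

from dataclasses import dataclass
from enum import Enum


class FraudRisk(Enum):
    LOW = "low"
    HIGH = "high"


@dataclass(frozen=True)
class ClaimIntake:          # output of the intake agent
    claim_id: str
    policy_number: str
    description: str


@dataclass(frozen=True)
class FraudAssessment:      # output of the fraud assessor agent
    claim_id: str
    risk: FraudRisk


@dataclass(frozen=True)
class PolicyVerification:   # output of the policy verification agent
    claim_id: str
    coverage_confirmed: bool


def intake_agent(raw_text: str) -> ClaimIntake:
    """Extract structured fields; free-form prose cannot leave this step."""
    return ClaimIntake(claim_id="CLM-001", policy_number="POL-123", description=raw_text)


def fraud_agent(claim: ClaimIntake) -> FraudAssessment:
    # Stand-in heuristic; a real agent would use a model with typed output.
    risk = FraudRisk.HIGH if "total loss" in claim.description.lower() else FraudRisk.LOW
    return FraudAssessment(claim_id=claim.claim_id, risk=risk)


def policy_agent(claim: ClaimIntake) -> PolicyVerification:
    # Stand-in: a real agent would check the policy system of record.
    return PolicyVerification(claim_id=claim.claim_id, coverage_confirmed=True)


def master_agent(raw_text: str) -> dict:
    """Orchestrate the workflow; every hand-off is a typed object, not free text."""
    claim = intake_agent(raw_text)
    fraud = fraud_agent(claim)
    policy = policy_agent(claim)
    return {"claim": claim, "fraud": fraud, "policy": policy}


if __name__ == "__main__":
    print(master_agent("Water damage in kitchen after pipe burst."))
```

Because each step must return a well-defined type, a malformed or invented field fails fast instead of propagating downstream, which is the sense in which typed workflows reduce random outputs.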

AI hallucination is a serious risk. As insurers adopt new AI tools and select AI partners, they need to verify that measures are in place to minimize this risk. To see Liberate’s system in action, book a demo.