U.S. Department of Energy
Office of Scientific and Technical Information

Reducing AI RAG Hallucination by Optimizing Routing Techniques

Conference · OSTI ID: 2474834

Large Language Models (LLMs), such as ChatGPT, tend to “hallucinate,” meaning they confidently generate false information. Retrieval Augmented Generation (RAG) attempts to diminish hallucination by providing the LLM with context drawn from data stores (indexes) containing relevant information; the LLM uses this context to formulate its response. RAG systems can still suffer from hallucination because of poor embeddings or ineffective routing. For example, a router will often return context from an irrelevant index, resulting in a hallucinated answer. In this study, we aim to minimize the frequency of routing hallucinations by optimizing Index Summary Routing.
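To illustrate the routing step described in the abstract, the sketch below shows one simple form of index summary routing: each index is described by a short natural-language summary, the incoming query is scored against every summary, and the query is sent to the best-matching index, or to none when no summary is similar enough. This is a minimal, hypothetical example, not the implementation evaluated in this work; the index names, summaries, similarity threshold, and bag-of-words scoring are placeholders standing in for the learned embedding models and indexes a production RAG system would use.

```python
# Minimal, hypothetical sketch of index summary routing (illustration only, not the
# implementation studied in this work). Each index is described by a short summary;
# the router scores the query against every summary and either picks the best-matching
# index or abstains when nothing is similar enough, which is one simple way to avoid
# pulling context from an irrelevant index.
#
# The bag-of-words "embedding" below is a toy stand-in for a real embedding model, and
# the index names, summaries, and threshold are made up for the example.

import re
from collections import Counter
from math import sqrt
from typing import Optional


def embed(text: str) -> Counter:
    """Toy bag-of-words vector; a real router would call an embedding model here."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def route(query: str, index_summaries: dict, threshold: float = 0.2) -> Optional[str]:
    """Return the best-matching index name, or None when no summary clears the threshold."""
    q = embed(query)
    scores = {name: cosine(q, embed(summary)) for name, summary in index_summaries.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None


if __name__ == "__main__":
    summaries = {
        "reactor_ops": "Operating procedures and maintenance logs for nuclear reactor systems.",
        "hr_policies": "Employee handbook, leave policies, and benefits information.",
    }
    # Routed to the relevant index.
    print(route("How do I schedule reactor coolant pump maintenance?", summaries))  # reactor_ops
    # No summary is a good match, so the router abstains rather than return irrelevant context.
    print(route("What is the capital of France?", summaries))  # None
```

The abstention branch matters for hallucination: returning nothing (and letting the system say it cannot answer) is generally safer than handing the LLM context from the wrong index.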

Research Organization: Idaho National Laboratory (INL), Idaho Falls, ID (United States)
DOE Contract Number: AC07-05ID14517
Report Number(s): INL/MIS-24-79615-Rev000
Country of Publication: United States
Language: English

Similar Records

RAG for FLAG: AI Assistance for a Physics Code
Technical Report · September 18, 2025 · OSTI ID: 2588973

LLaMP v0.1.0
Software · April 23, 2025 · OSTI ID: code-170337

Improving Reliability of Large Language Models for Nuclear Power Plant Diagnostics [Poster]
Technical Report · July 24, 2024 · OSTI ID: 2440146