Role of Uncertainty Quantification in the Explainability of Large Language Models for the Nuclear Industry
Conference · OSTI ID:3028731
- Idaho National Laboratory
The meteoric rise of generative artificial intelligence (AI) large language models (LLMs) has created an opportunity to utilize them to increase efficiencies in a multitude of industries. While LLMs carry great potential to revolutionize the manner in which work is performed, numerous known deficiencies limit their utility, including the black box nature of the models, the stochastic nature of the response (i.e., presenting the same prompt multiple times results in different responses), and the potential for hallucination. Widespread adoption of LLMs in safety-critical industries such as nuclear will require some form of explainability to assure end users that the LLM’s response to a given query is valid. Model uncertainty is inherently linked to the concepts of trust and explainability, and can be used to identify situations in which the model is insufficiently certain about its answer. Although uncertainty is not enough in and of itself to determine the suitability of an answer—a model can be very certain of an inaccurate answer—it still provides valuable supporting information. Practical methodologies for gauging or quantifying the uncertainty in LLM outputs are presented herein, along with examples based on nuclear-specific prompts.
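The abstract's observation that presenting the same prompt multiple times yields different responses suggests one practical uncertainty signal: sample the model repeatedly on a single prompt and measure agreement among the answers. Below is a minimal sketch of that idea using normalized Shannon entropy over the sampled answers; the example strings are hypothetical placeholders, not outputs from the paper's experiments, and real use would replace them with actual LLM samples.

```python
import math
from collections import Counter

def answer_entropy(samples):
    """Normalized Shannon entropy over repeated answers to one prompt.

    0.0 means all samples agree (the model is self-consistent);
    values near 1.0 mean maximal disagreement across samples.
    """
    counts = Counter(s.strip().lower() for s in samples)  # crude normalization
    n = len(samples)
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)
    max_h = math.log2(n)  # entropy if every sample were distinct
    return h / max_h if max_h > 0 else 0.0

# Hypothetical answers to a nuclear-domain prompt (illustrative only).
consistent = ["boron", "Boron", "boron"]
scattered = ["boron", "cadmium", "hafnium"]

print(answer_entropy(consistent))  # 0.0: high self-consistency
print(answer_entropy(scattered))   # close to 1.0: flag for human review
```

A high entropy score does not prove the answer is wrong, just as the abstract notes that low uncertainty does not prove it right; the score is supporting information for deciding when a response warrants human review.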
- Research Organization:
- Idaho National Laboratory (INL), Idaho Falls, ID (United States)
- Sponsoring Organization:
- USDOE Office of Nuclear Energy (NE), Nuclear Reactor Technologies (NE-7)
- DOE Contract Number:
- AC07-05ID14517
- OSTI ID:
- 3028731
- Report Number(s):
- INL/CON-25-83089
- Resource Type:
- Conference proceedings
- Conference Information:
- NPIC&HMIT 2025, Chicago, 06/15/2025 - 06/18/2025
- Country of Publication:
- United States
- Language:
- English
Similar Records
AI for Interpreting Nuclear Power Plant Documents for Power Uprates
Conference · 07/06/2025 · OSTI ID:3024448
Reducing AI RAG Hallucination by Optimizing Routing Techniques
Conference · 08/15/2024 · OSTI ID:2474834
Trustworthiness and Trust: Identifying Factors that Drive Successful Human-AI Interaction in Nuclear Power Plant Applications
Conference · 06/17/2025 · OSTI ID:3024592