Generalized Bayesian Framework for Evaluation of Integral Benchmark Experiments
A recently published generalized Bayesian optimization framework provides a way to relax any or all of three common assumptions underlying the conventional Generalized Linear Least Squares (GLLS) optimization method, building on the concepts introduced in Ref. [2]. These assumptions are:

1. Perfection: the model used for data evaluation and the prior probability distribution function (PDF) of the generalized data are perfect.
2. Normality: the prior and posterior PDFs are normal.
3. Linearity: the model is linear.

In this work we outline how the framework of Ref. [1] can be adopted directly for improved evaluation of nuclear criticality integral benchmark experiments (IBEs) by:

1. removing the first assumption alone, using the concept of imperfections introduced in Ref. [1] to enable evaluation in the presence of discrepancies between the data and the model, or of missing covariance information, via a GLLS method that generalizes the conventional GLLS method employed by the TSURFER code; and
2. removing the remaining two assumptions by implementing a Markov chain Monte Carlo (MCMC) method for computation of the posterior PDF in the SAMPLER code.

TSURFER and SAMPLER are the uncertainty quantification (UQ) codes for IBEs in the SCALE code system, based on the GLLS and stochastic methods, respectively. Figure 1 categorizes the methods discussed in terms of the assumptions that they employ to determine posterior PDFs.
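To illustrate the relationship between the two approaches, the following is a minimal sketch on an invented toy problem (all sensitivities, covariances, and measured values below are hypothetical, not taken from TSURFER, SAMPLER, or any IBE). It computes the GLLS posterior under the normality and linearity assumptions, then samples the same posterior with a Metropolis MCMC chain; for this linear-normal case the two agree, while the MCMC path continues to work if the model or PDFs are replaced with non-normal or nonlinear ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: one benchmark response y depends linearly on
# two nuclear-data parameters x through a sensitivity matrix S.
x0 = np.array([1.0, 2.0])                  # prior parameter mean
C0 = np.array([[0.04, 0.01],
               [0.01, 0.09]])              # prior parameter covariance
S = np.array([[1.0, 0.5]])                 # linear model: y = S @ x
y_meas = np.array([2.2])                   # measured response
V = np.array([[0.01]])                     # measurement covariance

# --- GLLS / normal-linear Bayesian update (all three assumptions hold) ---
# Standard gain-matrix form of the posterior mean and covariance.
K = C0 @ S.T @ np.linalg.inv(S @ C0 @ S.T + V)
x_glls = x0 + (K @ (y_meas - S @ x0)).ravel()
C_glls = C0 - K @ S @ C0

# --- Metropolis MCMC on the same posterior (normality/linearity optional) ---
C0_inv, V_inv = np.linalg.inv(C0), np.linalg.inv(V)

def log_post(x):
    r_prior = x - x0
    r_data = y_meas - S @ x    # swap in a nonlinear model here if desired
    return -0.5 * (r_prior @ C0_inv @ r_prior + r_data @ V_inv @ r_data)

x, lp = x0.copy(), log_post(x0)
samples = []
for i in range(60_000):
    prop = x + 0.08 * rng.standard_normal(2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
        x, lp = prop, lp_prop
    if i >= 10_000:                            # discard burn-in
        samples.append(x)
samples = np.asarray(samples)

print("GLLS posterior mean:", x_glls)
print("MCMC posterior mean:", samples.mean(axis=0))
```

Because the toy posterior is exactly normal, the chain's sample mean converges to the GLLS solution; the point of the MCMC route in SAMPLER is that it remains valid when that equivalence breaks down.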
- Research Organization:
- Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA), Nuclear Criticality Safety Program (NCSP)
- DOE Contract Number:
- AC05-00OR22725
- OSTI ID:
- 1994538
- Report Number(s):
- 194595
- Country of Publication:
- United States
- Language:
- English
Similar Records
Generalized Bayesian Framework for Evaluation of Integral Benchmark Experiments
Conference · November 29, 2021 · OSTI ID: 1886588

Bayesian Monte Carlo Evaluation Framework for Imperfect Data and Models [Abstract]
Conference · July 26, 2022 · OSTI ID: 1899767

Bayesian Optimization Framework for Imperfect Data or Models
Technical Report · April 1, 2022 · OSTI ID: 1874643