U.S. Department of Energy
Office of Scientific and Technical Information

Measuring the Predictive Capability of Computational Models: Principles and Methods, Issues and Illustrations

Technical Report · DOI: https://doi.org/10.2172/780290 · OSTI ID: 780290

It is critically important, for the sake of credible computational predictions, that model-validation experiments be designed, conducted, and analyzed in ways that provide for measuring predictive capability. I first develop a conceptual framework for designing and conducting a suite of physical experiments and corresponding calculations (ranging from the phenomenological to the integral level), and then for analyzing the results: first to measure, statistically, predictive capability in the tested experimental situations, and then to provide a basis for inferring the uncertainty of a computational-model prediction of system or component performance in an application environment or configuration that cannot or will not be tested. Several attendant issues are discussed in general and then illustrated with a simple linear model and a shock-physics example. The primary messages I wish to convey are:
(1) The only way to measure predictive capability is via suites of experiments and corresponding computations in testable environments and configurations.
(2) Any measurement of predictive capability is a function of experimental data and hence is statistical in nature.
(3) A critical inferential link is required to connect observed prediction errors in experimental contexts to bounds on prediction errors in untested applications; such a connection may require extrapolating both the computational model and the observed extra-model variability (the prediction errors: nature minus model).
(4) Model validation is not binary: passing a validation test does not mean that the model can be used as a surrogate for nature.
(5) Model-validation experiments should be designed and conducted in ways that permit a realistic estimate of prediction errors, or extra-model variability, in application environments.
(6) Code uncertainty-propagation analyses do not (and cannot) characterize prediction error (nature versus computational prediction).
(7) There are trade-offs between model complexity and the ability to measure a computer model's predictive capability that must be addressed in any particular application.
(8) Adequate quantification of predictive capability, even in greatly simplified situations, can require a substantial number of model-validation experiments.
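The central quantities in this framework are the observed prediction errors (nature minus model) collected over a suite of validation experiments. The short Python sketch below illustrates, under stated assumptions, one simple way such errors might be summarized and extrapolated to an untested condition; the data values, the linear error-trend assumption, and the two-sigma band are illustrative only and are not the report's actual method.

# Minimal sketch (illustrative assumptions, not the report's method):
# estimate extra-model variability (prediction error = observation - model
# prediction) from a suite of validation experiments, then extrapolate a
# rough error bound to an untested application condition.
import numpy as np

# Hypothetical suite of validation experiments: test condition x,
# measured response y_obs, and the code's prediction y_model at each x.
x       = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y_obs   = np.array([1.9, 4.2, 6.1, 8.4, 10.2, 12.5])
y_model = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])

# Extra-model variability: nature minus model at the tested conditions.
err = y_obs - y_model

# Statistical summary of the observed prediction errors.
bias   = err.mean()
spread = err.std(ddof=1)

# Assume (a strong, hedged assumption) that the error trend is roughly linear
# in x, so the inferred error can be extrapolated beyond the tested range.
slope, intercept = np.polyfit(x, err, 1)

x_app   = 4.0                          # untested application condition (illustrative)
err_app = slope * x_app + intercept    # extrapolated prediction error

# Crude two-sigma band on the extrapolated error; a real analysis would use
# tolerance or prediction intervals and account for extrapolation risk.
print(f"bias = {bias:.3f}, spread = {spread:.3f}")
print(f"extrapolated error at x = {x_app}: {err_app:.3f} +/- {2 * spread:.3f}")

Note that such a sketch addresses only messages (2) and (3) above in the simplest possible setting; the report's shock-physics illustration and its treatment of the inferential link to untested applications are substantially more involved.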

Research Organization:
Sandia National Labs., Albuquerque, NM (US); Sandia National Labs., Livermore, CA (US)
Sponsoring Organization:
US Department of Energy (US)
DOE Contract Number:
AC04-94AL85000
OSTI ID:
780290
Report Number(s):
SAND2001-0243
Country of Publication:
United States
Language:
English

Similar Records

Relation of validation experiments to applications.
Technical Report · 2009 · OSTI ID: 949007

Validation of SAS4A/SASSYS-1 for predicting steady-state single-phase natural circulation
Journal Article · 2021 · Nuclear Engineering and Design · OSTI ID: 1798462

OECD/NEA EGMPEBV Activities in Multi-Physics Verification and Validation
Journal Article · 2017 · Transactions of the American Nuclear Society · OSTI ID: 23050378