# Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

## Abstract

Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes: the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
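The trade-off described in the abstract can be illustrated with a minimal numerical sketch. Under the stated assumptions (normally distributed errors and exact compliance with the USL, i.e., the case is pushed until k-calc plus n calculational sigmas equals the USL), the probability that the true k-effective exceeds 1 has an interior minimum in the calculational standard deviation. The parameter values below (USL, bias, bias standard deviation, n) are illustrative assumptions, not values from the report, and the closed-form stationary point shown is derived for this simplified model only, not quoted from the paper.

```python
import math

# Illustrative benchmarking-step outcomes (assumed values, not from the report)
USL = 0.95       # upper subcritical limit
bias = 0.005     # assumed amount by which calculations overpredict k-eff
sigma_b = 0.01   # standard deviation of the bias
n = 2.0          # number of calculational sigmas added before comparing to USL

def risk(sigma_c):
    """P(true k-eff > 1) for a case at exact USL compliance.

    Model: k_calc + n*sigma_c = USL, so the true k-eff is
    k_true = USL - n*sigma_c - bias + e, with the combined error
    e ~ N(0, sqrt(sigma_b**2 + sigma_c**2)).
    """
    margin = (1.0 - USL) + bias + n * sigma_c   # distance from mean k_true to 1.0
    s = math.hypot(sigma_b, sigma_c)            # combined standard deviation
    # Upper-tail probability of the normal distribution
    return 0.5 * math.erfc(margin / (s * math.sqrt(2.0)))

# Numerically locate the risk-minimizing calculational sigma on a fine grid
grid = [i * 1e-5 for i in range(1, 2001)]       # sigma_c from 0.00001 to 0.02
sigma_star = min(grid, key=risk)

# Stationary point of this simplified model in closed form:
#   sigma_c* = n * sigma_b**2 / ((1 - USL) + bias)
analytic = n * sigma_b**2 / ((1.0 - USL) + bias)
```

With these assumed numbers, the numerical minimizer sits near the closed-form value and the risk there is strictly below the risk at zero calculational sigma, consistent with the paper's qualitative claim that the risk-minimizing standard deviation is non-zero and depends only on the bias, its standard deviation, the USL, and n.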

- Authors:
- Pevey, Ronald E.

- Publication Date:
- September 2005

- Research Org.:
- Office of Scientific and Technical Information, Oak Ridge, TN; Oak Ridge Y-12 Plant (Y-12), Oak Ridge, TN

- Sponsoring Org.:
- USDOE - Office of Defense Programs (DP)

- OSTI Identifier:
- 859274

- Report Number(s):
- Y/DD-1191; 4300020173; TRN: US0504705

- DOE Contract Number:
- DE-AC05-00OR22800

- Resource Type:
- Technical Report

- Country of Publication:
- United States

- Language:
- English

- Subject:
- 42 ENGINEERING; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; CRITICALITY; SAFETY ANALYSIS; MONTE CARLO METHOD; ERRORS

### Citation Formats

```
Pevey, Ronald E. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments. United States: N. p., 2005. Web. doi:10.2172/859274.
```

```
Pevey, Ronald E. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments. United States. doi:10.2172/859274.
```

```
Pevey, Ronald E. "Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments". United States. doi:10.2172/859274. https://www.osti.gov/servlets/purl/859274.
```

```
@article{osti_859274,
  title = {Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments},
  author = {Pevey, Ronald E},
  abstractNote = {Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.},
  doi = {10.2172/859274},
  place = {United States},
  year = {2005},
  month = {9}
}
```