Sensitivity and Uncertainty Analysis
Summary Notes from 15 November 2007 Generic Technical Issue Discussion on Sensitivity and Uncertainty Analysis and Model Support
Direct Aerosol Forcing Uncertainty
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
McComiskey, Allison
2008-01-15
Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface, as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however, decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty, although comparable to uncertainty arising from some individual properties.
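The propagation scheme described here — each property's DRF uncertainty is its sensitivity times its measurement uncertainty — can be sketched as follows. The sensitivities and measurement uncertainties below are invented placeholders, not values from the paper, and the quadrature combination assumes independent errors:

```python
import math

# Illustrative sketch of the abstract's propagation scheme: the DRF
# uncertainty from each property is its sensitivity times its measurement
# uncertainty, and contributions combine in quadrature (assuming independent
# errors). All numbers are made-up placeholders, NOT McComiskey (2008) values.

properties = {
    # name: (sensitivity dDRF/dx [W m^-2 per unit], measurement uncertainty)
    "aerosol_optical_depth":    (-30.0, 0.01),
    "single_scattering_albedo": (20.0, 0.03),
    "asymmetry_parameter":      (10.0, 0.02),
    "surface_albedo":           (5.0, 0.02),
}

contributions = {name: abs(s) * u for name, (s, u) in properties.items()}
total = math.sqrt(sum(c ** 2 for c in contributions.values()))

for name, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{name:25s} {c:.3f} W m^-2")
print(f"total (quadrature):       {total:.3f} W m^-2")
```

With these placeholder numbers, single scattering albedo dominates the budget, mirroring the abstract's conclusion.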
U.S. Department of Energy (DOE) all webpages (Extended Search)
Climate Uncertainty The uncertainty in climate change and in its impacts is of great concern to the international community. While the ever-growing body of scientific evidence substantiates present climate change, the driving concern about this issue lies in the consequences it poses to humanity. Policy makers will most likely need to make decisions about climate policy before climate scientists have quantified all relevant uncertainties about the impacts of climate change. Sandia scientists
Physical Uncertainty Bounds (PUB)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Measurement uncertainty relations
Busch, Paul; Lahti, Pekka; Werner, Reinhard F.
2014-04-15
Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
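For reference, the preparation uncertainty relation that the abstract contrasts with, in its standard textbook (Kennard) form for position and momentum, is:

```latex
% Standard preparation (Kennard) uncertainty relation for position and
% momentum; the paper proves analogous bounds for measurement errors and
% generalizes both to means of order \alpha.
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```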
A High-Performance Embedded Hybrid Methodology for Uncertainty Quantification With Applications
Iaccarino, Gianluca
2014-04-01
Multiphysics processes modeled by a system of unsteady differential equations are naturally suited for partitioned (modular) solution strategies. We consider such a model where probabilistic uncertainties are present in each module of the system and represented as a set of random input parameters. A straightforward approach to quantifying uncertainties in the predicted solution would be to sample all the input parameters as a single set and treat the full system as a black box. Although this method is easily parallelizable and requires minimal modifications to the deterministic solver, it is blind to the modular structure of the underlying multiphysical model. On the other hand, using spectral representations such as polynomial chaos expansions (PCE) can provide richer structural information regarding the dynamics of these uncertainties as they propagate from the inputs to the predicted output, but can be prohibitively expensive to implement in the high-dimensional global space of uncertain parameters. Therefore, we investigated hybrid methodologies wherein each module has the flexibility of using sampling or PCE-based methods for capturing local uncertainties while maintaining accuracy in the global uncertainty analysis. For the latter case, we use a conditional PCE model which mitigates the curse of dimension associated with intrusive Galerkin or semi-intrusive pseudospectral methods. After formalizing the theoretical framework, we demonstrate our proposed method using a numerical viscous flow simulation and benchmark the performance against a purely Monte Carlo method and a purely spectral method.
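The two approaches the abstract contrasts can be illustrated on a toy one-dimensional problem: estimating the mean of f(x) = exp(x) for x uniform on [-1, 1]. This is only a minimal sketch of sampling versus a Legendre polynomial chaos expansion, not the paper's multiphysics setting; the quadrature rule and truncation order are arbitrary choices:

```python
import math, random

# Toy comparison: Monte Carlo ("black box") sampling vs. a Legendre PCE.
f = math.exp

# --- Monte Carlo estimate ---
random.seed(0)
N = 100_000
mc_mean = sum(f(random.uniform(-1, 1)) for _ in range(N)) / N

# --- Legendre PCE: project f onto P_0..P_3; the mean is coefficient c_0 ---
legendre = [
    lambda x: 1.0,
    lambda x: x,
    lambda x: 0.5 * (3 * x**2 - 1),
    lambda x: 0.5 * (5 * x**3 - 3 * x),
]

def integrate(g, a=-1.0, b=1.0, n=20_000):
    # simple midpoint rule, accurate enough for this smooth integrand
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# c_k = (2k+1)/2 * integral of f(x) P_k(x) over [-1, 1]
coeffs = [(2 * k + 1) / 2 * integrate(lambda x, P=P: f(x) * P(x))
          for k, P in enumerate(legendre)]

pce_mean = coeffs[0]
exact_mean = math.sinh(1)  # (e - 1/e)/2

print(f"MC mean   : {mc_mean:.4f}")
print(f"PCE mean  : {pce_mean:.4f}")
print(f"exact mean: {exact_mean:.4f}")
```

The spectral projection converges far faster per function evaluation here, which is the structural advantage the abstract describes; the cost explosion it warns about appears only once the input space is high-dimensional.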
Calibration Under Uncertainty.
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
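The Bayesian flavor of this idea can be sketched on a toy problem: rather than least squares treating the model as exact, the likelihood includes both measurement error and a model-error term, so the posterior over the parameter widens honestly. The linear model, synthetic data, and both error variances below are invented for illustration, not drawn from the report:

```python
import math

def model(theta, x):
    return theta * x          # hypothetical computer model

xs   = [1.0, 2.0, 3.0, 4.0]
data = [2.1, 3.9, 6.2, 7.8]   # synthetic observations near theta = 2

sigma_data  = 0.2             # measurement error std (assumed)
sigma_model = 0.3             # model-discrepancy std (assumed)

def log_like(theta):
    var = sigma_data**2 + sigma_model**2   # the two errors add in quadrature
    return sum(-(d - model(theta, x))**2 / (2 * var)
               for x, d in zip(xs, data))

# Grid posterior with a flat prior on [1, 3]
grid = [1.0 + 2.0 * i / 400 for i in range(401)]
weights = [math.exp(log_like(t)) for t in grid]
norm = sum(weights)
post = [w / norm for w in weights]

post_mean = sum(t * p for t, p in zip(grid, post))
post_std = math.sqrt(sum((t - post_mean)**2 * p for t, p in zip(grid, post)))
print(f"posterior mean = {post_mean:.3f}, std = {post_std:.3f}")
```

Setting sigma_model to zero recovers ordinary least-squares calibration with an overconfident posterior, which is exactly the deficiency the report highlights.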
RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY
Salaymeh, S.; Ashley, W.; Jeffcoat, R.
2010-06-17
It is difficult to overestimate the importance of the physical measurements performed with nondestructive assay instruments throughout the nuclear fuel cycle. They underpin decision making in many areas and support criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations, because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing an understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice, and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives, and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice and, furthermore, motivate developers to revisit the treatment of measurement uncertainty.
Entropic uncertainty relations and entanglement
Guehne, Otfried; Lewenstein, Maciej
2004-08-01
We discuss the relationship between entropic uncertainty relations and entanglement. We present two methods for deriving separability criteria in terms of entropic uncertainty relations. In particular, we show how any entropic uncertainty relation on one part of the system results in a separability condition on the composite system. We investigate the resulting criteria using the Tsallis entropy for two and three qubits.
Computational Fluid Dynamics & Large-Scale Uncertainty Quantification...
U.S. Department of Energy (DOE) all webpages (Extended Search)
... (CFD) simulations and uncertainty analyses. The project developed new mathematical uncertainty quantification techniques and applied them, in combination with high-fidelity CFD ...
Uncertainty quantification and multiscale mathematics. (Conference...
Office of Scientific and Technical Information (OSTI)
Uncertainty quantification and multiscale mathematics. No abstract prepared. ...
Whitepaper on Uncertainty Quantification for MPACT
Williams, Mark L.
2015-12-17
The MPACT code provides the ability to perform high-fidelity deterministic calculations to obtain a wide variety of detailed results for very complex reactor core models. However, MPACT currently does not have the capability to propagate the effects of input data uncertainties to provide uncertainties in the calculated results. This white paper discusses a potential method for MPACT uncertainty quantification (UQ) based on stochastic sampling.
Capturing the uncertainty in adversary attack simulations.
Darby, John L.; Brooks, Traci N.; Berry, Robert Bruce
2008-09-01
This work provides a comprehensive technique to evaluate uncertainty, resulting in a more realistic evaluation of PI, thereby requiring fewer resources to address scenarios and allowing resources to be used across more scenarios. For a given set of adversary resources, two types of uncertainty are associated with PI for a scenario: (1) aleatory (random) uncertainty for detection probabilities and time delays, and (2) epistemic (state-of-knowledge) uncertainty for the adversary resources applied during an attack. Adversary resources consist of attributes (such as equipment and training) and knowledge about the security system; to date, most evaluations have assumed an adversary with very high resources, adding to the conservatism in the evaluation of PI. The aleatory uncertainty in PI is addressed by assigning probability distributions to detection probabilities and time delays. A numerical sampling technique is used to evaluate PI, addressing the repeated-variable dependence in the equation for PI.
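The aleatory-sampling idea can be sketched with a toy two-layer scenario: per-trial draws of detection probabilities and time delays, with PI estimated as the fraction of trials in which detection leaves enough remaining delay for the response force. All distributions and numbers below are invented; this is not the report's model:

```python
import random

# Toy aleatory Monte Carlo for PI: sample uncertain detection probabilities
# and delays each trial, then check whether the response force arrives
# before the adversary finishes. All distributions are illustrative only.
random.seed(1)

def one_trial():
    # uncertain per-layer detection probabilities (aleatory draws)
    p_detect = [random.betavariate(8, 2), random.betavariate(5, 5)]
    # uncertain delays (minutes) the adversary incurs after each layer
    delays = [random.uniform(4, 8), random.uniform(2, 6)]
    response_time = random.uniform(5, 9)     # guard-force arrival time
    remaining = sum(delays)
    for p, d in zip(p_detect, delays):
        if random.random() < p:              # detected at this layer
            return response_time <= remaining  # interrupted in time?
        remaining -= d
    return False                             # never detected

N = 50_000
PI = sum(one_trial() for _ in range(N)) / N
print(f"estimated P_I = {PI:.3f}")
```

Epistemic uncertainty about adversary resources would sit one level above this loop, e.g. by repeating the whole estimate for different assumed capability sets.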
Uncertainty in Integrated Assessment Scenarios
Mort Webster
2005-10-17
The determination of climate policy is a decision under uncertainty. The uncertainty in future climate change impacts is large, as is the uncertainty in the costs of potential policies. Rational and economically efficient policy choices will therefore seek to balance the expected marginal costs with the expected marginal benefits. This approach requires that the risks of future climate change be assessed. The decision process need not be formal or quantitative for descriptions of the risks to be useful. Whatever the decision procedure, a useful starting point is to have as accurate a description of climate risks as possible. Given the goal of describing uncertainty in future climate change, we need to characterize the uncertainty in the main causes of uncertainty in climate impacts. One of the major drivers of uncertainty in future climate change is the uncertainty in future emissions, both of greenhouse gases and other radiatively important species such as sulfur dioxide. In turn, the drivers of uncertainty in emissions are uncertainties in the determinants of the rate of economic growth and in the technologies of production and how those technologies will change over time. This project uses historical experience and observations from a large number of countries to construct statistical descriptions of variability and correlation in labor productivity growth and in AEEI. The observed variability then provides a basis for constructing probability distributions for these drivers. The variance of uncertainty in growth rates can be further modified by expert judgment if it is believed that future variability will differ from the past. But often, expert judgment is more readily applied to projected median or expected paths through time. Analysis of past variance and covariance provides initial assumptions about future uncertainty for quantities that are less intuitive and difficult for experts to estimate, and these variances can be normalized and then applied to mean
Evaluation of machine learning algorithms for prediction of regions of high RANS uncertainty
Ling, Julia; Templeton, Jeremy Alan
2015-08-04
Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
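The workflow — train a classifier on labeled flow features, then apply it point-by-point to a new flow — can be sketched with a dependency-free stand-in. The paper uses support vector machines, Adaboost, and random forests on real DNS/LES-validated features; here a toy nearest-centroid classifier on two synthetic features only illustrates the train/classify pattern:

```python
import random, math

# Toy stand-in for the paper's train/classify workflow. Features, the
# labeling rule, and the classifier are all invented for illustration.
random.seed(2)

def make_point():
    # two invented features, e.g. a strain-rate ratio and an anisotropy marker
    f1, f2 = random.uniform(0, 1), random.uniform(0, 1)
    label = 1 if f1 + 0.5 * f2 > 0.8 else 0   # synthetic "high uncertainty" rule
    return (f1, f2), label

train = [make_point() for _ in range(2000)]

def centroid(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(2))

c_low  = centroid([x for x, y in train if y == 0])
c_high = centroid([x for x, y in train if y == 1])

def classify(x):
    # label each "grid point" by its nearer class centroid
    return 1 if math.dist(x, c_high) < math.dist(x, c_low) else 0

test = [make_point() for _ in range(1000)]
acc = sum(classify(x) == y for x, y in test) / len(test)
print(f"held-out accuracy = {acc:.2f}")
```

The paper's harder question, generalization to flows unlike the training set, corresponds here to drawing the test features from a different distribution than the training features.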
Rodriguez, E. A.; Pepin, J. E.; Thacker, B. H.; Riha, D. S.
2002-01-01
Los Alamos National Laboratory (LANL), in cooperation with Southwest Research Institute, has been developing capabilities to provide reliability-based structural evaluation techniques for performing weapon component and system reliability assessments. The development and applications of Probabilistic Structural Analysis Methods (PSAM) is an important ingredient in the overall weapon reliability assessments. Focus, herein, is placed on the uncertainty quantification associated with the structural response of a containment vessel for high-explosive (HE) experiments. The probabilistic dynamic response of the vessel is evaluated through the coupling of the probabilistic code NESSUS with the non-linear structural dynamics code, DYNA-3D. The probabilistic model includes variations in geometry and mechanical properties, such as Young's Modulus, yield strength, and material flow characteristics. Finally, the probability of exceeding a specified strain limit, which is related to vessel failure, is determined.
Reducing Petroleum Dependence in California: Uncertainties About...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Reducing Petroleum Dependence in California: Uncertainties About Light-Duty Diesel. 2002 DEER Conference ...
Uncertainty Analysis Technique for OMEGA Dante Measurements ...
Office of Scientific and Technical Information (OSTI)
Report: Technical Uncertainty and Risk Reduction
Office of Environmental Management (EM)
Background: In FY 2007, EMAB was tasked to assess EM's ability to reduce risk and technical uncertainty. Board members explored this topic ...
Statistics, Uncertainty, and Transmitted Variation
Wendelberger, Joanne Roth
2014-11-05
The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Kreinovich, Vladik; Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
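The basic descriptive-statistics problem the report addresses can be illustrated for the easiest case, the mean, where the bounds follow directly from taking every endpoint low or high (the interval data below are invented):

```python
# When each measurement is an interval [lo, hi] rather than a point,
# descriptive statistics become intervals too. The mean is the easy case:
# its bounds come from taking all endpoints low or all high. Bounds on the
# variance are much harder (NP-hard in general), which is part of what the
# report's algorithms address.

data = [(1.0, 1.4), (2.1, 2.3), (0.8, 1.5), (1.9, 2.6)]   # interval data

n = len(data)
mean_lo = sum(lo for lo, hi in data) / n
mean_hi = sum(hi for lo, hi in data) / n
print(f"mean lies in [{mean_lo:.3f}, {mean_hi:.3f}]")
```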
ARM - PI Product - Direct Aerosol Forcing Uncertainty
U.S. Department of Energy (DOE) all webpages (Extended Search)
PI Product: Direct Aerosol Forcing Uncertainty. Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement
Uncertainty quantification and multiscale mathematics. (Conference...
Office of Scientific and Technical Information (OSTI)
Uncertainty quantification and multiscale mathematics. Authors: Trucano, Timothy Guy ...
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) play increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part, or has not differentiated verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties, and it is not efficient for sensitivity analysis. In contrast, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to aid uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended as one way to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can then be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps compared with other physical parameters of interest, the simulation is allowed
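The core mechanism — augmenting the simulation with equations for the sensitivities and solving them alongside the state — can be sketched on the toy scalar ODE dy/dt = -p·y, y(0) = 1, whose sensitivity s = dy/dp obeys ds/dt = -p·s - y. This is only a minimal illustration of forward sensitivity analysis, not the paper's reactor-system application:

```python
import math

# March state y and its parameter sensitivity s = dy/dp together with
# explicit Euler; the sensitivity equation is (df/dy)*s + df/dp for
# f(y, p) = -p*y. Toy problem with a known analytic answer.
p, dt, T = 2.0, 1e-4, 1.0
y, s = 1.0, 0.0                      # s = dy/dp is zero at t = 0

for _ in range(round(T / dt)):
    dy = -p * y                      # f(y, p)
    ds = -p * s - y                  # (df/dy)*s + df/dp
    y, s = y + dt * dy, s + dt * ds

exact_y = math.exp(-p * T)           # analytic solution y(T) = exp(-p*T)
exact_s = -T * math.exp(-p * T)      # d/dp of exp(-p*T)
print(f"y = {y:.5f} (exact {exact_y:.5f})")
print(f"dy/dp = {s:.5f} (exact {exact_s:.5f})")
```

The paper's extension treats dt itself as one more sensitivity parameter, so the same machinery that produced dy/dp also reports how strongly the answer depends on the discretization.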
Incorporating Forecast Uncertainty in Utility Control Center
Makarov, Yuri V.; Etingov, Pavel V.; Ma, Jian
2014-07-09
Uncertainties in forecasting the output of intermittent resources such as wind and solar generation, as well as system loads, are not adequately reflected in existing industry-grade tools used for transmission system management, generation commitment, dispatch, and market operation. There are other sources of uncertainty, such as uninstructed deviations of conventional generators from their dispatch set points, generator forced outages and failures to start up, load drops, losses of major transmission facilities, and frequency variation. These uncertainties can cause deviations from the system balance, which sometimes require inefficient and costly last-minute solutions in the near-real-time timeframe. This chapter considers sources of uncertainty and variability, an overall system uncertainty model, a possible plan for transition from deterministic to probabilistic methods in planning and operations, and two examples of uncertainty-based tools for grid operations. This chapter is based on work conducted at the Pacific Northwest National Laboratory (PNNL)
Uncertainty quantification for discrimination of nuclear events as violations of the comprehensive nuclear-test-ban treaty
Office of Scientific and Technical Information (OSTI)
Title: Uncertainty quantification for discrimination of nuclear events as violations of the comprehensive nuclear-test-ban treaty. Authors: ...
Uncertainty Quantification in Climate Modeling - Discovering Sparsity and Building Surrogates
Office of Scientific and Technical Information (OSTI)
Title: Uncertainty Quantification in Climate Modeling - Discovering Sparsity and Building Surrogates. ...
Direct Aerosol Forcing Uncertainty (Dataset) | Data Explorer
Office of Scientific and Technical Information (OSTI)
Authors: McComiskey, Allison. Publication Date: 2008-01-15. OSTI Identifier: 1169526. DOE Contract ...
Validation and Uncertainty Characterization for Energy Simulation
Validation and Uncertainty Characterization for Energy Simulation (1530). Philip Haves (LBNL); Co-PIs: Ron Judkoff (NREL), Joshua New (ORNL), Ralph Muehleisen (ANL). BTO Merit ...
From Deterministic Inversion to Uncertainty Quantification: Planning a Long Journey in Ice Sheet Modeling
Office of Scientific and Technical Information (OSTI)
[Fragmentary slide excerpt] Main sources of uncertainty: problem definition (e.g., Glen's Flow Law exponent); U: computed depth-averaged velocity; H: ice thickness; basal sliding ...
Uncertainty Quantification and Propagation in Nuclear Density...
Office of Scientific and Technical Information (OSTI)
theoretical tools used to study the properties of heavy and ... of model uncertainties and Bayesian inference methods. ...
Uncertainties in global aerosol simulations: Assessment using three meteorological data sets
Office of Scientific and Technical Information (OSTI)
Title: Uncertainties in global aerosol simulations: Assessment using three meteorological data sets. Current global aerosol models use different physical and chemical schemes and ...
Uncertainty Quantification for Nuclear Density Functional Theory and Information Content of New Measurements
Office of Scientific and Technical Information (OSTI)
Uncertainty Quantification for Nuclear Density Functional Theory and Information Content of New Measurements. This content will become publicly ...
Habte, Aron
2015-06-25
This presentation summarizes uncertainty estimation of radiometric data using the Guide to the Expression of Uncertainty in Measurement (GUM) method.
Sensitivity and Uncertainty Analysis Shell
Energy Science and Technology Software Center (OSTI)
1999-04-20
SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some process for which input is uncertain, and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using Monte Carlo analysis. The implementation then requires that the user identify which inputs to the process model are to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample, and the user-supplied process model analyzes the sample. The SUNS post-processor displays statistical results from any existing file that contains sampled input and output values.
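The loose-coupling workflow SUNS automates (generate a statistical sample of the uncertain inputs, run the user-supplied process model on each sample, then summarize the output) can be sketched in a few lines. The process model and input distributions below are hypothetical stand-ins, not part of SUNS itself:

```python
import random
import statistics

def process_model(x, y):
    """Stand-in for the user-supplied process model (hypothetical)."""
    return 3.0 * x + y ** 2

def monte_carlo_shell(n_samples, seed=0):
    """Sample the uncertain inputs, run the model, and summarize the output."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        x = rng.gauss(1.0, 0.1)    # uncertain input 1: normal, mean 1.0, sd 0.1
        y = rng.uniform(0.0, 2.0)  # uncertain input 2: uniform on [0, 2]
        outputs.append(process_model(x, y))
    return statistics.mean(outputs), statistics.stdev(outputs)

out_mean, out_sd = monte_carlo_shell(10_000)
```

The shell never needs to know what the model computes; it only needs the list of uncertain inputs, which is exactly the loose coupling the abstract describes.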
Estimating the uncertainty in underresolved nonlinear dynamics
Chorin, Alexandre; Hald, Ole
2013-06-12
The Mori-Zwanzig formalism of statistical mechanics is used to estimate the uncertainty caused by underresolution in the solution of a nonlinear dynamical system. A general approach is outlined and applied to a simple example. The noise term that describes the uncertainty turns out to be neither Markovian nor Gaussian. It is argued that this is the general situation.
Uncertainty Analysis for Photovoltaic Degradation Rates (Poster)
Jordan, D.; Kurtz, S.; Hansen, C.
2014-04-01
Dependable and predictable energy production is the key to the long-term success of the PV industry. PV systems show over the lifetime of their exposure a gradual decline that depends on many different factors such as module technology, module type, mounting configuration, climate, etc. When degradation rates are determined from continuous data, the statistical uncertainty is easily calculated from the regression coefficients. However, total uncertainty, which includes measurement uncertainty and instrumentation drift, is far more difficult to determine. A Monte Carlo simulation approach was chosen to carry out a comprehensive uncertainty analysis. The most important consideration for degradation rates is to avoid instrumentation that changes over time in the field: a drifting irradiance sensor, for instance, can lead to substantially erroneous degradation rates, and such drift can be avoided through regular calibration. However, the accuracy of the irradiance sensor has negligible impact on degradation rate uncertainty, emphasizing that precision (relative accuracy) is more important than absolute accuracy.
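The drift effect described above can be sketched with synthetic data, assuming a simple linear degradation and a hypothetical +0.3 %/year sensor drift: the drift biases the fitted degradation rate directly, while zero-mean sensor noise largely averages out of the slope.

```python
import random

def fit_slope(xs, ys):
    """Ordinary least squares slope (%/year when x is years, y is %)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx

# True performance declines 0.5 %/year over 10 years of monthly data.
years = [i / 12 for i in range(120)]
true_perf = [100.0 - 0.5 * t for t in years]

rng = random.Random(1)
# Case 1: noisy but stable sensor -- the noise averages out of the slope.
stable = [p + rng.gauss(0.0, 1.0) for p in true_perf]
# Case 2: sensor drifting +0.3 %/year -- the bias goes straight into the slope.
drifting = [p + 0.3 * t + rng.gauss(0.0, 1.0) for p, t in zip(true_perf, years)]

rate_stable = fit_slope(years, stable)   # recovers roughly -0.5 %/year
rate_drift = fit_slope(years, drifting)  # biased toward -0.2 %/year
```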
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ^{15}N and δ^{18}O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
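The pure Monte Carlo (PMC) idea can be illustrated for the simplest two-source case, where the mixing fraction follows from f = (δ_mix − δ_B)/(δ_A − δ_B): sample the uncertain source compositions, solve for f each time, and keep the physically valid draws. The δ15N values below are hypothetical.

```python
import random
import statistics

def pmc_mixing_fractions(d_mix, d_a, sd_a, d_b, sd_b, n=20_000, seed=0):
    """PMC-style two-source mixing: sample uncertain source compositions
    and keep the physically valid mixing fractions in [0, 1]."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(n):
        a = rng.gauss(d_a, sd_a)  # draw source A composition
        b = rng.gauss(d_b, sd_b)  # draw source B composition
        if a == b:
            continue  # degenerate draw: sources indistinguishable
        f = (d_mix - b) / (a - b)
        if 0.0 <= f <= 1.0:
            fractions.append(f)
    return statistics.mean(fractions), statistics.stdev(fractions)

# Hypothetical δ15N (per mil): sample at +7, sources at +12 and +2,
# each source known to +/- 1 per mil.
mean_f, sd_f = pmc_mixing_fractions(7.0, 12.0, 1.0, 2.0, 1.0)
```

Even in this trivial case the source-composition uncertainty maps into a distribution of mixing fractions rather than a single answer, which is the behavior the three probabilistic methods above are designed to characterize for many overlapping sources.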
Numerical uncertainty in computational engineering and physics
Hemez, Francois M
2009-01-01
Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty that include experimental variability, parametric uncertainty, and modeling assumptions. The concepts of consistency, convergence, and truncation error are reviewed to explain the articulation between the exact solution of continuous equations, the solution of modified equations, and discrete solutions computed by a code. The current state of the practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
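One standard way to bound discretization uncertainty when the exact solution is unknown is Richardson extrapolation across three mesh levels: the observed order of convergence comes from the ratio of successive solution differences, and the remaining error of the finest solution is estimated from the last difference. The sketch below uses a trapezoid-rule quadrature as a stand-in for a mesh-based solver, with a known exact answer so the estimate can be checked:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n intervals (discretization error ~ h^2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

f = math.exp
coarse = trapezoid(f, 0.0, 1.0, 8)    # mesh h
medium = trapezoid(f, 0.0, 1.0, 16)   # mesh h/2
fine = trapezoid(f, 0.0, 1.0, 32)     # mesh h/4

# Observed order of convergence from the three mesh levels.
p = math.log((coarse - medium) / (medium - fine)) / math.log(2.0)
# Richardson-style estimate of the remaining error in the finest solution.
err_estimate = abs(fine - medium) / (2.0 ** p - 1.0)
# Known exact integral of e^x on [0, 1] lets us check the estimate here.
true_err = abs(fine - (math.e - 1.0))
```

For a second-order method the observed p comes out near 2, and the estimated error of the fine solution closely tracks the true error, which is the kind of quantified numerical-uncertainty entry the abstract argues should appear in every uncertainty budget.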
The Role of Uncertainty Quantification for Reactor Physics
Salvatores, Massimo; Palmiotti, Giuseppe; Aliberti, G.
2015-01-01
The quantification of uncertainties is a crucial step in design. The comparison of a priori uncertainties with the target accuracies makes it possible to define needs and priorities for uncertainty reduction. In view of their impact, the uncertainty analysis requires a reliability assessment of the uncertainty data used. The choice of the appropriate approach and the consistency of different approaches are discussed.
An uncertainty principle for unimodular quantum groups
Crann, Jason; Kalantar, Mehrdad E-mail: mkalanta@math.carleton.ca
2014-08-15
We present a generalization of Hirschman's entropic uncertainty principle for locally compact Abelian groups to unimodular locally compact quantum groups. As a corollary, we strengthen a well-known uncertainty principle for compact groups, and generalize the relation to compact quantum groups of Kac type. We also establish the complementarity of finite-dimensional quantum group algebras. In the non-unimodular setting, we obtain an uncertainty relation for arbitrary locally compact groups using the relative entropy with respect to the Haar weight as the measure of uncertainty. We also show that when restricted to q-traces of discrete quantum groups, the relative entropy with respect to the Haar weight reduces to the canonical entropy of the random walk generated by the state.
Reducing Petroleum Dependence in California: Uncertainties About
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Light-Duty Diesel | Department of Energy. Reducing Petroleum Dependence in California: Uncertainties About Light-Duty Diesel. 2002 DEER Conference Presentation: Center for Energy Efficiency and Renewable Technologies. 2002_deer_phillips.pdf (62.04 KB). More Documents & Publications: Diesel Use in California; Future Potential of Hybrid and Diesel Powertrains in the U.S. Light-Duty Vehicle Market; Dumping Dirty Diesels
Cost Analysis: Technology, Competitiveness, Market Uncertainty | Department
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
of Energy. Technology to Market » Cost Analysis: Technology, Competitiveness, Market Uncertainty. As a basis for strategic planning, competitiveness analysis, and funding metrics and targets, SunShot supports analysis teams at national laboratories to assess technology costs, location-specific competitive advantages, and policy impacts on system financing, and to perform detailed levelized cost of energy (LCOE) analyses.
Addressing Uncertainties in Design Inputs: A Case Study of Probabilist...
Office of Environmental Management (EM)
Addressing Uncertainties in Design Inputs: A Case Study of Probabilistic Settlement Evaluations for Soft Zone Collapse at SWPF
Analysis of the Uncertainty in Wind Measurements from the Atmospheric...
Office of Scientific and Technical Information (OSTI)
Gradient-Enhanced Universal Kriging for Uncertainty Propagation...
Office of Scientific and Technical Information (OSTI)
Improvements of Nuclear Data and Its Uncertainties by Theoretical...
Office of Scientific and Technical Information (OSTI)
Early solar mass loss, opacity uncertainties, and the solar abundance...
Office of Scientific and Technical Information (OSTI)
Analysis and Reduction of Chemical Models under Uncertainty ...
Office of Scientific and Technical Information (OSTI)
ENHANCED UNCERTAINTY ANALYSIS FOR SRS COMPOSITE ANALYSIS
Smith, F.; Phifer, M.
2011-06-30
The Composite Analysis (CA) performed for the Savannah River Site (SRS) in 2009 (SRS CA 2009) included a simplified uncertainty analysis. The uncertainty analysis in the CA (Smith et al. 2009b) was limited to considering at most five sources in a separate uncertainty calculation performed for each POA. To perform the uncertainty calculations in a reasonable amount of time, the analysis was limited to 400 realizations and 2,000 years of simulated transport time, and the time steps used for the uncertainty analysis were increased from those used in the CA base case analysis. As part of the CA maintenance plan, the Savannah River National Laboratory (SRNL) committed to improving the CA uncertainty/sensitivity analysis. The previous uncertainty analysis was constrained by the standard GoldSim licensing, which limits the user to running at most four Monte Carlo uncertainty calculations (also called realizations) simultaneously. Some of the limitations on the number of realizations that could practically be run, and on the simulation time steps, were removed by building a cluster of three HP ProLiant Windows servers with a total of 36 64-bit processors and by licensing the GoldSim DP-Plus distributed processing software. This allowed running as many as 35 realizations simultaneously (one processor is reserved as a master process that controls running the realizations). These enhancements to SRNL computing capabilities made it feasible to run an uncertainty analysis using 1,000 realizations and the time steps employed in the base case CA calculations, with more sources, while simulating radionuclide transport for 10,000 years. In addition, an importance screening analysis was performed to identify the class of stochastic variables that have the most significant impact on model uncertainty. This analysis ran the uncertainty model separately, testing the response to variations in the following five sets of model parameters: (a) K_d values (72 parameters for the 36 CA elements in
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M_∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
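The Taylor-series (first-order) propagation can be sketched for the simplest of these quantities, a pressure ratio r = p/p_ref: the sensitivity coefficients are the partial derivatives, and independent contributions combine in quadrature. The readings and uncertainties below are hypothetical:

```python
import math

def ratio_uncertainty(p, u_p, p_ref, u_ref):
    """First-order (Taylor series) propagation for r = p / p_ref.
    The sensitivity coefficients are dr/dp and dr/dp_ref."""
    r = p / p_ref
    dr_dp = 1.0 / p_ref          # sensitivity to the measured pressure
    dr_dref = -p / p_ref ** 2    # sensitivity to the reference pressure
    u_r = math.sqrt((dr_dp * u_p) ** 2 + (dr_dref * u_ref) ** 2)
    return r, u_r

# Hypothetical readings with 2% uncertainty on each pressure.
r, u_r = ratio_uncertainty(50.0, 1.0, 100.0, 2.0)
# Equivalent relative form: u_r/r = sqrt((u_p/p)^2 + (u_ref/p_ref)^2) ~ 2.8%
```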
Strategies for Application of Isotopic Uncertainties in Burnup Credit
Gauld, I.C.
2002-12-23
Uncertainties in the predicted isotopic concentrations in spent nuclear fuel represent one of the largest sources of overall uncertainty in criticality calculations that use burnup credit. The methods used to propagate the uncertainties in the calculated nuclide concentrations to the uncertainty in the predicted neutron multiplication factor (k_eff) of the system can have a significant effect on the uncertainty in the safety margin in criticality calculations and ultimately affect the potential capacity of spent fuel transport and storage casks employing burnup credit. Methods that can provide a more accurate and realistic estimate of the uncertainty may enable increased spent fuel cask capacity and fewer casks needing to be transported, thereby reducing regulatory burden on licensees while maintaining safety for transporting spent fuel. This report surveys several different best-estimate strategies for considering the effects of nuclide uncertainties in burnup-credit analyses. The potential benefits of these strategies are illustrated for a prototypical burnup-credit cask design. The subcritical margin estimated using best-estimate methods is discussed in comparison to the margin estimated using conventional bounding methods of uncertainty propagation. To quantify the comparison, each of the strategies for estimating uncertainty has been performed using a common database of spent fuel isotopic assay measurements for pressurized light-water reactor fuels and predicted nuclide concentrations obtained using the current version of the SCALE code system. The experimental database applied in this study has been significantly expanded to include new high-enrichment and high-burnup spent fuel assay data recently published for a wide range of important burnup-credit actinides and fission products. Expanded rare earth fission-product measurements performed at the Khlopin Radium Institute in Russia contain the only known publicly available measurement for ^{103}
Bennett, C.T.
1995-02-23
Post hoc analyses have demonstrated clearly that macro-system, organizational processes have played important roles in such major catastrophes as Three Mile Island, Bhopal, Exxon Valdez, Chernobyl, and Piper Alpha. How can managers of such high-consequence organizations as nuclear power plants and nuclear explosives handling facilities be sure that similar macro-system processes are not operating in their plants? To date, macro-system effects have not been integrated into risk assessments. Part of the reason for not using macro-system analyses to assess risk may be the impression that standard organizational measurement tools do not provide hard data that can be managed effectively. In this paper, I argue that organizational dimensions, like those in ISO 9000, can be quantified and integrated into standard risk assessments.
Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; Mitchell, Stephen E.; Webb, Timothy J.
2015-06-03
One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or ‘spot’. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot’s spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic data-sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
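The estimation scheme described above (posterior sampling via Markov chain Monte Carlo, with the reconstruction and its uncertainty taken as the sample mean and variance) can be sketched for a one-dimensional parameter with a normal likelihood and a non-negativity constraint. The data values and step size are hypothetical:

```python
import math
import random
import statistics

def log_post(theta, data, sigma=1.0):
    """Normal likelihood with a flat prior restricted to theta >= 0."""
    if theta < 0:
        return float("-inf")  # non-negativity constraint
    return -sum((d - theta) ** 2 for d in data) / (2.0 * sigma ** 2)

def metropolis(data, n=20_000, step=0.5, seed=0):
    """Random-walk Metropolis: samples approximate the posterior; the
    reconstruction and uncertainty are the sample mean and variance."""
    rng = random.Random(seed)
    theta, lp = 1.0, log_post(1.0, data)
    samples = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop, data)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):  # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    kept = samples[n // 4:]  # discard burn-in
    return statistics.mean(kept), statistics.variance(kept)

data = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2]  # hypothetical measurements
post_mean, post_var = metropolis(data)
```

The same ingredients scale up to the imaging problem: a likelihood built from the noise model, a prior enforcing smoothness and non-negativity, and posterior moments computed from the MCMC samples.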
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Hansen, Clifford W.; Martin, Curtis E.
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct, and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current, and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found that uncertainty in the models for POA irradiance and effective irradiance are the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
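The residual-resampling idea can be sketched as follows; the per-stage residual values are hypothetical stand-ins for the empirical distributions described above, expressed here as multiplicative errors:

```python
import random
import statistics

# Empirical residuals (model / measurement) for each model stage;
# the values here are illustrative only.
residuals = {
    "poa_irradiance": [0.98, 1.00, 1.01, 1.02, 0.99, 1.00],
    "cell_temp_to_dc": [0.99, 1.00, 1.00, 1.01, 1.00, 0.98],
    "dc_to_ac": [0.995, 1.000, 1.005, 1.000, 0.990, 1.010],
}

def sample_chain_output(nominal_kwh, n=10_000, seed=0):
    """Propagate uncertainty through the model chain by drawing one residual
    per stage and applying them in sequence to the nominal prediction."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        e = nominal_kwh
        for stage_res in residuals.values():
            e *= rng.choice(stage_res)  # resample this stage's residuals
        outputs.append(e)
    return statistics.mean(outputs), statistics.stdev(outputs)

mean_kwh, sd_kwh = sample_chain_output(1000.0)
rel_uncertainty = sd_kwh / mean_kwh  # on the order of 1-2% here
```

Because each stage is sampled independently, the contribution of any one model to the spread of the output can be isolated by freezing the other stages at their nominal values, which is how the dominant stages (POA and effective irradiance) are identified.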
McKenna, Sean Andrew; Yoon, Hongkyu; Hart, David Blaine
2010-12-01
Heterogeneity plays an important role in groundwater flow and contaminant transport in natural systems. Since it is impossible to directly measure spatial variability of hydraulic conductivity, predictions of solute transport based on mathematical models are always uncertain. While in most cases groundwater flow and tracer transport problems are investigated in two-dimensional (2D) systems, it is important to study more realistic and well-controlled 3D systems to fully evaluate inverse parameter estimation techniques and evaluate uncertainty in the resulting estimates. We used tracer concentration breakthrough curves (BTCs) obtained from a magnetic resonance imaging (MRI) technique in a small flow cell (14 x 8 x 8 cm) that was packed with a known pattern of five different sands (i.e., zones) having cm-scale variability. In contrast to typical inversion systems with head, conductivity, and concentration measurements at limited points, the MRI data included BTCs measured at a voxel scale (~0.2 cm in each dimension) over 13 x 8 x 8 cm with a well-controlled boundary condition, but did not have direct measurements of head and conductivity. Hydraulic conductivity and porosity were conceptualized as spatial random fields and estimated using pilot points along layers of the 3D medium. The steady state water flow and solute transport were solved using MODFLOW and MODPATH. The inversion problem was solved with a nonlinear parameter estimation package, PEST. Two approaches to parameterization of the spatial fields are evaluated: (1) the detailed zone information was used as prior information to constrain the spatial impact of the pilot points and reduce the number of parameters; and (2) highly parameterized inversion at cm scale (e.g., 1664 parameters) using singular value decomposition (SVD) methodology to significantly reduce the run-time demands. Both results will be compared to measured BTCs. With MRI, it is easy to change the averaging scale of the observed
Model development and data uncertainty integration
Swinhoe, Martyn Thomas
2015-12-02
The effect of data uncertainties is discussed, with the epithermal neutron multiplicity counter as an illustrative example. Simulation using MCNP6, cross section perturbations, and correlations are addressed, along with the effect of the ^{240}Pu spontaneous fission neutron spectrum, the effect of P(ν) for ^{240}Pu spontaneous fission, and the effect of spontaneous fission and (α,n) intensity. The effect of nuclear data is the product of the initial uncertainty and the sensitivity; both need to be estimated. In conclusion, a multi-parameter variation method has been demonstrated; the most significant parameters are the basic emission rates of the spontaneous fission and (α,n) processes, and the uncertainties and important data depend on the analysis technique chosen.
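The stated rule, that each nuclear-data contribution is the product of the initial uncertainty and the sensitivity, can be sketched as a quadrature sum over independent contributions. The parameter names and numbers below are hypothetical illustrations, not values from the study:

```python
import math

# (relative data uncertainty, sensitivity of the counter response);
# values are hypothetical.
contributions = {
    "sf_emission_rate": (0.02, 1.0),   # spontaneous fission intensity
    "alpha_n_intensity": (0.05, 0.3),  # (alpha,n) intensity
    "nu_bar_pu240": (0.01, 0.8),       # P(nu) moments for 240Pu
}

def total_uncertainty(contribs):
    """Each term is (initial uncertainty) x (sensitivity); independent
    contributions combine in quadrature."""
    return math.sqrt(sum((u * s) ** 2 for u, s in contribs.values()))

u_total = total_uncertainty(contributions)  # ~0.026 (2.6% relative)
```

The structure also makes the abstract's closing point concrete: changing the analysis technique changes the sensitivities, and hence which data dominate the total.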
Coping with uncertainties of mercury regulation
Reich, K.
2006-09-15
The thermometer is rising as coal-fired plants cope with the uncertainties of mercury regulation. The paper deals with a diagnosis and a suggested cure. It describes the state of mercury emission rules in the different US states, many of which had laws or rules in place before the Clean Air Mercury Rule (CAMR) was promulgated.
Uncertainty in Simulating Wheat Yields Under Climate Change
Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J.W.; Hatfield, Jerry; Ruane, Alex; Boote, K. J.; Thorburn, Peter; Rotter, R.P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Aggarwal, P.K.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, AJ; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, Robert; Heng, L.; Hooker, J.; Hunt, L.A.; Ingwersen, J.; Izaurralde, Roberto C.; Kersebaum, K.C.; Mueller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.O.; Olesen, JE; Osborne, T.; Palosuo, T.; Priesack, E.; Ripoche, D.; Semenov, M.A.; Shcherbak, I.; Steduto, P.; Stockle, Claudio O.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Travasso, M.; Waha, K.; Wallach, D.; White, J.W.; Williams, J.R.; Wolf, J.
2013-09-01
Anticipating the impacts of climate change on crop yields is critical for assessing future food security. Process-based crop simulation models are the most commonly used tools in such assessments [1,2]. Analysis of uncertainties in future greenhouse gas emissions and their impacts on future climate change has been increasingly described in the literature [3,4], while assessments of the uncertainty in crop responses to climate change are very rare. Systematic and objective comparison across impact studies is difficult, and thus has not been fully realized [5]. Here we present the largest coordinated and standardized crop model intercomparison for climate change impacts on wheat production to date. We found that several individual crop models are able to reproduce measured grain yields under current diverse environments, particularly if sufficient details are provided to execute them. However, simulated climate change impacts can vary across models due to differences in model structures and algorithms. The crop-model component of uncertainty in climate change impact assessments was considerably larger than the climate-model component from Global Climate Models (GCMs). Model responses to high temperatures and temperature-by-CO2 interactions are identified as major sources of simulated impact uncertainties. Significant reductions in impact uncertainties through model improvements in these areas, and improved quantification of uncertainty through multi-model ensembles, are urgently needed for a more reliable translation of climate change scenarios into agricultural impacts in order to develop adaptation strategies and aid policymaking.
Uncertainty in BWR power during ATWS events
Diamond, D.J.
1986-01-01
A study was undertaken to improve our understanding of BWR conditions following the closure of main steam isolation valves and the failure of reactor trip. Of particular interest was the power during the period when the core had reached a quasi-equilibrium condition with a natural circulation flow rate determined by the water level in the downcomer. The uncertainty in the calculation of this power with sophisticated computer codes was quantified using a simple model which relates power to the principal thermal-hydraulic variables and reactivity coefficients, the latter representing the link between the thermal-hydraulics and the neutronics. Assumptions regarding the uncertainty in these variables and coefficients were then used to determine the uncertainty in power.
On solar geoengineering and climate uncertainty
MacMartin, Douglas; Kravitz, Benjamin S.; Rasch, Philip J.
2015-09-03
Uncertainty in the climate system response has been raised as a concern regarding solar geoengineering. Here we show that model projections of regional climate change outcomes may have greater agreement under solar geoengineering than with CO2 alone. We explore the effects of geoengineering on one source of climate system uncertainty by evaluating the inter-model spread across 12 climate models participating in the Geoengineering Model Intercomparison project (GeoMIP). The model spread in regional temperature and precipitation changes is reduced with CO2 and a solar reduction, in comparison to the case with increased CO2 alone. That is, the intermodel spread in predictions of climate change and the model spread in the response to solar geoengineering are not additive but rather partially cancel. Furthermore, differences in efficacy explain most of the differences between models in their temperature response to an increase in CO2 that is offset by a solar reduction. These conclusions are important for clarifying geoengineering risks.
Uncertainty estimates for derivatives and intercepts
Clark, E.L.
1994-09-01
Straight line least squares fits of experimental data are widely used in the analysis of test results to provide derivatives and intercepts. A method for evaluating the uncertainty in these parameters is described. The method utilizes conventional least squares results and is applicable to experiments where the independent variable is controlled, but not necessarily free of error. A Monte Carlo verification of the method is given.
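The conventional least-squares standard errors and the Monte Carlo check described in the abstract can be sketched as follows. This is a minimal illustration, not the report's actual method; all data, the true coefficients, and the noise level are synthetic assumptions.

```python
import numpy as np

def line_fit_uncertainty(x, y):
    """Least-squares fit y = a + b*x; returns (a, b, se_a, se_b), with
    standard errors taken from conventional least-squares results."""
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (n - 2)           # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)      # covariance of (a, b)
    return coef[0], coef[1], np.sqrt(cov[0, 0]), np.sqrt(cov[1, 1])

# Monte Carlo verification in the spirit of the report: the scatter of the
# refitted slope over many synthetic data sets should match the analytic
# standard error computed from the known noise level.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
a_true, b_true, sigma = 1.0, 2.0, 0.5
slopes = [line_fit_uncertainty(
              x, a_true + b_true * x + rng.normal(0.0, sigma, x.size))[1]
          for _ in range(2000)]
mc_se_b = float(np.std(slopes))
se_b_analytic = sigma / np.sqrt(((x - x.mean()) ** 2).sum())
```

The Monte Carlo spread of the slope agrees with the analytic standard error to within a few percent, which is the kind of verification the report describes.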
Interpolation Uncertainties Across the ARM SGP Area
U.S. Department of Energy (DOE) all webpages (Extended Search)
J. E. Christy, C. N. Long, and T. R. Shippert, Pacific Northwest National Laboratory, Richland, Washington. Interpolation Grids Across the SGP Network Area: The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program operates a network of surface radiation measurement sites across north central Oklahoma and south central Kansas. This Southern Great Plains (SGP) network consists of 21 sites unevenly spaced from 95.5 to 99.5
Uncertainties in risk assessment at USDOE facilities
Hamilton, L.D.; Holtzman, S.; Meinhold, A.F.; Morris, S.C.; Rowe, M.D.
1994-01-01
The United States Department of Energy (USDOE) has embarked on an ambitious program to remediate environmental contamination at its facilities. Decisions concerning cleanup goals, choices among cleanup technologies, and funding prioritization should be largely risk-based. Risk assessments will be used more extensively by the USDOE in the future. USDOE needs to develop and refine risk assessment methods and fund research to reduce major sources of uncertainty in risk assessments at USDOE facilities. The terms "risk assessment" and "risk management" are frequently confused. The National Research Council (1983) and the United States Environmental Protection Agency (USEPA, 1991a) described risk assessment as a scientific process that contributes to risk management. Risk assessment is the process of collecting, analyzing and integrating data and information to identify hazards, assess exposures and dose responses, and characterize risks. Risk characterization must include a clear presentation of "... the most significant data and uncertainties..." in an assessment. Significant data and uncertainties are "...those that define and explain the main risk conclusions". Risk management integrates risk assessment information with other considerations, such as risk perceptions, socioeconomic and political factors, and statutes, to make and justify decisions. Risk assessments, as scientific processes, should be made independently of the other aspects of risk management (USEPA, 1991a), but current methods for assessing health risks are based on conservative regulatory principles, causing unnecessary public concern and misallocation of funds for remediation.
Representation of analysis results involving aleatory and epistemic uncertainty.
Johnson, Jay Dean; Helton, Jon Craig; Oberkampf, William Louis; Sallaberry, Cedric J.
2008-08-01
Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behavior of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary cumulative distribution functions (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (i.e., interval analysis, possibility theory, evidence theory, probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterizations of epistemic uncertainty.
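The "family of CDFs" structure described above can be illustrated with a minimal two-loop sampling sketch. A probabilistic characterization of the epistemic quantity is assumed here purely for illustration (the paper also treats interval analysis, possibility theory, and evidence theory), and all distributions and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-2.0, 4.0, 200)       # values of the analysis result

# Epistemic uncertainty: a fixed but poorly known parameter (here a mean),
# represented for illustration by uniform sampling over an interval.
epistemic_means = rng.uniform(0.8, 1.2, size=50)

# Aleatory uncertainty: inherent randomness given one epistemic value,
# producing one empirical CDF per epistemic sample.
cdf_family = np.array([
    (rng.normal(mu, 0.5, size=1000)[:, None] <= grid).mean(axis=0)
    for mu in epistemic_means
])

# The result is a family of CDFs rather than a single CDF; its pointwise
# envelope is one graphical summary of the epistemic spread.
lower_env = cdf_family.min(axis=0)
upper_env = cdf_family.max(axis=0)
```

Plotting every row of `cdf_family` plus the two envelopes gives the kind of graphical format the abstract refers to.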
October 16, 2014 Webinar - Decisional Analysis under Uncertainty |
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Webinar - October 16, 2014, 11 am - 12:40 pm EDT: Dr. Paul Black (Neptune, Inc.), Decisional Analysis under Uncertainty. Agenda - October 16, 2014 - P&RA CoP Webinar (59.42 KB). Presentation - Decision Making under Uncertainty: Introduction to Structured Decision Analysis for Performance Assessments (4.02 MB)
Brown, J.; Goossens, L.H.J.; Kraan, B.C.P.
1997-06-01
This volume is the first of a two-volume document that summarizes a joint project conducted by the US Nuclear Regulatory Commission and the European Commission to assess uncertainties in the MACCS and COSYMA probabilistic accident consequence codes. These codes were developed primarily for estimating the risks presented by nuclear reactors based on postulated frequencies and magnitudes of potential accidents. This document reports on an ongoing project to assess uncertainty in the MACCS and COSYMA calculations for the offsite consequences of radionuclide releases by hypothetical nuclear power plant accidents. A panel of sixteen experts was formed to compile credible and traceable uncertainty distributions for food chain variables that affect calculations of offsite consequences. The expert judgment elicitation procedure and its outcomes are described in these volumes. Other panels were formed to consider uncertainty in other aspects of the codes. Their results are described in companion reports. Volume 1 contains background information and a complete description of the joint consequence uncertainty study. Volume 2 contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures for both panels, (3) the rationales and results for the panels on soil and plant transfer and animal transfer, (4) short biographies of the experts, and (5) the aggregated results of their responses.
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Entropic uncertainty relations in multidimensional position and momentum spaces
Huang Yichen
2011-05-15
Commutator-based entropic uncertainty relations in multidimensional position and momentum spaces are derived, twofold generalizing previous entropic uncertainty relations for one-mode states. They provide optimal lower bounds and imply the multidimensional variance-based uncertainty principle. The article concludes with an open conjecture.
Tutorial examples for uncertainty quantification methods.
De Bord, Sarah
2015-08-01
This report details the work accomplished during my 2015 SULI summer internship at Sandia National Laboratories in Livermore, CA. During this internship, I worked on multiple tasks with the common goal of making uncertainty quantification (UQ) methods more accessible to the general scientific community. As part of my work, I created a comprehensive numerical integration example to incorporate into the user manual of a UQ software package. Further, I developed examples involving heat transfer through a window to incorporate into tutorial lectures that serve as an introduction to UQ methods.
Microsoft Word - Price Uncertainty Supplement .docx
Gasoline and Diesel Fuel Update
January 2011 Short-Term Energy Outlook Energy Price Volatility and Forecast Uncertainty, January 11, 2011 Release. Crude Oil Prices. West Texas Intermediate (WTI) crude oil spot prices averaged over $89 per barrel in December, about $5 per barrel higher than the November average. Expectations of higher oil demand, combined with unusually cold weather in both Europe and the U.S. Northeast, contributed to higher prices. EIA has raised the first quarter 2011 WTI spot price forecast by $8 per barrel
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update
April 2010 Short-Term Energy Outlook Energy Price Volatility and Forecast Uncertainty, April 6, 2010 Release. Crude Oil Prices. WTI crude oil spot prices averaged $81 per barrel in March 2010, almost $5 per barrel above the prior month's average and $3 per barrel higher than forecast in last month's Outlook. Oil prices rose from a low this year of $71.15 per barrel on February 5 to $80 per barrel by the end of February, generally on news of robust economic and energy demand growth in non-OECD
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update
August 2010 Short-Term Energy Outlook Energy Price Volatility and Forecast Uncertainty, August 10, 2010 Release. WTI crude oil spot prices averaged $76.32 per barrel in July 2010, about $1 per barrel above the prior month's average, and close to the $77 per barrel projected in last month's Outlook. EIA projects WTI prices will average about $80 per barrel over the second half of this year and rise to $85 by the end of next year (West Texas Intermediate Crude Oil Price Chart). Energy price
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update
December 2010 Short-Term Energy Outlook Energy Price Volatility and Forecast Uncertainty, December 7, 2010 Release. Crude Oil Prices. West Texas Intermediate (WTI) crude oil spot prices averaged over $84 per barrel in November, more than $2 per barrel higher than the October average. EIA has raised the average winter 2010-2011 period WTI spot price forecast by $1 per barrel from the last month's Outlook to $84 per barrel. WTI spot prices rise to $89 per barrel by the end of next year, $2 per
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update
July 2010 Short-Term Energy Outlook Energy Price Volatility and Forecast Uncertainty, July 7, 2010 Release. Crude Oil Prices. WTI crude oil spot prices averaged $75.34 per barrel in June 2010 ($1.60 per barrel above the prior month's average), close to the $76 per barrel projected in the forecast in last month's Outlook. EIA projects WTI prices will average about $79 per barrel over the second half of this year and rise to $84 by the end of next year (West Texas Intermediate Crude Oil Price
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update
June 2010 Short-Term Energy Outlook Energy Price Volatility and Forecast Uncertainty, June 8, 2010 Release. Crude Oil Prices. WTI crude oil spot prices averaged less than $74 per barrel in May 2010, almost $11 per barrel below the prior month's average and $7 per barrel lower than forecast in last month's Outlook. EIA projects WTI prices will average about $79 per barrel over the second half of this year and rise to $84 by the end of next year, a decrease of about $3 per barrel from the
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update
March 2010 Short-Term Energy Outlook Energy Price Volatility and Forecast Uncertainty, March 9, 2010 Release. Crude Oil Prices. WTI crude oil spot prices averaged $76.39 per barrel in February 2010, almost $2 per barrel lower than the prior month's average and very near the $76 per barrel forecast in last month's Outlook. Last month, the WTI spot price reached a low of $71.15 on February 5 and peaked at $80.04 on February 22. EIA expects WTI prices to average above $80 per barrel this spring,
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update
May 2010 Short-Term Energy Outlook Energy Price Volatility and Forecast Uncertainty, May 11, 2010 Release. Crude Oil Prices. WTI crude oil spot prices averaged $84 per barrel in April 2010, about $3 per barrel above the prior month's average and $2 per barrel higher than forecast in last month's Outlook. EIA projects WTI prices will average about $84 per barrel over the second half of this year and rise to $87 by the end of next year, an increase of about $2 per barrel from the previous Outlook
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update
November 2010 Short-Term Energy Outlook Energy Price Volatility and Forecast Uncertainty, November 9, 2010 Release. Crude Oil Prices. WTI crude oil spot prices averaged almost $82 per barrel in October, about $7 per barrel higher than the September average, as expectations of higher oil demand pushed up prices. EIA has raised the average fourth quarter 2010 WTI spot price forecast to about $83 per barrel compared with $79 per barrel in last month's Outlook. WTI spot prices rise to $87 per
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update
September 2010 Short-Term Energy Outlook Energy Price Volatility and Forecast Uncertainty, September 8, 2010 Release. Crude Oil Prices. West Texas Intermediate (WTI) crude oil spot prices averaged about $77 per barrel in August 2010, very close to the July average, but $3 per barrel lower than projected in last month's Outlook. WTI spot prices averaged almost $82 per barrel over the first 10 days of August but then fell by $9 per barrel over the next 2 weeks as the market reacted to a series
Methodology for characterizing modeling and discretization uncertainties in computational simulation
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
Adaptive strategies for materials design using uncertainties
Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab
2016-01-21
Here, we compare several adaptive design strategies using a data set of 223 compounds from the M2AX family for which the elastic properties [bulk (B), shear (G), and Young's (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor, and selector impacts the design. We find that selectors that use information about the prediction uncertainty outperform those that do not. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
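The regressor-plus-selector loop described above can be sketched in miniature. This is not the paper's machine-learning setup: the one-dimensional toy property, the bootstrap-ensemble quadratic regressor, and the upper-confidence-bound-style selector are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def property_of(x):
    # Toy stand-in for a computed material property to be maximized.
    return -(x - 0.7) ** 2

candidates = np.linspace(0.0, 1.0, 101)       # the design space
picked = list(rng.choice(candidates.size, size=5, replace=False))

for _ in range(15):                           # the iterative design loop
    x_obs = candidates[picked]
    y_obs = property_of(x_obs)
    # "Regressor": an ensemble of quadratic fits on bootstrap resamples;
    # the ensemble spread provides an uncertainty at each candidate.
    preds = []
    for _ in range(30):
        b = rng.integers(0, x_obs.size, x_obs.size)
        if np.unique(x_obs[b]).size < 3:      # skip degenerate resamples
            continue
        preds.append(np.polyval(np.polyfit(x_obs[b], y_obs[b], 2),
                                candidates))
    preds = np.array(preds)
    mu, sigma = preds.mean(axis=0), preds.std(axis=0)
    # "Selector": exploit the prediction but reward uncertainty.
    score = mu + sigma
    score[picked] = -np.inf                   # never re-measure a point
    picked.append(int(np.argmax(score)))

best_x = candidates[picked][np.argmax(property_of(candidates[picked]))]
```

Dropping the `sigma` term from `score` turns this into a pure-exploitation selector, which is the kind of uncertainty-blind strategy the study finds to underperform.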
Uncertainty Budget Analysis for Dimensional Inspection Processes (U)
Valdez, Lucas M.
2012-07-26
This paper is intended to provide guidance and describe how to prepare an uncertainty analysis of a dimensional inspection process through the utilization of an uncertainty budget analysis. The uncertainty analysis is stated in the same methodology as that of the ISO GUM standard for calibration and testing. There is a specific distinction between how Type A and Type B uncertainty analysis is used in a general and specific process. All theory and applications are utilized to represent both a generalized approach to estimating measurement uncertainty and how to report and present these estimations for dimensional measurements in a dimensional inspection process. The analysis of this uncertainty budget shows that a well-controlled dimensional inspection process produces a conservative process uncertainty, which can be attributed to the necessary assumptions in place for best possible results.
Efendiev, Yalchin; Datta-Gupta, Akhil; Jafarpour, Behnam; Mallick, Bani; Vassilevski, Panayot
2015-11-09
In this proposal, we have worked on Bayesian uncertainty quantification for predictions of flows in highly heterogeneous media. The research in this proposal is broad and includes: prior modeling for heterogeneous permeability fields; effective parametrization of heterogeneous spatial priors; efficient ensemble-level solution techniques; efficient multiscale approximation techniques; study of the regularity of complex posterior distributions and the error estimates due to parameter reduction; efficient sampling techniques; and applications to multi-phase flow and transport. We list our publications below and describe some of our main research activities. Our multi-disciplinary team includes experts from the areas of multiscale modeling, multilevel solvers, Bayesian statistics, spatial permeability modeling, and the application domain.
Survey and Evaluate Uncertainty Quantification Methodologies
Lin, Guang; Engel, David W.; Eslinger, Paul W.
2012-02-01
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon
Lidar arc scan uncertainty reduction through scanning geometry optimization
Wang, H.; Barthelmie, R. J.; Pryor, S. C.; Brown, G.
2015-10-07
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n-minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain, and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty scales with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30 % of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span, and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation when arc scans are used for wind resource assessment.
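The quoted scaling can be made concrete with a one-line calculation: the relative standard error of the arc-scan 10-minute mean wind speed is roughly 30 % of the turbulence intensity (TI). The TI values below are illustrative, not taken from the paper.

```python
def rel_standard_error(turbulence_intensity, factor=0.30):
    """Relative standard error of the 10-min mean wind speed, using the
    reported scaling of about 30 % of the turbulence intensity."""
    return factor * turbulence_intensity

se_complex_onshore = rel_standard_error(0.15)  # high-TI site (assumed TI)
se_offshore = rel_standard_error(0.06)         # low-TI site (assumed TI)
```

The comparison shows why the paper recommends larger arc spans at high-TI sites: the same geometry carries a proportionally larger wind speed uncertainty there.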
Radiotherapy Dose Fractionation under Parameter Uncertainty
Davison, Matt; Kim, Daero; Keller, Harald
2011-11-30
In radiotherapy, radiation is directed to damage a tumor while avoiding surrounding healthy tissue. Tradeoffs ensue because dose cannot be exactly shaped to the tumor. It is particularly important to ensure that sensitive biological structures near the tumor are not damaged more than a certain amount. Biological tissue is known to have a nonlinear response to incident radiation. The linear quadratic dose response model, which requires the specification of two clinically and experimentally observed response coefficients, is commonly used to model this effect. This model yields an optimization problem giving two different types of optimal dose sequences (fractionation schedules). Which fractionation schedule is preferred depends on the response coefficients. These coefficients are not precisely known and may differ from patient to patient. Because of this, not only the expected outcomes but also the uncertainty around these outcomes are important, and it might not be prudent to select the strategy with the best expected outcome.
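The linear-quadratic model above is commonly summarized through the biologically effective dose, BED = n·d·(1 + d/(α/β)), where n is the number of fractions and d the dose per fraction. A small sketch shows how uncertainty in the α/β coefficient changes the comparison between two schedules; the schedules and all parameter values are illustrative, not from the paper.

```python
def bed(dose_per_fraction, n_fractions, alpha_beta):
    """Biologically effective dose from the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (
        1.0 + dose_per_fraction / alpha_beta)

# Two schedules delivering the same 60 Gy physical dose:
# conventional 30 x 2 Gy versus hypofractionated 15 x 4 Gy.
tumor_ab, tissue_ab = 10.0, 3.0       # common textbook alpha/beta ratios
bed_conv_tumor = bed(2.0, 30, tumor_ab)     # 72 Gy
bed_hypo_tumor = bed(4.0, 15, tumor_ab)     # 84 Gy
bed_conv_tissue = bed(2.0, 30, tissue_ab)   # 100 Gy
bed_hypo_tissue = bed(4.0, 15, tissue_ab)   # 140 Gy

# Parameter uncertainty: if the tumor alpha/beta is only known to lie in,
# say, [5, 15], the hypofractionated tumor BED spans a wide range.
bed_low = bed(4.0, 15, 15.0)    # 76 Gy
bed_high = bed(4.0, 15, 5.0)    # 108 Gy
```

Hypofractionation raises the tumor BED here, but raises the normal-tissue BED even more, and the uncertain α/β widens the spread of outcomes — the kind of tradeoff under parameter uncertainty the abstract describes.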
Geostatistical evaluation of travel time uncertainties
Devary, J.L.
1983-08-01
Data on potentiometric head and hydraulic conductivity, gathered from the Wolfcamp Formation of the Permian System, have exhibited tremendous spatial variability as a result of heterogeneities in the media and the presence of petroleum and natural gas deposits. Geostatistical data analysis and error propagation techniques (kriging and conditional simulation) were applied to determine the effect of potentiometric head uncertainties on radionuclide travel paths and travel times through the Wolfcamp Formation. Block-average kriging was utilized to remove measurement error from potentiometric head data. The travel time calculations have been enhanced by the use of an inverse technique to determine the relative hydraulic conductivity along travel paths. In this way, the spatial variability of the hydraulic conductivity corresponding to streamline convergence and divergence may be included in the analysis. 22 references, 11 figures, 1 table.
Intrinsic Uncertainties in Modeling Complex Systems.
Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.
2014-09-01
Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements: This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.
Entanglement criteria via concave-function uncertainty relations
Huang Yichen
2010-07-15
A general theorem as a necessary condition for the separability of quantum states in both finite and infinite dimensional systems, based on concave-function uncertainty relations, is derived. Two special cases of the general theorem are stronger than two known entanglement criteria based on the Shannon entropic uncertainty relation and the Landau-Pollak uncertainty relation, respectively; other special cases are able to detect entanglement where some famous entanglement criteria fail.
Improvements to Nuclear Data and Its Uncertainties by Theoretical...
Office of Scientific and Technical Information (OSTI)
data evaluation process much more accurately, and lead to a new generation of uncertainty quantification files. ... of AFCI. While in the past the design, construction and operation of ...
Improvements of Nuclear Data and Its Uncertainties by Theoretical...
Office of Scientific and Technical Information (OSTI)
Improvements of Nuclear Data and Its Uncertainties by Theoretical Modeling Talou, Patrick Los Alamos National Laboratory; Nazarewicz, Witold University of Tennessee, Knoxville,...
A density-matching approach for optimization under uncertainty...
Office of Scientific and Technical Information (OSTI)
A density-matching approach for optimization under uncertainty Citation Details ... Journal Name: Computer Methods in Applied Mechanics and Engineering Additional Journal ...
Early solar mass loss, opacity uncertainties, and the solar abundance...
Office of Scientific and Technical Information (OSTI)
Early solar mass loss, opacity uncertainties, and the solar abundance problem Citation ... Visit OSTI to utilize additional information resources in energy science and technology. A ...
Estimation of uncertainty for contour method residual stress measurements
Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; Hill, Michael R.
2014-12-03
This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).
Uncertainty quantification of US Southwest climate from IPCC...
Office of Scientific and Technical Information (OSTI)
Title: Uncertainty quantification of US Southwest climate from IPCC projections. The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) made extensive ...
Measurement uncertainty analysis techniques applied to PV performance measurements
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Valid data are defined here as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.
Uncertainty quantification in fission cross section measurements at LANSCE
Tovesson, F.
2015-01-09
Neutron-induced fission cross sections have been measured for several isotopes of uranium and plutonium at the Los Alamos Neutron Science Center (LANSCE) over a wide range of incident neutron energies. The total uncertainties in these measurements are in the range 3–5% above 100 keV of incident neutron energy, resulting from uncertainties in the target, neutron source, and detector system. The individual sources of uncertainty are assumed to be uncorrelated; however, correlations in the cross section across neutron energy bins are considered. The quantification of the uncertainty contributions is described here.
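The abstract's assumption of uncorrelated uncertainty sources implies a root-sum-square (quadrature) combination. A minimal sketch, with illustrative component values rather than the published LANSCE numbers:

```python
import math

def combined_uncertainty(components):
    """Root-sum-square combination of independent (uncorrelated)
    relative uncertainty components."""
    return math.sqrt(sum(c ** 2 for c in components))

# Illustrative relative uncertainties (percent) for target, neutron source,
# and detector system -- placeholders, not the published LANSCE values.
total = combined_uncertainty([2.0, 1.5, 1.0])
print(round(total, 2))  # 2.69
```

Because the components add in quadrature, the largest component dominates the total; shrinking a small component barely moves the combined value.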
Uncertainties in Air Exchange using Continuous-Injection, Long...
Office of Scientific and Technical Information (OSTI)
people to minimize experimental costs. In this article we will conduct a first-principles error analysis to estimate the uncertainties and then compare that analysis to CILTS...
Estimation of uncertainty for contour method residual stress measurements
Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; Hill, Michael R.
2014-12-03
This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).
Characterizing Uncertainty for Regional Climate Change Mitigation and Adaptation Decisions
Unwin, Stephen D.; Moss, Richard H.; Rice, Jennie S.; Scott, Michael J.
2011-09-30
This white paper describes the results of new research to develop an uncertainty characterization process to help address the challenges of regional climate change mitigation and adaptation decisions.
Estimation and Uncertainty Analysis of Impacts of Future Heat...
Office of Scientific and Technical Information (OSTI)
However, the estimation of excess mortality attributable to future heat waves is subject to large uncertainties, which have not been examined under the latest greenhouse gas ...
Comparison of Uncertainty of Two Precipitation Prediction Models...
Office of Scientific and Technical Information (OSTI)
Comparison of Uncertainty of Two Precipitation Prediction Models at Los Alamos National Lab Technical Area 54 ...
Variable Grid Method for Visualizing Uncertainty Associated with...
U.S. Department of Energy (DOE) all webpages (Extended Search)
interpretation. In general, NETL's VGM applies a grid system where the size of the cell represents the uncertainty associated with the original point data sources or their...
Output-Based Error Estimation and Adaptation for Uncertainty...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Output-Based Error Estimation and Adaptation for Uncertainty Quantification, Isaac M. Asher and Krzysztof J. Fidkowski, University of Michigan, US National Congress on Computational ...
A Unified Approach for Reporting ARM Measurement Uncertainties...
U.S. Department of Energy (DOE) all webpages (Extended Search)
A Unified Approach for Reporting ARM Measurement Uncertainties, Technical Report, E. Campos and D.L. Sisterson, October 2015 ...
Uncertainty Estimates for SIRS, SKYRAD, & GNDRAD Data and Reprocessing...
Office of Scientific and Technical Information (OSTI)
Uncertainty Estimates for SIRS, SKYRAD, & GNDRAD Data and Reprocessing the Pyrgeometer Data (Presentation). The National Renewable Energy Laboratory (NREL) and the ...
Harper, F.T.; Young, M.L.; Miller, L.A.; Hora, S.C.; Lui, C.H.; Goossens, L.H.J.; Cooke, R.M.; Paesler-Sauer, J.; Helton, J.C.
1995-01-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The ultimate objective of the joint effort was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. Experts developed their distributions independently. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. To validate the distributions generated for the dispersion code input variables, samples were taken from the distributions and propagated through the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the first of a three-volume document describing the project.
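The validation step described above, sampling from elicited input distributions and propagating the samples through a code model, can be sketched as a simple Monte Carlo loop. Everything here is illustrative: the lognormal distribution, its parameters, and the `washout_model` stand-in are assumptions, not the MACCS/COSYMA models:

```python
import random
import statistics

random.seed(1)

def washout_model(coefficient, rain_rate=5.0, exponent=0.8):
    """Hypothetical wet-deposition stand-in: washout ~ coefficient * R^exponent."""
    return coefficient * rain_rate ** exponent

# Draw from an assumed elicited distribution and propagate each sample.
samples = [random.lognormvariate(mu=-9.0, sigma=0.5) for _ in range(10_000)]
outputs = [washout_model(c) for c in samples]

# The output sample approximates the propagated uncertainty distribution.
print(statistics.median(outputs))
```

Comparing the resulting output distribution against the aggregated elicited distribution is the replication check the abstract describes.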
Harper, F.T.; Young, M.L.; Miller, L.A.; Hora, S.C.; Lui, C.H.; Goossens, L.H.J.; Cooke, R.M.; Paesler-Sauer, J.; Helton, J.C.
1995-01-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters; experts developed their distributions independently. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data, along with short biographies of the 16 experts who participated in the project.
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harrison, J.D.; Harper, F.T.; Hora, S.C.
1998-04-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA internal dosimetry models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on internal dosimetry, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Haskin, F.E.; Harper, F.T.; Goossens, L.H.J.; Kraan, B.C.P.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA early health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on early health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Investigation of uncertainty components in Coulomb blockade thermometry
Hahtela, O. M.; Heinonen, M.; Manninen, A.; Meschke, M.; Savin, A.; Pekola, J. P.; Gunnarsson, D.; Prunnila, M.; Penttilä, J. S.; Roschier, L.
2013-09-11
Coulomb blockade thermometry (CBT) has proven to be a feasible method for primary thermometry in everyday laboratory use at cryogenic temperatures from ca. 10 mK to a few tens of kelvins. The operation of CBT is based on single-electron charging effects in normal metal tunnel junctions. In this paper, we discuss the typical error sources and uncertainty components that limit the present absolute accuracy of CBT measurements to the level of about 1% in the optimum temperature range. Identifying the influence of different uncertainty sources is a good starting point for improving the measurement accuracy to a level that would allow CBT to be more widely used in high-precision low-temperature metrological applications and for realizing thermodynamic temperature in accordance with the upcoming new definition of the kelvin.
Size exclusion deep bed filtration: Experimental and modelling uncertainties
Badalyan, Alexander; You, Zhenjiang; Aji, Kaiser; Bedrikovetsky, Pavel; Carageorgos, Themis; Zeinijahromi, Abbas
2014-01-15
A detailed uncertainty analysis associated with carboxyl-modified latex particle capture in glass bead-formed porous media enabled verification of two theoretical stochastic models for prediction of particle retention due to size exclusion. At the beginning of this analysis it is established that size exclusion is the dominant particle capture mechanism in the present study: the calculated, significantly repulsive Derjaguin-Landau-Verwey-Overbeek potential between latex particles and glass beads indicates their mutual repulsion, thus fulfilling the necessary condition for size exclusion. Applying the linear uncertainty propagation method in the form of a truncated Taylor series expansion, combined standard uncertainties (CSUs) in normalised suspended particle concentrations are calculated from CSUs in experimentally determined parameters such as the inlet volumetric flowrate of suspension, particle number in suspensions, particle concentrations in inlet and outlet streams, and particle and pore throat size distributions. Weathering of glass beads in highly alkaline solutions does not appreciably change the particle size distribution and is therefore not considered an additional contributor to the weighted mean particle radius and corresponding weighted mean standard deviation. The weighted mean particle radius and log-normal mean pore throat radius are characterised by the highest CSUs among all experimental parameters, translating to a high CSU in the jamming ratio factor (dimensionless particle size). Normalised suspended particle concentrations calculated via the two theoretical models are characterised by higher CSUs than those for the experimental data. The model accounting for the fraction of inaccessible flow as a function of latex particle radius accurately predicts normalised suspended particle concentrations over the whole range of jamming ratios. The presented uncertainty analysis can also be used for comparison of intra- and inter-laboratory particle size exclusion data.
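The linear (truncated Taylor series) propagation the abstract applies can be sketched generically with finite-difference sensitivity coefficients; the jamming-ratio example and its numbers are hypothetical, not the paper's data:

```python
import math

def propagate(f, x, u, h=1e-6):
    """First-order (truncated Taylor series) propagation of independent
    standard uncertainties u[i] through y = f(x):
    u_y^2 = sum_i (df/dx_i * u_i)^2."""
    y0 = f(x)
    var = 0.0
    for i, ui in enumerate(u):
        xp = list(x)
        xp[i] += h
        dfdx = (f(xp) - y0) / h  # forward-difference sensitivity coefficient
        var += (dfdx * ui) ** 2
    return math.sqrt(var)

# Hypothetical jamming ratio j = r_particle / r_pore, with illustrative
# values and standard uncertainties (not the study's measured data).
jamming = lambda v: v[0] / v[1]
u_j = propagate(jamming, [1.0, 2.0], [0.05, 0.08])
print(round(u_j, 4))  # 0.032
```

Analytic partial derivatives can replace the finite differences when they are available; the combination step is the same.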
Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-09-28
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve-fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near the short-circuit current, Isc. By treating such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty; however, more data points can also make the formal fit uncertainty arbitrarily small, so that it misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits of Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
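A straight-line least-squares fit near short circuit, with the intercept at V = 0 estimating Isc and its standard uncertainty taken from the parameter covariance, can be sketched as follows. The I-V points are synthetic, and this is a plain frequentist regression rather than the objective Bayesian method the preprint uses:

```python
import numpy as np

# Synthetic I-V points near short circuit (illustrative values only).
V = np.array([0.00, 0.02, 0.04, 0.06, 0.08])       # volts
I = np.array([5.001, 4.998, 4.995, 4.990, 4.988])  # amps

# Ordinary least-squares straight-line fit I = isc + slope * V;
# the intercept at V = 0 estimates Isc.
A = np.vstack([np.ones_like(V), V]).T
coef, res, *_ = np.linalg.lstsq(A, I, rcond=None)
isc, slope = coef

# Standard uncertainty of the intercept from the parameter covariance matrix.
n, p = len(V), A.shape[1]
sigma2 = float(res[0]) / (n - p)        # residual variance
cov = sigma2 * np.linalg.inv(A.T @ A)   # parameter covariance
u_isc = float(np.sqrt(cov[0, 0]))       # standard uncertainty of Isc
```

As the abstract warns, this formal `u_isc` shrinks as points are added and says nothing about model discrepancy, i.e. whether a straight line is locally valid at all.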
Helton, Jon Craig; Sallaberry, Cedric M.; Hansen, Clifford W.
2010-05-01
Extensive work has been carried out by the U.S. Department of Energy (DOE) in the development of a proposed geologic repository at Yucca Mountain (YM), Nevada, for the disposal of high-level radioactive waste. As part of this development, an extensive performance assessment (PA) for the YM repository was completed in 2008 [1] and supported a license application by the DOE to the U.S. Nuclear Regulatory Commission (NRC) for the construction of the YM repository [2]. This presentation provides an overview of the conceptual and computational structure of the indicated PA (hereafter referred to as the 2008 YM PA) and the roles that uncertainty analysis and sensitivity analysis play in this structure.
Avoiding climate change uncertainties in Strategic Environmental Assessment
Larsen, Sanne Vammen; Kørnøv, Lone; Driscoll, Patrick
2013-11-15
This article is concerned with how Strategic Environmental Assessment (SEA) practice handles climate change uncertainties within the Danish planning system. First, a hypothetical model is set up for how uncertainty is handled, and not handled, in decision-making. The model incorporates the strategies of reduction and resilience, as well as denying, ignoring, and postponing. Second, 151 Danish SEAs are analysed with a focus on the extent to which climate change uncertainties are acknowledged and presented, and the empirical findings are discussed in relation to the model. The findings indicate that despite incentives to do so, climate change uncertainties were systematically avoided or downplayed in all but 5 of the 151 SEAs reviewed. Finally, two possible explanatory mechanisms are proposed to explain this: conflict avoidance and a need to quantify uncertainty.
Lidar arc scan uncertainty reduction through scanning geometry optimization
Wang, Hui; Barthelmie, Rebecca J.; Pryor, Sara C.; Brown, Gareth.
2016-04-13
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production prediction. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain, and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty scales with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30% of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span, and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation.
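The reported scaling, a relative standard error on the 10 min mean wind speed of about 30% of the turbulence intensity, can be wrapped in a small helper. The 0.3 factor and the example inputs are taken only from the summary above:

```python
def wind_speed_std_error(mean_speed, turbulence_intensity, factor=0.3):
    """Approximate standard error of the 10 min mean wind speed from an
    arc scan: relative error ~ 30% of the turbulence intensity (per the
    scaling reported in the study), converted to absolute units."""
    rel_err = factor * turbulence_intensity
    return rel_err * mean_speed

# Illustrative case: 8 m/s mean wind at 10% turbulence intensity.
print(round(wind_speed_std_error(8.0, 0.10), 3))  # 0.24
```

The helper makes the practical implication explicit: halving the turbulence intensity (or reducing its effect via scan geometry) halves the standard error of the retrieved mean wind speed.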
Error and uncertainty in Raman thermal conductivity measurements
Thomas Edwin Beechem; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Díez, C.J.; Cabellos, O.; Martínez, J.S.
2015-01-15
Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One proposed approach is the Hybrid Method, in which uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary, and hence only their collapsed one-group uncertainties. This approach has been applied successfully to several advanced reactor systems, such as EFIT (an ADS-like reactor) and ESFR (a sodium fast reactor), to assess uncertainties in the isotopic composition. However, a comparison with multi-group energy structures was not carried out, and it must be performed in order to analyse the limitations of using one-group uncertainties.
Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint
Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.
2014-11-01
Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty in Measurement (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.
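A GUM-style combination, root-sum-square of Type A and Type B standard uncertainties expanded by a coverage factor k, can be sketched as follows; the component values are illustrative placeholders, not NREL's published budget:

```python
import math

def expanded_uncertainty(type_a, type_b, k=2.0):
    """GUM-style combined standard uncertainty: root-sum-square of Type A
    (statistical) and Type B (other) components, multiplied by coverage
    factor k (k = 2 gives roughly 95% coverage for a normal distribution)."""
    u_c = math.sqrt(sum(u ** 2 for u in type_a + type_b))
    return k * u_c

# Illustrative radiometer calibration components in W/m^2 (placeholders):
# repeatability (Type A); zero offset and non-linearity (Type B).
U = expanded_uncertainty(type_a=[1.2], type_b=[2.0, 1.0])
print(round(U, 2))  # 5.08
```

Reporting both the combined standard uncertainty and the expanded uncertainty with its coverage factor is what makes the quoted number traceable and comparable between labs.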
Preliminary Results on Uncertainty Quantification for Pattern Analytics
Stracuzzi, David John; Brost, Randolph; Chen, Maximillian Gene; Malinas, Rebecca; Peterson, Matthew Gregor; Phillips, Cynthia A.; Robinson, David G.; Woodbridge, Diane
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
PDF uncertainties at large x and gauge boson production
Accardi, Alberto
2012-10-01
I discuss how global QCD fits of parton distribution functions can make the somewhat separated fields of high-energy particle physics and lower energy hadronic and nuclear physics interact to the benefit of both. In particular, I will argue that large rapidity gauge boson production at the Tevatron and the LHC has the highest short-term potential to constrain the theoretical nuclear corrections to DIS data on deuteron targets necessary for up/down flavor separation. This in turn can considerably reduce the PDF uncertainty on cross section calculations of heavy mass particles such as W' and Z' bosons.
Integration of Uncertainty Information into Power System Operations
Makarov, Yuri V.; Lu, Shuai; Samaan, Nader A.; Huang, Zhenyu; Subbarao, Krishnappa; Etingov, Pavel V.; Ma, Jian; Hafen, Ryan P.; Diao, Ruisheng; Lu, Ning
2011-10-10
Contemporary power systems face uncertainties coming from multiple sources, including forecast errors of load, wind, and solar generation; uninstructed deviation and forced outage of traditional generators; loss of transmission lines; and others. With increasing amounts of wind and solar generation being integrated into the system, these uncertainties have been growing significantly. It is critically important to build knowledge of the major sources of uncertainty, learn how to simulate them, and then incorporate this information into decision-making processes and power system operations, for better reliability and efficiency. This paper gives a comprehensive view of the sources of uncertainty in power systems, their important characteristics, available models, and ways of integrating them into system operations. It is primarily based on previous work conducted at the Pacific Northwest National Laboratory (PNNL).
Probabilistic Accident Consequence Uncertainty - A Joint CEC/USNRC Study
Gregory, Julie J.; Harper, Frederick T.
1999-07-28
The joint USNRC/CEC consequence uncertainty study was chartered after the development of two new probabilistic accident consequence codes, MACCS in the U.S. and COSYMA in Europe. Both the USNRC and CEC had a vested interest in expanding the knowledge base of the uncertainty associated with consequence modeling, and teamed up to co-sponsor a consequence uncertainty study. The information acquired from the study was expected to provide understanding of the strengths and weaknesses of current models as well as a basis for direction of future research. This paper looks at the elicitation process implemented in the joint study and discusses some of the uncertainty distributions provided by eight panels of experts from the U.S. and Europe that were convened to provide responses to the elicitation. The phenomenological areas addressed by the expert panels include atmospheric dispersion and deposition, deposited material and external doses, food chain, early health effects, late health effects and internal dosimetry.
Dasymetric Modeling and Uncertainty (Journal Article)
Office of Scientific and Technical Information (OSTI)
Journal Article. DOE Contract Number: DE-AC05-00OR22725.
Risk Analysis and Decision-Making Under Uncertainty: A Strategy...
Office of Environmental Management (EM)
Uncertainty analysis of multi-rate kinetics of uranium desorption...
Office of Scientific and Technical Information (OSTI)
in the multi-rate model to simulate U(VI) desorption; 3) however, long-term prediction and its uncertainty may be significantly biased by the lognormal assumption for the ...
Problem Solving Environment for Uncertainty Analysis and Design Exploration
Energy Science and Technology Software Center (OSTI)
2011-10-26
PSUADE is a software system used to study the relationships between the inputs and outputs of general simulation models for the purpose of performing uncertainty and sensitivity analyses on those models.
Gas Exploration Software for Reducing Uncertainty in Gas Concentration Estimates
U.S. Department of Energy (DOE) all webpages (Extended Search)
Lawrence Berkeley National Laboratory. Estimating reservoir parameters for gas exploration from geophysical data is subject to a large degree of uncertainty. Seismic imaging techniques, such as seismic amplitude versus angle (AVA) analysis, can
Improvements of Nuclear Data and Its Uncertainties by Theoretical Modeling
Office of Scientific and Technical Information (OSTI)
Technical Report. Authors: Talou, Patrick (Los Alamos National Laboratory); Nazarewicz, Witold (University of Tennessee, Knoxville, TN 37996, USA); Prinja, Anil (University of New Mexico, USA); Danon, Yaron (Rensselaer
PROJECT PROFILE: Reducing PV Performance Uncertainty by Accurately Quantifying the "PV Resource"
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Funding Opportunity: SuNLaMP. SunShot Subprogram: Photovoltaics. Location: National Renewable Energy Laboratory, Golden, CO. Amount Awarded: $2,500,000. The procedures used today for prediction of the solar resource available to
Direct Aerosol Forcing: Sensitivity to Uncertainty in Measurements of Aerosol Optical and Situational Properties
U.S. Department of Energy (DOE) all webpages (Extended Search)
McComiskey, Allison (CIRES/NOAA); Schwartz, Stephen (Brookhaven National Laboratory); Ricchiazzi, Paul (University of California, Santa Barbara); Lewis, Ernie (Brookhaven National Laboratory); Michalsky, Joseph (DOC/NOAA/OAR/ESRL/GMD); Ogren, John (NOAA/CMDL). Category: Radiation. Understanding sources of uncertainty in estimating aerosol direct radiative
Practical uncertainty reduction and quantification in shock physics measurements
Akin, M. C.; Nguyen, J. H.
2015-04-20
We report the development of a simple error analysis sampling method for identifying intersections and inflection points to reduce total uncertainty in experimental data. This technique was used to reduce uncertainties in sound speed measurements by 80% over conventional methods. Here, we focused on its impact on a previously published set of Mo sound speed data and possible implications for phase transition and geophysical studies. However, this technique's application can be extended to a wide range of experimental data.
Energy Price Volatility and Forecast Uncertainty
U.S. Energy Information Administration (EIA) (indexed site)
October 2009. Short-Term Energy Outlook Supplement: Energy Price Volatility and Forecast Uncertainty. Summary: It is often noted that energy prices are quite volatile, reflecting market participants' adjustments to new information from physical energy markets and/or markets in energy-related financial derivatives. Price volatility is an indication of the level of uncertainty, or risk, in the market. This paper describes how markets price risk and how the market-clearing process
Uncertainties in the Anti-neutrino Production at Nuclear Reactors
Djurcic, Zelimir; Detwiler, Jason A.; Piepke, Andreas; Foster Jr., Vince R.; Miller, Lester; Gratta, Giorgio
2008-08-06
Anti-neutrino emission rates from nuclear reactors are determined from thermal power measurements and fission rate calculations. The uncertainties in these quantities for commercial power plants, and their impact on the calculated interaction rates in ν̄e detectors, are examined. We discuss reactor-to-reactor correlations between the leading uncertainties and their relevance to reactor ν̄e experiments.
Quantifying Uncertainty in Computer Predictions | netl.doe.gov
U.S. Department of Energy (DOE) all webpages (Extended Search)
The U.S. Department of Energy has great interest in technologies that will lead to reducing the CO2 emissions of fossil-fuel-burning power plants. Advanced energy technologies such as Integrated Gasification Combined Cycle (IGCC) and Carbon Capture and Storage (CCS) can potentially lead to the clean and efficient use of fossil fuels to power our nation. The development of new energy
Analysis of the Uncertainty in Wind Measurements from the Atmospheric Radiation Measurement Doppler Lidar during XPIA: Field Campaign Report
Office of Scientific and Technical Information (OSTI)
Program Document. In March and April of 2015, the ARM Doppler
Determining Best Estimates and Uncertainties in Cloud Microphysical Parameters from ARM Field Data: Implications for Models, Retrieval Schemes and Aerosol-Cloud-Radiation Interactions
Office of Scientific and Technical Information (OSTI)
Technical Report.
Lab RFP: Validation and Uncertainty Characterization
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
LBNL's FLEXLAB test facility includes four test cells, each split into two half-cells to enable side-by-side comparative experiments. The cells have one active, reconfigurable facade and individual, reconfigurable single-zone HVAC systems. The cell facing the camera sits on a 270-degree turntable. Photo credit: LBNL. Bottom: ORNL's two-story flexible research platform test building. The building
How incorporating more data reduces uncertainty in recovery predictions
Campozana, F.P.; Lake, L.W.; Sepehrnoori, K.
1997-08-01
From the discovery to the abandonment of a petroleum reservoir, there are many decisions that involve economic risks because of uncertainty in the production forecast. This uncertainty may be quantified by performing stochastic reservoir modeling (SRM); however, it is not practical to apply SRM every time the model is updated to account for new data. This paper suggests a novel procedure to estimate reservoir uncertainty (and its reduction) as a function of the amount and type of data used in the reservoir modeling. Two types of data are analyzed: conditioning data and well-test data. However, the same procedure can be applied to any other data type. Three performance parameters are suggested to quantify uncertainty. SRM is performed for the following typical stages: discovery, primary production, secondary production, and infill drilling. From those results, a set of curves is generated that can be used to estimate (1) the uncertainty for any other situation and (2) the uncertainty reduction caused by the introduction of new wells (with and without well-test data) into the description.
Fuzzy-probabilistic calculations of water-balance uncertainty
Faybishenko, B.
2009-10-01
Hydrogeological systems are often characterized by imprecise, vague, inconsistent, incomplete, or subjective information, which may limit the application of conventional stochastic methods in predicting hydrogeologic conditions and associated uncertainty. Instead, predictions and uncertainty analysis can be made using uncertain input parameters expressed as probability boxes, intervals, and fuzzy numbers. The objective of this paper is to present the theory for, and a case study as an application of, the fuzzy-probabilistic approach, combining probability and possibility theory for simulating soil water balance and assessing associated uncertainty in the components of a simple water-balance equation. The application of this approach is demonstrated using calculations with the RAMAS Risk Calc code to assess the propagation of uncertainty in calculating potential evapotranspiration, actual evapotranspiration, and infiltration in a case study at the Hanford site, Washington, USA. Propagation of uncertainty into the results of water-balance calculations was evaluated by changing the types of models of uncertainty incorporated into various input parameters. The results of these fuzzy-probabilistic calculations are compared to the conventional Monte Carlo simulation approach and estimates from field observations at the Hanford site.
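The contrast the abstract draws between possibilistic (interval/fuzzy) and probabilistic (Monte Carlo) propagation can be sketched for a one-line water balance, infiltration = precipitation - evapotranspiration. This is a minimal illustration with hypothetical annual values in mm, not the RAMAS Risk Calc computation or the Hanford data.

```python
import numpy as np

def interval_water_balance(precip, et):
    """Possibilistic propagation: inputs are (low, high) intervals.
    Interval subtraction: [a, b] - [c, d] = [a - d, b - c]."""
    p_lo, p_hi = precip
    e_lo, e_hi = et
    return (p_lo - e_hi, p_hi - e_lo)

def monte_carlo_water_balance(precip_mu, precip_sd, et_mu, et_sd,
                              n=100_000, seed=0):
    """Probabilistic counterpart: sample normal inputs and return the
    2.5th/97.5th percentile band of the infiltration samples."""
    rng = np.random.default_rng(seed)
    infil = rng.normal(precip_mu, precip_sd, n) - rng.normal(et_mu, et_sd, n)
    return tuple(np.percentile(infil, [2.5, 97.5]))

# Hypothetical inputs: precipitation 160-200 mm, ET 120-170 mm
iv = interval_water_balance((160.0, 200.0), (120.0, 170.0))
mc = monte_carlo_water_balance(180.0, 10.0, 145.0, 12.0)
```

The interval result bounds every outcome consistent with the input ranges, while the Monte Carlo band is narrower because the normal model concentrates probability near the means; comparing the two is exactly the kind of model-of-uncertainty sensitivity the abstract describes.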
Uncertainty Estimation Improves Energy Measurement and Verification Procedures
Walter, Travis; Price, Phillip N.; Sohn, Michael D.
2014-05-14
Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement, so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy use to how much energy would have been used in the absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties of deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use from short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We close by discussing the benefits of uncertainty estimates, which can provide actionable information for decisions about investing in energy conservation measures.
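The cross-validation idea in the abstract above can be sketched as follows: fit a baseline regression on k-1 folds of the meter data, collect held-out residuals, and use their RMSE as the prediction uncertainty. The linear temperature model and synthetic data below are illustrative assumptions, not the paper's actual baseline model.

```python
import numpy as np

def cv_uncertainty(X, y, k=5, seed=0):
    """Estimate baseline-model prediction uncertainty by k-fold
    cross-validation: fit ordinary least squares on k-1 folds,
    collect residuals on the held-out fold, and report their RMSE."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    residuals = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Least-squares fit with an intercept column
        A = np.column_stack([np.ones(len(train)), X[train]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([np.ones(len(test)), X[test]]) @ coef
        residuals.append(y[test] - pred)
    residuals = np.concatenate(residuals)
    return float(np.sqrt(np.mean(residuals ** 2)))

# Synthetic example: energy use roughly linear in outdoor temperature
rng = np.random.default_rng(1)
temp = rng.uniform(0, 30, 200)
energy = 50.0 + 2.0 * temp + rng.normal(0, 3.0, 200)
rmse = cv_uncertainty(temp, energy)
```

Because each residual comes from a fold the model never saw, the RMSE reflects out-of-sample uncertainty rather than in-sample goodness of fit.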
MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION
Nichols, T.; Sternat, M.; Charlton, W.
2011-05-08
MONTEBURNS is a Monte-Carlo depletion routine utilizing MCNP and ORIGEN 2.2. Uncertainties exist in the MCNP transport calculation, but this information is not passed to the depletion calculation in ORIGEN or saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs is performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single assembly, infinite lattice configuration. This model was burned for a 25.5 day cycle broken down into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated, and histograms were created at each burnup step using the Scott method to determine the bin width. It was expected that the gram-quantity and k-effective histograms would produce normally distributed results, since they were produced from a Monte-Carlo routine, but some of the results do not. The standard deviation at each burnup step was consistent between fission product isotopes as expected, while the uranium isotopes produced some unique results. The variation in the quantity of uranium was small enough that, from the reaction rate MCNP tally, round-off error occurred, producing a set of repeated results with slight variation. Statistical analyses were performed using the χ² test against a normal distribution for several isotopes and the k-effective results. While the isotope tests failed to reject the null hypothesis of being normally distributed, the χ² statistic grew through the steps in the k-effective test, and the null hypothesis was rejected in the later steps. These results suggest that, for a high accuracy solution, MCNP cell material quantities of less than 100 grams and greater kcode parameters are needed to minimize uncertainty propagation and minimize round-off effects.
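The χ²-against-normal test described above can be sketched with the same Scott binning rule. The synthetic "k-effective" sample below stands in for repeated depletion runs; for data that really are normal, the statistic should be comparable to its degrees of freedom. This is a generic goodness-of-fit sketch, not the study's MONTEBURNS analysis.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x, mu, sd):
    """Normal CDF via the error function (no SciPy dependency)."""
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

def chi2_normality_stat(sample):
    """Chi-squared goodness-of-fit statistic against a fitted normal,
    binning with the Scott rule h = 3.49 * sd * n^(-1/3). Returns the
    statistic and its degrees of freedom (bins minus three, for the
    fitted mean, standard deviation, and the total-count constraint)."""
    n = len(sample)
    mu, sd = float(np.mean(sample)), float(np.std(sample, ddof=1))
    width = 3.49 * sd * n ** (-1.0 / 3.0)
    edges = np.arange(sample.min(), sample.max() + width, width)
    observed, edges = np.histogram(sample, bins=edges)
    cdf = np.array([norm_cdf(e, mu, sd) for e in edges])
    expected = n * np.diff(cdf)
    mask = expected > 1e-9
    stat = float(np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask]))
    dof = int(mask.sum()) - 3
    return stat, dof

# Pretend these are k-effective values from repeated depletion runs
rng = np.random.default_rng(2)
keff = rng.normal(1.002, 0.0005, 400)
stat, dof = chi2_normality_stat(keff)
```

Comparing the statistic to a χ² critical value at the chosen significance level then gives the accept/reject decision the abstract reports per burnup step.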
Thermal hydraulic limits analysis using statistical propagation of parametric uncertainties
Chiang, K. Y.; Hu, L. W.; Forget, B.
2012-07-01
The MIT Research Reactor (MITR) is evaluating the conversion from highly enriched uranium (HEU) to low enrichment uranium (LEU) fuel. In addition to the fuel element re-design, a reactor power upgrade from 6 MW to 7 MW is proposed in order to maintain the same reactor performance as the HEU core. The previous approach to analyzing the impact of engineering uncertainties on thermal hydraulic limits, via engineering hot channel factors (EHCFs), was unable to explicitly quantify the uncertainty and confidence level in reactor parameters. The objective of this study is to develop a methodology for MITR thermal hydraulic limits analysis by statistically combining engineering uncertainties, with an aim to eliminate unnecessary conservatism inherent in traditional analyses. This method was employed to analyze the Limiting Safety System Settings (LSSS) for the MITR, defined by avoidance of the onset of nucleate boiling (ONB). Key parameters, such as coolant channel tolerances and heat transfer coefficients, were treated as normal distributions using Oracle Crystal Ball to calculate ONB. The LSSS power is determined with a 99.7% confidence level. The LSSS power calculated using this new methodology is 9.1 MW, based on a core outlet coolant temperature of 60 deg. C and a primary coolant flow rate of 1800 gpm, compared to 8.3 MW obtained from the analytical method using the EHCFs with the same operating conditions. The same methodology was also used to calculate the safety limit (SL) for the MITR, conservatively determined using onset of flow instability (OFI) as the criterion, to verify that adequate safety margin exists between LSSS and SL. The calculated SL is 10.6 MW, which is 1.5 MW higher than the LSSS. (authors)
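The statistical-propagation step described above can be sketched generically: sample uncertain inputs from normal distributions, evaluate a limiting quantity, and take the quantile that gives 99.7% confidence. The toy model, parameter names (channel gap, heat transfer coefficient), and all numbers below are illustrative assumptions, not the MITR model or its Crystal Ball setup.

```python
import numpy as np

def lsss_power_sketch(n=200_000, seed=3):
    """Toy statistical propagation: sample uncertain inputs from normal
    distributions and take the allowable power exceeded with 99.7%
    confidence (the 0.3th percentile of the sampled distribution)."""
    rng = np.random.default_rng(seed)
    # Hypothetical inputs: channel gap (mm), heat transfer coeff. (kW/m2-K)
    gap = rng.normal(2.0, 0.05, n)
    htc = rng.normal(30.0, 1.5, n)
    # Hypothetical monotone response: allowable power grows with both
    allowable_power = 5.0 * (gap / 2.0) * (htc / 30.0) + 4.0   # MW
    return float(np.percentile(allowable_power, 0.3))

p_limit = lsss_power_sketch()
```

Taking a low percentile rather than applying a worst-case hot channel factor to every parameter simultaneously is what removes the compounded conservatism the abstract mentions.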
LCOE Uncertainty Analysis for Hydropower using Monte Carlo Simulations
Chalise, Dol Raj; O'Connor, Patrick W; DeNeale, Scott T; Uria Martinez, Rocio; Kao, Shih-Chieh
2015-01-01
Levelized Cost of Energy (LCOE) is an important metric to evaluate the cost and performance of electricity generation alternatives and, combined with other measures, can be used to assess the economics of future hydropower development. Multiple assumptions on input parameters are required to calculate the LCOE, each of which contains some level of uncertainty, in turn affecting the accuracy of LCOE results. This paper explores these uncertainties, their sources, and ultimately the level of variability they introduce at the screening level of project evaluation for non-powered dams (NPDs) across the U.S. Owing to site-specific differences in site design, the LCOE for hydropower varies significantly from project to project, unlike technologies with more standardized configurations such as wind and gas. Therefore, to assess the impact of LCOE input uncertainty on the economics of U.S. hydropower resources, these uncertainties must be modeled across the population of potential opportunities. To demonstrate the impact of uncertainty, resource data from a recent nationwide non-powered dam (NPD) resource assessment (Hadjerioua et al., 2012) and screening-level predictive cost equations (O'Connor et al., 2015) are used to quantify and evaluate uncertainties in project capital and operations & maintenance costs, and generation potential at broad scale. LCOE dependence on financial assumptions is also evaluated on a sensitivity basis to explore ownership/investment implications on project economics for the U.S. hydropower fleet. The results indicate that the LCOE for U.S. NPDs varies substantially. The LCOE estimates for the potential NPD projects of capacity greater than 1 MW range from 40 to 182 $/MWh, with an average of 106 $/MWh. 4,000 MW could be developed through projects with individual LCOE values below 100 $/MWh. The results also indicate that typically 90% of LCOE uncertainty can be attributed to uncertainties in capital costs and energy production; however
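A minimal Monte Carlo LCOE sketch in the spirit of the study above: sample uncertain capital cost, O&M, and annual generation, and report percentiles of LCOE = (capital × CRF + O&M) / generation. All distributions and values below are hypothetical, not the paper's NPD cost equations.

```python
import numpy as np

def lcoe_samples(n=100_000, seed=4):
    """Monte Carlo LCOE for a hypothetical non-powered-dam project.
    CRF is the capital recovery factor for fixed rate and lifetime."""
    rng = np.random.default_rng(seed)
    rate, life = 0.07, 30                                   # financial assumptions
    crf = rate * (1 + rate) ** life / ((1 + rate) ** life - 1)
    capital = rng.lognormal(mean=np.log(20e6), sigma=0.3, size=n)  # $
    om = rng.normal(400e3, 50e3, n)                                # $/yr
    energy = rng.normal(25_000, 2_500, n)                          # MWh/yr
    return (capital * crf + om) / energy                           # $/MWh

lcoe = lcoe_samples()
p10, p50, p90 = np.percentile(lcoe, [10, 50, 90])
```

The percentile spread is the screening-level variability band; attributing variance to each input (e.g. by freezing all but one) reproduces the abstract's finding that capital cost and energy production dominate.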
Uncertainty and sensitivity analysis for photovoltaic system modeling.
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
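The residual-resampling propagation the abstract describes can be sketched for a two-step chain (inputs → POA irradiance → DC power): add a randomly drawn empirical residual after each model stage and collect the resulting output distribution. The stand-in models and residual pools below are toy assumptions, not the paper's models or data.

```python
import numpy as np

def propagate_residuals(poa_model, power_model, inputs,
                        poa_residuals, power_residuals, n=10_000, seed=5):
    """Propagate model uncertainty through a two-step model chain by
    resampling each model's empirical residuals: after each stage, add
    a residual drawn at random from that model's residual pool."""
    rng = np.random.default_rng(seed)
    poa = poa_model(inputs) + rng.choice(poa_residuals, n)
    power = power_model(poa) + rng.choice(power_residuals, n)
    return power

# Toy stand-in models and residual pools (illustrative only)
poa_model = lambda ghi: 1.1 * ghi            # W/m2 in, W/m2 out
power_model = lambda poa: 0.18 * poa         # W/m2 in, W out
rng = np.random.default_rng(6)
dist = propagate_residuals(poa_model, power_model, 800.0,
                           rng.normal(0, 20, 500), rng.normal(0, 5, 500))
```

The spread of `dist` combines both stages' residual uncertainty; holding one pool at zero isolates each stage's contribution, which is the sensitivity decomposition the abstract reports.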
Habte, A.; Sengupta, M.; Reda, I.
2015-03-01
Radiometric data with known and traceable uncertainty are essential for climate change studies to better understand cloud-radiation interactions and the earth radiation budget. Further, adopting a known and traceable method of estimating uncertainty with respect to SI ensures that the uncertainty quoted for radiometric measurements can be compared based on documented methods of derivation. Therefore, statements about the overall measurement uncertainty can only be made on an individual basis, taking all relevant factors into account. This poster provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements from radiometers. The approach follows the Guide to the Expression of Uncertainty in Measurement (GUM).
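The GUM combination step that such procedures follow can be sketched, for uncorrelated inputs, as a root-sum-square of sensitivity-weighted standard uncertainties, with a coverage factor k = 2 for roughly 95% coverage. The three budget terms and their values below are illustrative assumptions, not a real radiometer budget.

```python
import numpy as np

def combined_standard_uncertainty(sensitivities, std_uncertainties, k=2.0):
    """GUM-style combination for uncorrelated inputs:
    u_c = sqrt(sum((c_i * u_i)^2)); expanded uncertainty U = k * u_c."""
    c = np.asarray(sensitivities, dtype=float)
    u = np.asarray(std_uncertainties, dtype=float)
    u_c = float(np.sqrt(np.sum((c * u) ** 2)))
    return u_c, k * u_c

# Hypothetical budget terms: calibration, temperature, cosine response
u_c, U = combined_standard_uncertainty([1.0, 0.5, 1.0], [2.0, 1.0, 1.5])
```

Correlated inputs would add cross terms with covariances, which is why the poster stresses that overall uncertainty must be assessed case by case.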
Investment and Upgrade in Distributed Generation under Uncertainty
Siddiqui, Afzal; Maribu, Karl
2008-08-18
The ongoing deregulation of electricity industries worldwide is providing incentives for microgrids to use small-scale distributed generation (DG) and combined heat and power (CHP) applications via heat exchangers (HXs) to meet local energy loads. Although the electric-only efficiency of DG is lower than that of central-station production, relatively high tariff rates and the potential for CHP applications increase the attraction of on-site generation. Nevertheless, a microgrid contemplating the installation of gas-fired DG has to be aware of the uncertainty in the natural gas price. Treatment of uncertainty via real options increases the value of the investment opportunity, which then delays the adoption decision as the opportunity cost of exercising the investment option increases as well. In this paper, we take the perspective of a microgrid that can proceed in a sequential manner with DG capacity and HX investment in order to reduce its exposure to risk from natural gas price volatility. In particular, with the availability of the HX, the microgrid faces a tradeoff between reducing its exposure to the natural gas price and maximising its cost savings. By varying the volatility parameter, we find that the microgrid prefers a direct investment strategy for low levels of volatility and a sequential one for higher levels of volatility.
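The volatility effect underlying the abstract (higher volatility raises the value of waiting and so delays investment) can be illustrated with the textbook McDonald-Siegel real-options threshold under geometric Brownian motion. This is a generic sketch with hypothetical parameter values, not the paper's microgrid model.

```python
import math

def option_multiple(r, delta, sigma):
    """McDonald-Siegel sketch: invest only when project value exceeds
    beta/(beta-1) times the investment cost, where beta is the positive
    root of 0.5*sigma^2*b*(b-1) + (r - delta)*b - r = 0
    (r: discount rate, delta: payout/convenience yield, sigma: volatility)."""
    a = 0.5 * sigma ** 2
    b = (r - delta) - 0.5 * sigma ** 2
    # Positive root of a*x^2 + b*x - r = 0 via the quadratic formula
    beta = (-b + math.sqrt(b ** 2 + 4.0 * a * r)) / (2.0 * a)
    return beta / (beta - 1.0)

m_low = option_multiple(r=0.05, delta=0.04, sigma=0.1)
m_high = option_multiple(r=0.05, delta=0.04, sigma=0.3)
```

The multiple exceeds 1 (the net-present-value break-even rule) and grows with volatility, which is the delayed-adoption effect the abstract reports.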
Systematic uncertainties from halo asphericity in dark matter searches
Bernal, Nicolás; Forero-Romero, Jaime E.; Garani, Raghuveer; Palomares-Ruiz, Sergio E-mail: je.forero@uniandes.edu.co E-mail: sergio.palomares.ruiz@ific.uv.es
2014-09-01
Although commonly assumed to be spherical, dark matter halos are predicted to be non-spherical by N-body simulations, and their asphericity has a potential impact on the systematic uncertainties in dark matter searches. The evaluation of these uncertainties is the main aim of this work, where we study the impact of aspherical dark matter density distributions in Milky-Way-like halos on direct and indirect searches. Using data from the large N-body cosmological simulation Bolshoi, we perform a statistical analysis and quantify the systematic uncertainties on the determination of local dark matter density and the so-called J factors for dark matter annihilations and decays from the galactic center. We find that, due to our ignorance about the extent of the non-sphericity of the Milky Way dark matter halo, systematic uncertainties can be as large as 35%, within the 95% most probable region, for a spherically averaged value for the local density of 0.3-0.4 GeV/cm³. Similarly, systematic uncertainties on the J factors evaluated around the galactic center can be as large as 10% and 15%, within the 95% most probable region, for dark matter annihilations and decays, respectively.
Fuel cycle cost uncertainty from nuclear fuel cycle comparison
Li, J.; McNelis, D.; Yim, M.S.
2013-07-01
This paper examined the uncertainty in fuel cycle cost (FCC) calculation by considering both model and parameter uncertainty. Four different fuel cycle options were compared in the analysis: the once-through cycle (OT), the DUPIC cycle, the MOX cycle, and a closed fuel cycle with fast reactors (FR). The model uncertainty was addressed by using three different FCC modeling approaches, with and without consideration of the time value of money. The relative ratios of FCC in comparison to OT did not change much across the different modeling approaches. This observation was consistent with the results of the sensitivity study for the discount rate. Two different sets of data with uncertainty ranges of unit costs were used to address the parameter uncertainty of the FCC calculation. The sensitivity study showed that the dominant contributor to the total variance of FCC is the uranium price. In general, the FCC of OT was found to be the lowest, followed by FR, MOX, and DUPIC. But depending on the uranium price, the FR cycle was found to have a lower FCC than OT. The reprocessing cost was also found to have a major impact on FCC.
PIV Uncertainty Methodologies for CFD Code Validation at the MIR Facility
Piyush Sabharwall; Richard Skifton; Carl Stoots; Eung Soo Kim; Thomas Conder
2013-12-01
Currently, computational fluid dynamics (CFD) is widely used in the nuclear thermal hydraulics field for design and safety analyses. To validate CFD codes, high-quality multi-dimensional flow field data are essential. The Matched Index of Refraction (MIR) Flow Facility at Idaho National Laboratory has a unique capability to contribute to the development of validated CFD codes through the use of Particle Image Velocimetry (PIV). The significance of the MIR facility is that it permits non-intrusive velocity measurement techniques, such as PIV, through complex models without requiring probes and other instrumentation that disturb the flow. At the heart of any PIV calculation is the cross-correlation, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. This displacement is indicated by the location of the largest correlation peak. In the MIR facility, uncertainty quantification is a challenging task due to the use of optical measurement techniques. This study is developing a reliable method, and a computer code implementing it, to analyze the uncertainty and sensitivity of the measured data. The main objective is to develop a rigorous uncertainty quantification method for the MIR Flow Facility, which involves many complicated uncertainty factors. The uncertainty sources are examined in depth by categorizing them into uncertainties from the MIR flow loop and from the PIV system (including particle motion, image distortion, and data processing). Each uncertainty source is then mathematically modeled or adequately defined. Finally, this study will provide a method and procedure to quantify the experimental uncertainty in the MIR Flow Facility with sample test results.
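The cross-correlation step at the heart of PIV, as described above, can be sketched with an FFT-based correlation of two interrogation windows: the location of the correlation peak gives the mean particle displacement. This sketch has integer-pixel precision only; real PIV adds sub-pixel peak fitting, which is one of the uncertainty sources such a study must model.

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Estimate mean particle displacement between two interrogation
    windows from the location of the cross-correlation peak."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    # FFT-based circular cross-correlation of the two windows
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak index to a signed shift (wrap large indices to negative)
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Synthetic check: a random particle pattern shifted by (3, 5) pixels
rng = np.random.default_rng(7)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, 5), axis=(0, 1))
```

Repeating this over a grid of windows yields the velocity field; the peak sharpness relative to the correlation noise floor is one handle on the measurement uncertainty.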
A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report
Campos, E; Sisterson, DL
2015-10-01
The Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily-accessible, well-articulated estimate of ARM measurement uncertainty is needed.
Uncertainty quantification and validation of combined hydrological and macroeconomic analyses.
Hernandez, Jacquelynne; Parks, Mancel Jordan; Jennings, Barbara Joan; Kaplan, Paul Garry; Brown, Theresa Jean; Conrad, Stephen Hamilton
2010-09-01
Changes in climate can lead to instabilities in physical and economic systems, particularly in regions with marginal resources. Global climate models indicate increasing global mean temperatures over the decades to come and uncertainty in the local to national impacts means perceived risks will drive planning decisions. Agent-based models provide one of the few ways to evaluate the potential changes in behavior in coupled social-physical systems and to quantify and compare risks. The current generation of climate impact analyses provides estimates of the economic cost of climate change for a limited set of climate scenarios that account for a small subset of the dynamics and uncertainties. To better understand the risk to national security, the next generation of risk assessment models must represent global stresses, population vulnerability to those stresses, and the uncertainty in population responses and outcomes that could have a significant impact on U.S. national security.
Hybrid processing of stochastic and subjective uncertainty data
Cooper, J.A.; Ferson, S.; Ginzburg, L.
1995-11-01
Uncertainty analyses typically recognize separate stochastic and subjective sources of uncertainty, but do not systematically combine the two, although a large amount of data used in analyses is partly stochastic and partly subjective. We have developed methodology for mathematically combining stochastic and subjective data uncertainty, based on new "hybrid number" approaches. The methodology can be utilized in conjunction with various traditional techniques, such as PRA (probabilistic risk assessment) and risk analysis decision support. Hybrid numbers have been previously examined as a potential method to represent combinations of stochastic and subjective information, but mathematical processing has been impeded by the requirements inherent in the structure of the numbers; e.g., there was no known way to multiply hybrids. In this paper, we will demonstrate methods for calculating with hybrid numbers that avoid the difficulties. By formulating a hybrid number as a probability distribution that is only fuzzily known, or alternatively as a random distribution of fuzzy numbers, methods are demonstrated for the full suite of arithmetic operations, permitting complex mathematical calculations. It will be shown how information about relative subjectivity (the ratio of subjective to stochastic knowledge about a particular datum) can be incorporated. Techniques are also developed for conveying uncertainty information visually, so that the stochastic and subjective constituents of the uncertainty, as well as the ratio of knowledge about the two, are readily apparent. The techniques demonstrated have the capability to process uncertainty information for independent, uncorrelated data, and for some types of dependent and correlated data. Example applications are suggested, illustrative problems are worked, and graphical results are given.
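One way to picture the "random distribution of fuzzy numbers" view above is to carry, for each random sample, an additive interval (a single alpha-cut of a fuzzy number) and propagate both parts through arithmetic. The multiplication sketch below takes the min/max over the four interval corner products; it is a simplified one-alpha-cut illustration with made-up inputs, not the paper's hybrid-number calculus.

```python
import numpy as np

def hybrid_multiply(samples_a, interval_a, samples_b, interval_b):
    """Multiply two 'hybrid' quantities: each is a random sample array
    (stochastic part) carrying an additive (low, high) interval
    (subjective part). Interval product [a,b]*[c,d] takes the min and
    max of the four corner products, sample by sample."""
    lo_a, hi_a = samples_a + interval_a[0], samples_a + interval_a[1]
    lo_b, hi_b = samples_b + interval_b[0], samples_b + interval_b[1]
    corners = np.stack([lo_a * lo_b, lo_a * hi_b, hi_a * lo_b, hi_a * hi_b])
    return corners.min(axis=0), corners.max(axis=0)

rng = np.random.default_rng(9)
lo, hi = hybrid_multiply(rng.normal(10, 1, 1000), (-0.5, 0.5),
                         rng.normal(4, 0.5, 1000), (-0.2, 0.2))
```

The result is a random sample of intervals: the spread across samples reflects the stochastic part, and the lo-hi width of each sample reflects the subjective part, keeping the two constituents visually separable as the abstract advocates.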
Uncertainty Analysis for RELAP5-3D
Aaron J. Pawel; Dr. George L. Mesina
2011-08-01
In its current state, RELAP5-3D is a 'best-estimate' code; it is one of our most reliable programs for modeling what occurs within reactor systems in transients from given initial conditions. This code, however, remains an estimator. A statistical analysis has been performed that begins to lay the foundation for a full uncertainty analysis. By varying the inputs over assumed probability density functions, the output parameters were shown to vary. Using statistical tools such as means, variances, and tolerance intervals, a picture has been obtained of how uncertain the results are, given the uncertainty of the inputs.
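The sampling approach described (varying inputs over assumed probability density functions and summarizing outputs with means, variances, and tolerance intervals) is commonly implemented with first-order Wilks tolerance limits. A hedged sketch, with a stand-in response surface in place of RELAP5-3D:

```python
import random

def wilks_sample_size(beta=0.95, gamma=0.95):
    # Smallest N with 1 - beta**N >= gamma: first-order one-sided Wilks formula.
    n = 1
    while 1.0 - beta ** n < gamma:
        n += 1
    return n

def peak_temperature(power, flow):
    # Stand-in response surface; a real study would run RELAP5-3D here.
    return 600.0 + 50.0 * power / flow

random.seed(0)
n = wilks_sample_size()          # 59 runs give a 95%/95% one-sided limit
runs = [peak_temperature(random.gauss(1.0, 0.05), random.gauss(1.0, 0.03))
        for _ in range(n)]
mean = sum(runs) / n
variance = sum((x - mean) ** 2 for x in runs) / (n - 1)
upper_95_95 = max(runs)          # first-order Wilks upper tolerance limit
print(n, round(mean, 1), round(upper_95_95, 1))
```

With 59 runs, the sample maximum covers the 95th percentile of the output with 95% confidence, which is why 59 is the classic run count for such studies.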
Uncertainty quantification in lattice QCD calculations for nuclear physics
Beane, Silas R.; Detmold, William; Orginos, Kostas; Savage, Martin J.
2015-02-05
The numerical technique of Lattice QCD holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic, can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. We review the sources of uncertainty inherent in Lattice QCD calculations for nuclear physics, and discuss how each is quantified in current efforts.
Distributed Generation Investment by a Microgrid under Uncertainty
Marnay, Chris; Siddiqui, Afzal
2008-08-11
This paper examines a California-based microgrid's decision to invest in a distributed generation (DG) unit fuelled by natural gas. While the long-term natural gas generation cost is stochastic, we initially assume that the microgrid may purchase electricity at a fixed retail rate from its utility. Using the real options approach, we find a natural gas generation cost threshold that triggers DG investment. Furthermore, the consideration of operational flexibility by the microgrid increases DG investment, while the option to disconnect from the utility is not attractive. By allowing the electricity price to be stochastic, we next determine an investment threshold boundary and find that high electricity price volatility relative to that of natural gas generation cost delays investment while simultaneously increasing the value of the investment. We conclude by using this result to find the implicit option value of the DG unit when two sources of uncertainty exist.
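The investment-threshold idea from real options theory can be illustrated with the standard McDonald-Siegel perpetual-option formula, under which higher volatility raises the threshold and so delays investment, consistent with the abstract's finding. The parameter values below are hypothetical, not taken from the paper:

```python
import math

def beta1(r, delta, sigma):
    # Positive root of 0.5 s^2 b (b - 1) + (r - delta) b - r = 0.
    s2 = sigma ** 2
    a = 0.5 - (r - delta) / s2
    return a + math.sqrt(a ** 2 + 2.0 * r / s2)

def investment_threshold(cost, r, delta, sigma):
    # Project value that triggers investment; exceeds cost by the option multiple.
    b = beta1(r, delta, sigma)
    return b / (b - 1.0) * cost

# Hypothetical inputs (not from the paper): $500k DG unit, 4% rate, 3% yield.
thresholds = [investment_threshold(500.0, 0.04, 0.03, s) for s in (0.1, 0.2, 0.3)]
print([round(t, 1) for t in thresholds])
```

The thresholds grow with volatility: waiting has option value, so a more uncertain cost environment demands a larger payoff before committing.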
IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases
Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov
2012-11-01
Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Gas-cooled Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.
Harp, Dylan; Atchley, Adam; Painter, Scott L; Coon, Ethan T.; Wilson, Cathy; Romanovsky, Vladimir E; Rowland, Joel
2016-01-01
The effect of soil property uncertainties on permafrost thaw projections are studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year to year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume and time integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties, although significant
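A calibration-constrained ensemble in the spirit of (though far simpler than) Null-Space Monte Carlo can be sketched by rejection sampling: keep only parameter draws consistent with an observation, then forward-project the retained ensemble and report its spread. The toy thaw-depth model and all numbers below are invented for illustration:

```python
import random

def thaw_depth(porosity, forcing):
    # Toy active-layer model; stands in for the thermal hydrology simulator.
    return 0.5 + 0.8 * (1.0 - porosity) * forcing

random.seed(2)
observed = 0.9        # "measured" thaw depth under the calibration forcing
tolerance = 0.05
# Calibration constraint: keep only porosity draws consistent with the data.
behavioural = [p for p in (random.uniform(0.2, 0.8) for _ in range(5000))
               if abs(thaw_depth(p, 0.7) - observed) < tolerance]
# Forward-project the retained ensemble under a stronger (warmer) forcing.
projections = [thaw_depth(p, 1.0) for p in behavioural]
lo, hi = min(projections), max(projections)
print(len(behavioural), round(lo, 2), round(hi, 2))
```

Even after calibration, the surviving parameter range yields a non-trivial spread in the projection, which is the qualitative point of the abstract.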
Reducing uncertainty in geostatistical description with well testing pressure data
Reynolds, A.C.; He, Nanqun; Oliver, D.S.
1997-08-01
Geostatistics has proven to be an effective tool for generating realizations of reservoir properties conditioned to static data, e.g., core and log data and geologic knowledge. Due to the lack of closely spaced data in the lateral directions, there will be significant variability in reservoir descriptions generated by geostatistical simulation, i.e., significant uncertainty in the reservoir descriptions. In past work, we have presented procedures based on inverse problem theory for generating reservoir descriptions (rock property fields) conditioned to pressure data and geostatistical information represented as prior means for log-permeability and porosity and variograms. Although we have shown that the incorporation of pressure data reduces the uncertainty below the level contained in the geostatistical model based only on static information (the prior model), our previous results did not explicitly account for uncertainties in the prior means and the parameters defining the variogram model. In this work, we investigate how pressure data can help detect errors in the prior means. If errors in the prior means are large and are not taken into account, realizations conditioned to pressure data represent incorrect samples of the a posteriori probability density function for the rock property fields, whereas, if the uncertainty in the prior mean is incorporated properly into the model, one obtains realistic realizations of the rock property fields.
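The effect of an uncertain prior mean can be illustrated with a one-dimensional conjugate Gaussian update, a drastic simplification of the paper's field-scale inverse problem: folding uncertainty in the prior mean into the prior variance lets the (pressure-derived) data pull the posterior further and widens it, rather than anchoring it to a possibly wrong mean.

```python
def posterior(prior_mean, prior_var, obs, obs_var, mean_uncertainty=0.0):
    # Conjugate Gaussian update; uncertainty in the prior mean is folded in
    # as additional prior variance.
    total_prior_var = prior_var + mean_uncertainty ** 2
    w = total_prior_var / (total_prior_var + obs_var)
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = (1.0 - w) * total_prior_var
    return post_mean, post_var

# Hypothetical log-permeability: prior mean 3.0, pressure-derived estimate 4.0.
fixed = posterior(3.0, 0.5, 4.0, 0.2)
uncertain = posterior(3.0, 0.5, 4.0, 0.2, mean_uncertainty=0.7)
print([round(v, 3) for v in fixed], [round(v, 3) for v in uncertain])
```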
Improved lower bound on the entropic uncertainty relation
Jafarpour, Mojtaba; Sabour, Abbass
2011-09-15
We present a lower bound on the entropic uncertainty relation for the distinguished measurements of two observables in a d-dimensional Hilbert space for d up to 5. This bound improves on the best one yet available. The feasibility of extending this improvement to higher dimensions is also discussed.
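Entropic uncertainty relations of this kind can be checked numerically. The sketch below evaluates the classic Maassen-Uffink bound H(A) + H(B) >= -2 log2 c (c being the maximum basis overlap) for a qubit measured in the computational and Hadamard bases; it is the baseline that improved bounds such as the paper's tighten, not the d <= 5 bound itself.

```python
import math

def shannon(probs):
    # Shannon entropy in bits, skipping zero-probability outcomes.
    return -sum(p * math.log2(p) for p in probs if p > 1e-12)

def born_probs(state, basis):
    # Born rule for a real-amplitude qubit state in an orthonormal basis.
    return [sum(b * s for b, s in zip(vec, state)) ** 2 for vec in basis]

h = 1.0 / math.sqrt(2.0)
Z = [[1.0, 0.0], [0.0, 1.0]]      # computational basis
X = [[h, h], [h, -h]]             # Hadamard basis

# Maassen-Uffink bound: H(Z) + H(X) >= -2 log2 c, c = max |<z_i|x_j>|.
c = max(abs(sum(a * b for a, b in zip(zi, xj))) for zi in Z for xj in X)
bound = -2.0 * math.log2(c)

theta = math.pi / 8.0
state = [math.cos(theta), math.sin(theta)]
total = shannon(born_probs(state, Z)) + shannon(born_probs(state, X))
print(round(total, 3), ">=", round(bound, 3))
```

For mutually unbiased qubit bases c = 1/sqrt(2), so the bound is exactly 1 bit; the chosen state attains about 1.2 bits.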
Reducing the uncertainties in particle therapy
Oancea, C.; Shipulin, K. N.; Mytsin, G. V.; Luchin, Y. I.
2015-02-24
The use of fundamental Nuclear Physics in Nuclear Medicine has a significant impact in the fight against cancer. Hadrontherapy is an innovative cancer radiotherapy method using nuclear particles (protons, neutrons and ions) for the treatment of early and advanced tumors. The main goal of proton therapy is to deliver high radiation doses to the tumor volume with minimal damage to healthy tissues and organs. The purpose of this work was to investigate the dosimetric errors in clinical proton therapy dose calculation due to the presence of metallic implants in the treatment plan, and to determine the impact of the errors. The results indicate that the errors introduced by the treatment planning systems are higher than 10% in the prediction of the dose at isocenter when the proton beam is passing directly through a metallic titanium alloy implant. In conclusion, we recommend that pencil-beam algorithms not be used when planning treatment for patients with titanium alloy implants, and to consider implementing methods to mitigate the effects of the implants.
River meander modeling and confronting uncertainty.
Posner, Ari J.
2011-05-01
This study examines the meandering phenomenon as it occurs in media throughout terrestrial, glacial, atmospheric, and aquatic environments. Analysis of the minimum energy principle, along with theories of Coriolis forces and random walks proposed to explain the meandering phenomenon, found that these theories apply at different temporal and spatial scales. Coriolis forces might induce topological changes resulting in meandering planforms. The minimum energy principle might explain how these forces combine to limit the sinuosity to depth and width ratios that are common throughout various media. The study then compares the first-order analytical solutions for the flow field by Ikeda, et al. (1981) and Johannesson and Parker (1989b). Ikeda et al.'s linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g., cohesiveness, stratigraphy, or vegetation density). The developed model was used to predict the evolution of meandering planforms. The modeling results were then analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotations, several measures are formulated to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolutions. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model.
Strydom, Gerhard; Bostelmann, Friederike; Ivanov, Kostadin
2014-10-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. One way to address the uncertainties in the HTGR analysis tools is to assess the sensitivity of critical parameters (such as the calculated maximum fuel temperature during loss of coolant accidents) to a few important input uncertainties. The input parameters were identified by engineering judgement in the past but are today typically based on a Phenomena Identification Ranking Table (PIRT) process. The input parameters can also be derived from sensitivity studies and are then varied in the analysis to find a spread in the parameter of importance. However, there is often no easy way to compensate for these uncertainties. In engineering system design, a common approach for addressing performance uncertainties is to add compensating margins to the system, but with passive properties credited it is not so clear how to apply it in the case of the modular HTGR heat removal path. Other more sophisticated uncertainty modelling approaches, including Monte Carlo analysis, have also been proposed and applied. Ideally one wishes to apply a more fundamental approach to determine the predictive capability and accuracies of coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is a broader acceptance of the use of uncertainty analysis even in safety studies, and it has been accepted by regulators in some cases to replace the traditional conservative analysis. Therefore some safety analysis calculations may use a mixture of these approaches for different parameters depending upon the particular requirements of the analysis problem involved. Sensitivity analysis can, for example, be used to provide information as part of an uncertainty analysis to determine best estimate plus uncertainty results to the
Giant dipole resonance parameters with uncertainties from photonuclear cross sections
Plujko, V.A.; Capote, R.; Gorbachenko, O.M.
2011-09-15
Updated values and corresponding uncertainties of isovector giant dipole resonance (IVGDR or GDR) model parameters are presented that are obtained by the least-squares fitting of theoretical photoabsorption cross sections to experimental data. The theoretical photoabsorption cross section is taken as a sum of the components corresponding to excitation of the GDR and a quasideuteron contribution to the experimental photoabsorption cross section. The present compilation covers experimental data as of January 2010. Highlights: Experimental σ(γ, abs) or a sum of partial cross sections are taken as input to the fitting. Data include contributions from photoproton reactions. Standard (SLO) or modified (SMLO) Lorentzian approaches are used for formulating GDR models. Spherical or axially deformed nuclear shapes are used in the GDR least-squares fit. Values and uncertainties of the SLO and SMLO GDR model parameters are tabulated.
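The least-squares idea can be sketched for the simplest possible case: fitting only the peak cross section of a standard Lorentzian (SLO) shape, with resonance energy and width held fixed, and reading the parameter uncertainty off the residual variance. All numbers are hypothetical, and a real evaluation fits all GDR parameters simultaneously:

```python
import math
import random

def slo_shape(E, Er, Gamma):
    # Standard Lorentzian (SLO) line shape, normalized to peak value 1 at E = Er.
    return E ** 2 * Gamma ** 2 / ((E ** 2 - Er ** 2) ** 2 + E ** 2 * Gamma ** 2)

random.seed(3)
Er, Gamma, true_peak = 14.0, 4.5, 300.0    # hypothetical GDR parameters (MeV, mb)
energies = [8.0 + 0.5 * i for i in range(25)]
data = [true_peak * slo_shape(E, Er, Gamma) + random.gauss(0.0, 5.0)
        for E in energies]

# Linear least squares for the peak cross section, with Er and Gamma held fixed;
# the parameter uncertainty follows from the residual variance.
f = [slo_shape(E, Er, Gamma) for E in energies]
sigma_r = sum(fi * yi for fi, yi in zip(f, data)) / sum(fi * fi for fi in f)
residuals = [yi - sigma_r * fi for fi, yi in zip(f, data)]
s2 = sum(r * r for r in residuals) / (len(data) - 1)
sigma_r_err = math.sqrt(s2 / sum(fi * fi for fi in f))
print(round(sigma_r, 1), "+/-", round(sigma_r_err, 1))
```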
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1983-10-04
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800. 6 figs.
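A rough behavioral model of the patented scheme: the OR gate selects the earlier edge, the AND gate the later one, and a fixed delay pushes the early edge toward the late one, shrinking the spread of arrival times across events. This is an illustrative timing simulation with invented numbers, not the gate-level circuit:

```python
def stage(t1, t2, delay):
    # One stage: an OR gate passes the earlier edge, an AND gate the later one;
    # delaying the earlier edge pushes t_min toward t_max.
    early, late = min(t1, t2), max(t1, t2)
    return tuple(sorted((early + delay, late)))

def spread(pairs):
    # Maximum time uncertainty t_max - t_min over all edges in all events.
    times = [t for pair in pairs for t in pair]
    return max(times) - min(times)

# Event pairs with t1 + t2 = 100 ns (constant sum) and t1 varying per event.
pairs = [(t1, 100 - t1) for t1 in (30, 38, 45, 49)]
before = spread(pairs)
after_pairs = [stage(t1, t2, delay=20) for t1, t2 in pairs]
after = spread(after_pairs)
print(before, after)
```

One 20 ns delay stage halves the spread here; cascading stages compounds the reduction, in line with the 20 to 800 factors cited.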
Uncertainty Quantification and Propagation in Nuclear Density Functional Theory
Schunck, N; McDonnell, J D; Higdon, D; Sarich, J; Wild, S M
2015-03-17
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.
Charges versus taxes under uncertainty: A look from investment theory
Kohlhaas, M.
1996-12-31
The search for more cost-efficient economic instruments for environmental protection has been conducted with different priorities in the USA and most European countries. Whereas in Europe a preference for charges and taxes seems to prevail, in the USA tradeable permits are preferred. Environmental policy has to take into consideration the trade-off between price uncertainty and ecological effectiveness. In the short run and on a national level, ecological effectiveness is less important, and price uncertainty is smaller with environmental taxes. In the long run and in an international framework, environmental constraints and repercussions on world market prices of energy become more important. Since taxes are politically more accepted in Europe, it would make sense to start with an ecological tax reform, which can gradually be transformed or integrated into a system of internationally tradeable permits.
Uncertainties in nuclear transition matrix elements of neutrinoless ββ decay
Rath, P. K.
2013-12-30
To estimate the uncertainties associated with the nuclear transition matrix elements M^(K) (K = 0ν/0N) for the 0+ → 0+ transitions of electron- and positron-emitting modes of neutrinoless ββ decay, a statistical analysis has been performed by calculating sets of eight (twelve) different nuclear transition matrix elements M^(K) in the PHFB model, employing four different parameterizations of a Hamiltonian with pairing plus multipolar effective two-body interaction and two (three) different parameterizations of Jastrow short-range correlations. The averages, in conjunction with their standard deviations, provide an estimate of the uncertainties associated with the nuclear transition matrix elements M^(K) calculated within the PHFB model, the maximum of which turn out to be 13% and 19% owing to the exchange of light and heavy Majorana neutrinos, respectively.
Neutron reactions and climate uncertainties earn Los Alamos scientists DOE Early Career awards
U.S. Department of Energy (DOE) all webpages (Extended Search)
Marian Jandel and Nathan Urban are among the 61 national recipients of the Energy Department's Early Career Research Program awards for 2013. May 10, 2013.
Optimization of Complex Energy System Under Uncertainty | Argonne
U.S. Department of Energy (DOE) all webpages (Extended Search)
PI Name: Mihai Anitescu PI Email: anitescu@mcs.anl.gov Institution: Argonne National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 10,000,000 Year: 2012 Research Domain: Energy Technologies The U.S. electrical power system is at a crossroads between its mission to deliver cheap and safe electrical energy, a strategic aim to increase the penetration of renewable energy, an increased
Optimization of Complex Energy System Under Uncertainty | Argonne
U.S. Department of Energy (DOE) all webpages (Extended Search)
On the left: This shows the Illinois power grid system overlaid on fields portraying electricity prices under a deterministic economic dispatch scenario. Dark blue areas have the lowest prices while red and yellow have the highest. Argonne National Laboratory researchers use a model of the Illinois grid to test algorithms for making power dispatch decisions under uncertainty. On the right: This shows electricity prices in Illinois under a stochastic economic
Comment on ''Improved bounds on entropic uncertainty relations''
Bosyk, G. M.; Portesi, M.; Plastino, A.; Zozor, S.
2011-11-15
We provide an analytical proof of the entropic uncertainty relations presented by J. I. de Vicente and J. Sanchez-Ruiz [Phys. Rev. A 77, 042110 (2008)] and also show that the replacement of Eq. (27) by Eq. (29) in that reference introduces solutions that do not take fully into account the constraints of the problem, which in turn lead to some mistakes in their treatment.
Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for Wind Energy
U.S. Department of Energy (DOE) all webpages (Extended Search)
Errors and Uncertainties in Dose Reconstruction for Radiation Effects Research
Strom, Daniel J.
2008-04-14
Dose reconstruction for studies of the health effects of ionizing radiation have been carried out for many decades. Major studies have included Japanese bomb survivors, atomic veterans, downwinders of the Nevada Test Site and Hanford, underground uranium miners, and populations of nuclear workers. For such studies to be credible, significant effort must be put into applying the best science to reconstructing unbiased absorbed doses to tissues and organs as a function of time. In many cases, more and more sophisticated dose reconstruction methods have been developed as studies progressed. For the example of the Japanese bomb survivors, the dose surrogate “distance from the hypocenter” was replaced by slant range, and then by TD65 doses, DS86 doses, and more recently DS02 doses. Over the years, it has become increasingly clear that an equal level of effort must be expended on the quantitative assessment of uncertainty in such doses, and to reducing and managing uncertainty. In this context, this paper reviews difficulties in terminology, explores the nature of Berkson and classical uncertainties in dose reconstruction through examples, and proposes a path forward for Joint Coordinating Committee for Radiation Effects Research (JCCRER) Project 2.4 that requires a reasonably small level of effort for DOSES-2008.
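The distinction between Berkson and classical errors that the paper explores can be demonstrated with a small simulation: classical error (measurement = truth + noise) attenuates a fitted dose-response slope toward zero, while Berkson error (truth = assigned value + noise) leaves the slope unbiased. The dose-response model and all numbers are invented for illustration:

```python
import random

def fit_slope(xs, ys):
    # Ordinary least-squares slope.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(4)
n, slope = 20000, 2.0

# Classical error: measured dose = true dose + noise -> slope attenuated.
true_dose = [random.gauss(10.0, 2.0) for _ in range(n)]
response = [slope * t + random.gauss(0.0, 1.0) for t in true_dose]
measured = [t + random.gauss(0.0, 2.0) for t in true_dose]
classical_slope = fit_slope(measured, response)

# Berkson error: true dose = assigned dose + noise -> slope unbiased.
assigned = [random.gauss(10.0, 2.0) for _ in range(n)]
b_true = [a + random.gauss(0.0, 2.0) for a in assigned]
b_response = [slope * t + random.gauss(0.0, 1.0) for t in b_true]
berkson_slope = fit_slope(assigned, b_response)

print(round(classical_slope, 2), round(berkson_slope, 2))
```

With equal true-dose and error variances, classical error cuts the fitted slope roughly in half, while the Berkson fit recovers the true slope.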
Composite Multilinearity, Epistemic Uncertainty and Risk Achievement Worth
E. Borgonovo; C. L. Smith
2012-10-01
Risk Achievement Worth is one of the most widely utilized importance measures. RAW is defined as the ratio of the risk metric value attained when a component has failed over the base case value of the risk metric. Traditionally, both the numerator and denominator are point estimates. Relevant literature has shown that inclusion of epistemic uncertainty i) induces notable variability in the point estimate ranking and ii) causes the expected value of the risk metric to differ from its nominal value. We obtain the conditions under which the equality holds between the nominal and expected values of a reliability risk metric. Among these conditions, separability and state-of-knowledge independence emerge. We then study how the presence of epistemic uncertainty affects RAW and the associated ranking. We propose an extension of RAW (called ERAW) which allows one to obtain a ranking robust to epistemic uncertainty. We discuss the properties of ERAW and the conditions under which it coincides with RAW. We apply our findings to a probabilistic risk assessment model developed for the safety analysis of NASA lunar space missions.
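RAW and its epistemic extension can be sketched for a two-component toy system. The distributions below are invented, and the paper's ERAW is reduced here to a simple expectation of RAW over epistemic samples:

```python
import random

def risk(p_a, p_b):
    # Toy risk metric: system fails if component A or component B fails.
    return p_a + p_b - p_a * p_b

def raw(p_a, p_b):
    # Risk Achievement Worth of A: risk with A failed over the base-case risk.
    return risk(1.0, p_b) / risk(p_a, p_b)

# Point-estimate RAW (traditional usage).
point_raw = raw(1e-3, 1e-2)

# With epistemic uncertainty, the failure probabilities are distributions;
# here the RAW expectation is taken over epistemic samples.
random.seed(5)
samples = [raw(random.uniform(5e-4, 5e-3), random.uniform(5e-3, 5e-2))
           for _ in range(20000)]
eraw = sum(samples) / len(samples)
print(round(point_raw, 1), round(eraw, 1))
```

Because RAW is a nonlinear function of the failure probabilities, its expectation over epistemic uncertainty generally differs from the point estimate, which is what motivates a robust ranking.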
Perko, Z.; Gilli, L.; Lathouwers, D.; Kloosterman, J. L.
2013-07-01
Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years however polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is proved to be advantageous both from an accuracy and a computational point of view. As a demonstration the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)
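A one-dimensional polynomial chaos expansion can be sketched with probabilists' Hermite polynomials, computing coefficients by Monte Carlo Galerkin projection and recovering the output mean and variance from them. The adaptive sparse grids and basis construction of the paper are well beyond this toy; the model function is invented:

```python
import math
import random

def he(k, x):
    # Probabilists' Hermite polynomials He_0..He_3.
    return [1.0, x, x * x - 1.0, x ** 3 - 3.0 * x][k]

def model(x):
    # Toy code response to a standard-normal uncertain input.
    return math.exp(0.3 * x)

random.seed(6)
xs = [random.gauss(0.0, 1.0) for _ in range(100000)]
ys = [model(x) for x in xs]
# Galerkin projection by Monte Carlo: c_k = E[f(X) He_k(X)] / k!, since
# E[He_k(X)^2] = k! for standard-normal X.
coeff = [sum(y * he(k, x) for x, y in zip(xs, ys)) / (len(xs) * math.factorial(k))
         for k in range(4)]
pc_mean = coeff[0]
pc_var = sum(c * c * math.factorial(k) for k, c in enumerate(coeff[1:], start=1))
exact_mean = math.exp(0.045)   # exp(0.3 X) is lognormal: mean = exp(sigma^2 / 2)
print(round(pc_mean, 3), round(exact_mean, 3), round(pc_var, 3))
```

Once the coefficients are known, mean and variance come for free from orthogonality, which is the practical appeal of PC surrogates over repeated direct sampling.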
Achieving Robustness to Uncertainty for Financial Decision-making
Barnum, George M.; Van Buren, Kendra L.; Hemez, Francois M.; Song, Peter
2014-01-10
This report investigates the concept of robustness analysis to support financial decision-making. Financial models that forecast future stock returns or market conditions depend on assumptions that might be unwarranted and on variables that might exhibit large fluctuations from their last-known values. The analysis of robustness explores these sources of uncertainty and recommends model settings such that the forecasts used for decision-making are as insensitive as possible to the uncertainty. A proof-of-concept is presented with the Capital Asset Pricing Model. The robustness of model predictions is assessed using info-gap decision theory. Info-gaps are models of uncertainty that express the “distance,” or gap of information, between what is known and what needs to be known in order to support the decision. The analysis yields a description of worst-case stock returns as a function of increasing gaps in our knowledge. The analyst can then decide on the best course of action by trading off worst-case performance against “risk,” which is how much uncertainty they think needs to be accommodated in the future. The report also discusses the Graphical User Interface, developed in the MATLAB® programming environment, which lets the user control the analysis through an easy-to-navigate interface. Three directions of future work are identified to enhance the present software. First, the code should be re-written using the Python scientific programming software. This change will achieve greater cross-platform compatibility and better portability, allow for a more professional appearance, and render the software independent of the commercial license that MATLAB® requires. Second, a capability should be developed to allow users to quickly implement and analyze their own models. This will facilitate application of the software to the evaluation of proprietary financial models. The third proposed enhancement is the ability to evaluate multiple models simultaneously.
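The info-gap idea of worst-case return as a function of a growing uncertainty horizon can be sketched for CAPM. The nominal values, the fractional-error uncertainty box on beta and the market return, and the corner scan are illustrative assumptions, not the report's implementation.

```python
# Info-gap robustness sketch for CAPM: r = rf + beta * (rm - rf).
# Uncertain quantities (beta, rm) lie within a fractional horizon alpha of
# their nominal values; the worst case over that set shrinks as alpha grows.

def capm(rf, beta, rm):
    return rf + beta * (rm - rf)

def worst_case(rf, beta0, rm0, alpha):
    # capm is bilinear in (beta, rm), so the minimum over the uncertainty
    # box is attained at one of its four corners
    return min(capm(rf, beta0 * (1 + sb * alpha), rm0 * (1 + sm * alpha))
               for sb in (-1, 1) for sm in (-1, 1))

rf, beta0, rm0 = 0.02, 1.1, 0.08
for alpha in (0.0, 0.1, 0.2, 0.3):
    print(alpha, round(worst_case(rf, beta0, rm0, alpha), 4))
```

Plotting worst-case return against alpha gives the robustness curve on which the analyst trades off performance against the amount of uncertainty to be accommodated.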
Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh
2010-10-01
The Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and for power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized as needing improvement: (1) subjective judgment in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence, and use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, the above issues still exist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the derivative of the physical solution with respect to any constant parameter). When parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) quantifying numerical errors: new codes that are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA; (2) quantitative PIRT (Q-PIRT) to reduce subjective judgment and improve efficiency: treat numerical errors as special sensitivities alongside other physical uncertainties, and consider only parameters whose uncertainties have large effects on design criteria; (3) greatly reducing computational costs for uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
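Forward sensitivity analysis can be shown on a toy problem: integrating a cooling ODE together with the differentiated sensitivity equation gives dT/dk with a single extra state variable. The cooling model, parameter values, and forward-Euler scheme are illustrative; an actual system code would apply the same idea to its full PDE set.

```python
# FSA sketch: alongside the toy cooling ODE
#   dT/dt = -k (T - Tinf),
# integrate the sensitivity s = dT/dk, which obeys the differentiated equation
#   ds/dt = -(T - Tinf) - k s.

def integrate(k, Tinf=300.0, T0=600.0, dt=0.01, steps=1000):
    T, s = T0, 0.0
    for _ in range(steps):
        # simultaneous forward-Euler update of state and sensitivity
        T, s = T + dt * (-k * (T - Tinf)), s + dt * (-(T - Tinf) - k * s)
    return T, s

k = 0.5
T, s = integrate(k)

# cross-check against a finite difference in k
eps = 1e-6
fd = (integrate(k + eps)[0] - integrate(k - eps)[0]) / (2 * eps)
print(round(s, 4), round(fd, 4))
```

Because the sensitivity equation is integrated with the same scheme as the state, the forward sensitivity matches the finite-difference derivative of the discrete solution without the step-size tuning and repeated runs that finite differencing needs.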
Strydom, Gerhard; Bostelmann, F.
2015-09-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, and modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained.) SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on stochastic sampling methods or on derivative-based methods such as generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties), extensive sensitivity and uncertainty studies are needed to quantify the impact of different sources of uncertainty on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well-validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on
Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie; Eckert-Gallup, Aubrey Celia; Mattie, Patrick D.; Ghosh, S. Tina
2014-02-01
This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards, a ‘high’ source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of “zero” results).
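The LHS-versus-SRS comparison described above can be sketched in miniature: stratifying the unit interval (one stratum per sample) reduces the replicate-to-replicate scatter of a mean estimate relative to simple random sampling. The test function and sample sizes are illustrative toys, not the MACCS2 consequence model.

```python
import random, statistics

def srs(n):
    # Simple Random Sampling on [0, 1)
    return [random.random() for _ in range(n)]

def lhs(n):
    # Latin Hypercube Sampling in 1-D: one stratum per sample, randomly
    # placed within its stratum, then shuffled
    pts = [(i + random.random()) / n for i in range(n)]
    random.shuffle(pts)
    return pts

def replicate_scatter(sampler, n, reps=300):
    # scatter across replicates of the mean estimate of f(u) = u**2
    # (true mean 1/3), mimicking the paper's multiple-replicate check
    means = []
    for _ in range(reps):
        u = sampler(n)
        means.append(sum(x * x for x in u) / n)
    return statistics.stdev(means)

random.seed(2)
print(replicate_scatter(srs, 50), replicate_scatter(lhs, 50))
```

For smooth responses the LHS replicates agree far more closely than SRS replicates of the same size, which is why the study can validate LHS results against SRS ones with modest sample counts.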
Harper, F.T.; Young, M.L.; Miller, L.A.
1995-01-01
Two new probabilistic accident consequence codes, MACCS and COSYMA, completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the third of a three-volume document describing the project and contains descriptions of the probability assessment principles; the expert identification and selection process; the weighting methods used; the inverse modeling methods; case structures; and summaries of the consequence codes.
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; Novascone, Stephen R.; Perez, Danielle M.; Spencer, Benjamin W.; Luzzi, Lelio; Uffelen, Paul Van; Williamson, Richard L.
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
Donald M. McEligot; Hugh M. McIlroy, Jr.; Ryan C. Johnson
2007-11-01
The purpose of the fluid dynamics experiments in the MIR (Matched-Index-of-Refraction) flow system at Idaho National Laboratory (INL) is to develop benchmark databases for the assessment of Computational Fluid Dynamics (CFD) solutions of the momentum equations, scalar mixing, and turbulence models for typical Very High Temperature Reactor (VHTR) plenum geometries in the limiting case of negligible buoyancy and constant fluid properties. The experiments use optical techniques, primarily particle image velocimetry (PIV), in the INL MIR flow system. The benefit of the MIR technique is that it permits optical measurement of flow characteristics in passages and around objects without placing a disturbing transducer in the flow field and without distortion of the optical paths. The objective of the present report is to develop an understanding of the magnitudes of the experimental uncertainties in the results to be obtained in such experiments. Unheated MIR experiments are a first step when the geometry is complicated; one does not want to use a computational technique that cannot even handle constant properties properly. This report addresses the general background, requirements for benchmark databases, estimation of experimental uncertainties in mean velocities and turbulence quantities, the MIR experiment, PIV uncertainties, positioning uncertainties, and other contributing measurement uncertainties.
Zettlemoyer, M.D.
1990-01-01
The Air Force Toxic Chemical Dispersion (AFTOX) model is a Gaussian puff dispersion model that predicts plumes, concentrations, and hazard distances of toxic chemical spills. A measurement uncertainty propagation formula derived by Freeman et al. (1986) is used within AFTOX to estimate the concentration uncertainties that result from data input uncertainties in wind speed, spill height, emission rate, and the horizontal and vertical Gaussian dispersion parameters, and the results are compared to true uncertainties as estimated by standard deviations computed from Monte Carlo simulations. The measurement uncertainty propagation formula was found to overestimate the uncertainty in AFTOX-calculated concentrations by at least 350 percent, with the overestimates worsening with increasing stability and/or increasing measurement uncertainty.
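The comparison of a first-order propagation formula against Monte Carlo can be sketched on a simplified centerline Gaussian-plume expression. The concentration formula, input values, and 10% relative uncertainties below are illustrative stand-ins for the AFTOX inputs, not Freeman et al.'s formula itself.

```python
import math, random

# Centerline Gaussian-plume concentration, C = Q / (pi * u * sigma_y * sigma_z),
# a simplified stand-in for the AFTOX formulation.
def conc(Q, u, sy, sz):
    return Q / (math.pi * u * sy * sz)

nom = dict(Q=10.0, u=3.0, sy=50.0, sz=20.0)
rel = dict(Q=0.10, u=0.10, sy=0.10, sz=0.10)   # relative 1-sigma uncertainties

# First-order propagation: for a pure product/quotient, relative variances add.
prop = math.sqrt(sum(r * r for r in rel.values()))

# Monte Carlo reference for the same input uncertainties
random.seed(3)
cs = [conc(*[nom[k] * (1 + random.gauss(0, rel[k])) for k in ("Q", "u", "sy", "sz")])
      for _ in range(100000)]
mean = sum(cs) / len(cs)
mc = (sum((c - mean) ** 2 for c in cs) / (len(cs) - 1)) ** 0.5 / mean
print(round(prop, 3), round(mc, 3))
```

The gap between the linearized estimate and the Monte Carlo standard deviation grows with the size of the input uncertainties, which is the kind of discrepancy the study quantifies for AFTOX.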
A Two-Step Approach to Uncertainty Quantification of Core Simulators...
Office of Scientific and Technical Information (OSTI)
Ideas underlying quantification of margins and uncertainties(QMU): a white paper.
Helton, Jon Craig; Trucano, Timothy Guy; Pilch, Martin M.
2006-09-01
This report describes key ideas underlying the application of Quantification of Margins and Uncertainties (QMU) to nuclear weapons stockpile lifecycle decisions at Sandia National Laboratories. While QMU is a broad process and methodology for generating critical technical information to be used in stockpile management, this paper emphasizes one component, which is information produced by computational modeling and simulation. In particular, we discuss the key principles of developing QMU information in the form of Best Estimate Plus Uncertainty, the need to separate aleatory and epistemic uncertainty in QMU, and the risk-informed decision making that is best suited for decisive application of QMU. The paper is written at a high level, but provides a systematic bibliography of useful papers for the interested reader to deepen their understanding of these ideas.
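A central QMU quantity is the ratio of margin to uncertainty for a performance gate; a minimal sketch follows. The threshold, best estimate, and uncertainty band are illustrative numbers, not stockpile data, and real QMU separates aleatory from epistemic contributions rather than using one aggregate U.

```python
# QMU sketch: a margin-to-uncertainty "confidence ratio" for one gate.
requirement = 100.0          # threshold the metric must stay below
best_estimate = 82.0         # best-estimate value of the metric
U = 9.0                      # aggregate uncertainty at the chosen coverage

M = requirement - best_estimate   # margin
CR = M / U                        # confidence ratio; CR > 1 indicates margin
print(M, round(CR, 2), "adequate" if CR > 1 else "inadequate")
```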
TRITIUM UNCERTAINTY ANALYSIS FOR SURFACE WATER SAMPLES AT THE SAVANNAH RIVER SITE
Atkinson, R.
2012-07-31
Radiochemical analyses of surface water samples, in the framework of Environmental Monitoring, have associated uncertainties for the radioisotopic results reported. These uncertainty analyses pertain to the tritium results from surface water samples collected at five locations on the Savannah River near the U.S. Department of Energy's Savannah River Site (SRS). Uncertainties can result from the field-sampling routine, can be incurred during transport due to the physical properties of the sample, from equipment limitations, and from the measurement instrumentation used. The uncertainty reported by the SRS in their Annual Site Environmental Report currently considers only the counting uncertainty in the measurements, which is the standard reporting protocol for radioanalytical chemistry results. The focus of this work is to provide an overview of all uncertainty components associated with SRS tritium measurements, estimate the total uncertainty according to ISO 17025, and to propose additional experiments to verify some of the estimated uncertainties. The main uncertainty components discovered and investigated in this paper are tritium absorption or desorption in the sample container, HTO/H{sub 2}O isotopic effect during distillation, pipette volume, and tritium standard uncertainty. The goal is to quantify these uncertainties and to establish a combined uncertainty in order to increase the scientific depth of the SRS Annual Site Environmental Report.
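The combination of uncertainty components described above follows the standard GUM/ISO root-sum-square rule for uncorrelated terms. The component names mirror those listed in the abstract, but the numerical values are illustrative placeholders, not the SRS budget.

```python
import math

# Combined standard uncertainty: uncorrelated relative components add in
# quadrature; the expanded uncertainty uses coverage factor k = 2 (~95%).
components = {
    "counting statistics": 0.030,
    "sample container sorption/desorption": 0.010,
    "HTO/H2O distillation isotope effect": 0.008,
    "pipette volume": 0.004,
    "tritium standard": 0.012,
}
u_c = math.sqrt(sum(u * u for u in components.values()))
U = 2 * u_c
print(round(u_c, 4), round(U, 4))
```

Because components add in quadrature, the total is dominated by the largest term (here counting statistics), which is why reporting only the counting uncertainty can still be a reasonable first approximation.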
Entropic uncertainty relations for the ground state of a coupled system
Santhanam, M.S.
2004-04-01
There is renewed interest in the uncertainty principle, reformulated from the information-theoretic point of view in the form of entropic uncertainty relations. These relations have been studied for various integrable systems as a function of their quantum numbers. In this work, focusing on the ground state of a nonlinear, coupled Hamiltonian system, we show that approximate eigenstates can be constructed within the framework of adiabatic theory. Using the adiabatic eigenstates, we estimate the information entropies and their sum as a function of the nonlinearity parameter. We also briefly look at the information entropies of the highly excited states in the system.
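As background for the entropic sum studied above, the uncoupled harmonic-oscillator ground state is the textbook reference point: both its position and momentum densities are Gaussians, and their Shannon entropies saturate the Bialynicki-Birula and Mycielski bound S_x + S_p >= 1 + ln(pi) (in units with hbar = m = omega = 1). This is a standard result, not the paper's coupled system.

```python
import math

def gaussian_entropy(var):
    # differential (Shannon) entropy of a zero-mean Gaussian with variance var
    return 0.5 * math.log(2 * math.pi * math.e * var)

# Harmonic-oscillator ground state: |psi(x)|^2 and |phi(p)|^2 are Gaussians
# with variance 1/2 each (hbar = m = omega = 1).
S_x = gaussian_entropy(0.5)
S_p = gaussian_entropy(0.5)
print(round(S_x + S_p, 6), round(1 + math.log(math.pi), 6))
```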
Identifying and bounding uncertainties in nuclear reactor thermal power calculations
Phillips, J.; Hauser, E.; Estrada, H.
2012-07-01
Determination of the thermal power generated in the reactor core of a nuclear power plant is a critical element in the safe and economic operation of the plant. Direct measurement of the reactor core thermal power is made using neutron flux instrumentation; however, this instrumentation requires frequent calibration due to changes in the measured flux caused by fuel burn-up, flux pattern changes, and instrumentation drift. To calibrate the nuclear instruments, steam plant calorimetry, a process of performing a heat balance around the nuclear steam supply system, is used. There are four basic elements involved in the calculation of thermal power based on steam plant calorimetry: The mass flow of the feedwater from the power conversion system, the specific enthalpy of that feedwater, the specific enthalpy of the steam delivered to the power conversion system, and other cycle gains and losses. Of these elements, the accuracy of the feedwater mass flow and the feedwater enthalpy, as determined from its temperature and pressure, are typically the largest contributors to the calorimetric calculation uncertainty. Historically, plants have been required to include a margin of 2% in the calculation of the reactor thermal power for the licensed maximum plant output to account for instrumentation uncertainty. The margin is intended to ensure a cushion between operating power and the power for which safety analyses are performed. Use of approved chordal ultrasonic transit-time technology to make the feedwater flow and temperature measurements (in place of traditional differential-pressure-based instruments and resistance temperature detectors [RTDs]) allows for nuclear plant thermal power calculations accurate to 0.3%-0.4% of plant rated power. This improvement in measurement accuracy has allowed many plant operators in the U.S. and around the world to increase plant power output through Measurement Uncertainty Recapture (MUR) up-rates of up to 1.7% of rated power, while also
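The heat-balance arithmetic above can be sketched as follows. The flow rate, enthalpies, and uncertainty values are illustrative placeholders at a typical PWR scale, and smaller cycle gains and losses are ignored.

```python
import math

# Steam-plant calorimetry sketch: core thermal power from a heat balance,
#   P = mdot * (h_steam - h_fw),
# ignoring the smaller cycle gains and losses.
mdot = 1900.0                          # feedwater mass flow, kg/s
h_steam, h_fw = 2770.0, 960.0          # specific enthalpies, kJ/kg
u_mdot = 0.003                         # relative uncertainty, ultrasonic flow
u_dh = 0.002                           # relative uncertainty of enthalpy rise

P = mdot * (h_steam - h_fw) / 1e3      # kW -> MW
u_P = math.sqrt(u_mdot ** 2 + u_dh ** 2)   # first-order, uncorrelated terms
print(round(P), round(100 * u_P, 2))       # thermal MW and percent
```

With flow and enthalpy uncertainties at these levels the combined figure lands in the 0.3%-0.4% range cited in the abstract, against the historical 2% margin.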
Molecular nonlinear dynamics and protein thermal uncertainty quantification
Xia, Kelin [Department of Mathematics, Michigan State University, Michigan 48824 (United States)]; Wei, Guo-Wei, E-mail: wei@math.msu.edu [Department of Mathematics, Department of Electrical and Computer Engineering, and Department of Biochemistry and Molecular Biology, Michigan State University, Michigan 48824 (United States)]
2014-03-15
This work introduces molecular nonlinear dynamics (MND) as a new approach for describing protein folding and aggregation. By using a mode system, we show that the MND of disordered proteins is chaotic while that of folded proteins exhibits intrinsically low dimensional manifolds (ILDMs). The stability of ILDMs is found to strongly correlate with protein energies. We propose a novel method for protein thermal uncertainty quantification based on persistently invariant ILDMs. Extensive comparison with experimental data and the state-of-the-art methods in the field validate the proposed new method for protein B-factor prediction.
PROBABILISTIC SENSITIVITY AND UNCERTAINTY ANALYSIS WORKSHOP SUMMARY REPORT
Seitz, R
2008-06-25
Stochastic or probabilistic modeling approaches are being applied more frequently in the United States and globally to quantify uncertainty and enhance understanding of model response in performance assessments for disposal of radioactive waste. This increased use has resulted in global interest in sharing results of research and applied studies that have been completed to date. This technical report reflects the results of a workshop that was held to share results of research and applied work related to performance assessments conducted at United States Department of Energy sites. Key findings of this research and applied work are discussed and recommendations for future activities are provided.
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
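The model-averaging step described above can be sketched as follows: alternative model predictions are combined with posterior model probabilities, and the total predictive variance picks up both within-model and between-model terms. The model names echo the variogram alternatives in the abstract, but all numbers and the softmax-of-information-criterion weighting are illustrative assumptions.

```python
import math

# Model-averaging sketch: name -> (prediction, predictive variance, IC value).
models = {
    "exponential variogram": (1.8, 0.20, 10.2),
    "spherical variogram":   (2.1, 0.15, 11.0),
    "power variogram":       (2.6, 0.30, 14.5),
}

# Posterior model probabilities from information-criterion values, exp(-IC/2),
# normalized; models with negligible weight could be eliminated, as in the study.
w = {m: math.exp(-ic / 2) for m, (_, _, ic) in models.items()}
Z = sum(w.values())
w = {m: v / Z for m, v in w.items()}

mean = sum(w[m] * models[m][0] for m in models)
# total variance = weighted within-model variance + between-model spread
var = sum(w[m] * (models[m][1] + (models[m][0] - mean) ** 2) for m in models)
print({m: round(w[m], 3) for m in w}, round(mean, 3), round(var, 3))
```

The between-model term is exactly the conceptual-model uncertainty that is lost when a single "best" model is selected and the rest discarded.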
Uncertainty Analysis of RELAP5-3D
Alexandra E Gertman; Dr. George L Mesina
2012-07-01
As world-wide energy consumption continues to increase, so does the demand for alternative energy sources, such as nuclear energy. Nuclear power plants currently supply over 370 gigawatts of electricity, and more than 60 new nuclear reactors have been commissioned by 15 different countries. The primary concern for nuclear power plant operation and licensing has been safety. The safe operation of nuclear power plants is no simple matter: it involves the training of operators, the design of the reactor, and equipment and design upgrades throughout the lifetime of the reactor. To safely design, operate, and understand nuclear power plants, industry and government alike have relied upon best-estimate simulation codes, which allow an accurate model of any given plant to be created with well-defined margins of safety. The most widely used of these best-estimate simulation codes in the nuclear power industry is RELAP5-3D. Our project focused on improving the modeling capabilities of RELAP5-3D by developing uncertainty estimates for its calculations. This work involved analyzing high-, medium-, and low-ranked phenomena from an INL PIRT on a small-break loss-of-coolant accident, as well as an analysis of a large-break loss-of-coolant accident. Statistical analyses were performed using correlation coefficients. To perform the studies, computer programs were written that modify a template RELAP5 input deck to produce one deck for each combination of key input parameters. Python scripting enabled the running of the generated input files with RELAP5-3D on INL’s massively parallel cluster system. Data from the studies were collected and analyzed with SAS. A summary of the results of our studies is presented.
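The correlation-coefficient step of such a study can be sketched: inputs are varied jointly across generated decks, and each input is ranked by its correlation with the output of interest. The input names and the linear surrogate response below are hypothetical, standing in for actual RELAP5-3D runs.

```python
import random, statistics

# Sampling-based sensitivity sketch: rank inputs by their Pearson correlation
# with a response, as done with the generated RELAP5 input decks.
def corr(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(4)
n = 2000
gap = [random.uniform(0.9, 1.1) for _ in range(n)]      # hypothetical inputs
power = [random.uniform(0.95, 1.05) for _ in range(n)]  # (normalized)
noise = [random.gauss(0, 0.01) for _ in range(n)]
# surrogate response: strongly driven by power, weakly by gap conductance
resp = [1.0 * p + 0.2 * g + e for g, p, e in zip(gap, power, noise)]
print(round(corr(gap, resp), 2), round(corr(power, resp), 2))
```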
Design Features and Technology Uncertainties for the Next Generation Nuclear Plant
John M. Ryskamp; Phil Hildebrandt; Osamu Baba; Ron Ballinger; Robert Brodsky; Hans-Wolfgang Chi; Dennis Crutchfield; Herb Estrada; Jeane-Claude Garnier; Gerald Gordon; Richard Hobbins; Dan Keuter; Marilyn Kray; Philippe Martin; Steve Melancon; Christian Simon; Henry Stone; Robert Varrin; Werner von Lensa
2004-06-01
This report presents the conclusions, observations, and recommendations of the Independent Technology Review Group (ITRG) regarding design features and important technology uncertainties associated with very-high-temperature nuclear system concepts for the Next Generation Nuclear Plant (NGNP). The ITRG performed its reviews during the period November 2003 through April 2004.
Quantifying uncertainty in material damage from vibrational data
Butler, T.; Huhtala, A.; Juntunen, M.
2015-02-15
The response of a vibrating beam to a force depends on many physical parameters including those determined by material properties. Damage caused by fatigue or cracks results in local reductions in stiffness parameters and may drastically alter the response of the beam. Data obtained from the vibrating beam are often subject to uncertainties and/or errors typically modeled using probability densities. The goal of this paper is to estimate and quantify the uncertainty in damage modeled as a local reduction in stiffness using uncertain data. We present various frameworks and methods for solving this parameter determination problem. We also describe a mathematical analysis to determine and compute useful output data for each method. We apply the various methods in a specified sequence that allows us to interface the various inputs and outputs of these methods in order to enhance the inferences drawn from the numerical results obtained from each method. Numerical results are presented using both simulated and experimentally obtained data from physically damaged beams.
The Uncertainty in the Local Seismic Response Analysis
Pasculli, A.; Pugliese, A.; Romeo, R. W.; Sano, T.
2008-07-08
This paper shows the influence exerted on local seismic response analysis by dispersion and uncertainty in the seismic input as well as in the dynamic properties of soils. As a first attempt, a 1D numerical model is developed that accounts for both the aleatory nature of the input motion and the stochastic variability of the dynamic properties of soils. The seismic input is introduced in a non-conventional way, through a power spectral density, for which an elastic response spectrum, derived, for instance, by a conventional seismic hazard analysis, is required with an appropriate level of reliability. The uncertainty in the geotechnical properties of soils is instead investigated through a well-known simulation technique (the Monte Carlo method) for the construction of statistical ensembles. The result of a conventional local seismic response analysis, given by a deterministic elastic response spectrum, is replaced in our approach by a set of statistical elastic response spectra, each one characterized by an appropriate probability of being reached or exceeded. The analyses have been carried out for a well-documented real case study. Lastly, we anticipate a 2D numerical analysis to investigate the spatial variability of soil properties as well.
The role of the uncertainty in code development
Barre, F.
1997-07-01
From a general point of view, all the results of a calculation should be given with their uncertainty. This is of utmost importance in nuclear safety, where the sizing of the safety systems, and therefore the protection of the population and the environment, depends essentially on the calculation results. Until recent years, safety analysis was performed with conservative tools. Two types of criticism can be made. Firstly, conservative margins can be too large, and it may be possible to reduce the cost of the plant or its operation with a best-estimate approach. Secondly, some of the conservative hypotheses may not really be conservative over the full range of physical events that can occur during an accident. Simpson gives an interesting example: in some cases, overestimation of the residual power during a small-break LOCA can lead to an overprediction of the swell level and thus an overprediction of the core cooling, which is the opposite of a conservative prediction. A last question is: does the accumulation of conservative hypotheses for a problem always give a conservative result? Two-phase flow physics, mainly dealing with situations of mechanical and thermal non-equilibrium, is too complicated to answer these questions with simple engineering judgment. The objective of this paper is to review the quantification of the uncertainties that can be made during code development and validation.
Data Filtering Impact on PV Degradation Rates and Uncertainty (Poster)
Jordan, D. C.; Kurtz, S. R.
2012-03-01
To sustain the commercial success of photovoltaics (PV), it is vital to know how power output decreases with time. In order to predict power delivery, degradation rates must be determined accurately. Data filtering, i.e., any treatment of the data prior to assessing long-term field behavior, is discussed as part of a more comprehensive uncertainty analysis and can be one of the greatest sources of uncertainty in long-term performance studies. Several distinct filtering methods, such as outlier removal and inclusion of only sunny days, were examined on several different metrics, such as PVUSA, performance ratio, and the ratio of DC power to plane-of-array irradiance (uncorrected and temperature-corrected). PVUSA showed the highest sensitivity, while the temperature-corrected power-over-irradiance ratio was found to be the least sensitive to data-filtering conditions. Using this ratio, it is demonstrated that quantifying degradation rates with a statistical accuracy of +/- 0.2%/year within 4 years of field data is possible on two crystalline silicon and two thin-film systems.
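A minimal sketch of why filtering changes fitted degradation rates: a synthetic performance-ratio series with a known -0.5%/yr trend is contaminated with outliers, then fitted with and without a simple threshold filter. The metric, filter, and numbers are illustrative assumptions, not the study's actual methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four years of synthetic monthly performance-ratio data with a known
# degradation rate of -0.5%/year plus measurement noise.
years = np.arange(48) / 12.0
pr = 1.0 - 0.005 * years + rng.normal(0.0, 0.005, years.size)
pr[::10] *= 0.8                      # inject soiling/outage outliers

def rate_percent_per_year(t, y):
    slope, _ = np.polyfit(t, y, 1)   # linear fit; slope in fraction/year
    return 100.0 * slope

raw = rate_percent_per_year(years, pr)
keep = pr > 0.9                      # crude outlier filter (hypothetical threshold)
filtered = rate_percent_per_year(years[keep], pr[keep])
print(f"raw fit: {raw:.2f} %/yr   filtered fit: {filtered:.2f} %/yr")
```

The filtered fit recovers the injected rate closely, while the raw fit is at the mercy of the outliers; real studies compare many such filter/metric combinations.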
Analysis and Reduction of Complex Networks Under Uncertainty.
Ghanem, Roger G
2014-07-31
This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University), and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of: (1) methodology and algorithms to address the eigenvalue problem, which is significant for the stability of networks under stochastic perturbations; (2) methodology and algorithms to characterize probability measures on graph structures with random flows, an important problem in characterizing random demand (encountered in smart grids) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov chains (of ubiquitous relevance); and (3) methodology and algorithms for treating inequalities in uncertain systems, an important problem in the context of models for material failure and network flows under uncertainty, where conditions of failure or flow are described as inequalities between the state variables.
WE-B-19A-01: SRT II: Uncertainties in SRT
Dieterich, S; Schlesinger, D; Geneser, S
2014-06-15
SRS delivery has undergone major technical changes in the last decade, transitioning from predominantly frame-based treatment delivery to image-guided, frameless SRS. It is important for medical physicists working in SRS to understand the magnitude and sources of uncertainty involved in delivering SRS treatments across a multitude of technologies (Gamma Knife, CyberKnife, linac-based SRS, and protons). Sources of SRS planning and delivery uncertainty include dose calculation, image fusion, and intra- and inter-fraction motion. Dose calculations for small fields are particularly difficult because of the lack of electronic equilibrium and the greater effect of inhomogeneities within and near the PTV. Going frameless introduces greater setup uncertainty and allows for potentially increased intra- and inter-fraction motion. The increased use of multiple imaging modalities to determine the tumor volume necessitates (deformable) image and contour fusion, and the uncertainties introduced in the image registration process further contribute to overall treatment-planning uncertainty. Each of these uncertainties must be quantified and their impact on treatment delivery accuracy understood. If necessary, the uncertainties may then be accounted for during treatment planning, either through techniques that make the uncertainty explicit or by the appropriate addition of PTV margins. Further complicating matters, the statistics of 1-5 fraction SRS treatments differ from traditional margin recipes relying on Poisson statistics. In this session, we will discuss uncertainties introduced during each step of the SRS treatment planning and delivery process and present margin recipes to appropriately account for such uncertainties. Learning Objectives: To understand the major contributors to the total delivery uncertainty in SRS for Gamma Knife, CyberKnife, and linac-based SRS. To learn the various uncertainties introduced by image fusion, deformable image registration, and contouring
Quantification of initial-data uncertainty on a shock-accelerated gas cylinder
Tritschler, V. K.; Avdonin, A.; Hickel, S.; Hu, X. Y.; Adams, N. A.
2014-02-15
We quantify initial-data uncertainties on a shock accelerated heavy-gas cylinder by two-dimensional well-resolved direct numerical simulations. A high-resolution compressible multicomponent flow simulation model is coupled with a polynomial chaos expansion to propagate the initial-data uncertainties to the output quantities of interest. The initial flow configuration follows previous experimental and numerical works of the shock accelerated heavy-gas cylinder. We investigate three main initial-data uncertainties, (i) shock Mach number, (ii) contamination of SF{sub 6} with acetone, and (iii) initial deviations of the heavy-gas region from a perfect cylindrical shape. The impact of initial-data uncertainties on the mixing process is examined. The results suggest that the mixing process is highly sensitive to input variations of shock Mach number and acetone contamination. Additionally, our results indicate that the measured shock Mach number in the experiment of Tomkins et al. [An experimental investigation of mixing mechanisms in shock-accelerated flow, J. Fluid. Mech. 611, 131 (2008)] and the estimated contamination of the SF{sub 6} region with acetone [S. K. Shankar, S. Kawai, and S. K. Lele, Two-dimensional viscous flow simulation of a shock accelerated heavy gas cylinder, Phys. Fluids 23, 024102 (2011)] exhibit deviations from those that lead to best agreement between our simulations and the experiment in terms of overall flow evolution.
Chapman, J. B.; Pohlmann, K.; Pohll, G.; Hassan, A.; Sanders, P.; Sanchez, M.; Jaunarajs, S.
2002-02-26
The Faultless underground nuclear test, conducted in central Nevada, is the site of an ongoing environmental remediation effort that has successfully progressed through numerous technical challenges due to close cooperation between the U.S. Department of Energy (DOE) National Nuclear Security Administration and the State of Nevada Division of Environmental Protection (NDEP). The challenges faced at this site are similar to those of many other sites of groundwater contamination: substantial uncertainties due to the relative lack of data from a highly heterogeneous subsurface environment. Knowing when, where, and how to devote the often enormous resources needed to collect new data is a common problem, and one that can cause remediators and regulators to disagree and stall progress toward closing sites. For Faultless, a variety of numerical modeling techniques and statistical tools are used to provide the information needed for DOE and NDEP to confidently move forward along the remediation path to site closure. A general framework for remediation was established in an agreement and consent order between DOE and the State of Nevada that recognized that no cost-effective technology currently exists to remove the source of contaminants in nuclear cavities. Rather, the emphasis of the corrective action is on identifying the impacted groundwater resource and ensuring protection of human health and the environment from the contamination through monitoring. As a result, groundwater flow and transport modeling is the linchpin in the remediation effort. An early issue was whether or not new site data should be collected via drilling and testing prior to modeling. After several iterations of the Corrective Action Investigation Plan, all parties agreed that sufficient data existed to support a flow and transport model for the site. Though several aspects of uncertainty were included in the subsequent modeling work, concerns remained regarding uncertainty in individual
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; Novascone, Stephen R.; Perez, Danielle M.; Spencer, Benjamin W.; Luzzi, Lelio; Uffelen, Paul Van; Williamson, Richard L.
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
Reda, I.
2011-07-01
The uncertainty of measuring solar irradiance is fundamentally important for solar energy and atmospheric science applications. Without an uncertainty statement, the quality of a result, model, or testing method cannot be quantified, the chain of traceability is broken, and confidence cannot be maintained in the measurement. Measurement results are incomplete and meaningless without a statement of the estimated uncertainty with traceability to the International System of Units (SI) or to another internationally recognized standard. This report explains how to use the international Guide to the Expression of Uncertainty in Measurement (GUM) to calculate such uncertainty. The report also shows that without appropriate corrections to solar measuring instruments (solar radiometers), the uncertainty of measuring shortwave solar irradiance can exceed 4% using present state-of-the-art pyranometers and 2.7% using present state-of-the-art pyrheliometers. Finally, the report demonstrates that by applying the appropriate corrections, uncertainties may be reduced by at least 50%. The uncertainties, with or without the appropriate corrections, might not be compatible with the needs of solar energy and atmospheric science applications; yet, this report may shed some light on the sources of uncertainties and the means to reduce overall uncertainty in measuring solar irradiance.
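The GUM procedure applied in such reports boils down to root-sum-square combination of the component standard uncertainties, followed by an expanded uncertainty with a coverage factor. A sketch with hypothetical component values (not an actual radiometer budget):

```python
import math

# Illustrative GUM-style Type B combination for a pyranometer
# measurement; component values below are hypothetical.
components = {                 # standard uncertainties, % of reading
    "calibration": 1.0,
    "zenith/cosine response": 1.5,
    "spectral response": 0.5,
    "temperature": 0.5,
    "data logger": 0.2,
}

# Combined standard uncertainty: root-sum-square of the components
# (assuming unit sensitivity coefficients and independence).
u_c = math.sqrt(sum(u**2 for u in components.values()))
U = 2.0 * u_c                  # expanded uncertainty, coverage factor k=2 (~95%)
print(f"u_c = {u_c:.2f}%   U(k=2) = {U:.2f}%")
```

Note how a single dominant component (here the cosine response) controls the total; this is why correcting the largest instrument errors can halve the overall uncertainty.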
Science based stockpile stewardship, uncertainty quantification, and fission fragment beams
Stoyer, M A; McNabb, D; Burke, J; Bernstein, L A; Wu, C Y
2009-09-14
Stewardship of this nation's nuclear weapons is predicated on developing a fundamental scientific understanding of the physics and chemistry required to describe weapon performance without the need to resort to underground nuclear testing and to predict expected future performance as a result of intended or unintended modifications. In order to construct more reliable models, underground nuclear test data is being reanalyzed in novel ways. The extent to which underground experimental data can be matched with simulations is one measure of the credibility of our capability to predict weapon performance. To improve the interpretation of these experiments with quantified uncertainties, improved nuclear data is required. As an example, the fission yield of a device was often determined by measuring fission products. Conversion of the measured fission products to yield was accomplished through explosion code calculations (models) and a good set of nuclear reaction cross-sections. Because of the unique high-fluence environment of an exploding nuclear weapon, many reactions occurred on radioactive nuclides, for which only theoretically calculated cross-sections are available. Inverse kinematics reactions at CARIBU offer the opportunity to measure cross-sections on unstable neutron-rich fission fragments and thus improve the quality of the nuclear reaction cross-section sets. One of the fission products measured was {sup 95}Zr, the accumulation of all mass 95 fission products of Y, Sr, Rb and Kr (see Fig. 1). Subsequent neutron-induced reactions on these short lived fission products were assumed to cancel out - in other words, the destruction of mass 95 nuclides was more or less equal to the production of mass 95 nuclides. If a {sup 95}Sr was destroyed by an (n,2n) reaction it was also produced by (n,2n) reactions on {sup 96}Sr, for example. However, since these nuclides all have fairly short half-lives (seconds to minutes or even less), no experimental nuclear reaction
Uncertainty of silicon 1-MeV damage function
Danjaji, M.B.; Griffin, P.J.
1997-02-01
The electronics radiation hardness-testing community uses the ASTM E722-93 Standard Practice to define the energy dependence of the nonionizing neutron damage to silicon semiconductors. This neutron displacement damage response function is defined to be equal to the silicon displacement kerma as calculated from the ORNL Si cross-section evaluation. Experimental work has shown that observed damage ratios at various test facilities agree with the defined response function to within 5%. Here, a covariance matrix for the silicon 1-MeV neutron displacement damage function is developed. This uncertainty data will support the electronic radiation hardness-testing community and will permit silicon displacement damage sensors to be used in least squares spectrum adjustment codes.
Uncertainty analysis for probabilistic pipe fracture evaluations in LBB applications
Rahman, S.; Ghadiali, N.; Wilkowski, G.
1997-04-01
During the NRC's Short Cracks in Piping and Piping Welds Program at Battelle, a probabilistic methodology was developed to conduct fracture evaluations of circumferentially cracked pipes for application to leak-rate detection. Later, in the IPIRG-2 program, several parameters that may affect leak-before-break and other pipe flaw evaluations were identified. This paper presents new results from several uncertainty analyses evaluating the effects of normal operating stresses, normal plus safe-shutdown earthquake stresses, off-centered cracks, restraint of pressure-induced bending, and dynamic and cyclic loading rates on the conditional failure probability of piping systems in BWRs and PWRs. For each parameter, the sensitivity of the conditional probability of failure, and hence its importance to probabilistic leak-before-break evaluations, was determined.
Uncertainty in terahertz time-domain spectroscopy measurement
Withayachumnankul, Withawat; Fischer, Bernd M.; Lin, Hungyen; Abbott, Derek
2008-06-15
Measurements of optical constants at terahertz--or T-ray--frequencies have been performed extensively using terahertz time-domain spectroscopy (THz-TDS). Spectrometers, together with physical models explaining the interaction between a sample and T-ray radiation, are progressively being developed. Nevertheless, measurement errors in the optical constants, so far, have not been systematically analyzed. This situation calls for a comprehensive analysis of measurement uncertainty in THz-TDS systems. The sources of error existing in a terahertz spectrometer and throughout the parameter estimation process are identified. The analysis herein quantifies the impact of each source on the output optical constants. The resulting analytical model is evaluated against experimental THz-TDS data.
Long-term energy generation planning under uncertainty
Escudero, L.F.; Paradinas, I.; Salmeron, J.; Sanchez, M.
1998-07-01
In this work the authors address the hydro-thermal coordination problem under uncertainty in generator availability, fuel costs, exogenous water inflow, and energy demand. The objective is to minimize the system operating cost. The decision variables are the fuel procurement for each thermal generation site, the energy generated by each thermal and hydro generator, and the released and spilled water from reservoirs. Control variables are the water stored in reservoirs and the fuel stored at thermal plants at the end of each time period. The main contribution is the simultaneous inclusion of the hydro-network and thermal-generation constraints, together with the stochastic treatment of the aforementioned parameters. The authors report their computational experience on real problems drawn from the Spanish hydro-thermal generation system. One test case includes 85 generators (42 thermal plants with a combined capacity of 27,084 MW) and 57 reservoirs.
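The deterministic core of such a hydro-thermal coordination problem is a linear program: minimize fuel cost subject to demand balance and a water budget; the stochastic version replicates these constraints over scenarios. A two-period toy instance with hypothetical data (one thermal plant, one reservoir), sketched with SciPy:

```python
from scipy.optimize import linprog

# Variables: x = [g1, g2, h1, h2] = thermal and hydro generation (MWh)
# in periods 1 and 2.  All data are hypothetical.
cost = [10.0, 10.0, 0.0, 0.0]        # fuel cost applies only to thermal energy

A_eq = [[1, 0, 1, 0],                # demand balance, period 1
        [0, 1, 0, 1]]                # demand balance, period 2
b_eq = [100.0, 150.0]                # demand (MWh)

A_ub = [[0, 0, 1, 1]]                # total hydro release limited by
b_ub = [120.0]                       # the available water budget (MWh-equivalent)

bounds = [(0, 200), (0, 200), (0, None), (0, None)]  # thermal capacity 200 MWh

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun)   # thermal must cover 250 - 120 = 130 MWh at cost 10 -> 1300
```

In a stochastic formulation, the water-budget and demand data become scenario-dependent and the objective becomes an expectation over scenarios, but the building block is this same LP.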
Luis, Alfredo
2011-09-15
We show within a very simple framework that different measures of fluctuations lead to uncertainty relations with contradictory conclusions. More specifically, we focus on Tsallis and Renyi entropic uncertainty relations and find that the minimum joint uncertainty states for some fluctuation measures are the maximum joint uncertainty states for other fluctuation measures, and vice versa.
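The abstract's point, that different fluctuation measures can rank the same pair of states oppositely, already shows up for classical discrete distributions. A toy check (not the paper's quantum construction):

```python
import numpy as np

# Two distributions on outcomes (-1, 0, 1): p concentrates probability
# on the extremes, q is uniform.  Variance calls p *more* uncertain,
# Shannon (Renyi alpha -> 1) entropy calls p *less* uncertain.
x = np.array([-1.0, 0.0, 1.0])
p = np.array([0.5, 0.0, 0.5])
q = np.array([1/3, 1/3, 1/3])

def variance(probs):
    m = np.sum(probs * x)
    return float(np.sum(probs * (x - m) ** 2))

def shannon(probs):
    nz = probs[probs > 0]
    return float(-np.sum(nz * np.log(nz)))

print(variance(p), variance(q))   # 1.0 vs ~0.667: p looks more uncertain
print(shannon(p), shannon(q))     # ~0.693 vs ~1.099: p looks less uncertain
```

The two measures disagree because variance weights outcomes by distance while entropy only sees the probability values; the paper's result is the analogous statement for joint uncertainty relations.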
Principles and applications of measurement and uncertainty analysis in research and calibration
Wells, C.V.
1992-11-01
Interest in Measurement Uncertainty Analysis has grown in the past several years as it has spread to new fields of application, and research and development of uncertainty methodologies have continued. This paper discusses the subject from the perspectives of both research and calibration environments. It presents a history of the development and an overview of the principles of uncertainty analysis embodied in the United States National Standard, ANSI/ASME PTC 19.1-1985, Measurement Uncertainty. Examples are presented in which uncertainty analysis was utilized or is needed to gain further knowledge of a particular measurement process and to characterize final results. Measurement uncertainty analysis provides a quantitative estimate of the interval about a measured value or an experiment result within which the true value of that quantity is expected to lie. Years ago, Harry Ku of the United States National Bureau of Standards stated that "The informational content of the statement of uncertainty determines, to a large extent, the worth of the calibrated value." Today, that statement is just as true about calibration or research results as it was in 1968. Why is that true? What kind of information should we include in a statement of uncertainty accompanying a calibrated value? How and where do we get the information to include in an uncertainty statement? How should we interpret and use measurement uncertainty information? This discussion will provide answers to these and other questions about uncertainty in research and in calibration. The methodology to be described has been developed by national and international groups over the past nearly thirty years, and individuals were publishing information even earlier. Yet the work is largely unknown in many science and engineering arenas. I will illustrate various aspects of uncertainty analysis with some examples drawn from the radiometry measurement and calibration discipline from research activities.
Final Report: DOE Project DE-SC-0005399: Linking the uncertainty of low frequency variability in tropical forcing in regional climate change
Office of Scientific and Technical Information (OSTI)
Emery, K.
2009-08-01
Discusses the NREL Photovoltaic Cell and Module Performance Characterization Group's procedures for achieving the lowest practical uncertainty in measuring PV performance with respect to reference conditions.
MASS MEASUREMENT UNCERTAINTY FOR PLUTONIUM ALIQUOTS ASSAYED BY CONTROLLED-POTENTIAL COULOMETRY
Holland, M.; Cordaro, J.
2009-03-18
Minimizing plutonium measurement uncertainty is essential to nuclear material control and international safeguards. In 2005, the International Organization for Standardization (ISO) published ISO 12183 'Controlled-potential coulometric assay of plutonium', 2nd edition. ISO 12183:2005 recommends a target of ±0.01% for the mass of original sample in the aliquot because it is a critical assay variable. Mass measurements in radiological containment were evaluated and uncertainties estimated. The uncertainty estimate for the mass measurement also includes uncertainty in correcting for buoyancy effects from air acting as a fluid and from decreased pressure of heated air from the specific heat of the plutonium isotopes.
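The buoyancy correction mentioned above follows the standard weighing formula m = m_reading·(1 − ρ_air/ρ_weights)/(1 − ρ_air/ρ_sample). The densities below are illustrative assumptions, not values from ISO 12183; they show that the correction can dwarf the ±0.01% mass target if neglected.

```python
# Air-buoyancy correction for weighing: the balance is calibrated with
# reference weights of density rho_w, so the true mass of a sample of
# density rho_s follows from the reading via
#   m = m_reading * (1 - rho_a/rho_w) / (1 - rho_a/rho_s).
rho_a = 0.0012   # air, g/cm^3 (typical laboratory conditions)
rho_w = 8.0      # stainless-steel reference weights, g/cm^3
rho_s = 1.3      # plutonium nitrate solution aliquot, g/cm^3 (assumed)

factor = (1 - rho_a / rho_w) / (1 - rho_a / rho_s)
print(f"buoyancy correction factor = {factor:.6f} "
      f"({100 * (factor - 1):.3f}% of reading)")
```

With these assumed densities the correction is roughly 0.08% of the reading, several times the ±0.01% target, which is why the correction and its own uncertainty enter the mass-measurement budget.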
Uncertainty analysis of an IGCC system with single-stage entrained-flow gasifier
Shastri, Y.; Diwekar, U.; Zitney, S.
2008-01-01
Integrated Gasification Combined Cycle (IGCC) systems using coal gasification are an attractive option for future energy plants. Consequently, understanding system operation and optimizing gasifier performance in the presence of uncertain operating conditions is essential to extracting the maximum benefit from the system. This work focuses on conducting such a study using an IGCC process simulation and a high-fidelity gasifier simulation coupled with stochastic simulation and multi-objective optimization capabilities. Coal gasifiers are the necessary basis of IGCC systems, and hence effective modeling and uncertainty analysis of the gasification process constitutes an important element of overall IGCC process design and operation. In this work, an Aspen Plus® steady-state process model of an IGCC system with carbon capture enables us to conduct simulation studies so that the effect of gasification variability on the whole process can be understood. The IGCC plant design consists of a single-stage entrained-flow gasifier, a physical solvent-based acid gas removal process for carbon capture, two model-7FB combustion turbine generators, two heat recovery steam generators, and one steam turbine generator in a multi-shaft 2x2x1 configuration. In the Aspen Plus process simulation, the gasifier is represented as a simplified lumped-parameter, restricted-equilibrium reactor model. In this work, we also make use of a distributed-parameter FLUENT® computational fluid dynamics (CFD) model to characterize the uncertainty for the entrained-flow gasifier. The CFD-based gasifier model is much more comprehensive, predictive, and hence better suited to understanding the effects of uncertainty. The possible uncertain parameters of the gasifier model are identified. These include input coal composition as well as mass flow rates of coal, slurry water, and oxidant. Using a selected number of random (Monte Carlo) samples for the different parameters, the CFD model is
WHEELER, TIMOTHY A.; WYSS, GREGORY D.; HARPER, FREDERICK T.
2000-11-01
Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.
CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements
Bergman, Rolf; Paget, Maria L.; Richman, Eric E.
2011-03-31
With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR energy-efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement, individually as well as in combination. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements for light emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, working closely with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. The steps of the procedure are described, and a spreadsheet format adapted for integrating sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate
Nguyen, J.; Moteabbed, M.; Paganetti, H.
2015-01-15
Purpose: Theoretical dose–response models offer the possibility to assess second cancer induction risks after external beam therapy. The parameters used in these models are determined with limited data from epidemiological studies. Risk estimations are thus associated with considerable uncertainties. This study aims to illustrate the uncertainties in predicting the risk of organ-specific second cancers in the primary radiation field, using selected treatment plans for brain cancer patients. Methods: A widely used risk model was considered in this study. The uncertainties of the model parameters were estimated with reported data of second cancer incidences for various organs. Standard error propagation was then applied to assess the uncertainty in the risk model. Next, second cancer risks of five pediatric patients treated for cancer in the head and neck regions were calculated. For each case, treatment plans for proton and photon therapy were designed to estimate the uncertainties (a) in the lifetime attributable risk (LAR) for a given treatment modality and (b) when comparing risks of two different treatment modalities. Results: Uncertainties in excess of 100% of the risk were found for almost all organs considered. When applied to treatment plans, the calculated LAR values have uncertainties of the same magnitude. A comparison between cancer risks of different treatment modalities, however, does allow statistically significant conclusions. In the studied cases, the patient-averaged LAR ratio of proton and photon treatments was 0.35, 0.56, and 0.59 for brain carcinoma, brain sarcoma, and bone sarcoma, respectively. Their corresponding uncertainties were estimated to be potentially below 5%, depending on uncertainties in dosimetry. Conclusions: The uncertainty in the dose–response curve in cancer risk models makes it currently impractical to predict the risk for an individual external beam treatment. On the other hand, the ratio
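The reason a ratio of two modality risks can be far better determined than either risk alone is that the shared dose–response model error is strongly correlated between modalities and largely cancels in the ratio. A sketch of the standard propagation formula with hypothetical numbers:

```python
import math

# Relative uncertainty of a ratio r = a/b when the relative errors of a
# and b have correlation rho (all numbers hypothetical):
#   (u_r/r)^2 = (u_a/a)^2 + (u_b/b)^2 - 2*rho*(u_a/a)*(u_b/b)
rel_a = rel_b = 1.0            # ~100% relative uncertainty on each LAR
rels = []
for rho in (0.0, 0.99):
    rel_r = math.sqrt(rel_a**2 + rel_b**2 - 2 * rho * rel_a * rel_b)
    rels.append(rel_r)
    print(f"rho={rho:4.2f}: relative uncertainty of the ratio = {100 * rel_r:.0f}%")
```

With independent errors the ratio is even more uncertain than either risk, but with near-perfect correlation (the same dose–response parameters entering both plans) the ratio's uncertainty collapses, consistent with the few-percent ratio uncertainties quoted above.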
A preliminary study to Assess Model Uncertainties in Fluid Flows
Marc Oliver Delchini; Jean C. Ragusa
2009-09-01
The goal of this study is to assess the impact of various flow models for a simplified primary coolant loop of a light water nuclear reactor. The various fluid flow models are based on the Euler equations with an additional friction term, gravity term, momentum source, and energy source. The geometric model is purposely chosen to be simple and consists of a one-dimensional (1D) loop system in order to focus the study on the validity of various fluid flow approximations. The 1D loop system is represented by a rectangle; the fluid is heated along one of the vertical legs and cooled along the opposite leg. A pressurizer and a pump are included in the horizontal legs. The amount of energy transferred to and removed from the system is equal in absolute value along the two vertical legs. The fluid flow approximations compared are compressible vs. incompressible, and the complete momentum equation vs. Darcy's approximation. The ultimate goal is to compute the uncertainties of the fluid flow models and, if possible, to generate validity ranges for these models when applied to reactor analysis. We also limit this study to single-phase flows with low Mach numbers. As a result, sound waves carry a very small amount of energy in this particular case. A standard finite volume method is used for the spatial discretization of the system.
Mesh refinement for uncertainty quantification through model reduction
Li, Jing; Stinis, Panos
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive. The reason is that for discontinuous problems, the expansion converges very slowly. An alternative to using higher terms in the expansion is to divide the random space in smaller elements where a lower degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random space element, the cascade of activity to higher degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational sources to the areas of random space where they are more needed. For the Kraichnan–Orszag system, the prototypical system to study discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.
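The refinement criterion described above can be sketched numerically: expand the solution on a random-space element in a Legendre (polynomial chaos) basis and flag the element for splitting when the high-degree coefficients still carry a noticeable share of the energy. The test function, degrees, and threshold below are illustrative assumptions, not the paper's reduced model:

```python
import numpy as np

def tail_energy_fraction(f, a, b, degree=8, tail=3):
    """Fraction of expansion energy in the top `tail` Legendre degrees on [a, b]."""
    x, _ = np.polynomial.legendre.leggauss(64)
    x = 0.5 * (b - a) * x + 0.5 * (a + b)          # quadrature nodes on [a, b]
    t = 2.0 * (x - a) / (b - a) - 1.0              # reference coordinates [-1, 1]
    coef = np.polynomial.legendre.Legendre.fit(t, f(x), degree, domain=[-1, 1]).coef
    # ||P_n||^2 = 2 / (2n + 1) on [-1, 1]
    energy = coef**2 * (2.0 / (2.0 * np.arange(degree + 1) + 1.0))
    return energy[-tail:].sum() / energy.sum()

step = lambda x: np.where(x < 0.3, -1.0, 1.0)      # discontinuity at x = 0.3

# Element containing the discontinuity: slow coefficient decay -> refine.
print(f"[0.0, 1.0] tail fraction: {tail_energy_fraction(step, 0.0, 1.0):.3f}")
# Smooth element away from it: tail energy vanishes -> keep coarse.
print(f"[0.5, 1.0] tail fraction: {tail_energy_fraction(step, 0.5, 1.0):.2e}")
```

In the paper's setting a reduced model plays the role of the monitor; here the "cascade of activity to higher degree terms" is read off directly from the coefficient energies.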
Srinivasan, Sanjay
2014-09-30
In-depth understanding of the long-term fate of CO₂ in the subsurface requires study and analysis of the reservoir formation, the overlying caprock formation, and adjacent faults. Because there is significant uncertainty in predicting the location and extent of geologic heterogeneity that can impact the future migration of CO₂ in the subsurface, there is a need to develop algorithms that can reliably quantify this uncertainty in plume migration. This project is focused on the development of a model selection algorithm that refines an initial suite of subsurface models representing the prior uncertainty to create a posterior set of subsurface models that reflect injection performance consistent with that observed. Such posterior models can be used to represent uncertainty in the future migration of the CO₂ plume. Because only injection data is required, the method provides a very inexpensive way to map the migration of the plume and the associated uncertainty in migration paths. The model selection method developed as part of this project mainly consists of assessing the connectivity/dynamic characteristics of a large prior ensemble of models, grouping the models on the basis of their expected dynamic response, selecting the subgroup of models that yield dynamic responses closest to the observed dynamic data, and finally quantifying the uncertainty in plume migration using the selected subset of models. The main accomplishment of the project is the development of a software module within the SGEMS earth modeling software package that implements the model selection methodology. This software module was subsequently applied to analyze CO₂ plume migration in two field projects – the In Salah CO₂ Injection project in Algeria and CO₂ injection into the Utsira formation in Norway. These applications of the software revealed that the proxies developed in this project for quickly assessing the dynamic characteristics of the reservoir were
Kim, Alex G.; Miquel, Ramon
2005-09-26
We present a new technique to extract the cosmological information from high-redshift supernova data in the presence of calibration errors and extinction due to dust. While in the traditional technique the distance modulus of each supernova is determined separately, in our approach we determine all distance moduli at once, in a process that achieves a significant degree of self-calibration. The result is a much reduced sensitivity of the cosmological parameters to the calibration uncertainties. As an example, for a strawman mission similar to that outlined in the SNAP satellite proposal, the increased precision obtained with the new approach is roughly equivalent to a factor of five decrease in the calibration uncertainty.
Helton, Jon Craig; Sallaberry, Cedric M.; Hansen, Clifford W.
2010-10-01
The 2008 performance assessment (PA) for the proposed repository for high-level radioactive waste at Yucca Mountain (YM), Nevada, illustrates the conceptual structure of risk assessments for complex systems. The 2008 YM PA is based on the following three conceptual entities: a probability space that characterizes aleatory uncertainty; a function that predicts consequences for individual elements of the sample space for aleatory uncertainty; and a probability space that characterizes epistemic uncertainty. These entities and their use in the characterization, propagation and analysis of aleatory and epistemic uncertainty are described and illustrated with results from the 2008 YM PA.
Uncertainty in Resilience to Climate Change in India and Indian States
Malone, Elizabeth L.; Brenkert, Antoinette L.
2008-10-03
This study builds on an earlier analysis of resilience of India and Indian states to climate change. The previous study (Brenkert and Malone 2005) assessed current resilience; this research uses the Vulnerability-Resilience Indicators Model (VRIM) to project resilience to 2095 and to perform an uncertainty analysis on the deterministic results. Projections utilized two SRES-based scenarios, one with fast-and-high growth, one with delayed growth. A detailed comparison of two states, the Punjab and Orissa, points to the kinds of insights that can be obtained using the VRIM. The scenarios differ most significantly in the timing of the uncertainty in economic prosperity (represented by GDP per capita) as a major factor in explaining the uncertainty in the resilience index. In the fast-and-high growth scenario the states differ most markedly regarding the role of ecosystem sensitivity, land use and water availability. The uncertainty analysis shows, for example, that resilience in the Punjab might be enhanced, especially in the delayed growth scenario, if early attention is paid to the impact of ecosystems sensitivity on environmental well-being of the state. By the same token, later in the century land-use pressures might be avoided if land is managed through intensification rather than extensification of agricultural land. Thus, this methodology illustrates how a policy maker can be informed about where to focus attention on specific issues, by understanding the potential changes at a specific location and time – and, thus, what might yield desired outcomes. Model results can point to further analyses of the potential for resilience-building.
Users manual for the FORSS sensitivity and uncertainty analysis code system
Lucius, J.L.; Weisbin, C.R.; Marable, J.H.; Drischler, J.D.; Wright, R.Q.; White, J.E.
1981-01-01
FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions and associated uncertainties. This report describes the computing environment and the modules currently used to implement FORSS Sensitivity and Uncertainty Methodology.
Effect of the Generalized Uncertainty Principle on post-inflation preheating
Chemissany, Wissam; Das, Saurya; Ali, Ahmed Farag; Vagenas, Elias C.
2011-12-01
We examine effects of the Generalized Uncertainty Principle, predicted by various theories of quantum gravity to replace Heisenberg's uncertainty principle near the Planck scale, on post-inflation preheating in cosmology, and show that it can predict either an increase or a decrease in parametric resonance and a corresponding change in particle production. Possible implications are considered.
Using Uncertainty Analysis to Guide the Development of Accelerated Stress Tests (Presentation)
Kempe, M.
2014-03-01
Extrapolation of accelerated testing to the long-term results expected in the field has uncertainty associated with the acceleration factors and the range of possible stresses in the field. When multiple stresses (such as temperature and humidity) can be used to increase the acceleration, the uncertainty may be reduced according to which stress factors are used to accelerate the degradation.
TOTAL MEASUREMENT UNCERTAINTY IN HOLDUP MEASUREMENTS AT THE PLUTONIUM FINISHING PLANT (PFP)
KEELE, B.D.
2007-07-05
An approach to determine the total measurement uncertainty (TMU) associated with Generalized Geometry Holdup (GGH) [1,2,3] measurements was developed and implemented in 2004 and 2005 [4]. This paper describes a condensed version of the TMU calculational model, including recent developments. Recent modifications to the TMU calculation model include a change in the attenuation uncertainty, clarifying the definition of the forward background uncertainty, reducing conservatism in the random uncertainty by selecting either a propagation of counting statistics or the standard deviation of the mean, and considering uncertainty in the width and height as a part of the self attenuation uncertainty. In addition, a detection limit is calculated for point sources using equations derived from summary equations contained in Chapter 20 of MARLAP [5]. The Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 2007-1 to the Secretary of Energy identified a lack of requirements and a lack of standardization for performing measurements across the U.S. Department of Energy (DOE) complex. The DNFSB also recommended that guidance be developed for a consistent application of uncertainty values. As such, the recent modifications to the TMU calculational model described in this paper have not yet been implemented. The Plutonium Finishing Plant (PFP) is continuing to perform uncertainty calculations as per Reference 4. Publication at this time is so that these concepts can be considered in developing a consensus methodology across the complex.
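The core of any TMU model of this kind is the quadrature combination of independent uncertainty components. A minimal sketch with hypothetical component values, not the PFP model's actual terms:

```python
import math

# Illustrative 1-sigma relative uncertainties for a holdup measurement;
# the component names echo the abstract, the values are made up.
components = {
    "counting statistics": 0.05,
    "attenuation":         0.08,
    "self attenuation":    0.10,
    "forward background":  0.03,
    "calibration":         0.04,
}

# Independent components combine in quadrature to the total measurement
# uncertainty (TMU).
tmu = math.sqrt(sum(u**2 for u in components.values()))
print(f"TMU = {tmu:.3f} (relative, 1-sigma)")
```

Quadrature addition is why the largest single component (here self attenuation) dominates the total: halving a small term barely moves the TMU.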
Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment
Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; Yang, Majntxov; Kao, Shih -Chieh; Smith, Brennan T.
2016-03-30
Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on model output parameters: the total potential power and the number of potential locations (stream-reaches). These parameters are quantified through Monte Carlo Simulation (MCS) linked with a geospatial merit matrix-based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. Output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains. Furthermore, output parameters are more sensitive to mean annual streamflow in flat terrain.
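The MCS propagation step can be sketched with the textbook potential-power relation P = ρgQH and the input uncertainties quoted above. This naive stand-in, unlike the GMM-HRA model, passes the input uncertainty through essentially undamped, which makes the study's 10–20% attenuation figure notable:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical site: head H and mean annual flow Q carry the relative
# uncertainties reported in the study (values and means are illustrative).
H = rng.normal(10.0, 10.0 * 0.20, n)    # hydraulic head [m], ~20% uncertainty
Q = rng.normal(5.0, 5.0 * 0.16, n)      # streamflow [m^3/s], ~16% uncertainty

P = 1000.0 * 9.81 * Q * H / 1e6         # potential power [MW], rho = 1000 kg/m^3

print(f"mean potential power: {P.mean():.2f} MW")
print(f"relative output uncertainty: {P.std() / P.mean():.0%}")
```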
TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE
Rearden, Bradley T; Mueller, Don; Bowman, Stephen M; Busch, Robert D.; Emerson, Scott
2009-01-01
This primer presents examples in the application of the SCALE/TSUNAMI tools to generate k_eff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed k_eff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.
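At its core, propagating cross-section uncertainty to k_eff rests on the "sandwich rule": the relative variance of k_eff is S C Sᵀ, where S is the sensitivity profile and C the relative covariance of the nuclear data. A toy example with made-up numbers (three reactions instead of the thousands of energy-group entries a real TSUNAMI calculation carries):

```python
import numpy as np

# Sensitivity profile: relative change in k_eff per relative change in each
# cross section (illustrative values for three hypothetical reactions).
S = np.array([0.9, -0.3, 0.15])

# Relative covariance matrix of the nuclear data (also illustrative).
C = np.array([[0.0004, 0.0001, 0.0   ],
              [0.0001, 0.0009, 0.0   ],
              [0.0,    0.0,    0.0025]])

rel_var = S @ C @ S                     # sandwich rule: var(k)/k^2 = S C S^T
print(f"relative k_eff uncertainty: {np.sqrt(rel_var):.4f}")
```

The off-diagonal entries of C matter: correlated cross-section uncertainties can add to or cancel each other depending on the signs of the sensitivities.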
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; Lu, C.; Mansoor, K.; Carroll, S. A.
2012-12-20
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the most sensitive parameters to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
Climate uncertainty and implications for U.S. state-level risk assessment through 2050.
Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Kelic, Andjelka; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.
2009-10-01
Decisions for climate policy will need to take place in advance of climate science resolving all relevant uncertainties. Further, if the concern of policy is to reduce risk, then the best estimate of climate change impacts may not be so important as the currently understood uncertainty associated with realizable conditions having high consequence. This study focuses on one of the most uncertain aspects of future climate change - precipitation - to understand the implications of uncertainty on risk and the near-term justification for interventions to mitigate the course of climate change. We show that the mean risk of damage to the economy from climate change, at the national level, is on the order of one trillion dollars over the next 40 years, with employment impacts of nearly 7 million labor-years. At a 1% exceedance probability, the impact is over twice the mean-risk value. Impacts at the level of individual U.S. states are then typically in the multiple tens of billions dollar range with employment losses exceeding hundreds of thousands of labor-years. We used results of the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) climate-model ensemble as the referent for climate uncertainty over the next 40 years, mapped the simulated weather hydrologically to the county level for determining the physical consequence to economic activity at the state level, and then performed a detailed, seventy-industry analysis of economic impact among the interacting lower-48 states. We determined industry GDP and employment impacts at the state level, as well as interstate population migration, effect on personal income, and the consequences for the U.S. trade balance.
McDonnell, J. D.; Schunck, N.; Higdon, D.; Sarich, J.; Wild, S. M.; Nazarewicz, W.
2015-03-24
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. In addition, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
Uncertainty evaluation for the matrix 'solidified state' of fissionable elements
Iliescu, Elena; Iancso, Georgeta
2012-09-06
fissionable elements (e.g., thorium), of which heavy charged particles, in this case the naturally emitted alpha radiations, were registered in CR-39 track detectors. The density of alpha tracks on the obtained track micromaps was studied through common optical microscopy. Micromaps were studied by counting the tracks on equal areas, in different measurement points. For the study of the foils prepared within the paper, the studied area was 4.9 mm2, formed of 10 fields of 0.49 mm2 area each. The estimation of the uncertainty was carried out for all the quantities measured within the paper, whether they participate directly or indirectly in the estimation of the uncertainty regarding the homogeneity of the thorium atom distribution in the 'solidified state' foils of the standard solution calibrated in thorium, namely: i) the weighed masses, ii) the dropped volumes of solution, iii) the duration of alpha exposure of the detectors, iv) the area studied on the surface of the micromap, and v) the densities of alpha tracks. The suggested procedure allowed us to conclude that the homogeneity of the alpha track distribution, on the surface and in thickness, is within the limits of 3.1%.
Uncertainty Quantification of Hypothesis Testing for the Integrated Knowledge Engine
Cuellar, Leticia
2012-05-31
The Integrated Knowledge Engine (IKE) is a tool of Bayesian analysis, based on Bayesian Belief Networks, or Bayesian networks for short. A Bayesian network is a graphical model (directed acyclic graph) that allows representing the probabilistic structure of many variables assuming a localized type of dependency called the Markov property. The Markov property in this instance makes any node or random variable independent of any non-descendant node given information about its parents. A direct consequence of this property is that it is relatively easy to incorporate new evidence and derive the appropriate consequences, which in general is not an easy or feasible task. Typically we use Bayesian networks as predictive models for a small subset of the variables, either the leaf nodes or the root nodes. In IKE, since most applications deal with diagnostics, we are interested in predicting the likelihood of the root nodes given new observations on any of the child nodes. The root nodes represent the various possible outcomes of the analysis, and an important problem is to determine when we have gathered enough evidence to lean toward one of these particular outcomes. This document presents criteria to decide when the evidence gathered is sufficient to draw a particular conclusion or decide in favor of a particular outcome by quantifying the uncertainty in the conclusions that are drawn from the data. The material in this document is organized as follows: Section 2 briefly presents a forensic Bayesian network, and we explore evaluating the information provided by new evidence by looking first at the posterior distribution of the nodes of interest, and then at the corresponding posterior odds ratios. Section 3 presents a third alternative: Bayes factors. In section 4 we finalize by showing the relation between the posterior odds ratios and Bayes factors and showing examples of these cases, and in section 5 we conclude by providing clear guidelines of how to use these
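The relation between the two evidence measures discussed above reduces to: posterior odds = Bayes factor × prior odds. A two-hypothesis toy example with illustrative likelihoods, not IKE's forensic network:

```python
# Two competing root-node hypotheses with equal prior probability.
prior = {"H1": 0.5, "H2": 0.5}

# P(evidence | hypothesis) for a newly observed child node (made-up values).
likelihood = {"H1": 0.30, "H2": 0.05}

bayes_factor = likelihood["H1"] / likelihood["H2"]   # strength of the evidence
prior_odds = prior["H1"] / prior["H2"]
posterior_odds = bayes_factor * prior_odds           # odds after the update

print(f"Bayes factor K = {bayes_factor:.1f}")
print(f"posterior odds = {posterior_odds:.1f}")
```

With equal priors the posterior odds equal the Bayes factor, so thresholds on one translate directly into thresholds on the other, which is the basis for "enough evidence" decision rules.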
Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E
2011-03-31
An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment is assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found to maximize the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, are being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept can be re-optimized itself. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment
SU-E-J-159: Analysis of Total Imaging Uncertainty in Respiratory-Gated Radiotherapy
Suzuki, J; Okuda, T; Sakaino, S; Yokota, N
2015-06-15
Purpose: In respiratory-gated radiotherapy, the gating phase during treatment delivery needs to coincide with the corresponding phase determined during the treatment plan. However, because radiotherapy is performed based on the image obtained for the treatment plan, the time delay, motion artifact, volume effect, and resolution in the images are uncertain. Thus, imaging uncertainty is the most basic factor that affects the localization accuracy. Therefore, these uncertainties should be analyzed. This study aims to analyze the total imaging uncertainty in respiratory-gated radiotherapy. Methods: Two factors of imaging uncertainty related to respiratory-gated radiotherapy were analyzed. First, a CT image was used to determine the target volume and for 4D treatment planning with the Varian Realtime Position Management (RPM) system. Second, an X-ray image was acquired for image-guided radiotherapy (IGRT) with the BrainLAB ExacTrac system. These factors were measured using a respiratory gating phantom. The conditions applied during phantom operation were as follows: respiratory wave form, sine curve; respiratory cycle, 4 s; phantom target motion amplitude, 10, 20, and 29 mm (the maximum phantom longitudinal motion). The coverage of the target and of a cylindrical marker implanted in the phantom was measured on the CT images and compared with the coverage calculated theoretically from the phantom motion. The theoretical position of the cylindrical marker implanted in the phantom was compared with that acquired from the X-ray image. The total imaging uncertainty was analyzed from these two factors. Results: In the CT image, the uncertainty between the actual and imaged coverage of the target and of the cylindrical marker was 1.19 mm and 2.50 mm, respectively. In the X-ray image, the uncertainty was 0.39 mm. The total imaging uncertainty from the two factors was 1.62 mm. Conclusion: The total imaging uncertainty in respiratory-gated radiotherapy was clinically acceptable. However
Results for Phase I of the IAEA Coordinated Research Program on HTGR Uncertainties
Strydom, Gerhard; Bostelmann, Friederike; Yoon, Su Jong
2015-01-01
The quantification of uncertainties in the design and safety analysis of reactors is today not only broadly accepted, but has in many cases become the preferred way to replace traditional conservative analysis for safety and licensing purposes. The use of a more fundamental methodology is also consistent with the reliable high-fidelity physics models and robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High Temperature Gas-cooled Reactors (HTGRs) have their own peculiarities: coated-particle fuel design, large graphite quantities, different materials, and high temperatures, all of which impose additional simulation requirements. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modeling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the HTR-PM (INET, China). This report summarizes the contributions of the HTGR Methods Simulation group at Idaho National Laboratory (INL) up to this point of the CRP. The activities at INL have been focused so far on creating the problem specifications for the prismatic design, as well as providing reference solutions for the exercises defined for Phase I. An overview is provided of the HTGR UAM objectives and scope, and the detailed specifications for Exercises I-1, I-2, I-3 and I-4 are also included here for completeness. The main focus of the report is the compilation and discussion of reference results for Phase I (i.e. for input parameters at their nominal or best-estimate values), which is defined as the first step of the uncertainty quantification process. These reference results can be used by other CRP participants for comparison with other codes or their own reference
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
2014-06-15
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameters for which an uncertainty reduction is most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment
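For a model that is additive in its uncertain inputs, the variance-based sensitivities can be computed directly, which illustrates the ranking idea behind the SA. The linear-quadratic parameters and their uncertainties below are assumptions for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Sample the uncertain inputs from their assigned distributions
# (illustrative means; 20% and 40% relative uncertainties).
alpha = rng.normal(0.15, 0.15 * 0.20, n)   # [1/Gy]
beta  = rng.normal(0.05, 0.05 * 0.40, n)   # [1/Gy^2]

d = 2.0                                    # dose per fraction [Gy]
effect = alpha * d + beta * d**2           # linear-quadratic effect

# For this additive model, each input's first-order sensitivity is its
# share of the output variance (S = 0: no impact, S = 1: only driver).
var_total = effect.var()
s_alpha = (d**2 * alpha.var()) / var_total
s_beta = (d**4 * beta.var()) / var_total
print(f"S_alpha = {s_alpha:.2f}, S_beta = {s_beta:.2f}")
```

For non-additive models (as in the actual RBE/EQD2 case) the shares no longer sum to one and the sensitivities must be estimated from the sampled runs themselves, e.g. by Sobol-type estimators, but the ranking interpretation is the same.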
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; Jessee, Matthew A.; Zwermann, Winfried; Velkov, Kiril; Pautz, Andreas; Downar, Thomas
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
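Stochastic sampling of the XSUSA kind can be illustrated on a one-group toy problem: sample correlated relative perturbations of cross sections and observe the spread in the multiplication factor. All numbers below are assumptions for illustration, not benchmark data:

```python
import numpy as np

rng = np.random.default_rng(1)

# nominal one-group constants (illustrative, not from the TMI-1 benchmark)
nu, sig_f, sig_a = 2.43, 0.05, 0.10     # cm^-1
k_nom = nu * sig_f / sig_a

# relative covariance: 2% / 3% standard deviations, 0.5 correlation (assumed)
rel_cov = np.array([
    [0.02**2,            0.5 * 0.02 * 0.03],
    [0.5 * 0.02 * 0.03,  0.03**2],
])

n = 50_000
pert = rng.multivariate_normal([0.0, 0.0], rel_cov, size=n)

# each sample is one "perturbed library" run of the toy core model
k = nu * sig_f * (1.0 + pert[:, 0]) / (sig_a * (1.0 + pert[:, 1]))

rel_std = k.std() / k.mean()    # sampled relative uncertainty in k
```

The sampled spread can be checked against first-order error propagation (about 2.6% relative here), which is essentially what the generalized-perturbation-theory step of the two-step method would deliver.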
Statistical Assessment of Proton Treatment Plans Under Setup and Range Uncertainties
Park, Peter C.; Cheung, Joey P.; Zhu, X. Ronald; Lee, Andrew K.; Sahoo, Narayan; Tucker, Susan L.; Liu, Wei; Li, Heng; Mohan, Radhe; Court, Laurence E.; Dong, Lei
2013-08-01
Purpose: To evaluate a method for quantifying the effect of setup errors and range uncertainties on the dose distribution and dose-volume histogram using statistical parameters, and to assess existing planning practice in selected treatment sites under setup and range uncertainties. Methods and Materials: Twenty passively scattered proton lung cancer plans, 10 prostate plans, and 1 brain cancer scanning-beam proton plan were analyzed. To account for the dose under uncertainties, we performed a comprehensive simulation in which the dose was recalculated 600 times per given plan under the influence of random and systematic setup errors and proton range errors. On the basis of simulation results, we determined the probability of dose variations and calculated the expected values and standard deviations of dose-volume histograms. The uncertainties in dose were spatially visualized on the planning CT as a probability map of failure to target coverage or overdose of critical structures. Results: The expected value of target coverage under the uncertainties was consistently lower than the nominal value determined from the clinical target volume coverage without setup error or range uncertainty, with a mean difference of −1.1% (−0.9% for breath-hold), −0.3%, and −2.2% for the lung, prostate, and brain cases, respectively. The organs whose doses were most sensitive to the uncertainties were the esophagus and spinal cord for lung, the rectum for prostate, and the brain stem for brain cancer. Conclusions: A clinically feasible robustness plan analysis tool based on direct dose calculation and statistical simulation has been developed. Both the expected value and standard deviation are useful for evaluating the impact of uncertainties. The existing proton beam planning method used in this institution seems adequate in terms of target coverage. However, structures that are small in volume or located near the target area showed greater sensitivity to uncertainties.
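The statistical assessment described above (repeated dose recalculation under sampled errors, then expectation, standard deviation, and failure probability) can be sketched on a toy 1-D dose profile. The profile shape, target size, and error magnitudes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sim = 600                      # dose recalculations per plan, as in the study

x = np.linspace(-50, 50, 501)    # mm, 1-D axis through a hypothetical target
target = (x > -20) & (x < 20)    # clinical target volume

def dose(shift):
    # hypothetical sigmoid-edged 100% dose profile, rigidly shifted by a
    # setup error (mm); the 30 mm edge and 2 mm steepness are assumptions
    return 100.0 / (1.0 + np.exp((np.abs(x - shift) - 30.0) / 2.0))

shifts = rng.normal(0.0, 3.0, n_sim)     # sampled setup errors, sigma = 3 mm
doses = np.stack([dose(s) for s in shifts])

cov = doses[:, target].min(axis=1)       # worst target voxel per simulation
exp_cov, sd_cov = cov.mean(), cov.std()  # expected value and std of coverage
p_fail = (cov < 95.0).mean()             # probability of failing 95% coverage
```

Replacing the toy profile with a real dose engine, and the scalar coverage with per-voxel statistics, gives the probability maps and expected DVHs the abstract describes.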
A Probabilistic Framework for Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh
2015-12-28
Quantification and propagation of uncertainties in cyber attacker payoffs is a key aspect within multiplayer, stochastic security games. These payoffs may represent penalties or rewards associated with player actions and are subject to various sources of uncertainty, including: (1) cyber-system state, (2) attacker type, (3) choice of player actions, and (4) cyber-system state transitions over time. Past research has primarily focused on representing defender beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and mathematical intervals. For cyber-systems, probability distributions may help address statistical (aleatory) uncertainties where the defender may assume inherent variability or randomness in the factors contributing to the attacker payoffs. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker's payoff generation mechanism. Such epistemic uncertainties are more suitably represented as generalizations of probability boxes. This paper explores the mathematical treatment of such mixed payoff uncertainties. A conditional probabilistic reasoning approach is adopted to organize the dependencies between a cyber-system's state, attacker type, player actions, and state transitions. This also enables the application of probabilistic theories to propagate various uncertainties in the attacker payoffs. An example implementation of this probabilistic framework and the resulting attacker payoff distributions are discussed. A goal of this paper is also to highlight this uncertainty quantification problem space to the cyber security research community and encourage further advancements in this area.
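A probability box of the kind mentioned for epistemic payoff uncertainty can be sketched as an envelope of CDFs over an interval-valued parameter. The Gaussian family and the interval [2, 4] for the payoff mean are illustrative assumptions:

```python
import math
import numpy as np

def ncdf(x, mu, sigma=1.0):
    # Gaussian CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# epistemic uncertainty: the payoff mean is known only to lie in [2, 4]
mu_lo, mu_hi = 2.0, 4.0
xs = np.linspace(-2.0, 8.0, 101)

upper = np.array([ncdf(x, mu_lo) for x in xs])  # upper CDF bound (lowest mean)
lower = np.array([ncdf(x, mu_hi) for x in xs])  # lower CDF bound (highest mean)

# bounds on P(payoff <= 3): every distribution in the box lies between these
p_lo, p_hi = ncdf(3.0, mu_hi), ncdf(3.0, mu_lo)
```

The defender then reasons with the interval [p_lo, p_hi] rather than a single probability, which is the essential difference between a p-box and a point-estimated payoff distribution.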
Balance Calibration: A Method for Assigning a Direct-Reading Uncertainty to an Electronic Balance
Mike Stears
2010-07-01
Paper Title: Balance Calibration: A method for assigning a direct-reading uncertainty to an electronic balance. Intended Audience: Those who calibrate or use electronic balances. Abstract: As a calibration facility, we provide on-site (at the customer's location) calibrations of electronic balances for customers within our company. In our experience, most of our customers are not using their balance as a comparator, but simply putting an unknown quantity on the balance and reading the displayed mass value. Manufacturers' specifications for balances typically include readability, repeatability, linearity, and sensitivity temperature drift, but what does this all mean when the balance user simply reads the displayed mass value and accepts the reading as the true value? This paper discusses a method for assigning a direct-reading uncertainty to a balance based upon the observed calibration data and the environment where the balance is being used. The method requires input from the customer regarding the environment where the balance is used and encourages discussion with the customer regarding sources of uncertainty and possible means for improvement; the calibration process becomes an educational opportunity for the balance user as well as calibration personnel. This paper will cover the uncertainty analysis applied to the calibration weights used for the field calibration of balances; the uncertainty is calculated over the range of environmental conditions typically encountered in the field and the resulting range of air density. The temperature stability in the area of the balance is discussed with the customer, and the temperature range over which the balance calibration is valid is decided upon; the decision is based upon the uncertainty needs of the customer and the desired rigor in monitoring by the customer. Once the environmental limitations are decided, the calibration is performed and the measurement data is entered into a custom
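A direct-reading uncertainty budget of the kind described might be combined in quadrature along GUM lines. The component values and distribution assignments below are assumptions for illustration, not data from the paper:

```python
import math

# illustrative budget for a 200 g reading; all values are assumptions
readability   = 0.0001                 # g, balance resolution d
repeatability = 0.00012                # g, std dev of repeated weighings
linearity     = 0.0002                 # g, max deviation over the range
temp_drift    = 2e-6 * 5.0 * 200.0     # g: 2 ppm/K sensitivity drift, +/-5 K

# convert each component to a standard uncertainty
components = {
    "readability":   readability / (2.0 * math.sqrt(3.0)),  # rectangular, d/2
    "repeatability": repeatability,                         # type A std dev
    "linearity":     linearity / math.sqrt(3.0),            # rectangular
    "temp drift":    temp_drift / math.sqrt(3.0),           # rectangular
}

u_c = math.sqrt(sum(u**2 for u in components.values()))  # combined std unc.
U = 2.0 * u_c    # expanded uncertainty, coverage factor k = 2 (about 95 %)
```

With these assumed numbers the temperature-drift term dominates, which mirrors the paper's point that the agreed temperature range directly sets the uncertainty the user can claim.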
Uncertainty Analysis Framework - Hanford Site-Wide Groundwater Flow and Transport Model
Cole, Charles R.; Bergeron, Marcel P.; Murray, Christopher J.; Thorne, Paul D.; Wurstner, Signe K.; Rogers, Phillip M.
2001-11-09
Pacific Northwest National Laboratory (PNNL) embarked on a new initiative to strengthen the technical defensibility of the predictions being made with a site-wide groundwater flow and transport model at the U.S. Department of Energy Hanford Site in southeastern Washington State. In FY 2000, the focus of the initiative was on the characterization of major uncertainties in the current conceptual model that would affect model predictions. The long-term goals of the initiative are the development and implementation of an uncertainty estimation methodology in future assessments and analyses using the site-wide model. This report focuses on the development and implementation of an uncertainty analysis framework.
Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes
Gerhard Strydom
2010-06-01
The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.
Dakota uncertainty quantification methods applied to the NEK-5000 SAHEX model.
Weirs, V. Gregory
2014-03-01
This report summarizes the results of a NEAMS project focused on the use of uncertainty and sensitivity analysis methods within the NEK-5000 and Dakota software framework for assessing failure probabilities as part of probabilistic risk assessment. NEK-5000 is a software tool under development at Argonne National Laboratory to perform computational fluid dynamics calculations for applications such as thermohydraulics of nuclear reactor cores. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. The goal of this work is to demonstrate the use of uncertainty quantification methods in Dakota with NEK-5000.
Cardoni, Jeffrey N.; Kalinich, Donald A.
2014-02-01
Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi unit (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). However, that study only examined a handful of various model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
Uncertainty Analysis for a Virtual Flow Meter Using an Air-Handling Unit Chilled Water Valve
Office of Scientific and Technical Information (OSTI)
The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation
Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.; Peterson, Joshua L.; Johnson, Seth R.
2015-12-01
Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.
Pruet, J
2007-06-23
This report describes Kiwi, a program developed at Livermore to enable mature studies of the relation between imperfectly known nuclear physics and uncertainties in simulations of complicated systems. Kiwi includes a library of evaluated nuclear data uncertainties, tools for modifying data according to these uncertainties, and a simple interface for generating processed data used by transport codes. As well, Kiwi provides access to calculations of k eigenvalues for critical assemblies. This allows the user to check implications of data modifications against integral experiments for multiplying systems. Kiwi is written in python. The uncertainty library has the same format and directory structure as the native ENDL used at Livermore. Calculations for critical assemblies rely on deterministic and Monte Carlo codes developed by B division.
A simplified analysis of uncertainty propagation in inherently controlled ATWS events
Wade, D.C.
1987-01-01
The quasi-static approach can be used to provide useful insight concerning the propagation of uncertainties in the inherent response to ATWS events. At issue is how uncertainties in the reactivity coefficients and in the thermal-hydraulics and materials properties propagate to yield uncertainties in the asymptotic temperatures attained upon inherent shutdown. The basic notion to be quantified is that many of the same physical phenomena contribute to both the reactivity increase of power reduction and the reactivity decrease of core temperature rise. Since these reactivities cancel by definition, a good deal of uncertainty cancellation must also occur of necessity. For example, if the Doppler coefficient is overpredicted, too large a positive reactivity insertion is predicted upon power reduction and collapse of the ΔT across the fuel pin. However, too large a negative reactivity is also predicted upon the compensating increase in the isothermal core average temperature - which includes the fuel Doppler effect.
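The cancellation argument can be made concrete by sampling a single uncertain coefficient that enters two compensating reactivity terms with opposite sign. The coefficient value, its uncertainty, and the temperature differences below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Doppler coefficient with 20% relative uncertainty (illustrative values)
alpha = rng.normal(-1.0e-5, 0.2e-5, 100_000)   # dk/k per K

dT_collapse = 400.0   # K: fuel-pin delta-T that collapses on power reduction
dT_rise     = 350.0   # K: compensating rise in core average temperature

rho_power = -alpha * dT_collapse   # positive insertion as the fuel cools
rho_temp  =  alpha * dT_rise       # negative feedback as temperature rises

net = rho_power + rho_temp         # the same alpha enters both terms,
                                   # so most of its uncertainty cancels
```

Each term alone carries the full 20% relative uncertainty of alpha, but their sum depends only on the difference of the two ΔT values, so the spread of the net reactivity is several times smaller than that of either term.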
Sig Drellack, Lance Prothro
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The
CANTALOUB, M.G.
2002-01-02
The Waste Receiving and Processing (WRAP) facility, located on the Hanford Site in southeast Washington, is a key link in the certification of Hanford's transuranic (TRU) waste for shipment to the Waste Isolation Pilot Plant (WIPP). Waste characterization is one of the vital functions performed at WRAP, and nondestructive assay (NDA) measurement of TRU waste containers is one of the methods used for waste characterization. Various programs exist to ensure the validity of waste characterization data; all of these cite the need for clearly defined knowledge of the uncertainties associated with any measurements performed. All measurements have an inherent uncertainty associated with them. The combined effect of all uncertainties associated with a measurement is referred to as the Total Measurement Uncertainty (TMU). The NDA measurement uncertainties can be numerous and complex. In addition to system-induced measurement uncertainty, other factors contribute to the TMU, each associated with a particular measurement. The NDA measurements at WRAP are based on processes (radioactive decay and induced fission) which are statistical in nature. As a result, the proper statistical summation of the various uncertainty components is essential. This report examines the contributing factors to NDA measurement uncertainty at WRAP. The significance of each factor to the TMU is analyzed, and a final method is given for determining the TMU for NDA measurements at WRAP. A brief description of the data flow paths for the analytical process is also included in this report. As more data becomes available, and WRAP gains in operational experience, this report will be reviewed semi-annually and updated as necessary.
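The statistical summation called essential above can be sketched for a counting measurement: Poisson counting uncertainty combined in quadrature with calibration and matrix components, then expanded to a higher confidence level. All values are illustrative assumptions, not WRAP data:

```python
import math

gross, bkg = 40_000, 2_500        # gross and background counts (illustrative)
net = gross - bkg

# Poisson counting statistics propagated to the net count (1 sigma, relative)
u_count = math.sqrt(gross + bkg) / net

u_cal    = 0.03   # calibration-standard component, relative 1 sigma (assumed)
u_matrix = 0.05   # matrix/inhomogeneity correction, relative 1 sigma (assumed)

# statistical summation in quadrature, then expansion to 95 % confidence
tmu_1s = math.sqrt(u_count**2 + u_cal**2 + u_matrix**2)
tmu_95 = 1.96 * tmu_1s
```

Note how the counting term, though irreducible, is dwarfed here by the assumed matrix correction; ranking components this way is exactly the significance analysis the report describes.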
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
Arbanas, Goran; Williams, Mark L; Leal, Luiz C; Dunn, Michael E; Khuwaileh, Bassam A.; Wang, C; Abdel-Khalik, Hany
2015-01-01
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX system [1]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with the uncertainties, needed to yield a given target response uncertainty for one or more nuclear applications, at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore we have incorporated integral benchmark experiments (IBEs) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method can be applied to systematic and statistical uncertainties in a self-consistent way, and how it can be used to optimize uncertainties of IBEs and differential cross section data simultaneously.
Position-momentum uncertainty relations in the presence of quantum memory
Furrer, Fabian; Berta, Mario; Tomamichel, Marco; Scholz, Volkher B.; Christandl, Matthias
2014-12-15
A prominent formulation of the uncertainty principle identifies the fundamental quantum feature that no particle may be prepared with certain outcomes for both position and momentum measurements. Often the statistical uncertainties are thereby measured in terms of entropies, providing a clear operational interpretation in information theory and cryptography. Recently, entropic uncertainty relations have been used to show that the uncertainty can be reduced in the presence of entanglement and to prove security of quantum cryptographic tasks. However, much of this recent progress has been focused on observables with only a finite number of outcomes, not including Heisenberg's original setting of position and momentum observables. Here, we show entropic uncertainty relations for general observables with discrete but infinite or continuous spectrum that take into account the power of an entangled observer. As an illustration, we evaluate the uncertainty relations for position and momentum measurements, which is operationally significant in that it implies security of a quantum key distribution scheme based on homodyne detection of squeezed Gaussian states.
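For orientation, the finite-outcome memory-assisted relation and its continuous position-momentum analogue take roughly the following form; logarithm bases, units, and constant conventions differ between papers, so treat this as a sketch rather than the exact statement proved in this work:

```latex
% Tripartite relation for measurements R, S on system A, with quantum
% memories B and E; c = max_{r,s} |<r|s>|^2 measures complementarity:
H(R \mid B) + H(S \mid E) \;\ge\; \log_2 \frac{1}{c}

% Continuous-variable analogue for position Q and momentum P
% (differential entropies; convention-dependent constant):
h(Q \mid B) + h(P \mid E) \;\ge\; \log 2\pi
```

The entanglement of the observer enters through the conditional entropies on the left: strong correlations with the memories can drive them down toward the bound, which is precisely the effect exploited in entanglement-based key distribution proofs.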
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
Perkó, Zoltán; Gilli, Luca; Lathouwers, Danny; Kloosterman, Jan Leen
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well-developed methods such as first-order perturbation theory or Monte Carlo sampling, Polynomial Chaos Expansion (PCE) has been given growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while retaining a similar accuracy to the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high-order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistent good performance both in
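Non-Intrusive Spectral Projection in its simplest one-dimensional form can be sketched with probabilists' Hermite polynomials and Gauss-Hermite quadrature; the toy response function is an assumption, and the adaptive sparse-grid machinery of the paper is of course not reproduced here:

```python
import math
import numpy as np

# Probabilists' Hermite polynomials He_k are orthogonal under N(0,1)
# with <He_j, He_k> = k! * delta_jk.
deg, order = 20, 4
nodes, weights = np.polynomial.hermite_e.hermegauss(deg)
weights = weights / np.sqrt(2.0 * np.pi)  # quadrature -> N(0,1) expectation

def f(x):                                 # toy response of a normal input
    return x**2 + 0.5 * x

# NISP: project the response onto each basis polynomial
coeffs = []
for k in range(order + 1):
    He_k = np.polynomial.hermite_e.hermeval(nodes, [0.0] * k + [1.0])
    coeffs.append(np.sum(weights * f(nodes) * He_k) / math.factorial(k))

mean = coeffs[0]                                    # PCE mean
var = sum(c * c * math.factorial(k)                 # PCE variance from the
          for k, c in enumerate(coeffs) if k > 0)   # higher-order terms
```

For this polynomial response the projection is exact (mean 1, variance 2.25), and the near-zero coefficients beyond order 2 illustrate why a sparse basis, as in the paper, can be dropped without loss.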
Qin, A; Yan, D
2014-06-15
Purpose: To evaluate uncertainties of organ-specific Deformable Image Registration (DIR) for H and N cancer Adaptive Radiation Therapy (ART). Methods: A commercial DIR evaluation tool, which includes a digital phantom library of 8 patients and the corresponding “Ground truth Deformable Vector Field” (GT-DVF), was used in the study. Each patient in the phantom library includes the GT-DVF created from a pair of CT images acquired prior to and at the end of the treatment course. Five DIR tools, including 2 commercial tools (CMT1, CMT2), 2 in-house (IH-FFD1, IH-FFD2), and a classic DEMON algorithm, were applied to the patient images. The resulting DVF was compared to the GT-DVF voxel by voxel. Organ-specific DVF uncertainty was calculated for 10 ROIs: Whole Body, Brain, Brain Stem, Cord, Lips, Mandible, Parotid, Esophagus and Submandibular Gland. Registration error-volume histograms were constructed for comparison. Results: The uncertainty is relatively small for the brain stem, cord and lips, while large in the parotid and submandibular gland. CMT1 achieved the best overall accuracy (on whole body, mean vector error of 8 patients: 0.98±0.29 mm). For brain, mandible, right parotid, left parotid and submandibular gland, the classic Demon algorithm had the lowest uncertainty (0.49±0.09, 0.51±0.16, 0.46±0.11, 0.50±0.11 and 0.69±0.47 mm, respectively). For brain stem, cord and lips, the DVF from CMT1 had the best accuracy (0.28±0.07, 0.22±0.08 and 0.27±0.12 mm, respectively). All algorithms had the largest right parotid uncertainty on patient #7, which has an image artifact caused by tooth implantation. Conclusion: Uncertainty of deformable CT image registration depends strongly on the registration algorithm and is organ specific. Large uncertainty most likely appears at the location of soft-tissue organs far from bony structures. Among all 5 DIR methods, the classic DEMON and CMT1 seem to be the best at limiting the uncertainty to within 2 mm for all OARs. Partially supported by
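The voxel-by-voxel comparison against a ground-truth DVF reduces to simple vector-error statistics. The synthetic displacement fields below stand in for real registration output; grid size and noise levels are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (40, 40, 20)                     # toy voxel grid

# stand-ins for a ground-truth DVF and an algorithm's DVF (mm displacements)
gt_dvf = rng.normal(0.0, 2.0, shape + (3,))
test_dvf = gt_dvf + rng.normal(0.0, 0.5, shape + (3,))

err = np.linalg.norm(test_dvf - gt_dvf, axis=-1)   # voxelwise vector error
mean_err, sd_err = err.mean(), err.std()           # e.g. the 0.98+/-0.29 style

# registration error-volume histogram: fraction of voxels above each threshold
thresholds = np.linspace(0.0, 3.0, 31)
revh = [(err > t).mean() for t in thresholds]
```

Restricting `err` to an organ mask before averaging yields the organ-specific uncertainties reported in the abstract; the `revh` curve is the error-volume histogram used for comparison.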
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
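The covariance-based variance propagation that SENSIT performs via generalized perturbation theory is, at its core, the "sandwich rule". The sensitivities and covariance entries below are illustrative assumptions, not SENSIT output:

```python
import numpy as np

# relative sensitivity profile of an integral response (e.g. a dose rate)
# to three multigroup cross sections: dR/R per dS/S
s = np.array([0.8, -0.3, 0.1])

# relative covariance matrix of the cross sections
# (4%, 6%, 10% std devs; the last two correlated at 0.5)
C = np.array([
    [0.04**2, 0.0,               0.0],
    [0.0,     0.06**2,           0.5 * 0.06 * 0.10],
    [0.0,     0.5 * 0.06 * 0.10, 0.10**2],
])

var_R = s @ C @ s        # sandwich rule: relative variance of the response
std_R = np.sqrt(var_R)   # relative standard deviation of the response
```

The negative sensitivity and the positive cross-covariance partially offset here, which is why full covariance matrices, not just diagonal variances, are needed for a defensible uncertainty estimate.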
Yurko, J. P.; Buongiorno, J.
2012-07-01
Propagating parameter uncertainty for a nuclear reactor system code is a challenging problem due to the often non-linear system response to the numerous parameters involved and lengthy computational times; issues that compound when a statistical sampling procedure is adopted, since the code must be run many times. The number of parameters sampled must therefore be limited to as few as possible that still accurately characterize the uncertainty in the system response. A Quantitative Phenomena Identification and Ranking Table (QPIRT) was developed to accomplish this goal. The QPIRT consists of two steps: a 'Top-Down' step focusing on identifying the dominant physical phenomena controlling the system response, and a 'Bottom-Up' step which focuses on determining the correlations from those key physical phenomena that significantly contribute to the response uncertainty. The Top-Down step evaluates phenomena using the governing equations of the system code at nominal parameter values, providing a 'fast' screening step. The Bottom-Up step then analyzes the correlations and models for the phenomena identified from the Top-Down step to find which parameters to sample. The QPIRT, through the Top-Down and Bottom-Up steps, thus provides a systematic approach to determining the limited set of physically relevant parameters that influence the uncertainty of the system response. This strategy was demonstrated through an application to the RELAP5-based analysis of a PWR Total Loss of main Feedwater Flow (TLOFW) accident, also known as a 'feed and bleed' scenario. Ultimately, this work is the first component in a larger task of building a calibrated uncertainty propagation framework. The QPIRT is an essential piece because the uncertainty of the selected parameters will be calibrated to data from both Separate and Integral Effect Tests (SETs and IETs). Therefore the system response uncertainty will incorporate the knowledge gained from the database of past large IETs. (authors)
The ends of uncertainty: Air quality science and planning in Central California
Fine, James
2003-09-01
Air quality planning in Central California is complicated and controversial despite millions of dollars invested to improve scientific understanding. This research describes and critiques the use of photochemical air quality simulation modeling studies in planning to attain standards for ground-level ozone in the San Francisco Bay Area and the San Joaquin Valley during the 1990s. Data are gathered through documents and interviews with planners, modelers, and policy-makers at public agencies and with representatives from the regulated and environmental communities. Interactions amongst organizations are diagrammed to identify significant nodes of interaction. Dominant policy coalitions are described through narratives distinguished by their uses of and responses to uncertainty, their exposures to risks, and their responses to the principles of conservatism, civil duty, and caution. Policy narratives are delineated using aggregated respondent statements to describe and understand advocacy coalitions. I found that models impacted the planning process significantly, but were not used purely for their scientific capabilities. Modeling results provided justification for decisions based on other constraints and political considerations. Uncertainties were utilized opportunistically by stakeholders instead of managed explicitly. Ultimately, the process supported the partisan views of those in control of the modeling. Based on these findings, as well as a review of model uncertainty analysis capabilities, I recommend modifying the planning process to allow for the development and incorporation of uncertainty information, while addressing the need for inclusive and meaningful public participation. By documenting an actual air quality planning process, these findings provide insights about the potential for using new scientific information and understanding to achieve environmental goals, most notably the analysis of uncertainties in modeling applications. Concurrently, needed
Westsik, G.A.
2001-06-06
This report presents the results of an evaluation of the Total Measurement Uncertainty (TMU) for the Canberra manufactured Segmented Gamma Scanner Assay System (SGSAS) as employed at the Hanford Plutonium Finishing Plant (PFP). In this document, TMU embodies the combined uncertainties due to all of the individual random and systematic sources of measurement uncertainty. It includes uncertainties arising from corrections and factors applied to the analysis of transuranic waste to compensate for inhomogeneities and interferences from the waste matrix and radioactive components. These include uncertainty components for any assumptions contained in the calibration of the system or computation of the data. Uncertainties are propagated at 1 sigma. The final total measurement uncertainty value is reported at the 95% confidence level. The SGSAS is a gamma assay system that is used to assay plutonium and uranium waste. The SGSAS system can be used in a stand-alone mode to perform the NDA characterization of a container, particularly for low to medium density (0-2.5 g/cc) container matrices. The SGSAS system provides a full gamma characterization of the container content. This document is an edited version of the Rocky Flats TMU Report for the Can Scan Segment Gamma Scanners, which are in use for the plutonium residues projects at the Rocky Flats plant. The can scan segmented gamma scanners at Rocky Flats are the same design as the PFP SGSAS system and use the same software (with the exception of the plutonium isotopics software). Therefore, all performance characteristics are expected to be similar. Modifications in this document reflect minor differences in the system configuration, container packaging, calibration technique, etc. These results are supported by the Quality Assurance Objective (QAO) counts, safeguards test data, calibration data, etc. for the PFP SGSAS system. Other parts of the TMU analysis utilize various modeling techniques such as Monte Carlo N
State-of-the-Art Solar Simulator Reduces Measurement Time and Uncertainty (Fact Sheet)
Not Available
2012-04-01
One-Sun Multisource Solar Simulator (OSMSS) brings accurate energy-rating predictions that account for the nonlinear behavior of multijunction photovoltaic devices. The National Renewable Energy Laboratory (NREL) is one of only a few International Organization for Standardization (ISO)-accredited calibration labs in the world for primary and secondary reference cells and modules. As such, it is critical to seek new horizons in developing simulators and measurement methods. Current solar simulators are not well suited for accurately measuring multijunction devices. To set the electrical current to each junction independently, simulators must precisely tune the spectral content with no overlap between the wavelength regions. Current simulators do not have this capability, and the overlaps lead to large measurement uncertainties of ±6%. In collaboration with LabSphere, NREL scientists have designed and implemented the One-Sun Multisource Solar Simulator (OSMSS), which enables automatic spectral adjustment with nine independent wavelength regions. This fiber-optic simulator allows researchers and developers to set the current to each junction independently, reducing errors relating to spectral effects. NREL also developed proprietary software that allows this fully automated simulator to rapidly 'build' a spectrum under which all junctions of a multijunction device are current matched and behave as they would under a reference spectrum. The OSMSS will reduce the measurement uncertainty for multijunction devices, while significantly reducing the current-voltage measurement time from several days to minutes. These features will enable highly accurate energy-rating predictions that take into account the nonlinear behavior of multijunction photovoltaic devices.
Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov
2012-10-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTR analysis tools are today typically assessed with sensitivity analysis, and then a few important input uncertainties (typically based on a PIRT process) are varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracies of coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is a broader acceptance of the use of uncertainty analysis even in safety studies, and it has been accepted by regulators in some cases to replace the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project is built on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined: the prismatic type is represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper, more detail on the benchmark cases, the different specific phases and tasks and the latest
Lei, Huan; Yang, Xiu; Zheng, Bin; Lin, Guang; Baker, Nathan A.
2015-12-31
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.
2011-12-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
Uncertainty Analysis for a Virtual Flow Meter Using an Air-Handling Unit Chilled Water Valve
Song, Li; Wang, Gang; Brambley, Michael R.
2013-04-28
A virtual water flow meter is developed that uses the chilled water control valve on an air-handling unit as a measurement device. The flow rate of water through the valve is calculated using the differential pressure across the valve and its associated coil, the valve command, and an empirically determined valve characteristic curve. Thus, the probability of error in the measurements is significantly greater than for conventionally manufactured flow meters. In this paper, mathematical models are developed and used to conduct uncertainty analysis for the virtual flow meter, and the results from the virtual meter are compared to measurements made with an ultrasonic flow meter. Theoretical uncertainty analysis shows that the total uncertainty in flow rates from the virtual flow meter is 1.46% with 95% confidence; comparison of virtual flow meter results with measurements from an ultrasonic flow meter yielded an uncertainty of 1.46% with 99% confidence. The comparable results from the theoretical uncertainty analysis and empirical comparison with the ultrasonic flow meter corroborate each other, and tend to validate the approach to computationally estimating uncertainty for virtual sensors introduced in this study.
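The kind of first-order uncertainty propagation used for a valve-based virtual flow meter can be sketched in a few lines. The sketch below is illustrative only, not the paper's model: the simplified valve relation Q = Cv * sqrt(dP) and all numerical values are assumptions.

```python
import math

def flow_rate(cv: float, dp: float) -> float:
    """Hypothetical valve flow model: Q = Cv * sqrt(dP)."""
    return cv * math.sqrt(dp)

def flow_uncertainty(cv: float, dp: float, u_cv: float, u_dp: float) -> float:
    """First-order (root-sum-square) propagation of the uncertainties
    in Cv and dP through the flow model."""
    dq_dcv = math.sqrt(dp)                  # partial derivative dQ/dCv
    dq_ddp = cv / (2.0 * math.sqrt(dp))     # partial derivative dQ/ddP
    return math.hypot(dq_dcv * u_cv, dq_ddp * u_dp)

q = flow_rate(10.0, 4.0)
u = flow_uncertainty(10.0, 4.0, 0.1, 0.2)
print(round(q, 1), round(u, 4))  # 20.0 0.5385
```

With the assumed inputs, the relative flow uncertainty is u/q, roughly 2.7%; a full analysis like the paper's would add terms for the valve command and the empirical characteristic curve.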
Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.; Najm, Habib N.; Debusschere, Bert
2015-08-11
We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.
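The polynomial chaos expansions (PCEs) mentioned above can be illustrated with a minimal one-dimensional example. This is not the authors' code: it assumes a toy model output f(ξ) = (ξ+1)² of a standard normal input ξ, projects it onto probabilists' Hermite polynomials with a 3-point Gauss-Hermite quadrature rule, and reads the mean and variance off the coefficients.

```python
import math

def he(k: int, x: float) -> float:
    """Probabilists' Hermite polynomials He_0..He_2."""
    return [1.0, x, x * x - 1.0][k]

# 3-point Gauss-Hermite (probabilists') rule, exact for polynomials up to degree 5
NODES = [0.0, math.sqrt(3.0), -math.sqrt(3.0)]
WEIGHTS = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]

def pce_coefficients(f, order: int = 2) -> list:
    """Project f(xi), xi ~ N(0,1), onto He_k: c_k = E[f(xi) He_k(xi)] / k!."""
    return [sum(w * f(x) * he(k, x) for x, w in zip(NODES, WEIGHTS)) / math.factorial(k)
            for k in range(order + 1)]

f = lambda x: (x + 1.0) ** 2          # toy "model output" (an assumption)
c = pce_coefficients(f)
mean = c[0]                           # mean is the zeroth coefficient
variance = sum(math.factorial(k) * c[k] ** 2 for k in range(1, len(c)))
print([round(ck, 6) for ck in c], mean, variance)  # [2.0, 2.0, 1.0] 2.0 6.0
```

The same projection idea underlies the multivariate expansions used to couple atomistic and continuum components, with the quadrature replaced by sampling of the MD output.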
Optimization Under Uncertainty for Water Consumption in a Pulverized Coal Power Plant
Juan M. Salazar; Stephen E. Zitney; Urmila M. Diwekar
2009-01-01
Pulverized coal (PC) power plants are widely recognized as major water consumers whose operability has started to be affected by drought conditions across some regions of the country. Water availability will further restrict the retrofitting of existing PC plants with water-expensive carbon capture technologies. Therefore, national efforts to reduce water withdrawal and consumption have been intensified. Water consumption in PC plants is strongly associated with losses from the cooling water cycle, particularly water evaporation from cooling towers. Accurate estimation of these water losses requires realistic cooling tower models, as well as the inclusion of uncertainties arising from atmospheric conditions. In this work, the cooling tower for a supercritical PC power plant was modeled as a humidification operation and used for optimization under uncertainty. Characterization of the uncertainty (air temperature and humidity) was based on available weather data. Process characteristics including boiler conditions, reactant ratios, and pressure ratios in turbines were calculated to obtain the minimum water consumption under the above mentioned uncertainties. In this study, the calculated conditions predicted up to a 12% reduction in the average water consumption for a 548 MW supercritical PC power plant simulated using Aspen Plus. Optimization under uncertainty for these large-scale PC plants cannot be solved with conventional stochastic programming algorithms because of the computational expenses involved. In this work, we discuss the use of a novel better optimization of nonlinear uncertain systems (BONUS) algorithm which dramatically decreases the computational requirements of the stochastic optimization.
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.; Ross, Kyle; Cardoni, Jeffrey N; Kalinich, Donald A.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie; Ghosh, S. Tina
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences, of each of the selected uncertain parameters both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
Petruzzi, A.; D'Auria, F.; Giannotti, W.; Ivanov, K.
2005-02-15
The best-estimate calculation results from complex system codes are affected by approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. The code with (the capability of) internal assessment of uncertainty (CIAU) has been previously proposed by the University of Pisa to realize the integration between a qualified system code and an uncertainty methodology and to supply proper uncertainty bands each time a nuclear power plant (NPP) transient scenario is calculated. The derivation of the methodology and the results achieved by the use of CIAU are discussed to demonstrate the main features and capabilities of the method. In a joint effort between the University of Pisa and The Pennsylvania State University, the CIAU method has been recently extended to evaluate the uncertainty of coupled three-dimensional neutronics/thermal-hydraulics calculations. The result is CIAU-TN. The feasibility of the approach has been demonstrated, and sample results related to the turbine trip transient in the Peach Bottom NPP are shown. Notwithstanding that the full implementation and use of the procedure requires a database of errors not available at the moment, the results give an idea of the errors expected from the present computational tools.
nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties in the CTEQ framework
Kovarik, K.; Kusina, A.; Jezo, T.; Clark, D. B.; Keppel, C.; Lyonnet, F.; Morfin, J. G.; Olness, F. I.; Owens, J. F.; Schienbein, I.; et al
2016-04-28
We present the new nCTEQ15 set of nuclear parton distribution functions with uncertainties. This fit extends the CTEQ proton PDFs to include the nuclear dependence using data on nuclei all the way up to 208Pb. The uncertainties are determined using the Hessian method with an optimal rescaling of the eigenvectors to accurately represent the uncertainties for the chosen tolerance criteria. In addition to the Deep Inelastic Scattering (DIS) and Drell-Yan (DY) processes, we also include inclusive pion production data from RHIC to help constrain the nuclear gluon PDF. Here, we investigate the correlation of the data sets with specific nPDF flavor components, and assess the impact of individual experiments. We also provide comparisons of the nCTEQ15 set with recent fits from other groups.
Eslick, John C.; Ng, Brenda; Gao, Qianwen; Tong, Charles H.; Sahinidis, Nikolaos V.; Miller, David C.
2014-12-31
Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
Levine, S.; Kaiser, G. D.; Arcieri, W. C.; Firstenberg, H.; Fulford, P. J.; Lam, P. S.; Ritzman, R. L.; Schmidt, E. R.
1982-03-01
The purpose of this document is to assess the state of knowledge and expert opinions that exist about fission product source terms from potential nuclear power plant accidents. This is so that recommendations can be made for research and analyses which have the potential to reduce the uncertainties in these estimated source terms and to derive improved methods for predicting their magnitudes. The main reasons for writing this report are to indicate the major uncertainties involved in defining realistic source terms that could arise from severe reactor accidents, to determine which factors would have the most significant impact on public risks and emergency planning, and to suggest research and analyses that could result in the reduction of these uncertainties. Source terms used in the conventional consequence calculations in the licensing process are not explicitly addressed.
Uncertainty Analysis of Spectral Irradiance Reference Standards Used for NREL Calibrations
Habte, A.; Andreas, A.; Reda, I.; Campanelli, M.; Stoffel, T.
2013-05-01
Spectral irradiance produced by lamp standards such as the National Institute of Standards and Technology (NIST) FEL-type tungsten halogen lamps are used to calibrate spectroradiometers at the National Renewable Energy Laboratory. Spectroradiometers are often used to characterize spectral irradiance of solar simulators, which in turn are used to characterize photovoltaic device performance, e.g., power output and spectral response. Therefore, quantifying the calibration uncertainty of spectroradiometers is critical to understanding photovoltaic system performance. In this study, we attempted to reproduce the NIST-reported input variables, including the calibration uncertainty in spectral irradiance for a standard NIST lamp, and quantify uncertainty for measurement setup at the Optical Metrology Laboratory at the National Renewable Energy Laboratory.
Uncertainty quantification of a radionuclide release model using an adaptive spectral technique
Gilli, L.; Hoogwerf, C.; Lathouwers, D.; Kloosterman, J. L.
2013-07-01
In this paper we present the application of a non-intrusive spectral technique we recently developed for the evaluation of the uncertainties associated with a radionuclide migration problem. Spectral techniques can be used to reconstruct stochastic quantities of interest by means of a Fourier-like expansion. Their application to uncertainty propagation problems can be performed by evaluating a set of realizations which are chosen adaptively; in this work, the main details about how this is done are presented. The uncertainty quantification problem we deal with was first solved in a recent work in which the authors used a spectral technique based on an intrusive approach. In this paper we reproduce the results of that reference work, compare them, and discuss the main numerical aspects.
Franco, Guillermo; Shen-Tu, Bing Ming; Bazzurro, Paolo; Goretti, Agostino; Valensise, Gianluca
2008-07-08
Increasing sophistication in the insurance and reinsurance market is stimulating the move towards catastrophe models that offer a greater degree of flexibility in the definition of model parameters and model assumptions. This study explores the impact of uncertainty in the input parameters on the loss estimates by departing from the exclusive usage of mean values to establish the earthquake event mechanism, the ground motion fields, or the damageability of the building stock. Here the potential losses due to a repeat of the 1908 Messina-Reggio Calabria event are calculated using different plausible alternatives found in the literature that encompass 12 event scenarios, 2 different ground motion prediction equations, and 16 combinations of damage functions for the building stock, a total of 384 loss scenarios. These results constitute the basis for a sensitivity analysis of the different assumptions on the loss estimates that allows the model user to estimate the impact of the uncertainty on input parameters and the potential spread of the model results. For the event under scrutiny, average losses would amount today to about 9,000 to 10,000 million euros. The uncertainty in the model parameters is reflected in the high coefficient of variation of this loss, reaching approximately 45%. The choice of ground motion prediction equations and vulnerability functions of the building stock contribute the most to the uncertainty in loss estimates. This indicates that the application of non-local-specific information has a great impact on the spread of potential catastrophic losses. In order to close this uncertainty gap, more exhaustive documentation practices in insurance portfolios will have to go hand in hand with greater flexibility in the model input parameters.
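The 384 loss scenarios above arise as the Cartesian product of the three sets of modeling choices. A minimal sketch (the labels are placeholders, not the study's actual parameter sets):

```python
from itertools import product

event_scenarios = [f"event_{i}" for i in range(12)]    # plausible source mechanisms
gmpes = ["gmpe_A", "gmpe_B"]                           # ground motion prediction equations
damage_functions = [f"dmg_{i}" for i in range(16)]     # vulnerability-function combinations

# Every combination of the three choices defines one loss scenario to run.
scenarios = list(product(event_scenarios, gmpes, damage_functions))
print(len(scenarios))  # 12 * 2 * 16 = 384
```

Running the loss model once per tuple and summarizing the spread of results is exactly the sensitivity analysis the abstract describes.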
Hu, Jianwei; Gauld, Ian C.
2014-12-01
The U.S. Department of Energy’s Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focusing on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modelling and simulation is expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of the NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions and instrument responses. Nuclear data is an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties existing in the nuclear data used in these codes affect the accuracies of such analysis. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of the NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.
Seitz, R.
2011-03-02
It is widely recognized that the results of safety assessment calculations provide an important contribution to the safety arguments for a disposal facility, but cannot in themselves adequately demonstrate the safety of the disposal system. The safety assessment and a broader range of arguments and activities need to be considered holistically to justify radioactive waste disposal at any particular site. Many programs are therefore moving towards the production of what has become known as a Safety Case, which includes all of the different activities that are conducted to demonstrate the safety of a disposal concept. Recognizing the growing interest in the concept of a Safety Case, the International Atomic Energy Agency (IAEA) is undertaking an intercomparison and harmonization project called PRISM (Practical Illustration and use of the Safety Case Concept in the Management of Near-surface Disposal). The PRISM project is organized into four Task Groups that address key aspects of the Safety Case concept: Task Group 1 - Understanding the Safety Case; Task Group 2 - Disposal facility design; Task Group 3 - Managing waste acceptance; and Task Group 4 - Managing uncertainty. This paper addresses the work of Task Group 4, which is investigating approaches for managing the uncertainties associated with near-surface disposal of radioactive waste and their consideration in the context of the Safety Case. Emphasis is placed on identifying a wide variety of approaches that can and have been used to manage different types of uncertainties, especially non-quantitative approaches that have not received as much attention in previous IAEA projects. This paper includes discussions of the current results of work on the task on managing uncertainty, including: the different circumstances being considered, the sources/types of uncertainties being addressed and some initial proposals for approaches that can be used to manage different types of uncertainties.
Hu, Jianwei; Gauld, Ian C.
2014-12-01
The U.S. Department of Energy’s Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focusing on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modelling and simulation is expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions and instrument responses. Nuclear data is an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties existing in the nuclear data used in these codes affect the accuracies of such analysis. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of the NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.
Uncertainty Quantification of Composite Laminate Damage with the Generalized Information Theory
J. Lucero; F. Hemez; T. Ross; K. Kline; J. Hundhausen; T. Tippetts
2006-05-01
This work presents a survey of five theories to assess the uncertainty of projectile impact induced damage on multi-layered carbon-epoxy composite plates. Because the types of uncertainty dealt with in this application are multiple (variability, ambiguity, and conflict) and because the data sets collected are sparse, characterizing the amount of delamination damage with probability theory alone is possible but incomplete. This motivates the exploration of methods contained within a broad Generalized Information Theory (GIT) that rely on less restrictive assumptions than probability theory. Probability, fuzzy sets, possibility, and imprecise probability (probability boxes (p-boxes) and Dempster-Shafer) are used to assess the uncertainty in composite plate damage. Furthermore, this work highlights the usefulness of each theory. The purpose of the study is not to compare directly the different GIT methods but to show that they can be deployed on a practical application and to compare the assumptions upon which these theories are based. The data sets consist of experimental measurements and finite element predictions of the amount of delamination and fiber splitting damage as multilayered composite plates are impacted by a projectile at various velocities. The physical experiments consist of using a gas gun to impact suspended plates with a projectile accelerated to prescribed velocities, then, taking ultrasound images of the resulting delamination. The nonlinear, multiple length-scale numerical simulations couple local crack propagation implemented through cohesive zone modeling to global stress-displacement finite element analysis. The assessment of damage uncertainty is performed in three steps by, first, considering the test data only; then, considering the simulation data only; finally, performing an assessment of total uncertainty where test and simulation data sets are combined. This study leads to practical recommendations for reducing the uncertainty and
ROBUSTNESS OF DECISION INSIGHTS UNDER ALTERNATIVE ALEATORY/EPISTEMIC UNCERTAINTY CLASSIFICATIONS
Unwin, Stephen D.; Eslinger, Paul W.; Johnson, Kenneth I.
2013-09-22
The Risk-Informed Safety Margin Characterization (RISMC) pathway is a set of activities defined under the U.S. Department of Energy Light Water Reactor Sustainability Program. The overarching objective of RISMC is to support plant life-extension decision-making by providing a state-of-knowledge characterization of safety margins in key systems, structures, and components (SSCs). A key technical challenge is to establish the conceptual and technical feasibility of analyzing safety margin in a risk-informed way, which, unlike conventionally defined deterministic margin analysis, would be founded on probabilistic characterizations of SSC performance. Evaluation of probabilistic safety margins will in general entail the uncertainty characterization both of the prospective challenge to the performance of an SSC ("load") and of its "capacity" to withstand that challenge. The RISMC framework contrasts sharply with the traditional probabilistic risk assessment (PRA) structure in that the underlying models are not inherently aleatory. Rather, they are largely deterministic physical/engineering models with ambiguities about the appropriate uncertainty classification of many model parameters. The current analysis demonstrates that if the distinction between epistemic and aleatory uncertainties is to be preserved in a RISMC-like modeling environment, then it is unlikely that analysis insights supporting decision-making will in general be robust under recategorization of input uncertainties. If it is believed there is a true conceptual distinction between epistemic and aleatory uncertainty (as opposed to the distinction being primarily a legacy of the PRA paradigm) then a consistent and defensible basis must be established by which to categorize input uncertainties.
Quantification of margins and uncertainty for risk-informed decision analysis.
Alvin, Kenneth Fredrick
2010-09-01
QMU stands for 'Quantification of Margins and Uncertainties'. QMU is a basic framework for consistency in integrating simulation, data, and/or subject matter expertise to provide input into a risk-informed decision-making process. QMU is being applied to a wide range of NNSA stockpile issues, from performance to safety. The implementation of QMU varies with lab and application focus. The Advanced Simulation and Computing (ASC) Program develops validated computational simulation tools to be applied in the context of QMU. QMU provides input into a risk-informed decision making process. The completeness aspect of QMU can benefit from the structured methodology and discipline of quantitative risk assessment (QRA)/probabilistic risk assessment (PRA). In characterizing uncertainties it is important to pay attention to the distinction between those arising from incomplete knowledge ('epistemic' or systematic), and those arising from device-to-device variation ('aleatory' or random). The national security labs should investigate the utility of a probability of frequency (PoF) approach in presenting uncertainties in the stockpile. A QMU methodology is connected if the interactions between failure modes are included. The design labs should continue to focus attention on quantifying uncertainties that arise from epistemic uncertainties such as poorly-modeled phenomena, numerical errors, coding errors, and systematic uncertainties in experiment. The NNSA and design labs should ensure that the certification plan for any RRW is supported by strong, timely peer review and by an ongoing, transparent QMU-based documentation and analysis in order to permit a confidence level necessary for eventual certification.
On the short-term uncertainty in performance of a point absorber wave energy converter
U.S. Department of Energy (DOE) all webpages (Extended Search)
Lance Manuel and Jarred Canning (University of Texas at Austin, Austin, TX, USA; corresponding author: lmanuel@mail.utexas.edu); Ryan G. Coe and Carlos Michelen (Sandia National Laboratories, Albuquerque, NM, USA). Of interest in this study is the quantification of uncertainty in the performance of a two-body wave point absorber (Reference Model 3 or RM3), which serves as a wave energy converter.
Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows
Templeton, Jeremy Alan; Blaylock, Myra L.; Domino, Stefan P.; Hewson, John C.; Kumar, Pritvi Raj; Ling, Julia; Najm, Habib N.; Ruiz, Anthony; Safta, Cosmin; Sargsyan, Khachik; Stewart, Alessia; Wagner, Gregory
2015-09-01
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost versus accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
Constantinescu, E. M; Zavala, V. M.; Rocklin, M.; Lee, S.; Anitescu, M.
2011-02-01
We present a computational framework for integrating a state-of-the-art numerical weather prediction (NWP) model in stochastic unit commitment/economic dispatch formulations that account for wind power uncertainty. We first enhance the NWP model with an ensemble-based uncertainty quantification strategy implemented in a distributed-memory parallel computing architecture. We discuss computational issues arising in the implementation of the framework and validate the model using real wind-speed data obtained from a set of meteorological stations. We build a simulated power system to demonstrate the developments.
Modeling and Uncertainty Quantification of Vapor Sorption and Diffusion in Heterogeneous Polymers
Sun, Yunwei; Harley, Stephen J.; Glascoe, Elizabeth A.
2015-08-13
A high-fidelity model of kinetic and equilibrium sorption and diffusion is developed and exercised. The gas-diffusion model is coupled with a triple-sorption mechanism: Henry’s law absorption, Langmuir adsorption, and pooling or clustering of molecules at higher partial pressures. Sorption experiments are conducted and span a range of relative humidities (0-95 %) and temperatures (30-60 °C). Kinetic and equilibrium sorption properties and effective diffusivity are determined by minimizing the absolute difference between measured and modeled uptakes. Uncertainty quantification and sensitivity analysis methods are described and exercised herein to demonstrate the capability of this modeling approach. Water uptake in silica-filled and unfilled poly(dimethylsiloxane) networks is investigated; however, the model is versatile enough to be used with a wide range of materials and vapors.
Rapidity gap survival in central exclusive diffraction: Dynamical mechanisms and uncertainties
Strikman, Mark; Weiss, Christian
2009-01-01
We summarize our understanding of the dynamical mechanisms governing rapidity gap survival in central exclusive diffraction, pp -> p + H + p (H = high-mass system), and discuss the uncertainties in present estimates of the survival probability. The main suppression of diffractive scattering is due to inelastic soft spectator interactions at small pp impact parameters and can be described in a mean-field approximation (independent hard and soft interactions). Moderate extra suppression results from fluctuations of the partonic configurations of the colliding protons. At LHC energies absorptive interactions of hard spectator partons associated with the gg -> H process reach the black-disk regime and cause substantial additional suppression, pushing the survival probability below 0.01.
Reda, I.; Stoffel, T.; Habte, A.
2014-03-01
The National Renewable Energy Laboratory (NREL) and the Atmospheric Radiation Measurement (ARM) Climate Research Facility work together in providing data from strategically located in situ measurement observatories around the world. Both work together in improving and developing new technologies that assist in acquiring high quality radiometric data. In this presentation we summarize the uncertainty estimates of the ARM data collected at the ARM Solar Infrared Radiation Station (SIRS), Sky Radiometers on Stand for Downwelling Radiation (SKYRAD), and Ground Radiometers on Stand for Upwelling Radiation (GNDRAD), which ultimately improve the existing radiometric data. Three studies are also included to show the difference between calibrating pyrgeometers (e.g., Eppley PIR) using the manufacturer blackbody versus the interim World Infrared Standard Group (WISG), a pyrgeometer aging study, and the sampling rate effect of correcting historical data.
Which Models Matter: Uncertainty and Sensitivity Analysis for
U.S. Department of Energy (DOE) all webpages (Extended Search)
of Energy EPS Billboard) Which Bulb Is Right for You? (High-Resolution EPS Billboard) High-resolution EPS of billboard reading, 'Which bulb is right for you? Save energy, save money. Energysaver.gov.' DoE_Billboard_Which_Bulb.eps (11.05 MB) More Documents & Publications Which Bulb Is Right for You? (High-Resolution JPG Billboard) Which Bulb Is Right for You? (Low-Resolution JPG Billboard) Goodbye, Watts. Hello, Lumens. (High-Resolution EPS of Energy
JPG Billboard) Which Bulb Is
Influence of Advanced Fuel Cycles on Uncertainty in the Performance...
Office of Scientific and Technical Information (OSTI)
Resource Relation: Conference: Proposed for presentation at the International High-Level Radioactive Waste Management Conference held April 28 - May 2, 2013 in Albuquerque, NM...
Keller, Dustin M.
2013-11-01
A comprehensive investigation into the measurement uncertainty in polarization produced by Dynamic Nuclear Polarization is outlined. The polarization data taken during Jefferson Lab experiment E08-007 is used to obtain error estimates and to develop an algorithm to minimize uncertainty of the measurement of polarization in irradiated ¹⁴NH₃ targets, which is readily applied to other materials. The target polarization and corresponding uncertainties for E08-007 are reported. The resulting relative uncertainty found in the target polarization is determined to be less than or equal to 3.9%.
Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)
2006-10-01
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
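The less restrictive character of evidence theory can be seen in a toy propagation (a sketch with invented numbers, not the presentation's sampling strategy): focal intervals with masses describe an uncertain input, and belief/plausibility bracket the probability of an output event under a hypothetical monotone model:

```python
# focal elements for an uncertain input x: (interval, mass); masses sum to 1
focal = [((0.0, 0.5), 0.3), ((0.3, 0.8), 0.5), ((0.6, 1.0), 0.2)]

def f(x):
    # hypothetical monotone model; the image of [lo, hi] is [f(lo), f(hi)]
    return x * x

threshold = 0.25  # event of interest: f(x) > 0.25
# belief: total mass of focal elements whose image lies entirely in the event
bel = sum(m for (lo, hi), m in focal if f(lo) > threshold)
# plausibility: total mass of focal elements whose image intersects the event
pl = sum(m for (lo, hi), m in focal if f(hi) > threshold)
# any probability consistent with the evidence satisfies bel <= P <= pl
```

A probabilistic analysis would have to commit to a single distribution inside each interval; the [bel, pl] interval makes the lack of that knowledge explicit.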
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
Stoneking, M.R.; Den Hartog, D.J.
1996-06-01
The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
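As an illustrative sketch of the fitting principle (not the authors' algorithm), the Poisson negative log-likelihood can be minimized directly for a simple constant-rate model; the count data below are invented for the example:

```python
import math

def neg_log_likelihood(mu, counts):
    # Poisson NLL up to a mu-independent constant: sum of (mu - n*ln(mu))
    return sum(mu - n * math.log(mu) for n in counts)

# hypothetical low-signal count data (well below ~20 counts per measurement)
counts = [2, 0, 3, 1, 4, 2, 1, 0, 2, 3]

# brute-force 1-D scan over candidate rates; for a constant model the
# maximum-likelihood estimate coincides with the sample mean (here 1.8)
grid = [0.01 * k for k in range(1, 1000)]
mu_hat = min(grid, key=lambda m: neg_log_likelihood(m, counts))
```

For a multi-parameter fit function the same likelihood would be maximized with a numerical optimizer rather than a grid scan, but the Poisson treatment of each data point is identical.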
Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment
Greg J. Shott, Vefa Yucel, Lloyd Desotell; non-NSTec authors: G. Pyles and Jon Carilli
2007-06-01
Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using Morris one-at-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
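The Latin hypercube sampling used in the Monte Carlo step above can be sketched generically (this is not the study's implementation; the function name and seed are illustrative). Each input dimension is split into equal-probability strata with exactly one sample per stratum:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Draw n_samples points in [0,1)^n_dims with one point per
    equal-probability stratum in every dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        # one point inside each of the n_samples strata, then shuffle
        # so strata are paired randomly across dimensions
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))  # list of n_samples points

pts = latin_hypercube(10, 3)
# each dimension has exactly one point in each interval [i/10, (i+1)/10)
```

In practice each unit-cube coordinate would then be mapped through the inverse CDF of the corresponding input distribution before running the flux density model.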
ASSESSMENT OF UNCERTAINTY IN THE RADIATION DOSES FOR THE TECHA RIVER DOSIMETRY SYSTEM
Napier, Bruce A.; Degteva, M. O.; Anspaugh, L. R.; Shagina, N. B.
2009-10-23
In order to provide more accurate and precise estimates of individual dose (and thus more precise estimates of radiation risk) for the members of the ETRC, a new dosimetric calculation system, the Techa River Dosimetry System-2009 (TRDS-2009), has been prepared. The deterministic version of the improved dosimetry system, TRDS-2009D, was basically completed in April 2009. Recent developments in evaluation of dose-response models in light of uncertain dose have highlighted the importance of different types of uncertainties in the development of individual dose estimates. These include uncertain parameters that may be either shared or unshared within the dosimetric cohort, and also the nature of the uncertainty as aleatory or epistemic and either classical or Berkson. This report identifies the nature of the various input parameters and calculational methods incorporated in the Techa River Dosimetry System (based on the TRDS-2009D implementation), with the intention of preparing a stochastic version to estimate the uncertainties in the dose estimates. This report reviews the equations, databases, and input parameters, and then identifies the authors' interpretations of their general nature. It presents the approach selected so that the stochastic, Monte-Carlo implementation of the dosimetry system, TRDS-2009MC, will provide useful information regarding the uncertainties of the doses.
Uncertainties in aspheric profile measurements with the geometry measuring machine at NIST.
Griesmann, U.; Machkour-Deshayes, N.; Soons, J.; Kim, B. C.; Wang, Q.; Stoup, J. R.; Assoufid, L.; Experimental Facilities Division; NIST
2005-01-01
The Geometry Measuring Machine (GEMM) of the National Institute of Standards and Technology (NIST) is a profilometer for free-form surfaces. A profile is reconstructed from the local curvature of a test part surface, measured at several locations along a line. For profile measurements of free-form surfaces, methods based on local part curvature sensing have strong appeal. Unlike full-aperture interferometry they do not require customized null optics. The uncertainty of a reconstructed profile is critically dependent upon the uncertainty of the curvature measurement and, to a lesser extent, on curvature sensor positioning accuracy. For an instrument of the GEMM type, we evaluate the measurement uncertainties for a curvature sensor based on a small aperture interferometer and then estimate the uncertainty that can be achieved in the reconstructed profile. In addition, profile measurements of a free-form mirror using GEMM are compared with measurements using a long-trace profiler, a coordinate measuring machine, and subaperture-stitching interferometry.
Physics-based dimension reduction in uncertainty quantification for radiative transfer
Hetzler, A. C.; Adams, M. L.; Stripling IV, H. F.; Hawkins, W. D.
2013-07-01
We present a physics-based methodology for quantifying the uncertainty in a given quantity of interest (QOI) that is contributed by uncertainties in opacities in radiation transport problems. Typically, opacities are tabulated as a function of density, temperature, and photon energy group. The size of this table makes a study of uncertainties at this level challenging because of the well-known 'curse of dimensionality.' We address this by studying uncertain parameters in the underlying physical model that generates the opacity tables. At this level, there are fewer uncertain parameters but still too many to analyze directly through computationally expensive radiation transport simulations. In order to explore this large uncertain parameter space, we develop two simplified radiation transport problems that are much less computationally demanding than the target problem of interest. An emulator is created for each QOI for each simplified problem using Bayesian Multivariate Adaptive Regression Splines (BMARS). This emulator is used to create a functional relationship between the QOIs and the uncertain parameters. Sensitivity analysis is performed using the emulator to determine which parameters contribute significantly to the uncertainty. This physics-based screening process reduces the dimension of the parameter space that is then studied via the computationally expensive radiation transport calculation to generate distributions of quantities of interest. Results of this research demonstrate that the QOIs for the target problem agree for varying screening criteria determined by the sensitivity analysis, and the QOIs agree well for varying Latin Hypercube Design (LHD) sample sizes for the uncertain space. (authors)
Zhang, J.; Hodge, B.; Miettinen, J.; Holttinen, H.; Gomez-Lozaro, E.; Cutululis, N.; Litong-Palima, M.; Sorensen, P.; Lovholm, A.; Berge, E.; Dobschinski, J.
2013-10-01
This presentation summarizes the work to investigate the uncertainty in wind forecasting at different times of year and compare wind forecast errors in different power systems using large-scale wind power prediction data from six countries: the United States, Finland, Spain, Denmark, Norway, and Germany.
SENSIT-2D: a two-dimensional cross-section sensitivity and uncertainty analysis code
Embrechts, M.J.
1982-10-01
SENSIT-2D is a computer program that calculates the sensitivity and/or uncertainty for an integral response (e.g., heating, radiation damage), obtained from the two-dimensional discrete ordinates transport code TRIDENT-CTR, to the cross sections and cross-section uncertainties. A design-sensitivity option allows one to calculate the integral response when the cross sections in certain regions are changed. A secondary-energy-distribution sensitivity- and uncertainty-analysis capability is included. SENSIT-2D incorporates all the essential features of TRIDENT-CTR (r,z geometry option, triangular mesh, nonorthogonal boundaries, group-dependent quadrature sets) and is aimed at the needs of the fusion community. The structure of SENSIT-2D is similar to the structure of the SENSIT code, a one-dimensional sensitivity- and uncertainty-analysis code. This report covers the theory used in SENSIT-2D, outlines the code structure, and gives detailed input specifications. Where appropriate, parts of the SENSIT report are taken over in this write-up. Two sample problems which illustrate the use of SENSIT-2D are explained.
Wildermann, R.; Beittel, R.
1993-01-01
The Minerals Management Service (MMS) of the US Department of the Interior prepares an environmental impact statement (EIS) for each proposal to lease a portion of the Outer Continental Shelf (OCS) for oil and gas exploration and development. The nature, magnitude, and timing of the activities that would ultimately result from leasing are subject to wide speculation, primarily because of uncertainties about the locations and amounts of petroleum hydrocarbons that exist on most potential leases. These uncertainties create challenges in preparing EIS's that meet National Environmental Policy Act requirements and provide information useful to decision-makers. This paper examines the constraints that uncertainty places on the detail and reliability of assessments of impacts from potential OCS development. It further describes how the MMS accounts for uncertainty in developing reasonable scenarios of future events that can be evaluated in the EIS. A process for incorporating the risk of accidental oil spills into assessments of expected impacts is also presented. Finally, the paper demonstrates through examination of case studies how a balance can be achieved between the need for an EIS to present impacts in sufficient detail to allow a meaningful comparison of alternatives and the tendency to push the analysis beyond credible limits.
Linking the uncertainty of low frequency variability in tropical forcing in regional climate change
Forest, Chris E.; Barsugli, Joseph J.; Li, Wei
2015-02-20
The project utilizes multiple atmospheric general circulation models (AGCMs) to examine the regional climate sensitivity to tropical sea surface temperature forcing through a series of ensemble experiments. The overall goal for this work is to use the global teleconnection operator (GTO) as a metric to assess the impact of model structural differences on the uncertainties in regional climate variability.
Measuring Cross-Section and Estimating Uncertainties with the fissionTPC
Bowden, N.; Manning, B.; Sangiorgio, S.; Seilhan, B.
2015-01-30
The purpose of this document is to outline the prescription for measuring fission cross-sections with the NIFFTE fissionTPC and estimating the associated uncertainties. As such it will serve as a work planning guide for NIFFTE collaboration members and facilitate clear communication of the procedures used to the broader community.
Distributed Generation Investment by a Microgrid Under Uncertainty
Siddiqui, Afzal; Marnay, Chris
2006-06-16
This paper examines a California-based microgrid's decision to invest in a distributed generation (DG) unit that operates on natural gas. While the long-term natural gas generation cost is stochastic, we initially assume that the microgrid may purchase electricity at a fixed retail rate from its utility. Using the real options approach, we find natural gas generating cost thresholds that trigger DG investment. Furthermore, the consideration of operational flexibility by the microgrid accelerates DG investment, while the option to disconnect entirely from the utility is not attractive. By allowing the electricity price to be stochastic, we next determine an investment threshold boundary and find that high electricity price volatility relative to that of natural gas generating cost delays investment while simultaneously increasing the value of the investment. We conclude by using this result to find the implicit option value of the DG unit.
Hurdling barriers through market uncertainty: Case studies in innovative technology adoption
Payne, Christopher T.; Radspieler Jr., Anthony; Payne, Jack
2002-08-18
The crisis atmosphere surrounding electricity availability in California during the summer of 2001 produced two distinct phenomena in commercial energy consumption decision-making: desires to guarantee energy availability while blackouts were still widely anticipated, and desires to avoid or mitigate significant price increases when higher commercial electricity tariffs took effect. The climate of increased consideration of these factors seems to have led, in some cases, to greater willingness on the part of business decision-makers to consider highly innovative technologies. This paper examines three case studies of innovative technology adoption: retrofit of time-and-temperature signs on an office building; installation of fuel cells to supply power, heating, and cooling to the same building; and installation of a gas-fired heat pump at a microbrewery. We examine the decision process that led to adoption of these technologies. In each case, specific constraints had made more conventional energy-efficient technologies inapplicable. We examine how these barriers to technology adoption developed over time, how the California energy decision-making climate combined with the characteristics of these innovative technologies to overcome the barriers, and what the implications of hurdling these barriers are for future energy decisions within the firms.
Implementation of a Bayesian Engine for Uncertainty Analysis
Leng Vang; Curtis Smith; Steven Prescott
2014-08-01
In probabilistic risk assessment, it is important to have an environment where analysts have access to shared and secured high-performance computing and a statistical analysis tool package. As part of the advanced small modular reactor probabilistic risk analysis framework implementation, we have identified the need for advanced Bayesian computations. However, in order to make this technology available to non-specialists, there is also a need for a simplified tool that allows users to author models and evaluate them within this framework. As a proof-of-concept, we have implemented an advanced open source Bayesian inference tool, OpenBUGS, within the browser-based cloud risk analysis framework that is under development at the Idaho National Laboratory. This development, the “OpenBUGS Scripter,” has been implemented as a client-side, visual, web-based, integrated development environment for creating OpenBUGS language scripts. It depends on the shared server environment to execute the generated scripts and to transmit results back to the user. The visual models are in the form of linked diagrams, from which we automatically create the applicable OpenBUGS script that matches the diagram. These diagrams can be saved locally or stored on the server environment to be shared with other users.
Use of Quantitative Uncertainty Analysis to Support M&V Decisions in ESPCs
Mathew, Paul A.; Koehling, Erick; Kumar, Satish
2005-05-11
Measurement and Verification (M&V) is a critical element of an Energy Savings Performance Contract (ESPC) - without M&V, there is no way to confirm that the projected savings in an ESPC are in fact being realized. For any given energy conservation measure in an ESPC, there are usually several M&V choices, which will vary in terms of measurement uncertainty, cost, and technical feasibility. Typically, M&V decisions are made almost solely based on engineering judgment and experience, with little, if any, quantitative uncertainty analysis (QUA). This paper describes the results of a pilot project initiated by the Department of Energy's Federal Energy Management Program to explore the use of Monte-Carlo simulation to assess savings uncertainty and thereby augment the M&V decision-making process in ESPCs. The intent was to use QUA selectively in combination with heuristic knowledge, in order to obtain quantitative estimates of the savings uncertainty without the burden of a comprehensive "bottom-up" QUA. This approach was used to analyze the savings uncertainty in an ESPC for a large federal agency. The QUA was seamlessly integrated into the ESPC development process, and the incremental effort was relatively small with user-friendly tools that are commercially available. As the case study illustrates, in some cases the QUA simply confirms intuitive or qualitative information, while in other cases it provides insight that suggests revisiting the M&V plan. The case study also showed that M&V decisions should be informed by portfolio risk diversification. By providing quantitative uncertainty information, QUA can effectively augment the M&V decision-making process as well as the overall ESPC financial analysis.
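The Monte-Carlo approach this abstract describes can be illustrated with a minimal sketch: sample each measured quantity from a distribution reflecting its measurement uncertainty and look at the spread of the resulting savings. All numbers and names below are hypothetical, not taken from the pilot project.

```python
import random
import statistics

def simulate_savings(n_trials=20000, seed=1):
    """Monte Carlo sketch of ESPC savings uncertainty (hypothetical inputs).

    Savings = baseline use - post-retrofit use, with each input drawn from
    a normal distribution whose spread reflects its measurement uncertainty.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_trials):
        baseline_kwh = rng.gauss(1_000_000, 50_000)  # baseline estimate +/- uncertainty
        post_kwh = rng.gauss(800_000, 30_000)        # measured post-retrofit use
        samples.append(baseline_kwh - post_kwh)
    return statistics.mean(samples), statistics.stdev(samples)

mean_savings, savings_sd = simulate_savings()
# For independent inputs the savings sd approaches sqrt(50e3**2 + 30e3**2) ~ 58,300 kWh
```

The same pattern extends to any number of uncertain inputs; the sampled distribution of savings is what the QUA compares against the contract's guaranteed savings.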
SU-D-16A-06: Modeling Biological Effects of Residual Uncertainties For Stereotactic Radiosurgery
Ma, L; Larson, D; McDermott, M; Sneed, P; Sahgal, A
2014-06-01
Purpose: Residual uncertainties on the order of 1-2 mm are frequently observed when delivering stereotactic radiosurgery via on-line imaging guidance with a relocatable frame. In this study, a predictive model was developed to evaluate potential late radiation effects associated with such uncertainties. Methods: A mathematical model was first developed to correlate the peripheral isodose volume with the internal and/or setup margins for a radiosurgical target. This model was then integrated with a previously published logistic regression normal tissue complication model to determine the symptomatic radiation necrosis rate at various target sizes and prescription dose levels. The model was tested on a cohort of 15 brain tumor and tumor resection cavity patient cases, and model-predicted results were compared with the clinical results reported in the literature. Results: A normalized target diameter (D{sub 0}), in terms of D{sub 0} = 6V/S, where V is the volume of a radiosurgical target and S is the surface area of the target, was found to correlate excellently with the peripheral isodose volume for a radiosurgical delivery (logarithmic regression R{sup 2} > 0.99). The peripheral isodose volumes were found to increase rapidly with increasing uncertainty levels. In general, a 1-mm residual uncertainty was calculated to result in approximately 0.5%, 1%, and 3% increases in the symptomatic radiation necrosis rate for D{sub 0} = 1 cm, 2 cm, and 3 cm, based on the prescription guideline of RTOG 9005, i.e., 21 Gy to a lesion 1 cm in diameter, 18 Gy to a lesion 2 cm in diameter, and 15 Gy to a lesion 3 cm in diameter, respectively. Conclusion: The results of this study suggest that more stringent criteria on residual uncertainties are needed when treating a large target such as D{sub 0} ≤ 3 cm with stereotactic radiosurgery. Dr. Ma and Dr. Sahgal are currently serving on the board of the International Society of Stereotactic Radiosurgery (ISRS).
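The normalized diameter defined in the abstract, D0 = 6V/S, has a simple sanity check: for a sphere it reduces to the ordinary diameter. A short sketch, using only the formula stated in the abstract:

```python
import math

def normalized_diameter(volume, surface):
    """Normalized target diameter D0 = 6V/S (definition from the abstract)."""
    return 6.0 * volume / surface

# Sanity check: for a sphere of radius r, 6V/S = 6*(4/3*pi*r^3)/(4*pi*r^2) = 2r
r = 1.5  # cm
v = (4.0 / 3.0) * math.pi * r**3
s = 4.0 * math.pi * r**2
d0 = normalized_diameter(v, s)  # -> 3.0, i.e. the sphere's diameter in cm
```

For irregular targets, D0 penalizes surface area relative to volume, which is why it correlates better with peripheral isodose volume than a raw diameter would.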
Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.; Watson, David J.; Lynch, Timothy P.; Antonio, Cheryl L.; Birchall, Alan; Anderson, Kevin K.; Zharov, Peter
2012-04-17
In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a prior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
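The starting point of the disaggregation described above can be sketched in a few lines: for unbiased measurements, the observed variance is approximately the population variance plus the mean squared measurement uncertainty, so the latter is subtracted off. This is only the first step of the paper's method (the Bayesian per-individual posteriors are not shown), and the function name is mine:

```python
import statistics

def population_variance(results, uncertainties):
    """Disaggregate population variability from measurement uncertainty (sketch).

    observed variance ~ population variance + mean squared measurement
    uncertainty, so subtract the latter; clip at zero for populations whose
    true values are essentially zero (the 239Pu situation in the abstract).
    """
    obs_var = statistics.pvariance(results)
    meas_var = statistics.fmean(u * u for u in uncertainties)
    return max(0.0, obs_var - meas_var)
```

When the population average is essentially zero, the subtraction bottoms out at zero and carries no information, which mirrors the abstract's finding for the 239Pu data.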
SU-E-T-573: The Robustness of a Combined Margin Recipe for Uncertainties During Radiotherapy
Stroom, J; Vieira, S; Greco, C [Champalimaud Foundation, Lisbon (Portugal)]
2014-06-01
Purpose: To investigate the variability of a safety margin recipe that combines CTV and PTV margins quadratically with several tumor-, treatment-, and user-related factors. Methods: Margin recipes were calculated by Monte Carlo simulations in 5 steps. 1. A spherical tumor, with or without isotropic microscopic disease, was irradiated with a 5-field dose plan. 2. PTV: Geometric uncertainties were introduced using systematic (Sgeo) and random (sgeo) standard deviations. CTV: The microscopic disease distribution was modelled by a semi-Gaussian (Smicro) with a varying number of islets (Ni). 3. For a specific uncertainty set (Sgeo, sgeo, Smicro(Ni)), margins were varied until a pre-defined decrease in TCP or dose coverage was fulfilled. 4. First, margin recipes were calculated for each of the three uncertainties separately. The CTV and PTV recipes were then combined quadratically to yield a final recipe M(Sgeo, sgeo, Smicro(Ni)). 5. The final M was verified by simultaneous simulations of the uncertainties. M has now been calculated for various changing parameters such as margin criteria, penumbra steepness, islet radio-sensitivity, dose conformity, and number of fractions. We subsequently investigated A: whether the combined recipe still holds in all these situations, and B: what the margin variation was in all these cases. Results: We found that the accuracy of the combined margin recipes remains on average within 1 mm for all situations, confirming the correctness of the quadratic addition. Depending on the specific parameter, margin factors could change such that margins change by over 50%. In particular, margin recipes based on TCP criteria are sensitive to more parameters than those based on purely geometric Dmin criteria. Interestingly, measures taken to minimize treatment field sizes (e.g. by optimizing dose conformity) are counteracted by the requirement of larger margins to get the same tumor coverage. Conclusion: Margin recipes combining geometric and microscopic uncertainties quadratically are robust.
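The quadratic combination validated in this abstract is the usual root-sum-square of independent margin components. A minimal sketch (the component values are illustrative, not taken from the study):

```python
import math

def combined_margin(m_ptv, m_ctv):
    """Quadratic (root-sum-square) combination of PTV and CTV margins,
    as validated in the abstract for independent uncertainty sources."""
    return math.sqrt(m_ptv**2 + m_ctv**2)

# Example with hypothetical margins: 3 mm geometric PTV margin and
# 4 mm microscopic-disease CTV margin combine to 5 mm, not 7 mm.
m = combined_margin(3.0, 4.0)  # -> 5.0 mm
```

The point of the quadratic rule is that independent uncertainties partially cancel, so the combined margin is smaller than the linear sum of its components.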
Gomez, J. C.; Glatzmaier, G. C.; Mehos, M.
2012-09-01
The main objective of this study was to calculate the uncertainty at 95% confidence for the experimental values of heat capacity of the eutectic mixture of biphenyl/diphenyl ether (Therminol VP-1) determined from 300 to 370 degrees C. Twenty-five samples were evaluated using differential scanning calorimetry (DSC) to obtain the sample heat flow as a function of temperature. The ASTM E-1269-05 standard was used to determine the heat capacity from the DSC evaluations. High-pressure crucibles were employed to contain the sample in the liquid state without vaporizing. Sample handling has a significant impact on the random uncertainty; it was determined that the fluid is difficult to handle, which produced high variability in the data. The heat capacity of Therminol VP-1 between 300 and 370 degrees C was measured to be 0.0025T + 0.8672 J/(g·K), with an uncertainty of +/- 0.074 J/(g·K) (3.09%) at 95% confidence, with T (temperature) in kelvin.
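The reported linear fit is directly usable; a small sketch that evaluates it and guards against use outside the measured range (the range check is my addition, inferred from the stated 300-370 C validity):

```python
def cp_therminol_vp1(t_kelvin):
    """Heat capacity fit from the abstract: cp = 0.0025*T + 0.8672 in J/(g.K),
    with T in kelvin. Reported 95% uncertainty is +/- 0.074 J/(g.K).
    Range check reflects the measured 300-370 C (~573-643 K) interval."""
    if not 573.0 <= t_kelvin <= 643.0:
        raise ValueError("outside the measured 300-370 C range")
    return 0.0025 * t_kelvin + 0.8672

cp = cp_therminol_vp1(600.0)  # -> 2.3672 J/(g.K)
```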
Gerhard Strydom; Su-Jong Yoon
2014-04-01
Computational Fluid Dynamics (CFD) evaluation of homogeneous and heterogeneous fuel models was performed as part of the Phase I calculations of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on High Temperature Reactor (HTR) Uncertainties in Modeling (UAM). This study focused on the nominal localized stand-alone fuel thermal response, as defined in Ex. I-3 and I-4 of the HTR UAM. The aim of the stand-alone thermal unit-cell simulation is to isolate the effect of material and boundary input uncertainties on a very simplified problem, before propagation of these uncertainties is performed in the subsequent coupled neutronics/thermal fluids phases of the benchmark. In many previous studies of high temperature gas cooled reactors, the volume-averaged homogeneous mixture model of a single fuel compact has been applied. In the homogeneous model, the Tristructural Isotropic (TRISO) fuel particles in the fuel compact are not modeled directly, and an effective thermal conductivity is employed for the thermo-physical properties of the fuel compact. In contrast, in the heterogeneous model, the uranium carbide (UCO), inner and outer pyrolytic carbon (IPyC/OPyC), and silicon carbide (SiC) layers of the TRISO fuel particles are explicitly modeled. The fuel compact is modeled as a heterogeneous mixture of TRISO fuel kernels embedded in H-451 matrix graphite. In this study, steady-state and transient CFD simulations were performed with both homogeneous and heterogeneous models to compare their thermal characteristics. The nominal values of the input parameters were used for this CFD analysis. In a future study, the effects of input uncertainties in the material properties and boundary parameters will be investigated and reported.
Bostelmann, Friederike; Strydom, Gerhard; Reitsma, Frederik; Ivanov, Kostadin
2016-01-11
The quantification of uncertainties in the design and safety analysis of reactors is today not only broadly accepted, but in many cases has become the preferred way to replace traditional conservative analysis for safety and licensing analysis. The use of a more fundamental methodology is also consistent with the reliable high-fidelity physics models and robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied, in contrast to the historical approach in which sensitivity analyses were performed and uncertainties then determined by a simplified statistical combination of a few important input parameters. New methodologies are currently under development in the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity. High Temperature Gas-cooled Reactor (HTGR) designs require specific treatment of the double-heterogeneous fuel design and large graphite quantities at high temperatures. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the Chinese HTR-PM. Work has started on the first phase and the current CRP status is reported in the paper. A comparison of the Serpent and SCALE/KENO-VI reference Monte Carlo results for Ex. I-1 of the MHTGR-350 design is also included. It was observed that the SCALE/KENO-VI Continuous Energy (CE) k∞ values were 395 pcm (Ex. I-1a) to 803 pcm (Ex. I-1b) higher than the respective Serpent lattice calculations, and that within the set of SCALE results, the KENO-VI 238 Multi-Group (MG) k∞ values were up to 800 pcm lower than the KENO-VI CE values. The use of the
El Hanandeh, Ali; El-Zein, Abbas
2010-05-15
This paper describes the development and application of the Stochastic Integrated Waste Management Simulator (SIWMS) model. SIWMS provides a detailed view of the environmental impacts and associated costs of municipal solid waste (MSW) management alternatives under conditions of uncertainty. The model follows a life-cycle inventory approach extended with compensatory systems to provide more equitable bases for comparing different alternatives. Economic performance is measured by the net present value. The model is verified against four publicly available models under deterministic conditions and then used to study the impact of uncertainty on Sydney's MSW management 'best practices'. Uncertainty has a significant effect on all impact categories. The greatest effect is observed in the global warming category where a reversal of impact direction is predicted. The reliability of the system is most sensitive to uncertainties in the waste processing and disposal. The results highlight the importance of incorporating uncertainty at all stages to better understand the behaviour of the MSW system.
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; Lu, C.; Mansoor, K.; Carroll, S. A.
2012-12-20
The risk of CO_{2} leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO_{2} geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO_{2} is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO_{2}/brine saturation are connected to the fault-leakage model as a boundary condition. CO_{2} and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO_{2} plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the most sensitive parameters to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO_{2} and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
Scolnic, D.; Riess, A.; Brout, D.; Rodney, S. [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Rest, A. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Huber, M. E.; Tonry, J. L. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Foley, R. J.; Chornock, R.; Berger, E.; Soderberg, A. M.; Stubbs, C. W.; Kirshner, R. P.; Challis, P.; Czekala, I.; Drout, M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Narayan, G. [Department of Physics, Harvard University, 17 Oxford Street, Cambridge, MA 02138 (United States); Smartt, S. J.; Botticella, M. T. [Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN (United Kingdom); Schlafly, E. [Max Planck Institute for Astronomy, Konigstuhl 17, D-69117 Heidelberg (Germany); and others
2014-11-01
We probe the systematic uncertainties from the 113 Type Ia supernovae (SN Ia) in the Pan-STARRS1 (PS1) sample along with 197 SN Ia from a combination of low-redshift surveys. The companion paper by Rest et al. describes the photometric measurements and cosmological inferences from the PS1 sample. The largest systematic uncertainty stems from the photometric calibration of the PS1 and low-z samples. We increase the sample of observed Calspec standards from 7 to 10 used to define the PS1 calibration system. The PS1 and SDSS-II calibration systems are compared and discrepancies up to ∼0.02 mag are recovered. We find uncertainties in the proper way to treat intrinsic colors and reddening produce differences in the recovered value of w up to 3%. We estimate masses of host galaxies of PS1 supernovae and detect an insignificant difference in distance residuals of the full sample of 0.037 ± 0.031 mag for host galaxies with high and low masses. Assuming flatness and including systematic uncertainties in our analysis of only SNe measurements, we find w = −1.120{sub −0.206}{sup +0.360}(Stat){sub −0.291}{sup +0.269}(Sys). With additional constraints from baryon acoustic oscillation, cosmic microwave background (CMB) (Planck), and H{sub 0} measurements, we find w = −1.166{sub −0.069}{sup +0.072} and Ω{sub m} = 0.280{sub −0.012}{sup +0.013} (statistical and systematic errors added in quadrature). The significance of the inconsistency with w = −1 depends on whether we use Planck or Wilkinson Microwave Anisotropy Probe measurements of the CMB: w{sub BAO+H0+SN+WMAP} = −1.124{sub −0.065}{sup +0.083}.
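The "statistical and systematic errors added in quadrature" step in this abstract is a standard root-sum-square combination; a minimal sketch using the SN-only lower error bars quoted above:

```python
import math

def quadrature(*errors):
    """Combine independent error components in quadrature (root sum of squares)."""
    return math.sqrt(sum(e * e for e in errors))

# SN-only lower error on w: statistical 0.206 and systematic 0.291
total_lower = quadrature(0.206, 0.291)  # ~0.357
```

Quadrature addition is only valid when the components are independent; correlated calibration terms, like the PS1/low-z photometric offsets discussed in the abstract, have to be tracked through a covariance matrix instead.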
Propagation of Isotopic Bias and Uncertainty to Criticality Safety Analyses of PWR Waste Packages
Radulescu, Georgeta
2010-06-01
Burnup credit methodology is economically advantageous because significantly higher loading capacity may be achieved for spent nuclear fuel (SNF) casks based on this methodology as compared to the loading capacity based on a fresh-fuel assumption. However, the criticality safety analysis for establishing the loading curve based on burnup credit becomes increasingly complex as more parameters accounting for spent fuel isotopic compositions are introduced to the safety analysis. The safety analysis requires validation of both depletion and criticality calculation methods. Validation of a neutronic-depletion code consists of quantifying the bias, and the uncertainty associated with the bias, in predicted SNF compositions caused by cross-section data uncertainty and by approximations in the calculational method. The validation is based on comparison between radiochemical assay (RCA) data and calculated isotopic concentrations for fuel samples representative of the SNF inventory. The criticality analysis methodology for commercial SNF disposal allows burnup credit for 14 actinide and 15 fission product isotopes in SNF compositions. The neutronic-depletion method for disposal criticality analysis employing burnup credit is the two-dimensional (2-D) depletion sequence TRITON (Transport Rigor Implemented with Time-dependent Operation for Neutronic depletion)/NEWT (New ESC-based Weighting Transport code) with the 44GROUPNDF5 cross-section library in the Standardized Computer Analysis for Licensing Evaluation (SCALE 5.1) code system. The SCALE 44GROUPNDF5 cross-section library is based on the Evaluated Nuclear Data File/B Version V (ENDF/B-V) library. The criticality calculation code for disposal criticality analysis employing burnup credit is the General Monte Carlo N-Particle (MCNP) Transport Code. The purpose of this calculation report is to determine the bias on the calculated effective neutron multiplication factor, k{sub eff}, due to the bias and bias uncertainty associated with
Investment Timing and Capacity Choice for Small-Scale Wind Power Under Uncertainty
Fleten, Stein-Erik; Maribu, Karl Magnus
2004-11-28
This paper presents a method for evaluating investments in small-scale wind power under uncertainty. It is assumed that the price of electricity is uncertain and that an owner of a property with wind resources has a deferrable opportunity to invest in one wind power turbine within a capacity range. The model evaluates investment in a set of projects with different capacities. It is assumed that the owner substitutes their own electricity load with electricity from the wind turbine and sells excess electricity back to the grid on an hourly basis. The problem for the owner is to find the price levels at which it is optimal to invest, and the capacity in which to invest. The results suggest that it is optimal to wait for significantly higher prices than the net-present-value break-even. Optimal scale and timing depend on the expected price growth rate and the uncertainty in future prices.
Constantinescu, E. M.; Zavala, V. M.; Rocklin, M.; Lee, S.; Anitescu, M.
2009-10-09
We present a computational framework for integrating the state-of-the-art Weather Research and Forecasting (WRF) model in stochastic unit commitment/energy dispatch formulations that account for wind power uncertainty. We first enhance the WRF model with adjoint sensitivity analysis capabilities and a sampling technique implemented in a distributed-memory parallel computing architecture. We use these capabilities through an ensemble approach to model the uncertainty of the forecast errors. The wind power realizations are exploited through a closed-loop stochastic unit commitment/energy dispatch formulation. We discuss computational issues arising in the implementation of the framework. In addition, we validate the framework using real wind speed data obtained from a set of meteorological stations. We also build a simulated power system to demonstrate the developments.
APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula
Hwang, M.; Bae, S.; Chung, B. D.
2012-07-01
An analysis of the uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined by the PIRT results from the BEMUSE project. The Monte Carlo method shows that the 95th percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. The extra margin provided by Wilks' formula over the true 95th percentile PCT from the Monte Carlo method was rather large. Even using the 3rd-order formula, the value calculated using Wilks' formula is nearly 100 K over the true value. It is shown that, with ever-increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
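The Wilks sample sizes behind this comparison can be computed directly: find the smallest number of code runs n such that the order-th largest result bounds the 95th percentile with 95% one-sided confidence. The standard values are 59 runs for the 1st-order formula and 124 for the 3rd-order formula; a sketch of the calculation:

```python
from math import comb

def wilks_sample_size(order, quantile=0.95, confidence=0.95):
    """Smallest n such that the `order`-th largest of n runs bounds the
    `quantile` percentile with the given one-sided confidence (Wilks)."""
    n = order
    while True:
        # Probability that fewer than `order` samples exceed the quantile
        tail = sum(comb(n, k) * (1 - quantile) ** k * quantile ** (n - k)
                   for k in range(order))
        if 1 - tail >= confidence:
            return n
        n += 1

# 1st-order 95/95 needs 59 runs; 3rd-order needs 124
```

Higher-order formulas trade more runs for a tighter (less conservative) bound, which is why the abstract finds the tolerance-limit estimate still sits well above the true Monte Carlo percentile.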
Uncertainty Quantification Techniques for Sensor Calibration Monitoring in Nuclear Power Plants
Ramuhalli, Pradeep; Lin, Guang; Crawford, Susan L.; Konomi, Bledar A.; Braatz, Brett G.; Coble, Jamie B.; Shumaker, Brent; Hashemian, Hash
2013-09-01
This report describes the status of ongoing research towards the development of advanced algorithms for online calibration monitoring. The objective of this research is to develop the next generation of online monitoring technologies for sensor calibration interval extension and signal validation in operating and new reactors. These advances are expected to improve the safety and reliability of current and planned nuclear power systems as a result of higher accuracies and increased reliability of sensors used to monitor key parameters. The focus of this report is on documenting the outcomes of the first phase of R&D under this project, which addressed approaches to uncertainty quantification (UQ) in online monitoring that are data-driven, and can therefore adjust estimates of uncertainty as measurement conditions change. Such data-driven approaches to UQ are necessary to address changing plant conditions, for example, as nuclear power plants experience transients, or as next-generation small modular reactors (SMR) operate in load-following conditions.
Martinez-Canales, Monica L.; Heaphy, Robert; Gramacy, Robert B.; Taddy, Matt; Chiesa, Michael L.; Thomas, Stephen W.; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Trucano, Timothy Guy; Gray, Genetha Anne
2006-11-01
This project focused on research and algorithmic development in optimization under uncertainty (OUU) problems driven by earth penetrator (EP) designs. While taking into account uncertainty, we addressed three challenges in current simulation-based engineering design and analysis processes. The first challenge required leveraging small local samples, already constructed by optimization algorithms, to build effective surrogate models. We used Gaussian Process (GP) models to construct these surrogates. We developed two OUU algorithms using 'local' GPs (OUU-LGP) and one OUU algorithm using 'global' GPs (OUU-GGP) that appear competitive with or better than current methods. The second challenge was to develop a methodical design process based on multi-resolution, multi-fidelity models. We developed a Multi-Fidelity Bayesian Auto-regressive process (MF-BAP). The third challenge involved the development of tools that are computationally feasible and accessible. We created MATLAB® and initial DAKOTA implementations of our algorithms.
Stewart, G.; Lackner, M.; Haid, L.; Matha, D.; Jonkman, J.; Robertson, A.
2013-07-01
With the push towards siting wind turbines farther offshore due to higher wind quality and less visibility, floating offshore wind turbines, which can be located in deep water, are becoming an economically attractive option. The International Electrotechnical Commission's (IEC) 61400-3 design standard covers fixed-bottom offshore wind turbines, but there are a number of new research questions that need to be answered to modify these standards so that they are applicable to floating wind turbines. One issue is the appropriate simulation length needed for floating turbines. This paper will discuss the results from a study assessing the impact of simulation length on the ultimate and fatigue loads of the structure, and will address uncertainties associated with changing the simulation length for the analyzed floating platform. Recommendations of required simulation length based on load uncertainty will be made and compared to current simulation length requirements.
Reducing Uncertainty in the Seismic Design Basis for the Waste Treatment Plant, Hanford, Washington
Brouns, Thomas M.; Rohay, Alan C.; Reidel, Steve; Gardner, Martin G.
2007-02-27
The seismic design basis for the Waste Treatment Plant (WTP) at the Department of Energy's (DOE) Hanford Site near Richland was re-evaluated in 2005, resulting in an increase of up to 40% in the seismic design basis. The original seismic design basis for the WTP was established in 1999 based on a probabilistic seismic hazard analysis completed in 1996. The 2005 analysis was performed to address questions raised by the Defense Nuclear Facilities Safety Board (DNFSB) about the assumptions used in developing the original seismic criteria and the adequacy of the site geotechnical surveys. The updated seismic response analysis used existing and newly acquired seismic velocity data, statistical analysis, expert elicitation, and ground motion simulation to develop interim design ground motion response spectra that enveloped the remaining uncertainties. The uncertainties in these response spectra were enveloped at approximately the 84th percentile to produce conservative design spectra, which contributed significantly to the increase in the seismic design basis.
Archibald, Richard K; Chakoumakos, Madison; Zhuang, Zibo
2012-01-01
Understanding and characterizing sources of uncertainty in climate modeling is an important task. Because of the ever-increasing sophistication and resolution of climate modeling, it is increasingly important to develop uncertainty quantification methods that minimize the computational cost incurred when these methods are added to climate modeling. This research explores the application of sparse stochastic collocation with polynomial edge detection to characterize portions of the probability space associated with the Earth's radiative budget in the Community Earth System Model (CESM). Specifically, we develop surrogate models with error estimates for a range of acceptable input parameters that predict statistical values of the Earth's radiative budget as derived from the CESM simulation. We extend these results in resolution from T31 to T42 and in parameter space by increasing the degrees of freedom from two to three.
Miller, David C.; Ng, Brenda; Eslick, John
2014-01-01
Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and universities that is developing, demonstrating, and deploying a suite of multi-scale modeling and simulation tools. One significant computational tool is FOQUS, a Framework for Optimization and Quantification of Uncertainty and Sensitivity, which enables basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to rapidly synthesize and optimize a process and determine the level of uncertainty associated with the resulting process. The overall approach of CCSI is described with a more detailed discussion of FOQUS and its application to carbon capture systems.
Idealization, uncertainty and heterogeneity : game frameworks defined with formal concept analysis.
Racovitan, M. T.; Sallach, D. L.; Decision and Information Sciences; Northern Illinois Univ.
2006-01-01
The present study begins with Formal Concept Analysis, and undertakes to demonstrate how a succession of game frameworks may, by design, address increasingly complex and interesting social phenomena. We develop a series of multi-agent exchange games, each of which incorporates an additional dimension of complexity. All games are based on coalition patterns in exchanges where diverse cultural markers provide a basis for trust and reciprocity. The first game is characterized by an idealized concept of trust. A second game framework introduces uncertainty regarding the reciprocity of prospective transactions. A third game framework retains idealized trust and uncertainty, and adds additional agent heterogeneity. Cultural markers are not equally salient in conferring or withholding trust, and the result is a richer transactional process.
Uncertainty Quantification Techniques for Sensor Calibration Monitoring in Nuclear Power Plants
Ramuhalli, Pradeep; Lin, Guang; Crawford, Susan L.; Konomi, Bledar A.; Coble, Jamie B.; Shumaker, Brent; Hashemian, Hash
2014-04-30
This report describes research towards the development of advanced algorithms for online calibration monitoring. The objective of this research is to develop the next generation of online monitoring technologies for sensor calibration interval extension and signal validation in operating and new reactors. These advances are expected to improve the safety and reliability of current and planned nuclear power systems as a result of higher accuracies and increased reliability of sensors used to monitor key parameters. The focus of this report is on documenting the outcomes of the first phase of R&D under this project, which addressed approaches to uncertainty quantification (UQ) in online monitoring that are data-driven, and can therefore adjust estimates of uncertainty as measurement conditions change. Such data-driven approaches to UQ are necessary to address changing plant conditions, for example, as nuclear power plants experience transients, or as next-generation small modular reactors (SMR) operate in load-following conditions.
Cardoso, Goncalo; Stadler, Michael; Bozchalui, Mohammed C.; Sharma, Ratnesh; Marnay, Chris; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-12-06
The large scale penetration of electric vehicles (EVs) will introduce technical challenges to the distribution grid, but also carries the potential for vehicle-to-grid services. Namely, if available in large enough numbers, EVs can be used as a distributed energy resource (DER) and their presence can influence optimal DER investment and scheduling decisions in microgrids. In this work, a novel EV fleet aggregator model is introduced in a stochastic formulation of DER-CAM [1], an optimization tool used to address DER investment and scheduling problems. This is used to assess the impact of EV interconnections on optimal DER solutions considering uncertainty in EV driving schedules. Optimization results indicate that EVs can have a significant impact on DER investments, particularly if considering short payback periods. Furthermore, results suggest that uncertainty in driving schedules carries little significance to total energy costs, which is corroborated by results obtained using the stochastic formulation of the problem.
Integration of Wind Generation and Load Forecast Uncertainties into Power Grid Operations
Makarov, Yuri V.; Etingov, Pavel V.; Huang, Zhenyu; Ma, Jian; Chakrabarti, Bhujanga B.; Subbarao, Krishnappa; Loutan, Clyde; Guttromson, Ross T.
2010-04-20
In this paper, a new approach is presented to evaluate the uncertainty ranges for the required generation performance envelope, including the balancing capacity, ramping capability, and ramp duration. The approach includes three stages: statistical and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence intervals. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis incorporating all sources of uncertainty and parameters of a continuous (wind forecast and load forecast errors) and discrete (forced generator outages and failures to start up) nature. Preliminary simulations using California Independent System Operator (CAISO) real-life data have shown the effectiveness and efficiency of the proposed approach.
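The histogram/percentile step of such an analysis can be sketched as follows, assuming hypothetical Gaussian forecast-error samples and a 95% confidence envelope; the full algorithm also incorporates discrete events such as forced generator outages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical retrospective forecast errors (MW): continuous sources only.
wind_error = rng.normal(0.0, 50.0, 10000)   # wind forecast error
load_error = rng.normal(0.0, 30.0, 10000)   # load forecast error

# Total imbalance that balancing generation must cover.
imbalance = wind_error + load_error

# Histogram-based capacity envelope at 95% confidence: the symmetric
# percentile band of the retrospective imbalance distribution.
lo, hi = np.percentile(imbalance, [2.5, 97.5])
up_capacity = hi      # upward balancing capacity requirement (MW)
down_capacity = -lo   # downward balancing capacity requirement (MW)
```

The same percentile construction, applied to ramp rates and ramp durations, yields the other two dimensions of the required performance envelope.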
Michael Pernice
2012-10-01
Grid-to-rod fretting is the leading cause of fuel failures in pressurized water reactors, and is one of the challenge problems being addressed by the Consortium for Advanced Simulation of Light Water Reactors to guide its efforts to develop a virtual reactor environment. Prior and current efforts in modeling and simulation of grid-to-rod fretting are discussed. Sources of uncertainty in grid-to-rod fretting are also described.
Munoz, Francisco D.; Watson, Jean -Paul; Hobbs, Benjamin F.
2015-06-04
The anticipated magnitude of needed investments in new transmission infrastructure in the U.S. requires that they be allocated in a way that maximizes the likelihood of achieving society's goals for power system operation. The use of state-of-the-art optimization tools can identify cost-effective investment alternatives, extract more benefits out of transmission expansion portfolios, and account for the huge economic, technology, and policy uncertainties that the power sector faces over the next several decades.
Final Report-Optimization Under Uncertainty and Nonconvexity: Algorithms and Software
Jeff Linderoth
2008-10-10
The goal of this research was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problems classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state-of-the-art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems.
SWEPP PAN assay system uncertainty analysis: Passive mode measurements of graphite waste
Blackwood, L.G.; Harker, Y.D.; Meachum, T.R.; Yoon, Woo Y.
1997-07-01
The Idaho National Engineering and Environmental Laboratory is being used as a temporary storage facility for transuranic waste generated by the U.S. Nuclear Weapons program at the Rocky Flats Plant (RFP) in Golden, Colorado. Currently, there is a large effort in progress to prepare to ship this waste to the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. In order to meet the TRU Waste Characterization Quality Assurance Program Plan nondestructive assay compliance requirements and quality assurance objectives, it is necessary to determine the total uncertainty of the radioassay results produced by the Stored Waste Examination Pilot Plant (SWEPP) Passive Active Neutron (PAN) radioassay system. To this end a modified statistical sampling and verification approach has been developed to determine the total uncertainty of a PAN measurement. In this approach the total performance of the PAN nondestructive assay system is simulated using computer models of the assay system and the resultant output is compared with the known input to assess the total uncertainty. This paper is one of a series of reports quantifying the results of the uncertainty analysis of the PAN system measurements for specific waste types and measurement modes. In particular this report covers passive mode measurements of weapons grade plutonium-contaminated graphite molds contained in 208 liter drums (waste code 300). The validity of the simulation approach is verified by comparing simulated output against results from measurements using known plutonium sources and a surrogate graphite waste form drum. For actual graphite waste form conditions, a set of 50 cases covering a statistical sampling of the conditions exhibited in graphite wastes was compiled using a Latin hypercube statistical sampling approach.
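The Latin hypercube sampling step mentioned above can be sketched in a few lines (the stratified sampling scheme itself, not the SWEPP assay simulation): each input dimension is split into equal-probability strata, and each stratum is sampled exactly once.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Draw a Latin hypercube sample on the unit hypercube: each
    dimension is split into n_samples equal strata, and each stratum
    is sampled exactly once, in an independent random permutation."""
    u = rng.random((n_samples, n_dims))  # position within each stratum
    strata = np.array([rng.permutation(n_samples)
                       for _ in range(n_dims)]).T
    return (strata + u) / n_samples

rng = np.random.default_rng(42)
# e.g., 50 cases over 4 uncertain waste-form parameters (illustrative).
sample = latin_hypercube(50, 4, rng)
```

The unit-cube sample is then mapped through the inverse CDFs of the actual parameter distributions to produce the simulation cases.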
Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 34 imprecisely known input variables on the following reactor accident consequences are studied: number of early fatalities, number of cases of prodromal vomiting, population dose within 10 mi of the reactor, population dose within 1000 mi of the reactor, individual early fatality probability within 1 mi of the reactor, and maximum early fatality distance. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: scaling factor for horizontal dispersion, dry deposition velocity, inhalation protection factor for nonevacuees, groundshine shielding factor for nonevacuees, early fatality hazard function alpha value for bone marrow exposure, and scaling factor for vertical dispersion.
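The partial correlation measure used in such sensitivity studies can be sketched as follows: correlate the residuals of the input and the output after regressing both on all other inputs. The inputs and output here are synthetic stand-ins, not MACCS variables.

```python
import numpy as np

def partial_corr(X, y, j):
    """Partial correlation of input X[:, j] with output y, controlling
    for all other inputs via linear-regression residuals."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    rx = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
X = rng.random((200, 3))                                  # 3 inputs
y = 5.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0, 0.1, 200)  # X[:, 2] inert

# Rank inputs by how strongly each drives the output on its own.
pcc = [partial_corr(X, y, j) for j in range(3)]
```

Inputs with partial correlations near zero (like the third input above) are the candidates to drop from further uncertainty review.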
Burr, Tom; Croft, Stephen; Jarman, Kenneth D.
2015-09-05
The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
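The GUM's first-order law of propagation of uncertainty, u_c^2 = sum_i (df/dx_i)^2 u_i^2 for uncorrelated inputs, can be sketched generically. The two-input product model and numerical values below are illustrative assumptions, not the paper's NaI calibration.

```python
import numpy as np

def gum_combined_uncertainty(f, x, u, h=1e-6):
    """First-order GUM propagation for uncorrelated inputs:
    u_c^2 = sum_i (df/dx_i)^2 * u_i^2, with sensitivity coefficients
    estimated by central finite differences."""
    x = np.asarray(x, dtype=float)
    uc2 = 0.0
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        ci = (f(xp) - f(xm)) / (2 * h)  # sensitivity coefficient c_i
        uc2 += (ci * u[i]) ** 2
    return np.sqrt(uc2)

# Hypothetical enrichment-meter-style model: estimate = k * net_rate,
# with calibration constant k and net count rate R (made-up numbers).
f = lambda x: x[0] * x[1]
uc = gum_combined_uncertainty(f, x=[0.02, 150.0], u=[0.001, 3.0])
```

The paper's point is that this first-order recipe omits effects such as errors in predictors and item-specific biases, which is why extensions beyond GUM are proposed.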
Cogeneration: A northwest medical facility's answer to the uncertainties of deregulation
Almeda, R.; Rivers, J.
1998-10-01
Not so long ago, in the good old days, the energy supply to a health care facility was one of the most stable. The local utility provided what was needed at a reasonable cost. Now the energy industry is being deregulated. Major uncertainties exist in all parts of the energy industry. Since reasonably priced and readily available energy is mandatory for a health care facility operation, the energy industry uncertainties reverberate through the health care industry. This article reviews how the uncertainty of electric utility deregulation was converted to an opportunity to implement the ultimate energy conservation project--cogeneration. The project development was made essentially risk free by tailoring project development to deregulation. Costs and financial exposure were minimized by taking numerous small steps in sequence. Valley Medical Center, by persevering with the development of a cogeneration plant, has been able to reduce its energy costs and more importantly, stabilize its energy supply and costs for many years to come. This article reviews activities in two arenas, internal project development and external energy industry developments, by periodically updating each arena and showing how external developments affected the project.
Salloum, Maher N.; Gharagozloo, Patricia E.
2013-10-01
Metal particle beds have recently become a major technology for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Secondly, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during the decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed occurring during the decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede the hydrogen extraction.
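An empirical kinetics fit of the kind described, Arrhenius parameters estimated from scattered rate data with a residual-based uncertainty, can be sketched as follows. The data here are synthetic, and the "true" parameter values are assumptions for illustration only.

```python
import numpy as np

# Hypothetical scattered rate-constant data (k in 1/s, T in K) standing
# in for compiled literature measurements; generating law k = A*exp(-Ea/RT).
R = 8.314                      # gas constant, J/(mol K)
rng = np.random.default_rng(7)
T = np.linspace(400.0, 700.0, 12)
A_true, Ea_true = 1.0e6, 8.0e4
k = A_true * np.exp(-Ea_true / (R * T)) * np.exp(rng.normal(0, 0.1, T.size))

# Linearized least-squares fit: ln k = ln A - (Ea/R) * (1/T).
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R
A_fit = np.exp(intercept)

# Residual scatter gives a crude uncertainty on the fitted kinetics.
resid = np.log(k) - (intercept + slope / T)
sigma_lnk = resid.std(ddof=2)
```

Propagating sigma_lnk through the fitted law then yields the kind of decomposition-time uncertainty band the abstract describes.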
Zhang, J.; Hodge, B. M.; Gomez-Lazaro, E.; Lovholm, A. L.; Berge, E.; Miettinen, J.; Holttinen, H.; Cutululis, N.; Litong-Palima, M.; Sorensen, P.; Dobschinski, J.
2013-10-01
One of the critical challenges of wind power integration is the variable and uncertain nature of the resource. This paper investigates the variability and uncertainty in wind forecasting for multiple power systems in six countries. An extensive comparison of wind forecasting is performed among the six power systems by analyzing the following scenarios: (i) wind forecast errors throughout a year; (ii) forecast errors at a specific time of day throughout a year; (iii) forecast errors at peak and off-peak hours of a day; (iv) forecast errors in different seasons; (v) extreme forecasts with large overforecast or underforecast errors; and (vi) forecast errors when wind power generation is at different percentages of the total wind capacity. The kernel density estimation method is adopted to characterize the distribution of forecast errors. The results show that the level of uncertainty and the forecast error distribution vary among different power systems and scenarios. In addition, for most power systems, (i) there is a tendency to underforecast in winter; and (ii) the forecasts in winter generally have more uncertainty than the forecasts in summer.
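The kernel density estimation step can be sketched with a plain Gaussian KDE; the Laplace-distributed "forecast errors" below are a synthetic stand-in chosen because wind forecast errors are typically heavier-tailed than Gaussian.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical forecast errors (fraction of capacity), slight underforecast.
errors = rng.laplace(loc=-0.01, scale=0.05, size=5000)

# Gaussian KDE with Scott's-rule bandwidth (a common default choice).
h = errors.std(ddof=1) * errors.size ** (-1.0 / 5.0)
grid = np.linspace(-0.4, 0.4, 801)
pdf = np.exp(-0.5 * ((grid[:, None] - errors[None, :]) / h) ** 2).sum(axis=1)
pdf /= errors.size * h * np.sqrt(2.0 * np.pi)

ds = grid[1] - grid[0]
area = pdf.sum() * ds          # should integrate to ~1
peak = grid[np.argmax(pdf)]    # should sit near the forecast bias
```

Comparing such estimated densities across systems and seasons is exactly how the paper characterizes differing levels of forecast uncertainty.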
Makarov, Yuri V.; Etingov, Pavel V.; Ma, Jian; Huang, Zhenyu; Subbarao, Krishnappa
2011-06-23
An approach to evaluate the uncertainties of the balancing capacity, ramping capability, and ramp duration requirements is proposed. The approach includes three steps: forecast data acquisition, statistical analysis of retrospective information, and prediction of grid balancing requirements for a specified time horizon and a given confidence level. An assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on histogram analysis, incorporating sources of uncertainty - both continuous (wind and load forecast errors) and discrete (forced generator outages and start-up failures). A new method called the 'flying-brick' technique is developed to evaluate the look-ahead required generation performance envelope for the worst case scenario within a user-specified confidence level. A self-validation process is used to validate the accuracy of the confidence intervals. To demonstrate the validity of the developed uncertainty assessment methods and its impact on grid operation, a framework for integrating the proposed methods with an EMS system is developed. Demonstration through EMS integration illustrates the applicability of the proposed methodology and the developed tool for actual grid operation and paves the road for integration with EMS systems in control rooms.
Makarov, Yuri V.; Etingov, Pavel V.; Huang, Zhenyu; Ma, Jian; Subbarao, Krishnappa
2010-10-19
In this paper, an approach to evaluate the uncertainties of the balancing capacity, ramping capability, and ramp duration requirements is proposed. The approach includes three steps: forecast data acquisition, statistical analysis of retrospective information, and prediction of grid balancing requirements for a specified time horizon and a given confidence level. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on histogram analysis, incorporating sources of uncertainty of both continuous (wind and load forecast errors) and discrete (forced generator outages and start-up failures) nature. A new method called the "flying-brick" technique is developed to evaluate the look-ahead required generation performance envelope for the worst case scenario within a user-specified confidence level. A self-validation process is used to validate the accuracy of the confidence intervals. To demonstrate the validity of the developed uncertainty assessment methods and its impact on grid operation, a framework for integrating the proposed methods with an EMS system is developed. Demonstration through integration with an EMS system illustrates the applicability of the proposed methodology and the developed tool for actual grid operation and paves the road for integration with EMS systems from other vendors.
Al-Hashimi, M.H.; Wiese, U.-J.
2009-12-15
We consider wave packets of free particles with a general energy-momentum dispersion relation E(p). The spreading of the wave packet is determined by the velocity v = ∂E/∂p. The position-velocity uncertainty relation Δx Δv ≥ ½|⟨∂²E/∂p²⟩| is saturated by minimal uncertainty wave packets φ(p) = A exp(−αE(p) + βp). In addition to the standard minimal Gaussian wave packets corresponding to the non-relativistic dispersion relation E(p) = p²/2m, analytic calculations are presented for the spreading of wave packets with minimal position-velocity uncertainty product for the lattice dispersion relation E(p) = −cos(pa)/ma² as well as for the relativistic dispersion relation E(p) = √(p² + m²). The boost properties of moving relativistic wave packets as well as the propagation of wave packets in an expanding Universe are also discussed.
Sungyeol Choi; Jaeyeong Park; Robert O. Hoover; Supathorn Phongikaroon; Michael F. Simpson; Kwang-Rag Kim; Il Soon Hwang
2011-09-01
This study examines how much the cell potential changes under five different assumptions about the real anode surface area. Determining the real anode surface area is a significant issue that must be resolved to precisely model molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with an experimental cell potential variation over 80 hours of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. We achieved good agreement with the overall trend of the experimental data with an appropriate selection of a model for the real anode surface area, but there are still local inconsistencies between theoretical calculation and experimental observation. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that must be further considered in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty toward the latter period of electrorefining for a given batch of fuel. The benchmark results found that anode materials would be dissolved in both axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bond was dissolved.
A flexible uncertainty quantification method for linearly coupled multi-physics systems
Chen, Xiao; Ng, Brenda; Sun, Yunwei; Tong, Charles
2013-09-01
Highlights: We propose a modular hybrid UQ methodology suitable for independent development of module-based multi-physics simulation. Our algorithmic framework allows each module to have its own UQ method (either intrusive or non-intrusive). Information from each module is combined systematically to propagate global uncertainty. Our proposed approach allows for easy swapping of new methods into any module without the need to address incompatibilities. We demonstrate the proposed framework on a practical application involving a multi-species reactive transport model. Abstract: This paper presents a novel approach to building an integrated uncertainty quantification (UQ) methodology suitable for the modern-day component-based approach to multi-physics simulation development. Our hybrid UQ methodology supports independent development of the most suitable UQ method, intrusive or non-intrusive, for each physics module by providing an algorithmic framework to couple these stochastic modules for propagating global uncertainties. We address algorithmic and computational issues associated with the construction of this hybrid framework. We demonstrate the utility of such a framework on a practical application involving a linearly coupled multi-species reactive transport model.
A transform of complementary aspects with applications to entropic uncertainty relations
Mandayam, Prabha; Wehner, Stephanie; Balachandran, Niranjan
2010-08-15
Even though mutually unbiased bases and entropic uncertainty relations play an important role in quantum cryptographic protocols, they remain ill understood. Here, we construct special sets of up to 2n+1 mutually unbiased bases (MUBs) in dimension d = 2ⁿ, which have particularly beautiful symmetry properties derived from the Clifford algebra. More precisely, we show that there exists a unitary transformation that cyclically permutes such bases. This unitary can be understood as a generalization of the Fourier transform, which exchanges two MUBs, to multiple complementary aspects. We proceed to prove a lower bound for min-entropic uncertainty relations for any set of MUBs and show that symmetry plays a central role in obtaining tight bounds. For example, we obtain for the first time a tight bound for four MUBs in dimension d = 4, which is attained by an eigenstate of our complementarity transform. Finally, we discuss the relation to other symmetries obtained by transformations in discrete phase space and note that the extrema of discrete Wigner functions are directly related to min-entropic uncertainty relations for MUBs.
Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harper, F.T.; Hora, S.C.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distribution for variables related to the MACCS and COSYMA late health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the expert panel on late health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.
UNCERTAINTIES OF MODELING GAMMA-RAY PULSAR LIGHT CURVES USING VACUUM DIPOLE MAGNETIC FIELD
Bai, Xuening; Spitkovsky, Anatoly
2010-06-01
Current models of pulsar gamma-ray emission use the magnetic field of a rotating dipole in vacuum as a first approximation to the shape of a plasma-filled pulsar magnetosphere. In this paper, we revisit the question of gamma-ray light curve formation in pulsars in order to ascertain the robustness of the 'two-pole caustic (TPC)' and 'outer gap (OG)' models based on the vacuum magnetic field. We point out an inconsistency in the literature on the use of the relativistic aberration formula, where in several works the shape of the vacuum field was treated as known in the instantaneous corotating frame, rather than in the laboratory frame. With the corrected formula, we find that the peaks in the light curves predicted from the TPC model using the vacuum field are less sharp. The sharpness of the peaks in the OG model is less affected by this change, but the range of magnetic inclination angles and viewing geometries resulting in double-peaked light curves is reduced. In a realistic magnetosphere, the modification of field structure near the light cylinder (LC) due to plasma effects may change the shape of the polar cap and the location of the emission zones. We study the sensitivity of the light curves to different shapes of the polar cap for static and retarded vacuum dipole fields. In particular, we consider polar caps traced by the last open field lines and compare them to circular polar caps. We find that the TPC model is very sensitive to the shape of the polar cap, and a circular polar cap can lead to four peaks of emission. The OG model is less affected by different polar cap shapes, but is subject to big uncertainties of applying the vacuum field near the LC. We conclude that deviations from the vacuum field can lead to large uncertainties in pulse shapes, and a more realistic force-free field should be applied to the study of pulsar high-energy emission.
Sensitivity of helium beam-modulator design to uncertainties in biological data
Petti, P.L.; Lyman, J.T.; Castro, J.R.
1991-05-01
The goal in designing beam-modulating devices for heavy charged-particle therapy is to achieve uniform biological effects across the spread-peak region of the beam. To accomplish this, the linear-quadratic model for cell survival has been used to describe the biological response of the target cells to charged-particle radiation. In this paper, the sensitivity of the beam-modulator design in the high-dose region to the values of the linear-quadratic variables α and β has been investigated for a 215-MeV/u helium beam, and implications for higher-LET beams are discussed. The major conclusions of this work are that, for helium over the LET range of 2 to 16 keV/μm, uncertainties in measuring α and β for a given cell type which are of the order of 20% or less have a negligible effect on the beam-modulator design (i.e., on the slope of the spread Bragg peak); uncertainties less than or equal to 10% in the dose-averaged LET at each depth are unimportant; and, if the linear-quadratic variables for the tumor differ from those used in the beam-modulator design by a constant factor between about 0.5 and 3, then the resultant nonuniformity in the photon-equivalent dose delivered to the tumor is within ±2.5%. It is also shown that for any ion, if the nominal values of α or β used by the beam-modulator design program differ from their actual values by a constant factor, then the maximum errors possible in the beam-modulator design may be characterized by two limiting depth-dose curves such that the ratio of the dose at the proximal end of the spread Bragg curve to the dose at the distal end of the spread peak is given by α_distal/α_prox for the steepest curve, and (β_distal/β_prox)^(1/2) for the flattest curve.
Knoetig, Max L. [Institute for Particle Physics, ETH Zurich, 8093 Zurich, Switzerland]
2014-08-01
For decades researchers have studied the On/Off counting problem where a measured rate consists of two parts. One part is due to a signal process and the other is due to a background process, the magnitudes for both of which are unknown. While most frequentist methods are adequate for large number counts, they cannot be applied to sparse data. Here, I want to present a new objective Bayesian solution that only depends on three parameters: the number of events in the signal region, the number of events in the background region, and the ratio of the exposure for both regions. First, the probability of the counts only being due to background is derived analytically. Second, the marginalized posterior for the signal parameter is also derived analytically. With this two-step approach it is easy to calculate the signal's significance, strength, uncertainty, or upper limit in a unified way. This approach is valid without restrictions for any number count, including zero, and may be widely applied in particle physics, cosmic-ray physics, and high-energy astrophysics. In order to demonstrate the performance of this approach, I apply the method to gamma-ray burst data.
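The marginalization over the unknown background described above can be sketched numerically. The flat priors and grid integration below are illustrative simplifications, not the paper's objective Bayesian priors or analytic results; alpha here denotes the on/off exposure ratio.

```python
import numpy as np
from math import lgamma

def log_pois(n, lam):
    # Log Poisson pmf, vectorized over the rate lam.
    return n * np.log(lam) - lam - lgamma(n + 1)

def signal_posterior(n_on, n_off, alpha, s_grid, b_grid):
    """Posterior over signal strength s, marginalizing the unknown
    background rate b on a grid.  On-region counts ~ Pois(s + alpha*b),
    off-region counts ~ Pois(b); flat priors on s and b (an illustrative
    choice, not the paper's objective priors)."""
    S, B = np.meshgrid(s_grid, b_grid, indexing="ij")
    loglike = log_pois(n_on, S + alpha * B) + log_pois(n_off, B)
    post = np.exp(loglike - loglike.max()).sum(axis=1)  # marginalize b
    return post / (post.sum() * (s_grid[1] - s_grid[0]))

s_grid = np.linspace(1e-3, 30.0, 600)
b_grid = np.linspace(1e-3, 30.0, 600)
post = signal_posterior(n_on=12, n_off=15, alpha=0.5,
                        s_grid=s_grid, b_grid=b_grid)
s_map = s_grid[np.argmax(post)]  # posterior mode of the signal strength
```

From this marginal posterior one can read off the signal's strength, uncertainty interval, or upper limit in the unified way the abstract describes, and the approach remains valid for zero counts.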
Accounting for Global Climate Model Projection Uncertainty in Modern Statistical Downscaling
Johannesson, G
2010-03-17
Future climate change has emerged as a national and a global security threat. To carry out the needed adaptation and mitigation steps, a quantification of the expected level of climate change is needed, both at the global and the regional scale; in the end, the impact of climate change is felt at the local/regional level. An important part of such climate change assessment is uncertainty quantification. Decision and policy makers are not only interested in 'best guesses' of expected climate change, but rather in probabilistic quantification (e.g., Rougier, 2007). For example, consider the following question: What is the probability that the average summer temperature will increase by at least 4 °C in region R if global CO₂ emission increases by P% from current levels by time T? It is a simple question, but one that remains very difficult to answer, and answering these kinds of questions is the focus of this effort. The uncertainty associated with future climate change can be attributed to three major factors: (1) uncertainty about future emission of greenhouse gases (GHG); (2) given a future GHG emission scenario, what is its impact on the global climate?; and (3) given a particular evolution of the global climate, what does it mean for a particular location/region? In what follows, we assume a particular GHG emission scenario has been selected. Given the GHG emission scenario, the current batch of state-of-the-art global climate models (GCMs) is used to simulate future climate under this scenario, yielding an ensemble of future climate projections (which reflect, to some degree, our uncertainty in being able to simulate future climate given a particular GHG scenario). Due to the coarse-resolution nature of the GCM projections, they need to be spatially downscaled for regional impact assessments. To downscale a given GCM projection, two methods have emerged: dynamical downscaling and statistical (empirical) downscaling (SDS). Dynamic downscaling involves
Hartini, Entin Andiwijayakusuma, Dinan
2014-09-30
This research developed a code for uncertainty analysis based on a statistical approach to assessing the uncertainty of input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. The calculation is performed over the irradiation period using Monte Carlo N-Particle Transport (MCNPX). The uncertainty method is based on the probability density function. The code was developed as a Python script that couples with MCNPX for criticality and burn-up calculations. The simulation models the geometry of a PWR core at a power of 54 MW with UO₂ pellet fuel. The calculation uses the continuous-energy cross-section library ENDF/B-VI; MCNPX requires nuclear data in ACE format. An interface was developed to obtain ACE-format nuclear data from ENDF through NJOY processing for temperature changes over a certain range.
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well-known radiation transport code, and real high energy radiographs taken at two U.S. Department of Energy
Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the food pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 87 imprecisely-known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, milk growing season dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, area dependent cost, crop disposal cost, milk disposal cost, condemnation area, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: fraction of cesium deposition on grain fields that is retained on plant surfaces and transferred directly to grain, maximum allowable ground concentrations of Cs-137 and Sr-90 for production of crops, ground concentrations of Cs-134, Cs-137 and I-131 at which the disposal of milk will be initiated due to accidents that occur during the growing season, ground concentrations of Cs-134, I-131 and Sr-90 at which the disposal of crops will be initiated due to accidents that occur during the growing season, rate of depletion of Cs-137 and Sr-90 from the root zone, transfer of Sr-90 from soil to legumes, transfer of Cs-137 from soil to pasture, transfer of cesium from animal feed to meat, and the transfer of cesium, iodine and strontium from animal feed to milk.
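The workflow named above (Latin hypercube sampling of imprecisely known inputs, then regression to rank their contribution to output uncertainty) can be sketched in a few lines. This is a hypothetical miniature, not the MACCS analysis: the consequence model is invented, and a single standardized least-squares fit stands in for the study's partial correlation and stepwise regression.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, seed=0):
    """Latin hypercube sample on [0,1)^n_vars: each variable is stratified into
    n_samples equal bins, one draw per bin, bins shuffled independently."""
    rng = np.random.default_rng(seed)
    strata = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):
        rng.shuffle(strata[:, j])  # column views are shuffled in place
    return strata

def rank_inputs(X, y):
    """Rank inputs by |standardized regression coefficient|, a simple stand-in
    for partial-correlation / stepwise-regression importance ranking."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return np.argsort(-np.abs(beta))

# invented consequence model: strong dependence on x0, weaker on x2, none on x1
X = latin_hypercube(200, 3)
y = 5.0 * X[:, 0] + 1.0 * X[:, 2]
print(rank_inputs(X, y))  # x0 ranked first
```

The stratification guarantees that each input's full range is covered even with few model runs, which is why Latin hypercube designs are favored when each forward calculation is expensive.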
Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, water ingestion dose, milk growing season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of I-131 at which disposal of crops will be initiated due to accidents that occur during the growing season.
Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; Ramirez, Abelardo L.
2014-06-03
A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may expend days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed
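The three-step recipe (estimate data errors, invert a baseline, then bootstrap) reduces, in a deliberately simplified scalar setting, to the loop below. The exponential forward model and log "inversion" are invented stand-ins for the nonlinear ERT inversion; only the bootstrap logic mirrors the abstract.

```python
import numpy as np

def invert(d):
    """Toy deterministic inversion of the invented forward model d = exp(m)."""
    return np.log(np.clip(d, 1e-9, None))

rng = np.random.default_rng(0)
m_true = 0.30                 # think: CO2 saturation
d_obs = np.exp(m_true)        # noise-free datum, for the sketch
sigma = 0.03 * d_obs          # ~3% observational error, as assigned in the study

# parametric bootstrap: resample the data with the estimated error model and
# deterministically re-solve the inverse problem for each realization
boot = np.array([invert(d_obs + sigma * rng.standard_normal())
                 for _ in range(500)])
print(boot.mean(), boot.std())  # mean near m_true; std quantifies uncertainty
```

Because each realization is a deterministic solve, the 500 realizations parallelize trivially, which is the efficiency argument the abstract makes against a full MCMC search.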
Assessing the near-term risk of climate uncertainty : interdependencies among the U.S. states.
Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Reinert, Rhonda K.; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.
2010-04-01
Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts of climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.
Treatment planning for prostate focal laser ablation in the face of needle placement uncertainty
Cepek, Jeremy; Fenster, Aaron; Lindner, Uri; Trachtenberg, John; Davidson, Sean R. H.; Haider, Masoom A.; Ghai, Sangeet
2014-01-15
Purpose: To study the effect of needle placement uncertainty on the expected probability of achieving complete focal target destruction in focal laser ablation (FLA) of prostate cancer. Methods: Using simplified models of the prostate cancer focal target and focal laser ablation region shapes, Monte Carlo simulations of needle placement error were performed to estimate the probability of completely ablating a region of target tissue. Results: Graphs of the probability of complete focal target ablation are presented over clinically relevant ranges of focal target sizes and shapes, ablation region sizes, and levels of needle placement uncertainty. In addition, a table is provided for estimating the maximum target size that is treatable. The results predict that targets whose length is at least 5 mm smaller than the diameter of each ablation region can be confidently ablated using, at most, four laser fibers if the standard deviation in each component of needle placement error is less than 3 mm. However, larger targets (i.e., those near to or exceeding the diameter of each ablation region) require more careful planning, a process facilitated by the table provided. Conclusions: The probability of completely ablating a focal target using FLA is sensitive to the level of needle placement uncertainty, especially as the target length approaches and exceeds the diameter of ablated tissue that each individual laser fiber can achieve. The results of this work can be used to help determine individual patient eligibility for prostate FLA, to guide the planning of prostate FLA, and to quantify the clinical benefit of using advanced systems for accurate needle delivery for this treatment modality.
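The Monte Carlo idea can be illustrated with a one-dimensional, single-fiber toy model (the paper treats 3D targets and up to four fibers; the geometry below is invented purely for illustration): a target of length L is fully ablated only if the fiber center lands within (D − L)/2 of the target center.

```python
import numpy as np

def p_complete_ablation(target_len, ablation_diam, sigma, n_trials=100_000, seed=0):
    """1D, single-fiber sketch: the target is fully ablated iff the fiber
    center lands within (ablation_diam - target_len)/2 of the target center;
    placement error is Gaussian with standard deviation sigma."""
    rng = np.random.default_rng(seed)
    margin = (ablation_diam - target_len) / 2.0
    err = sigma * rng.standard_normal(n_trials)
    return float(np.mean(np.abs(err) <= margin))

# target 5 mm smaller than the ablation diameter, sigma = 3 mm placement error
p = p_complete_ablation(target_len=10.0, ablation_diam=15.0, sigma=3.0)
print(p)  # analytic value is erf(2.5 / (3*sqrt(2))), about 0.60
```

Even this toy shows the abstract's qualitative point: coverage probability collapses quickly once the margin between ablation diameter and target length shrinks toward the placement error.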
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
E.T. Coon; C.J. Wilson; S.L. Painter; V.E. Romanovsky; D.R. Harp; A.L. Atchley; J.C. Rowland
2016-02-02
This dataset contains an ensemble of thermal-hydro soil parameters including porosity, thermal conductivity, thermal conductivity shape parameters, and residual saturation of peat and mineral soil. The ensemble was generated using a Null-Space Monte Carlo analysis of parameter uncertainty based on a calibration to soil temperatures collected at the Barrow Environmental Observatory site by the NGEE team. The micro-topography of ice wedge polygons present at the site is included in the analysis using three 1D column models to represent polygon center, rim and trough features. The Arctic Terrestrial Simulator (ATS) was used in the calibration to model multiphase thermal and hydrological processes in the subsurface.
nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties
Kusina, A.; Jezo, T.; Clark, D. B.; Keppel, Cynthia; Lyonnet, F.; Morfin, Jorge; Olness, F. I.; Owens, Jeff; Schienbein, I.
2015-09-01
We present the first official release of the nCTEQ nuclear parton distribution functions with errors. The main addition to the previous nCTEQ PDFs is the introduction of PDF uncertainties based on the Hessian method. Another important addition is the inclusion of pion production data from RHIC that give us a handle on constraining the gluon PDF. This contribution summarizes our results from arXiv:1509.00792 and concentrates on the comparison with other groups providing nuclear parton distributions.
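The Hessian method mentioned here typically propagates PDF uncertainty through pairs of eigenvector error sets. A common symmetric "master formula" is sketched below with invented numbers; this is a generic illustration, not nCTEQ's exact prescription.

```python
import math

def hessian_uncertainty(plus, minus):
    """Symmetric Hessian 'master formula':
    dX = 0.5 * sqrt(sum_i (X(S_i+) - X(S_i-))^2), where plus[i] / minus[i]
    are the observable evaluated on the i-th eigenvector error-set pair."""
    return 0.5 * math.sqrt(sum((p - m) ** 2 for p, m in zip(plus, minus)))

# invented observable values on 2 eigenvector pairs around a central fit
x_plus = [1.04, 0.99]
x_minus = [0.96, 1.01]
print(hessian_uncertainty(x_plus, x_minus))  # 0.5*sqrt(0.08^2 + 0.02^2)
```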
Generalized uncertainty principle in f(R) gravity for a charged black hole
Said, Jackson Levi; Adami, Kristian Zarb
2011-02-15
Using f(R) gravity in the Palatini formalism, the metric for a charged spherically symmetric black hole is derived, taking the Ricci scalar curvature to be constant. The generalized uncertainty principle is then used to calculate the temperature of the resulting black hole; through this the entropy is found, correcting the Bekenstein-Hawking entropy in this case. Using the entropy, the tunneling probability and heat capacity are calculated up to the order of the Planck length, which produces an extra factor that becomes important as black holes become small, such as in the case of mini-black holes.
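For reference, one widely used form of the generalized uncertainty principle, with β a dimensionless Planck-scale deformation parameter and ℓ_p the Planck length (conventions differ between papers, so this may not match the authors' exact expression):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left[\,1 + \beta\,\frac{\ell_{p}^{2}}{\hbar^{2}}\,(\Delta p)^{2}\right]
```

The extra (Δp)² term is what generates the corrections to the Bekenstein-Hawking entropy and the modified temperature that dominate as the black hole shrinks toward the Planck scale.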
Development and implementation of future advanced fuel cycles, including those that recycle fuel materials, use advanced fuels different from current fuels, or partition and transmute actinide radionuclides, will impact the waste management system. The UFD Campaign can reasonably conclude that advanced fuel cycles, in combination with partitioning and transmutation to remove actinides, will not materially alter the performance, the spread in dose results around the mean, the modeling effort required to include significant features, events, and processes (FEPs) in the performance assessment, or the characterization of uncertainty associated with a geologic disposal system in the regulatory environment of the US.
Analysis of sampling plan options for tank 16H from the perspective of statistical uncertainty
Shine, E. P.
2013-02-28
This report develops a concentration variability model for Tank 16H in order to compare candidate sampling plans for assessing the concentrations of analytes in the residual material in the annulus and on the floor of the primary vessel. Candidate plans are compared on the expected upper 95% confidence limit (UCL95) for the mean; the result is a rank order of plans from lowest to highest expected UCL95, the lowest being the most desirable from an uncertainty perspective.
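The UCL95 criterion is the standard one-sided Student-t upper bound on a mean. A minimal sketch with invented concentration values (not Tank 16H data):

```python
import math
from statistics import mean, stdev

# one-sided 95% Student-t quantiles, keyed by degrees of freedom (n - 1)
T95 = {2: 2.920, 3: 2.353, 4: 2.132, 5: 2.015}

def ucl95(samples):
    """One-sided 95% upper confidence limit on the mean:
    UCL95 = xbar + t(0.95, n-1) * s / sqrt(n)."""
    n = len(samples)
    return mean(samples) + T95[n - 1] * stdev(samples) / math.sqrt(n)

# invented analyte concentrations from a 4-sample candidate plan
print(round(ucl95([10.0, 12.0, 9.0, 11.0]), 2))  # → 12.02
```

Ranking candidate plans by expected UCL95 rewards plans whose extra samples most reduce the t-quantile and the standard error, which is exactly the trade-off the report evaluates.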
Parker, W
2009-09-18
Non-destructive gamma-ray analysis is a fundamental part of nuclear safeguards, including nuclear energy safeguards technology. Developing safeguards capabilities for nuclear energy will certainly benefit from the advanced use of gamma-ray spectroscopy as well as the ability to model various reactor scenarios. There is currently a wide variety of nuclear data that could be used in computer modeling and gamma-ray spectroscopy analysis. The data can be discrepant (with varying uncertainties), and it may be difficult for a modeler or software developer to determine the best nuclear data set for a particular situation. To use gamma-ray spectroscopy to determine the relative isotopic composition of nuclear materials, the gamma-ray energies and the branching ratios or intensities of the gamma rays emitted from the nuclides in the material must be well known. A variety of computer simulation codes will be used during the development of nuclear energy safeguards, and, to compare the results of various codes, it will be essential to have all the gamma-ray libraries agree. Assessing our nuclear data needs allows us to create a prioritized list of desired measurements, and provides uncertainties for energies and especially for branching intensities. Of interest are actinides, fission products, and activation products, and most particularly mixtures of all of these radioactive isotopes, including mixtures of actinides and other products. Recent work includes the development of new detectors with increased energy resolution, and studies of the gamma-ray lines used in simulation codes. Because new detectors are being developed, there is an increased need for well-known nuclear data for radioactive isotopes of some elements. Safeguards technology should take advantage of all types of gamma-ray detectors, including new super-cooled detectors, germanium detectors, and cadmium zinc telluride detectors. Mixed isotopes, particularly mixed actinides found in nuclear reactor
Jenny B. Chapman; Karl Pohlmann; Greg Pohll; Ahmed Hassan; Peter Sanders; Monica Sanchez; Sigurd Jaunarajs
2001-10-18
The hundreds of locations where nuclear tests were conducted underground are dramatic legacies of the cold war. The vast majority of these tests are within the borders of the Nevada Test Site (NTS), but 11 underground tests were conducted elsewhere. The Faultless test, conducted in central Nevada, is the site of an ongoing environmental remediation effort that has successfully progressed through numerous technical challenges due to close cooperation between the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) and the State of Nevada Division of Environmental Protection (NDEP). The challenges faced at this site are similar to those of many other sites of groundwater contamination: substantial uncertainties due to the relative lack of data from a highly heterogeneous subsurface environment. Knowing when, where, and how to devote the often enormous resources needed to collect new data is a common problem, and one that can cause disputes between remediators and regulators that stall progress toward closing sites. For Faultless, a variety of numerical modeling techniques and statistical tools were used to provide the information needed for NNSA and NDEP to confidently move forward along the remediation path to site closure. A general framework for remediation was established in an agreement and consent order between DOE and the State of Nevada that recognized that no cost-effective technology currently exists to remove the source of the contaminants in the nuclear cavities. Rather, the emphasis of the corrective action is on identifying the impacted groundwater resource and ensuring protection of human health and the environment from the contamination through monitoring. As a result, groundwater flow and transport modeling is the lynchpin in the remediation effort.
Validation and quantification of uncertainty in coupled climate models using network analysis
Bracco, Annalisa
2015-08-10
We developed a fast, robust and scalable methodology to examine, quantify, and visualize climate patterns and their relationships. It is based on a set of notions, algorithms and metrics used in the study of graphs, referred to as complex network analysis. This approach can be applied to explain known climate phenomena in terms of an underlying network structure and to uncover regional and global linkages in the climate system, while comparing general circulation model outputs with observations. The proposed method is based on a two-layer network representation, and is substantially new among the available network methodologies developed for climate studies. At the first layer, gridded climate data are used to identify "areas", i.e., geographical regions that are highly homogeneous in terms of the given climate variable. At the second layer, the identified areas are interconnected with links of varying strength, forming a global climate network. The robustness of the method (i.e., its ability to separate topologically distinct fields while correctly identifying similarities) has been extensively tested, and it has been shown to provide a reliable, fast framework for comparing and ranking the ability of climate models to reproduce observed climate patterns and their connectivity. We further developed the methodology to account for lags in the connectivity between climate patterns and refined our area identification algorithm to account for autocorrelation in the data. The new methodology has been applied to state-of-the-art climate model simulations that participated in the latest IPCC (Intergovernmental Panel on Climate Change) assessment to verify their performance, quantify uncertainties, and uncover changes in global linkages between past and future projections. Network properties of modeled sea surface temperature and rainfall over 1956–2005 have been constrained towards observations or reanalysis data sets
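The two-layer construction can be sketched generically: average grid cells within each homogeneous "area" (layer one), then link areas whose averaged series are strongly correlated (layer two). The function and data below are invented for illustration and are not the authors' algorithm, which identifies the areas automatically.

```python
import numpy as np

def area_links(series, area_labels, thresh=0.5):
    """Two-layer sketch: average grid-cell time series within each 'area'
    (first layer), then link areas whose averaged series correlate above
    thresh in absolute value (second layer)."""
    areas = sorted(set(area_labels))
    avg = np.array([series[[j for j, a in enumerate(area_labels) if a == k]].mean(0)
                    for k in areas])
    c = np.corrcoef(avg)
    return [(areas[i], areas[j], c[i, j])
            for i in range(len(areas)) for j in range(i + 1, len(areas))
            if abs(c[i, j]) >= thresh]

# four synthetic 'grid cells': two in area A, one each in areas B and C
t = np.linspace(0.0, 6.28, 100)
series = np.array([np.sin(t), np.sin(t) + 0.1, -np.sin(t), np.cos(3 * t)])
links = area_links(series, ["A", "A", "B", "C"], thresh=0.8)
print(links)  # A and B are perfectly anti-correlated; C stands apart
```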
Chung, Bub Dong; Lee, Young Lee; Park, Chan Eok; Lee, Sang Yong
1996-10-01
Assessment of the original RELAP5/MOD3.1 code against the FLECHT-SEASET series of experiments identified some weaknesses of the reflood model, such as the lack of a quenching temperature model, the shortcomings of the Chen transition boiling model, and the incorrect prediction of droplet size and interfacial heat transfer. Also, high temperature spikes during the reflood calculation resulted in high steam flow oscillation and liquid carryover. An effort was made to improve the code with respect to the above weaknesses, and the necessary models in the wall heat transfer package and the numerical scheme were modified. Some important FLECHT-SEASET experiments were assessed using both the improved and the standard version. The results from the improved RELAP5/MOD3.1 show that the weaknesses of the standard MOD3.1 code were much improved: the prediction of void profile and cladding temperature agreed better with test data, especially for the gravity feed test. A scatter diagram of peak cladding temperatures (PCTs) was made by comparing all the calculated PCTs with the corresponding experimental values. The deviations between experimental and calculated PCTs were computed for 2793 data points; they are shown to be normally distributed and are used to statistically quantify the PCT uncertainty of the code. The upper limit of PCT uncertainty at the 95% confidence level is evaluated to be about 99 K.
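When deviations are normally distributed, a one-sided 95% upper limit is the familiar mean-plus-1.645-sigma bound. The sketch below uses invented deviation statistics (mean and spread chosen only to land near the abstract's ~99 K figure) on the same number of data points; it illustrates the statistic, not the paper's actual data.

```python
import numpy as np

def pct_upper_limit_95(deviations):
    """If PCT deviations (calculated - measured) are ~normal, a one-sided
    95% upper bound on the code's PCT error is mean + 1.645 * std."""
    d = np.asarray(deviations, dtype=float)
    return d.mean() + 1.645 * d.std(ddof=1)

# hypothetical deviations in kelvin; the paper pools 2793 data points
rng = np.random.default_rng(0)
devs = rng.normal(loc=10.0, scale=54.0, size=2793)
print(pct_upper_limit_95(devs))  # roughly 10 + 1.645*54, i.e. near 99 K
```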
Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers
Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.; Lin, Guang; Fang, Yilin; Ren, Huiying; Fang, Zhufeng
2014-08-01
In this study, we apply an uncertainty quantification (UQ) framework to CO₂ sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO₂ into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO₂ migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while still fully exploring the input parameter space and quantifying the input uncertainty. The CO₂ migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO₂ module). For computationally demanding simulations with 3D heterogeneity fields, we used the scalable version of the simulator, eSTOMP, for forward modeling. We built response curves and response surfaces of model outputs with respect to input parameters to examine individual and combined effects, and to identify and rank the significance of the input parameters.
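Of the sampling approaches named, quasi-Monte Carlo is the easiest to illustrate generically: low-discrepancy sequences such as the Halton sequence fill the input space more evenly than pseudo-random draws, reducing the number of forward runs needed. The snippet is a sketch of the idea, not the PNNL framework's implementation.

```python
def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in the given
    base; pairing bases 2 and 3 yields a 2D Halton quasi-Monte Carlo sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# quasi-random 2D points fill the unit square more evenly than random draws;
# each point would parameterize one forward simulation
pts = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
print(pts[0])  # (0.5, 1/3): binary and ternary radical-inverse of 1
```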
Covariant energy-momentum and an uncertainty principle for general relativity
Cooperstock, F.I.; Dupre, M.J.
2013-12-15
We introduce a naturally-defined totally invariant spacetime energy expression for general relativity incorporating the contribution from gravity. The extension links seamlessly to the action integral for the gravitational field. The demand that the general expression for arbitrary systems reduces to the Tolman integral in the case of stationary bounded distributions leads to the matter-localized Ricci integral for energy-momentum in support of the energy localization hypothesis. The role of the observer is addressed and, as an extension of the special relativistic case, the field of observers comoving with the matter is seen to compute the intrinsic global energy of a system. The new localized energy supports the Bonnor claim that the Szekeres collapsing dust solutions are energy-conserving. It is suggested that in the extreme of strong gravity, the Heisenberg Uncertainty Principle be generalized in terms of spacetime energy-momentum. -- Highlights: We present a totally invariant spacetime energy expression for general relativity incorporating the contribution from gravity. Demand for the general expression to reduce to the Tolman integral for stationary systems supports the Ricci integral as energy-momentum. Localized energy via the Ricci integral is consistent with the energy localization hypothesis. New localized energy supports the Bonnor claim that the Szekeres collapsing dust solutions are energy-conserving. Suggest the Heisenberg Uncertainty Principle be generalized in terms of spacetime energy-momentum in the strong gravity extreme.
Prather, Michael J.; Hsu, Juno; Nicolau, Alex; Veidenbaum, Alex; Smith, Philip Cameron; Bergmann, Dan
2014-11-07
Atmospheric chemistry controls the abundances and hence climate forcing of important greenhouse gases including N₂O, CH₄, HFCs, CFCs, and O₃. Attributing climate change to human activities requires, at a minimum, accurate models of the chemistry and circulation of the atmosphere that relate emissions to abundances. This DOE-funded research provided realistic, yet computationally optimized and affordable, photochemical modules to the Community Earth System Model (CESM) that augment the CESM capability to explore the uncertainty in future stratospheric-tropospheric ozone, stratospheric circulation, and thus the lifetimes of chemically controlled greenhouse gases from climate simulations. To this end, we successfully implemented the Fast-J (radiation algorithm determining key chemical photolysis rates) and Linoz v3.0 (linearized photochemistry for interactive O₃, N₂O, NOy and CH₄) packages in LLNL-CESM and for the first time demonstrated how changes in the O₂ photolysis rate within its uncertainty range can significantly impact stratospheric climate and ozone abundances. From the UCI side, this proposal also helped LLNL develop a CAM-Superfast Chemistry model that was implemented for the IPCC AR5 and contributed chemical-climate simulations to CMIP5.
Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models
Mosher, J.; Sako, M. [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N. [LPNHE, CNRS/IN2P3, Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, 4 place Jussieu, F-75252 Paris Cedex 05 (France); Kessler, R.; Frieman, J. A. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Marriner, J. [Center for Particle Astrophysics, Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Biswas, R.; Kuhlmann, S. [Argonne National Laboratory, 9700 South Cass Avenue, Lemont, IL 60439 (United States); Schneider, D. P., E-mail: kessler@kicp.chicago.edu [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)
2014-09-20
We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ≈120 low-redshift (z < 0.1) SNe Ia, ≈255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ≈290 SNLS SNe Ia (z ≈ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input − w_recovered) ranging from −0.005 ± 0.012 to −0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is −0.014 ± 0.007.
Impacts of Variability and Uncertainty in Solar Photovoltaic Generation at Multiple Timescales
Ela, E.; Diakov, V.; Ibanez, E.; Heaney, M.
2013-05-01
The characteristics of variability and uncertainty of PV solar power have been studied extensively. These characteristics can create challenges for system operators who must ensure a balance between generation and demand while obeying power system constraints at the lowest possible cost. A number of studies have looked at the impact of wind power plants, and some recent studies have also included solar PV. The simulations that are used in these studies, however, are typically fixed to one time resolution. This makes it difficult to analyze the variability across several timescales. In this study, we use a simulation tool that has the ability to evaluate both the economic and reliability impacts of PV variability and uncertainty at multiple timescales. This information should help system operators better prepare for increases of PV on their systems and develop improved mitigation strategies to better integrate PV with enhanced reliability. Another goal of this study is to understand how different mitigation strategies and methods can improve the integration of solar power more reliably and efficiently.
Hydropower generation management under uncertainty via scenario analysis and parallel computation
Escudero, L.F.; Garcia, C.; Fuente, J.L. de la; Prieto, F.J.
1996-05-01
The authors present a modeling framework for the robust solution of hydroelectric power management problems with uncertainty in the values of the water inflows and outflows. A deterministic treatment of the problem provides unsatisfactory results, except for very short time horizons. The authors describe a model based on scenario analysis that allows a satisfactory treatment of uncertainty in the model data for medium and long-term planning problems. Their approach results in a huge model with a network submodel per scenario plus coupling constraints. The size of the problem and the structure of the constraints are adequate for the use of decomposition techniques and parallel computation tools. The authors present computational results for both sequential and parallel implementation versions of the codes, running on a cluster of workstations. The codes have been tested on data obtained from the reservoir network of Iberdrola, a power utility owning 50% of the total installed hydroelectric capacity of Spain, and generating 40% of the total energy demand.
Hydropower generation management under uncertainty via scenario analysis and parallel computation
Escudero, L.F.; Garcia, C.; Fuente, J.L. de la; Prieto, F.J.
1995-12-31
The authors present a modeling framework for the robust solution of hydroelectric power management problems with uncertainty in the values of the water inflows and outflows. A deterministic treatment of the problem provides unsatisfactory results, except for very short time horizons. The authors describe a model based on scenario analysis that allows a satisfactory treatment of uncertainty in the model data for medium and long-term planning problems. This approach results in a huge model with a network submodel per scenario plus coupling constraints. The size of the problem and the structure of the constraints are adequate for the use of decomposition techniques and parallel computation tools. The authors present computational results for both sequential and parallel implementation versions of the codes, running on a cluster of workstations. The codes have been tested on data obtained from the reservoir network of Iberdrola, a power utility owning 50% of the total installed hydroelectric capacity of Spain, and generating 40% of the total energy demand.
Hou, Zhangshuan; Engel, David W.; Lin, Guang; Fang, Yilin; Fang, Zhufeng
2013-10-01
In this paper, we introduce an uncertainty quantification (UQ) software framework for carbon sequestration, focused on the effect of spatial heterogeneity of reservoir properties on CO2 migration. We use a sequential Gaussian method (SGSIM) to generate realizations of permeability fields with various spatial statistical attributes. To deal with the computational difficulties, we integrate the following ideas/approaches. First, we use three different sampling approaches (probabilistic collocation, quasi-Monte Carlo, and adaptive sampling) to reduce the number of forward calculations while trying to explore the parameter space and quantify the input uncertainty. Second, we use eSTOMP as the forward modeling simulator. eSTOMP is implemented with the Global Arrays toolkit that is based on one-sided inter-processor communication and supports a shared memory programming style on distributed memory platforms, providing a highly-scalable performance. Third, we built an adaptive system infrastructure to select the best possible data transfer mechanisms, to optimally allocate system resources to improve performance and to integrate software packages and data for composing carbon sequestration simulation, computation, analysis, estimation and visualization. We demonstrate the framework with a given CO2 injection scenario in heterogeneous sandstone reservoirs.
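The sampling trade-off described in this abstract (fewer forward runs while still exploring parameter space) can be sketched with a toy example. The snippet below is a hypothetical stand-in, not the eSTOMP simulator: it contrasts plain Monte Carlo with quasi-Monte Carlo (Sobol) sampling of a single uncertain log-permeability parameter pushed through a cheap surrogate response.

```python
import numpy as np
from scipy.stats import qmc, norm

def forward_model(log_k):
    # Hypothetical stand-in for an expensive reservoir simulator:
    # a smooth "plume extent" response to log-permeability.
    return 1.0 + 0.5 * np.tanh(log_k)

rng = np.random.default_rng(0)
n = 1024  # power of 2, as recommended for Sobol sequences

# Plain Monte Carlo: draw log-permeability ~ N(0, 1) directly.
mc_mean = forward_model(rng.standard_normal(n)).mean()

# Quasi-Monte Carlo: map a scrambled Sobol sequence in [0, 1)
# through the inverse normal CDF to get the same distribution.
u = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()
u = np.clip(u, 1e-12, 1 - 1e-12)  # guard against ppf(0) = -inf
qmc_mean = forward_model(norm.ppf(u)).mean()

# Both estimators target the same expectation; the low-discrepancy
# sequence typically converges faster for smooth integrands, which
# is why such samplers reduce the number of forward calculations.
print(mc_mean, qmc_mean)
```

The true expectation here is 1.0 (the tanh term is odd in a symmetric variable), so both estimates should land close to it; probabilistic collocation and adaptive sampling pursue the same goal by different routes.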
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Report on INL Activities for Uncertainty Reduction Analysis of FY11
G. Palmiotti; H. Hiruta; M. Salvatores
2011-09-01
This report presents the status of activities performed at INL under the ARC Work Package on 'Uncertainty Reduction Analyses,' whose main goal is the reduction of uncertainties associated with nuclear data on neutronic integral parameters of interest for the design of advanced fast reactors under consideration by the ARC program. First, an analysis of experiments was carried out. For both JOYO (the first Japanese fast reactor) and ZPPR-9 (a large size zero power plutonium fueled experiment performed at ANL-W in Idaho) the performance of ENDF/B-VII.0 is quite satisfactory, except for the sodium void configurations of ZPPR-9, for which one has to take into account the approximations of the modeling. In fact, when a more detailed model is used (calculations performed at ANL in a companion WP), more reasonable results are obtained. A large effort was devoted to the analysis of the irradiation experiments PROFIL-1 and -2 and TRAPU, performed at the French fast reactor PHENIX. For these experiments a pre-release of the ENDF/B-VII.1 cross section files was also used, in order to provide validation feedback to the CSEWG nuclear data evaluation community. In the PROFIL experiments improvements can be observed for the ENDF/B-VII.1 capture data in 238Pu, 241Am, 244Cm, 97Mo, 151Sm, and 153Eu, and for 240Pu(n,2n). On the other hand, 240,242Pu, 95Mo, 133Cs and 145Nd capture C/E results are worse. For the major actinides 235U and especially 239Pu, capture C/E's are underestimated. For fission products, 105,106Pd, 143,144Nd and 147,149Sm are significantly underestimated, while 101Ru and 151Sm are overestimated. Other C/E deviations from unity are within the combined experimental and calculated statistical uncertainty. From the TRAPU analysis, the major improvement is in the predicted 243Cm build-up, presumably due to an improved 242Cm capture evaluation. The COSMO experiment was also analyzed in order to provide useful feedback on fission cross sections. It was found that ENDF
Singh, Aditya; Serbin, Shawn P.; McNeil, Brenden E.; Kingdon, Clayton C.; Townsend, Philip A.
2015-12-01
A major goal of remote sensing is the development of generalizable algorithms to repeatedly and accurately map ecosystem properties across space and time. Imaging spectroscopy has great potential to map vegetation traits that cannot be retrieved from broadband spectral data, but rarely have such methods been tested across broad regions. Here we illustrate a general approach for estimating key foliar chemical and morphological traits through space and time using NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-Classic). We apply partial least squares regression (PLSR) to data from 237 field plots within 51 images acquired between 2008 and 2011. Using a series of 500 randomized 50/50 subsets of the original data, we generated spatially explicit maps of seven traits (leaf mass per area (M_{area}), percentage nitrogen, carbon, fiber, lignin, and cellulose, and isotopic nitrogen concentration, δ^{15}N) as well as pixel-wise uncertainties in their estimates based on error propagation in the analytical methods. Both M_{area} and %N PLSR models had R^{2} > 0.85. Root mean square errors (RMSEs) for both variables were less than 9% of the range of data. Fiber and lignin were predicted with R^{2} > 0.65 and carbon and cellulose with R^{2} > 0.45. Although R^{2} of %C and cellulose were lower than those of M_{area} and %N, the measured variability of these constituents (especially %C) was also lower, and their RMSE values were beneath 12% of the range in overall variability. Model performance for δ^{15}N was the lowest (R^{2} = 0.48, RMSE = 0.95‰), but within 15% of the observed range. The resulting maps of chemical and morphological traits, together with their overall uncertainties, represent a first-of-its-kind approach for examining the spatiotemporal patterns of forest functioning and nutrient cycling across a broad range of temperate and sub-boreal ecosystems. These results offer an alternative to
Uncertainty Quantification of Calculated Temperatures for the U.S. Capsules in the AGR-2 Experiment
Lybeck, Nancy; Einerson, Jeffrey J.; Pham, Binh T.; Hawkes, Grant L.
2015-03-01
A series of Advanced Gas Reactor (AGR) irradiation experiments are being conducted within the Advanced Reactor Technology (ART) Fuel Development and Qualification Program. The main objectives of the fuel experimental campaign are to provide the necessary data on fuel performance to support fuel process development, qualify a fuel design and fabrication process for normal operation and accident conditions, and support development and validation of fuel performance and fission product transport models and codes (PLN-3636). The AGR-2 test was inserted in the B-12 position in the Advanced Test Reactor (ATR) core at Idaho National Laboratory (INL) in June 2010 and successfully completed irradiation in October 2013, resulting in irradiation of the TRISO fuel for 559.2 effective full power days (EFPDs) during approximately 3.3 calendar years. The AGR-2 data, including the irradiation data and calculated results, were qualified and stored in the Nuclear Data Management and Analysis System (NDMAS) (Pham and Einerson 2014). To support the U.S. TRISO fuel performance assessment and to provide data for validation of fuel performance and fission product transport models and codes, the daily as-run thermal analysis has been performed separately on each of four AGR-2 U.S. capsules for the entire irradiation as discussed in (Hawkes 2014). The ABAQUS code’s finite element-based thermal model predicts the daily average volume-average fuel temperature and peak fuel temperature in each capsule. This thermal model involves complex physical mechanisms (e.g., graphite holder and fuel compact shrinkage) and properties (e.g., conductivity and density). Its predictions are therefore affected by uncertainty in input parameters and by incomplete knowledge of the underlying physics, which leads to modeling assumptions. Thus, alongside the deterministic predictions from a set of input thermal conditions, information about prediction uncertainty is instrumental for the ART
Lombardo, A.J.; Orthen, R.F.; Shonka, J.J.; Scott, L.M.
2007-07-01
The regulatory release of sites and facilities (property) for restricted or unrestricted use has evolved beyond prescribed levels to model-derived dose and risk based limits. Dose models for deriving corresponding soil radionuclide concentration guidelines are necessarily simplified representations of complex processes. It is not practical to obtain data to fully or accurately characterize transport and exposure pathway processes. Similarly, it is not possible to predict future conditions with certainty absent durable land use restrictions. To compensate for the shortage of comprehensive characterization data and site specific inputs to describe the projected 'as-left' contaminated zone, conservative default values are used to derive acceptance criteria. The result is overly conservative criteria. Furthermore, implementation of a remediation plan and subsequent final surveys to show compliance with the conservative criteria often result in excessive remediation due to the large uncertainty. During a recent decommissioning project of a site contaminated with thorium, a unique approach to dose modeling and remedial action design was implemented to effectively manage end-point uncertainty. The approach used a dynamic feedback dose model and soil segregation technology to characterize impacted material with precision and accuracy not possible with static control approaches. Utilizing the remedial action goal 'over excavation' and subsequent auto-segregation of excavated material for refill, the end-state (as-left conditions of the refilled excavation) RESRAD input parameters were re-entered to assess the final dose. The segregation process produced separate below and above criteria material stockpiles whose volumes were optimized for maximum refill and minimum waste. The below criteria material was returned to the excavation without further analysis, while the above criteria material was packaged for offsite disposal. Using the activity concentration data recorded by
Cafferty, Kara G.; Searcy, Erin M.; Nguyen, Long; Spatari, Sabrina
2014-11-01
To meet Energy Independence and Security Act (EISA) cellulosic biofuel mandates, the United States will require an annual domestic supply of about 242 million Mg of biomass by 2022. To improve the feedstock logistics of lignocellulosic biofuels and access available biomass resources from areas with varying yields, commodity systems have been proposed and designed to deliver on-spec biomass feedstocks at preprocessing “depots”, which densify and stabilize the biomass prior to long-distance transport and delivery to centralized biorefineries. The harvesting, preprocessing, and logistics (HPL) of biomass commodity supply chains thus could introduce spatially variable environmental impacts into the biofuel life cycle due to the need to harvest, move, and preprocess biomass over multiple distances with variable spatial density. This study examines the uncertainty in greenhouse gas (GHG) emissions of corn stover HPL within a bio-ethanol supply chain in the state of Kansas, where sustainable biomass supply varies spatially. Two scenarios were evaluated, each having a different number of depots of varying capacity and location within Kansas relative to a central commodity-receiving biorefinery, to test GHG emissions uncertainty. Monte Carlo simulation was used to estimate the spatial uncertainty in the HPL gate-to-gate sequence. The results show that the transport of densified biomass introduces the highest variability and contribution to the carbon footprint of the HPL supply chain (0.2-13 g CO_{2}e/MJ). Moreover, depending upon the biomass availability, its spatial density, and the surrounding transportation infrastructure (road and rail), HPL processes can increase the variability in life cycle environmental impacts for lignocellulosic biofuels. Within Kansas, life cycle GHG emissions could range from 24 to 41 g CO_{2}e/MJ depending upon the location, size and number of preprocessing depots constructed. However, this
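A minimal Monte Carlo sketch of the kind of uncertainty propagation the study describes is shown below. All parameter ranges are illustrative assumptions for this sketch, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative uncertain inputs for densified-biomass transport
# (assumed ranges, chosen only to demonstrate the method):
distance_km = rng.triangular(20, 80, 250, size=n)  # depot -> biorefinery
truck_ef = rng.uniform(0.08, 0.12, size=n)         # kg CO2e per Mg-km
energy_mj_per_mg = 6500.0                          # fuel energy yield proxy

# Gate-to-gate transport emissions per MJ of fuel:
# (km) * (kg CO2e / Mg-km) * (g/kg) / (MJ/Mg) = g CO2e / MJ
g_co2e_per_mj = distance_km * truck_ef * 1000.0 / energy_mj_per_mg

lo, hi = np.percentile(g_co2e_per_mj, [5, 95])
print(f"transport GHG: {lo:.1f}-{hi:.1f} g CO2e/MJ (90% interval)")
```

Sampling every uncertain stage of the gate-to-gate sequence this way produces the emission intervals reported in the abstract, rather than a single point estimate.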
Yunhua Zhu; H. Christopher Frey
2006-12-15
Integrated gasification combined cycle (IGCC) technology is a promising alternative for clean generation of power and coproduction of chemicals from coal and other feedstocks. Advanced concepts for IGCC systems that incorporate state-of-the-art gas turbine systems, however, are not commercially demonstrated. Therefore, there is uncertainty regarding the future commercial-scale performance, emissions, and cost of such technologies. The Frame 7F gas turbine represents current state-of-practice, whereas the Frame 7H is the most recently introduced advanced commercial gas turbine. The objective of this study was to evaluate the risks and potential payoffs of IGCC technology based on different gas turbine combined cycle designs. Models of entrained-flow gasifier-based IGCC systems with Frame 7F (IGCC-7F) and 7H gas turbine combined cycles (IGCC-7H) were developed in ASPEN Plus. An uncertainty analysis was conducted. Gasifier carbon conversion and project cost uncertainty are identified as the most important uncertain inputs with respect to system performance and cost. The uncertainties in the difference of the efficiencies and costs for the two systems are characterized. Despite uncertainty, the IGCC-7H system is robustly preferred to the IGCC-7F system. Advances in gas turbine design will improve the performance, emissions, and cost of IGCC systems. The implications of this study for decision-making regarding technology selection, research planning, and plant operation are discussed. 38 refs., 11 figs., 5 tabs.
Stracuzzi, David John; Brost, Randolph C.; Phillips, Cynthia A.; Robinson, David G.; Wilson, Alyson G.; Woodbridge, Diane M. -K.
2015-09-26
Geospatial semantic graphs provide a robust foundation for representing and analyzing remote sensor data. In particular, they support a variety of pattern search operations that capture the spatial and temporal relationships among the objects and events in the data. However, in the presence of large data corpora, even a carefully constructed search query may return a large number of unintended matches. This work considers the problem of calculating a quality score for each match to the query, given that the underlying data are uncertain. As a result, we present a preliminary evaluation of three methods for determining both match quality scores and associated uncertainty bounds, illustrated in the context of an example based on overhead imagery data.
Uncertainties in the characterization of the thermal environment of a solid rocket propellant fire
Diaz, J.C.
1993-10-01
There has been an interest in developing models capable of predicting the response of systems to Minuteman (MM) III third-stage solid propellant fires. Input parameters for such an effort include the boundary conditions that describe the fire temperature, heat flux, emissivity, and propellant burn rate. In this study scanning spectroscopy and pyrometry were used to infer plume temperatures. Each diagnostic system possessed strengths and weaknesses. The intention was to use various supportive methods to infer plume temperature and emissivity, because no one diagnostic had proven capabilities for determining temperature under these conditions. Furthermore, these diagnostics were being used near the limit of their applicability. All these points created some uncertainty in the data collected.
Uncertainties in compliance with harmonic current distortion limits in electric power systems
Gruzs, T.M.
1991-07-01
The harmonic distortion of any repetitive voltage or current waveform is typically described by the quantity total harmonic distortion (THD). With the proliferation of nonlinear loads, such as static power converters, there has been increasing concern over the generation of harmonic currents and the effects of these currents on the power system. Proposals have been made to limit harmonic currents in power systems using the total harmonic distortion of the current as the criterion. This criterion, although it may be necessary, can be ambiguous and lead to compliance uncertainties. In this paper a discussion is presented on several of the practical problems of applying total harmonic current distortion limits to industrial and commercial power systems.
DOE R&D Accomplishments [OSTI]
Post, W. M.; Dale, V. H.; DeAngelis, D. L.; Mann, L. K.; Mulholland, P. J.; O`Neill, R. V.; Peng, T. -H.; Farrell, M. P.
1990-02-01
The global carbon cycle is the dynamic interaction among the earth's carbon sources and sinks. Four reservoirs can be identified, including the atmosphere, terrestrial biosphere, oceans, and sediments. Atmospheric CO{sub 2} concentration is determined by characteristics of carbon fluxes among major reservoirs of the global carbon cycle. The objective of this paper is to document the knowns, unknowns, and uncertainties associated with key questions that, if answered, will increase the understanding of the portion of past, present, and future atmospheric CO{sub 2} attributable to fossil fuel burning. Documented atmospheric increases in CO{sub 2} levels are thought to result primarily from fossil fuel use and, perhaps, deforestation. However, the observed atmospheric CO{sub 2} increase is less than expected from current understanding of the global carbon cycle because of poorly understood interactions among the major carbon reservoirs.
Electromagnetic form factors of the nucleon: New fit and analysis of uncertainties
Alberico, W. M.; Giunti, C.; Bilenky, S. M.; Graczyk, K. M.
2009-06-15
Electromagnetic form factors of proton and neutron, obtained from a new fit of data, are presented. The proton form factors are obtained from a simultaneous fit to the ratio {mu}{sub p}G{sub Ep}/G{sub Mp} determined from polarization transfer measurements and to ep elastic cross section data. Phenomenological two-photon exchange corrections are taken into account. The present fit for protons was performed in the kinematical region Q{sup 2} is an element of (0,6) GeV{sup 2}. For both protons and neutrons we use the latest available data. For all form factors, the uncertainties and correlations of form factor parameters are investigated with the {chi}{sup 2} method.
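The χ² approach to form-factor parameters and their uncertainties can be illustrated with a simple sketch: fitting a dipole ansatz to synthetic cross-section-like data. The dipole form and the data below are standard illustrations, not the paper's actual (richer) parameterization or data set.

```python
import numpy as np
from scipy.optimize import curve_fit

def dipole(Q2, L2):
    # Dipole ansatz G(Q^2) = (1 + Q^2/L2)^-2, the simplest
    # form-factor parameterization, with L2 = Lambda^2 in GeV^2.
    return (1.0 + Q2 / L2) ** -2

rng = np.random.default_rng(7)
Q2 = np.linspace(0.1, 6.0, 40)  # GeV^2, matching the fit region quoted
sigma = 0.01                    # assumed uniform measurement uncertainty
data = dipole(Q2, 0.71) + rng.normal(0.0, sigma, Q2.size)

# chi^2 minimization; the covariance matrix pcov encodes parameter
# uncertainties and correlations, analogous to the paper's analysis.
popt, pcov = curve_fit(dipole, Q2, data, p0=[0.5],
                       sigma=np.full(Q2.size, sigma))
L2_fit, L2_err = popt[0], float(np.sqrt(pcov[0, 0]))
print(f"Lambda^2 = {L2_fit:.3f} +/- {L2_err:.3f} GeV^2")
```

With several fit parameters, the off-diagonal elements of pcov give the correlations between form-factor parameters that the abstract mentions.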
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
J.C. Rowland; D.R. Harp; C.J. Wilson; A.L. Atchley; V.E. Romanovsky; E.T. Coon; S.L. Painter
2016-02-02
This Modeling Archive is in support of an NGEE Arctic publication available at doi:10.5194/tc-10-341-2016. This dataset contains an ensemble of thermal-hydro soil parameters including porosity, thermal conductivity, thermal conductivity shape parameters, and residual saturation of peat and mineral soil. The ensemble was generated using a Null-Space Monte Carlo analysis of parameter uncertainty based on a calibration to soil temperatures collected at the Barrow Environmental Observatory site by the NGEE team. The micro-topography of ice wedge polygons present at the site is included in the analysis using three 1D column models to represent polygon center, rim and trough features. The Arctic Terrestrial Simulator (ATS) was used in the calibration to model multiphase thermal and hydrological processes in the subsurface.
Challenges, uncertainties and issues facing gas production from gas hydrate deposits
Moridis, G.J.; Collett, T.S.; Pooladi-Darvish, M.; Hancock, S.; Santamarina, C.; Boswell, R.; Kneafsey, T.; Rutqvist, J.; Kowalsky, M.; Reagan, M.T.; Sloan, E.D.; Sum, A.K.; Koh, C.
2010-11-01
The current paper complements the Moridis et al. (2009) review of the status of the effort toward commercial gas production from hydrates. We aim to describe the concept of the gas hydrate petroleum system, to discuss advances, requirement and suggested practices in gas hydrate (GH) prospecting and GH deposit characterization, and to review the associated technical, economic and environmental challenges and uncertainties, including: the accurate assessment of producible fractions of the GH resource, the development of methodologies for identifying suitable production targets, the sampling of hydrate-bearing sediments and sample analysis, the analysis and interpretation of geophysical surveys of GH reservoirs, well testing methods and interpretation of the results, geomechanical and reservoir/well stability concerns, well design, operation and installation, field operations and extending production beyond sand-dominated GH reservoirs, monitoring production and geomechanical stability, laboratory investigations, fundamental knowledge of hydrate behavior, the economics of commercial gas production from hydrates, and the associated environmental concerns.
Babendreier, Justin E.; Castleton, Karl J.
2005-08-01
Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a comparative approach using several techniques, coupled with sufficient computational power. The Framework for Risk Analysis in Multimedia Environmental Systems - Multimedia, Multipathway, and Multireceptor Risk Assessment (FRAMES-3MRA) is an important software model being developed by the United States Environmental Protection Agency for use in risk assessment of hazardous waste management facilities. The 3MRA modeling system includes a set of 17 science modules that collectively simulate release, fate and transport, exposure, and risk associated with hazardous contaminants disposed of in land-based waste management units (WMUs).
Solar Irradiances Measured using SPN1 Radiometers: Uncertainties and Clues for Development
Badosa, Jordi; Wood, John; Blanc, Philippe; Long, Charles N.; Vuilleumier, Laurent; Demengel, Dominique; Haeffelin, Martial
2014-12-08
The fast development of solar radiation and energy applications, such as photovoltaic and solar thermodynamic systems, has increased the need for solar radiation measurement and monitoring, not only for the global component but also the diffuse and direct. End users look for the best compromise between getting close to state-of-the-art measurements and keeping capital, maintenance and operating costs to a minimum. Among the existing commercial options, SPN1 is a relatively low cost solar radiometer that estimates global and diffuse solar irradiances from seven thermopile sensors under a shading mask and without moving parts. This work presents a comprehensive study of SPN1 accuracy and sources of uncertainty, which results from laboratory experiments, numerical modeling and comparison studies between measurements from this sensor and state-of-the art instruments for six diverse sites. Several clues are provided for improving the SPN1 accuracy and agreement with state-of-the-art measurements.
Sensitivity of CO2 migration estimation on reservoir temperature and pressure uncertainty
Jordan, Preston; Doughty, Christine
2008-11-01
The density and viscosity of supercritical CO{sub 2} are sensitive to pressure and temperature (PT) while the viscosity of brine is sensitive primarily to temperature. Oil field PT data in the vicinity of WESTCARB's Phase III injection pilot test site in the southern San Joaquin Valley, California, show a range of PT values, indicating either PT uncertainty or variability. Numerical simulation results across the range of likely PT indicate brine viscosity variation causes virtually no difference in plume evolution and final size, but CO{sub 2} density variation causes a large difference. Relative ultimate plume size is almost directly proportional to the relative difference in brine and CO{sub 2} density (buoyancy flow). The majority of the difference in plume size occurs during and shortly after the cessation of injection.
Entropic uncertainty relations and locking: Tight bounds for mutually unbiased bases
Ballester, Manuel A.; Wehner, Stephanie
2007-02-15
We prove tight entropic uncertainty relations for a large number of mutually unbiased measurements. In particular, we show that a bound derived from the result by Maassen and Uffink [Phys. Rev. Lett. 60, 1103 (1988)] for two such measurements can in fact be tight for up to √d measurements in mutually unbiased bases. We then show that using more mutually unbiased bases does not always lead to a better locking effect. We prove that the optimal bound for the accessible information using up to √d specific mutually unbiased bases is log d/2, which is the same as can be achieved by using only two bases. Our result indicates that merely using mutually unbiased bases is not sufficient to achieve a strong locking effect, and we need to look for additional properties.
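The two-measurement case underlying this result can be checked numerically. The sketch below is illustrative only (not from the paper): it samples random pure states in dimension d and verifies the Maassen-Uffink relation H(A) + H(B) ≥ log₂ d for the computational and discrete-Fourier bases, which form a pair of mutually unbiased bases; the dimension and trial count are arbitrary choices.

```python
import cmath
import math
import random

def mu_bound_check(d=4, trials=200, seed=1):
    """Return the smallest H(A)+H(B) (base-2 entropies) found over random
    pure states, measured in the computational and Fourier bases.
    Maassen-Uffink guarantees this is >= log2(d) for mutually unbiased bases."""
    # Discrete Fourier basis: |f_j> = (1/sqrt(d)) sum_k omega^{jk} |k>
    omega = cmath.exp(2j * cmath.pi / d)
    fourier = [[omega ** (j * k) / math.sqrt(d) for k in range(d)] for j in range(d)]
    rng = random.Random(seed)

    def shannon(p):
        return -sum(x * math.log2(x) for x in p if x > 1e-15)

    worst = float("inf")
    for _ in range(trials):
        # Random pure state: complex Gaussian amplitudes, normalized
        amps = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(d)]
        norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
        psi = [a / norm for a in amps]
        p_comp = [abs(a) ** 2 for a in psi]
        p_four = [abs(sum(fourier[j][k].conjugate() * psi[k] for k in range(d))) ** 2
                  for j in range(d)]
        worst = min(worst, shannon(p_comp) + shannon(p_four))
    return worst

worst_case = mu_bound_check(d=4)  # theorem: worst_case >= log2(4) = 2
```

For a computational basis state the two entropies are 0 and log₂ d, so the bound is saturated, which is why it is tight for two bases.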
Menéndez, Javier
2013-12-30
We explore the theoretical uncertainties related to the transition operator of neutrinoless double-beta (0νββ) decay. The transition operator used in standard calculations is a product of one-body currents, which can be obtained phenomenologically as in Tomoda [1] or Šimkovic et al. [2]. However, corrections to the operator are hard to obtain in the phenomenological approach. Instead, we calculate the 0νββ decay operator in the framework of chiral effective field theory (EFT), which gives a systematic order-by-order expansion of the transition currents. At leading orders in chiral EFT we reproduce the standard one-body currents of Refs. [1] and [2]. Corrections appear as two-body (2b) currents predicted by chiral EFT. We compute the effects of the leading 2b currents on the nuclear matrix elements of 0νββ decay for several transition candidates. The 2b current contributions are related to the quenching of Gamow-Teller transitions found in nuclear structure calculations.
Langton, C.; Kosson, D.
2009-11-30
Cementitious barriers for nuclear applications are one of the primary controls for preventing or limiting radionuclide release into the environment. At the present time, performance and risk assessments do not fully incorporate the effectiveness of engineered barriers because the processes that influence performance are coupled and complicated. Better understanding the behavior of cementitious barriers is necessary to evaluate and improve the design of materials and structures used for radioactive waste containment, life extension of current nuclear facilities, and design of future nuclear facilities, including those needed for nuclear fuel storage and processing, nuclear power production and waste management. The focus of the Cementitious Barriers Partnership (CBP) literature review is to document the current level of knowledge with respect to: (1) mechanisms and processes that directly influence the performance of cementitious materials; (2) methodologies for modeling the performance of these mechanisms and processes; and (3) approaches to addressing and quantifying uncertainties associated with performance predictions. This will serve as an important reference document for the professional community responsible for the design and performance assessment of cementitious materials in nuclear applications. This review also provides a multi-disciplinary foundation for identification, research, development and demonstration of improvements in conceptual understanding, measurements and performance modeling that would lead to significant reductions in the uncertainties and improved confidence in estimating the long-term performance of cementitious materials in nuclear applications. This report identifies: (1) technology gaps that may be filled by the CBP project and also (2) information and computational methods that are currently being applied in related fields but have not yet been incorporated into performance assessments of cementitious barriers.
Frey, H. Christopher; Rhodes, David S.
1999-04-30
This is Volume 1 of a two-volume set of reports describing work conducted at North Carolina State University sponsored by Grant Number DE-FG05-95ER30250 by the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.
FRAP-T6 uncertainty study of LOCA tests LOFT L2-3 and PBF LLR-3 [PWR]
Chambers, R.; Driskell, W.E.; Resch, S.C.
1983-01-01
This paper presents the accuracy and uncertainty of fuel rod behavior calculations performed by the transient Fuel Rod Analysis Program (FRAP-T6) during large break loss-of-coolant accidents. The accuracy of the code was determined primarily through comparisons of code calculations with cladding surface temperature measurements from two loss-of-coolant experiments (LOCEs). These LOCEs were the L2-3 experiment conducted in the Loss-of-Fluid Test (LOFT) Facility and the LOFT Lead Rod 3 (LLR-3) experiment conducted in the Power Burst Facility (PBF). Uncertainties in code calculations resulting from uncertainties in fuel and cladding design variables, material property and heat transfer correlations, and thermal-hydraulic boundary conditions were analyzed.
Kim, Young-Min; Zhou, Ying; Gao, Yang; Fu, Joshua S.; Johnson, Brent A.; Huang, Cheng; Liu, Yang
2014-11-16
We report that the spatial pattern of the uncertainty in air pollution-related health impacts due to climate change has rarely been studied due to the lack of high-resolution model simulations, especially under the Representative Concentration Pathways (RCPs), the latest greenhouse gas emission pathways. We estimated future tropospheric ozone (O3) and related excess mortality and evaluated the associated uncertainties in the continental United States under RCPs. Based on dynamically downscaled climate model simulations, we calculated changes in O3 level at 12 km resolution between the future (2057 and 2059) and base years (2001–2004) under a low-to-medium emission scenario (RCP4.5) and a fossil fuel intensive emission scenario (RCP8.5). We then estimated the excess mortality attributable to changes in O3. Finally, we analyzed the sensitivity of the excess mortality estimates to the input variables and the uncertainty in the excess mortality estimation using Monte Carlo simulations. O3-related premature deaths in the continental U.S. were estimated to be 1312 deaths/year under RCP8.5 (95% confidence interval (CI): 427 to 2198) and −2118 deaths/year under RCP4.5 (95% CI: −3021 to −1216), when allowing for climate change and emissions reduction. The uncertainty of O3-related excess mortality estimates was mainly caused by RCP emissions pathways. Excess mortality estimates attributable to the combined effect of climate and emission changes on O3, as well as the associated uncertainties, vary substantially in space, and so do the most influential input variables. Spatially resolved data is crucial to develop effective community level mitigation and adaptation policy.
Kim, Young-Min; Zhou, Ying; Gao, Yang; Fu, Joshua S.; Johnson, Brent; Huang, Cheng; Liu, Yang
2015-01-01
BACKGROUND: The spatial pattern of the uncertainty in the health impacts of climate-driven air pollution has rarely been studied due to the lack of high-resolution model simulations, especially under the latest Representative Concentration Pathways (RCPs). OBJECTIVES: We estimated county-level ozone (O3) and PM2.5 related excess mortality (EM) and evaluated the associated uncertainties in the continental United States in the 2050s under RCP4.5 and RCP8.5. METHODS: Using dynamically downscaled climate model simulations, we calculated changes in O3 and PM2.5 levels at 12 km resolution between the future (2057-2059) and present (2001-2004) under two RCP scenarios. Using concentration-response relationships in the literature and projected future populations, we estimated EM attributable to the changes in O3 and PM2.5. We finally analyzed the contribution of input variables to the uncertainty in the county-level EM estimation using Monte Carlo simulation. RESULTS: O3-related premature deaths in the continental U.S. were estimated to be 1,082 deaths/year under RCP8.5 (95% confidence interval (CI): -288 to 2,453), and -5,229 deaths/year under RCP4.5 (-7,212 to -3,246). Simulated PM2.5 changes resulted in a significant decrease in EM under the two RCPs. The uncertainty of O3-related EM estimates was mainly caused by RCP scenarios, whereas that of PM2.5-related EMs was mainly from concentration-response functions. CONCLUSION: EM estimates attributable to climate change-induced air pollution change, as well as the associated uncertainties, vary substantially in space, and so do the most influential input variables. Spatially resolved data is crucial to develop effective mitigation and adaptation policy.
Bean, J.E.; Berglund, J.W.; Davis, F.J.; Economy, K.; Garner, J.W.; Helton, J.C.; Johnson, J.D.; MacKinnon, R.J.; Miller, J.; O'Brien, D.G.; Ramsey, J.L.; Schreiber, J.D.; Shinta, A.; Smith, L.N.; Stockman, C.; Stoelzel, D.M.; Vaughn, P.
1998-09-01
The Waste Isolation Pilot Plant (WIPP) is located in southeastern New Mexico and is being developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. A detailed performance assessment (PA) for the WIPP was carried out in 1996 and supports an application by the DOE to the U.S. Environmental Protection Agency (EPA) for the certification of the WIPP for the disposal of TRU waste. The 1996 WIPP PA uses a computational structure that maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000 yr regulatory period that applies to the WIPP and subjective uncertainty arising from the imprecision with which many of the quantities required in the PA are known. Important parts of this structure are (1) the use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (2) the use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (3) the efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The use of Latin hypercube sampling generates a mapping from imprecisely known analysis inputs to analysis outcomes of interest that provides both a display of the uncertainty in analysis outcomes (i.e., uncertainty analysis) and a basis for investigating the effects of individual inputs on these outcomes (i.e., sensitivity analysis). The sensitivity analysis procedures used in the PA include examination of scatterplots, stepwise regression analysis, and partial correlation analysis. Uncertainty and sensitivity analysis results obtained as part of the 1996 WIPP PA are presented and discussed. Specific topics considered include two phase flow in the vicinity of the repository, radionuclide release from the repository, fluid flow and radionuclide
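The sampling-based uncertainty/sensitivity workflow this record describes can be sketched in miniature: draw a Latin hypercube sample over the uncertain inputs, run the model on each sample, and rank inputs by the strength of their correlation with the output. The toy three-parameter model and sample size below are hypothetical, not taken from the WIPP PA.

```python
import math
import random

def latin_hypercube(n, k, rng):
    """n samples in k dimensions on [0,1): each marginal is stratified so
    exactly one sample falls in each interval [i/n, (i+1)/n)."""
    cols = []
    for _ in range(k):
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)  # break correlation between dimensions
        cols.append(strata)
    return [tuple(col[i] for col in cols) for i in range(n)]

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

rng = random.Random(42)
samples = latin_hypercube(200, 3, rng)
# Toy response: strongly driven by x0, weakly by x1, independent of x2
ys = [10 * x0 + x1 + 0 * x2 for x0, x1, x2 in samples]
# Correlation magnitudes serve as a crude sensitivity ranking
ranks = [abs(pearson([s[i] for s in samples], ys)) for i in range(3)]
```

The PA itself goes further (stepwise regression and partial correlation control for the influence of the other inputs), but the mapping from sampled inputs to ranked sensitivities is the same idea.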
Heath, Garvin; Warner, Ethan; Steinberg, Daniel; Brandt, Adam
2015-08-01
A growing number of studies have raised questions regarding uncertainties in our understanding of methane (CH_{4}) emissions from fugitives and venting along the natural gas (NG) supply chain. In particular, a number of measurement studies have suggested that actual levels of CH_{4} emissions may be higher than estimated by EPA's U.S. GHG Emission Inventory. We reviewed the literature to identify these studies and characterize the uncertainties they raise.
Vierow, Karen; Aldemir, Tunc
2009-09-10
The project entitled, Uncertainty Quantification in the Reliability and Risk Assessment of Generation IV Reactors, was conducted as a DOE NERI project collaboration between Texas A&M University and The Ohio State University between March 2006 and June 2009. The overall goal of the proposed project was to develop practical approaches and tools by which dynamic reliability and risk assessment techniques can be used to augment the uncertainty quantification process in probabilistic risk assessment (PRA) methods and PRA applications for Generation IV reactors. This report is the Final Scientific/Technical Report summarizing the project.
Narayan, Amrendra
2015-05-01
The Q-weak experiment aims to measure the weak charge of the proton with a precision of 4.2%. The proposed precision on the weak charge required a 2.5% measurement of the parity violating asymmetry in elastic electron-proton scattering. Polarimetry was the largest experimental contribution to this uncertainty, and a new Compton polarimeter was installed in Hall C at Jefferson Lab to make the goal achievable. In this polarimeter the electron beam collides with green laser light in a low gain Fabry-Perot Cavity; the scattered electrons are detected in 4 planes of a novel diamond micro-strip detector while the back scattered photons are detected in lead tungstate crystals. This diamond micro-strip detector is the first such device to be used as a tracking detector in a nuclear and particle physics experiment. The diamond detectors are read out using custom built electronic modules that include a preamplifier, a pulse shaping amplifier and a discriminator for each detector micro-strip. We use field programmable gate array based general purpose logic modules for event selection and histogramming. Extensive Monte Carlo simulations and data acquisition simulations were performed to estimate the systematic uncertainties. Additionally, the Møller and Compton polarimeters were cross calibrated at low electron beam currents using a series of interleaved measurements. In this dissertation, we describe all the subsystems of the Compton polarimeter with emphasis on the electron detector. We focus on the FPGA based data acquisition system built by the author and the data analysis methods implemented by the author. The simulations of the data acquisition and the polarimeter that helped rigorously establish the systematic uncertainties of the polarimeter are also elaborated, resulting in the first sub 1% measurement of low energy (~1 GeV) electron beam polarization with a Compton electron detector. We have demonstrated that diamond based micro-strip detectors can be used for tracking in a
Studies of the Impact of Magnetic Field Uncertainties on Physics Parameters of the Mu2e Experiment
Bradascio, Federica
2016-01-01
The Mu2e experiment at Fermilab will search for a signature of charged lepton flavor violation, an effect far too small to be observed within the Standard Model of particle physics. Therefore, its observation would be a signal of new physics. The signature that Mu2e will search for is the ratio of the rate of neutrinoless coherent conversion of muons into electrons in the field of a nucleus, relative to the muon capture rate by the nucleus. The conversion process is an example of charged lepton flavor violation. This experiment aims at a sensitivity four orders of magnitude higher than that of previous related experiments. The desired sensitivity implies highly demanding requirements of accuracy in the design and conduct of the experiment. It is therefore important to investigate the tolerance of the experiment to instrumental uncertainties and provide specifications that the design and construction must meet. This is the core of the work reported in this thesis. The design of the experiment is based on three superconducting solenoid magnets. The most important uncertainties in the magnetic field of the solenoids can arise from misalignments of the Transport Solenoid, which transfers the beam from the muon production area to the detector area and eliminates beam-originating backgrounds. In this thesis, the field uncertainties induced by possible misalignments and their impact on the physics parameters of the experiment are examined. The physics parameters include the muon and pion stopping rates and the scattering of beam electrons off the capture target, which determine the signal, intrinsic background and late-arriving background yields, respectively. Additionally, a possible test of the Transport Solenoid alignment with low momentum electrons is examined, as an alternative option to measure its field with conventional probes, which is technically difficult due to mechanical interference. Misalignments of the Transport Solenoid were simulated using standard
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-01-01
The power system balancing process, which includes the scheduling, real time dispatch (load following) and regulation processes, is traditionally based on deterministic models. Since the conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind and solar power production forecasts to achieve future balance between the conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation), and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the load and wind/solar forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially with the increasing penetration of renewable resources) whether the system would actually be able to meet the conventional generation requirements within the look-ahead horizon, what additional balancing efforts would be needed as we get closer to real time, and what additional costs would be incurred by those needs. To improve the system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following, and, to some extent, the regulation processes. It is also important to address the uncertainty problem comprehensively by taking all sources of uncertainty (load, intermittent generation, generators' forced outages, etc.) into consideration. All aspects of uncertainty such as the imbalance size (which is the same as the capacity needed to mitigate the imbalance) and generation ramping requirement must be taken into account.
Uncertainty in Modeling Dust Mass Balance and Radiative Forcing from Size Parameterization
Zhao, Chun; Chen, Siyu; Leung, Lai-Yung R.; Qian, Yun; Kok, Jasper; Zaveri, Rahul A.; Huang, J.
2013-11-05
This study examines the uncertainties in simulating mass balance and radiative forcing of mineral dust due to biases in the aerosol size parameterization. Simulations are conducted quasi-globally (180°W-180°E and 60°S-70°N) using the WRF-Chem model with three different approaches to represent aerosol size distribution (8-bin, 4-bin, and 3-mode). The biases in the 3-mode or 4-bin approaches against a relatively more accurate 8-bin approach in simulating dust mass balance and radiative forcing are identified. Compared to the 8-bin approach, the 4-bin approach simulates similar but coarser size distributions of dust particles in the atmosphere, while the 3-mode approach retains more fine dust particles but fewer coarse dust particles due to the prescribed σg of each mode. Although the 3-mode approach yields up to 10 days longer dust mass lifetime over the remote oceanic regions than the 8-bin approach, the three size approaches produce similar dust mass lifetime (3.2 days to 3.5 days) on quasi-global average, reflecting that the global dust mass lifetime is mainly determined by the dust mass lifetime near the dust source regions. With the same global dust emission (~6000 Tg yr-1), the 8-bin approach produces a dust mass loading of 39 Tg, while the 4-bin and 3-mode approaches produce 3% (40.2 Tg) and 25% (49.1 Tg) higher dust mass loading, respectively. The difference in dust mass loading between the 8-bin approach and the 4-bin or 3-mode approaches has large spatial variations, with generally smaller relative difference (<10%) near the surface over the dust source regions. The three size approaches also result in significantly different dry and wet deposition fluxes and number concentrations of dust. The difference in dust aerosol optical depth (AOD) (a factor of 3) among the three size approaches is much larger than their difference (25%) in dust mass loading. Compared to the 8-bin approach, the 4-bin approach yields stronger dust absorptivity, while the 3-mode
Zhu, Zhixi; Bai, Hongtao; Xu, He; Zhu, Tan
2011-11-15
Strategic environmental assessment (SEA) inherently needs to address greater levels of uncertainty in the formulation and implementation processes of strategic decisions, compared with project environmental impact assessment. The range of uncertainties includes internal and external factors of the complex system that is concerned in the strategy. Scenario analysis is increasingly being used to cope with uncertainty in SEA. Following a brief introduction of scenarios and scenario analysis, this paper examines the rationale for scenario analysis in SEA in the context of China. The state of the art associated with scenario analysis applied to SEA in China was reviewed through four SEA case analyses. Lessons learned from these cases indicated the word 'scenario' appears to be abused and the scenario-based methods appear to be misused due to the lack of understanding of an uncertain future and scenario analysis. However, good experiences were also drawn on, regarding how to integrate scenario analysis into the SEA process in China, how to cope with driving forces including uncertainties, how to combine qualitative scenario storylines with quantitative impact predictions, and how to conduct assessments and propose recommendations based on scenarios. Additionally, the ways to improve the application of this tool in SEA were suggested. We concluded by calling for further methodological research on this issue and more practices.
Webster, Mort David
2015-03-10
This report presents the final outcomes and products of the project as performed at the Massachusetts Institute of Technology. The research project consists of three main components: methodology development for decision-making under uncertainty, improving the resolution of the electricity sector to improve integrated assessment, and application of these methods to integrated assessment. Results in each area are described in the report.
Meyer, Philip D.; Gee, Glendon W.; Nicholson, Thomas J.
2000-02-28
This report addresses issues related to the analysis of uncertainty in dose assessments conducted as part of decommissioning analyses. The analysis is limited to the hydrologic aspects of the exposure pathway involving infiltration of water at the ground surface, leaching of contaminants, and transport of contaminants through the groundwater to a point of exposure. The basic conceptual models and mathematical implementations of three dose assessment codes are outlined along with the site-specific conditions under which the codes may provide inaccurate, potentially nonconservative results. In addition, the hydrologic parameters of the codes are identified and compared. A methodology for parameter uncertainty assessment is outlined that considers the potential data limitations and modeling needs of decommissioning analyses. This methodology uses generic parameter distributions based on national or regional databases, sensitivity analysis, probabilistic modeling, and Bayesian updating to incorporate site-specific information. Data sources for best-estimate parameter values and parameter uncertainty information are also reviewed. A follow-on report will illustrate the uncertainty assessment methodology using decommissioning test cases.
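The Bayesian updating step described in this methodology, starting from a generic parameter distribution and incorporating site-specific measurements, can be illustrated with the simplest conjugate case. The sketch below is a hypothetical example (the prior, observations, and noise variance are invented, not from the report): a normal prior on a log-transformed hydrologic parameter is updated with a few site observations of known noise variance.

```python
def normal_update(mu0, var0, data, noise_var):
    """Conjugate Bayesian update of a normal prior N(mu0, var0) with
    independent normal observations of known noise variance.
    Returns the posterior mean and variance."""
    n = len(data)
    post_precision = 1 / var0 + n / noise_var
    post_mean = (mu0 / var0 + sum(data) / noise_var) / post_precision
    return post_mean, 1 / post_precision

# Generic (e.g., national-database) prior on a log10 hydraulic parameter
mu0, var0 = -5.0, 1.0
# Hypothetical site-specific measurements
site_obs = [-4.2, -4.5, -4.1]
mu1, var1 = normal_update(mu0, var0, site_obs, noise_var=0.25)
```

The posterior mean falls between the generic prior mean and the site data mean, and the posterior variance shrinks as site data accumulate, which is exactly the behavior the report's methodology relies on to sharpen generic distributions with site information.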
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-09-01
The power system balancing process, which includes the scheduling, real time dispatch (load following) and regulation processes, is traditionally based on deterministic models. Since the conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind power production forecasts to achieve future balance between the conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation) and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the load and wind forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially with the increasing penetration of renewable resources) whether the system would actually be able to meet the conventional generation requirements within the look-ahead horizon, what additional balancing efforts would be needed as we get closer to real time, and what additional costs would be incurred by those needs. In order to improve the system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following, and, to some extent, the regulation processes. It is also important to address the uncertainty problem comprehensively, by taking all sources of uncertainty (load, intermittent generation, generators' forced outages, etc.) into consideration. All aspects of uncertainty such as the imbalance size (which is the same as the capacity needed to mitigate the imbalance) and generation ramping requirement must be taken into account.
Hou, Zhangshuan; Huang, Maoyi; Leung, Lai-Yung R.; Lin, Guang; Ricciuto, Daniel M.
2012-08-10
Uncertainties in hydrologic parameters could have significant impacts on the simulated water and energy fluxes and land surface states, which will in turn affect atmospheric processes and the carbon cycle. Quantifying such uncertainties is an important step toward better understanding and quantification of uncertainty of integrated earth system models. In this paper, we introduce an uncertainty quantification (UQ) framework to analyze sensitivity of simulated surface fluxes to selected hydrologic parameters in the Community Land Model (CLM4) through forward modeling. Thirteen flux tower footprints spanning a wide range of climate and site conditions were selected to perform sensitivity analyses by perturbing the parameters identified. In the UQ framework, prior information about the parameters was used to quantify the input uncertainty using the Minimum-Relative-Entropy approach. The quasi-Monte Carlo approach was applied to generate samples of parameters on the basis of the prior pdfs. Simulations corresponding to sampled parameter sets were used to generate response curves and response surfaces, and statistical tests were used to rank the significance of the parameters for output responses including latent (LH) and sensible heat (SH) fluxes. Overall, the CLM4 simulated LH and SH show the largest sensitivity to subsurface runoff generation parameters. However, study sites with deep root vegetation are also affected by surface runoff parameters, while sites with shallow root zones are also sensitive to the vadose zone soil water parameters. Generally, sites with finer soil texture and shallower rooting systems tend to have larger sensitivity of outputs to the parameters. Our results suggest the necessity of and possible ways for parameter inversion/calibration using available measurements of latent/sensible heat fluxes to obtain the optimal parameter set for CLM4. This study also provided guidance on reduction of parameter set dimensionality and parameter
Modeling of Uncertainties in Major Drivers in U.S. Electricity Markets: Preprint
Short, W.; Ferguson, T.; Leifman, M.
2006-09-01
This paper presents information on the Stochastic Energy Deployment System (SEDS) model. DOE and NREL are developing this new model, intended to address many of the shortcomings of the current suite of energy models. Once fully built, the salient qualities of SEDS will include full probabilistic treatment of the major uncertainties in national energy forecasts; code compactness for desktop application; user-friendly interface for a reasonably trained analyst; run-time within limits acceptable for quick-response analysis; choice of detailed or aggregate representations; and transparency of design, code, and assumptions. Moreover, SEDS development will be increasingly collaborative, as DOE and NREL will be coordinating with multiple national laboratories and other institutions, making SEDS nearly an 'open source' project. The collaboration will utilize the best expertise on specific sectors and problems, and also allow constant examination and review of the model. This paper outlines the rationale for this project and a description of its alpha version, as well as some example results. It also describes some of the expected development efforts in SEDS.
Davis, Jonathan H.
2015-03-09
Future multi-tonne Direct Detection experiments will be sensitive to solar neutrino induced nuclear recoils which form an irreducible background to light Dark Matter searches. Indeed for masses around 6 GeV the spectra of neutrinos and Dark Matter are so similar that experiments are said to run into a neutrino floor, for which sensitivity increases only marginally with exposure past a certain cross section. In this work we show that this floor can be overcome using the different annual modulation expected from solar neutrinos and Dark Matter. Specifically for cross sections below the neutrino floor the DM signal is observable through a phase shift and a smaller amplitude for the time-dependent event rate. This allows the exclusion power to be improved by up to an order of magnitude for large exposures. In addition we demonstrate that, using only spectral information, the neutrino floor exists over a wider mass range than has been previously shown, since the large uncertainties in the Dark Matter velocity distribution make the signal spectrum harder to distinguish from the neutrino background. However for most velocity distributions it can still be surpassed using timing information, and so the neutrino floor is not an absolute limit on the sensitivity of Direct Detection experiments.
Zhang, Yan; Sahinidis, Nikolaos V.
2013-04-06
In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO{sub 2} underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO{sub 2} injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
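The core PCE idea, approximating a simulator output as a polynomial function of its uncertain inputs and then evaluating the cheap polynomial instead of the simulator, can be sketched in one dimension. The example below is illustrative only: it uses a toy quadratic "simulator" of one standard-normal input and estimates the coefficients by Monte Carlo projection onto probabilists' Hermite polynomials; the paper's mixed-integer subset selection for sparse expansions is not reproduced here.

```python
import math
import random

def hermite_He(k, x):
    """Probabilists' Hermite polynomials via He_{n+1} = x*He_n - n*He_{n-1}."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

def pce_coeffs(f, degree, n_samples=200_000, seed=7):
    """Project f(X), X ~ N(0,1), onto He_0..He_degree by Monte Carlo:
    c_k = E[f(X) He_k(X)] / k!  (orthogonality: E[He_j He_k] = k! delta_jk)."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    return [sum(f(x) * hermite_He(k, x) for x in xs) / n_samples / math.factorial(k)
            for k in range(degree + 1)]

def pce_eval(coeffs, x):
    """Evaluate the surrogate in place of the expensive simulator."""
    return sum(c * hermite_He(k, x) for k, c in enumerate(coeffs))

# Toy "simulator" output as a function of one uncertain standard-normal input;
# its exact expansion is 2*He_0 + 2*He_1 + 1*He_2
f = lambda x: x * x + 2 * x + 1
coeffs = pce_coeffs(f, degree=2)  # approximately [2, 2, 1]
```

Once the coefficients are in hand, Monte Carlo over the surrogate costs only polynomial evaluations, which is the orders-of-magnitude speedup the abstract describes.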
Mathieu, Johanna L.; Callaway, Duncan S.; Kiliccote, Sila
2011-08-15
Controlling electric loads to deliver power system services presents a number of interesting challenges. For example, changes in electricity consumption of Commercial and Industrial (C&I) facilities are usually estimated using counterfactual baseline models, and model uncertainty makes it difficult to precisely quantify control responsiveness. Moreover, C&I facilities exhibit variability in their response. This paper seeks to understand baseline model error and demand-side variability in responses to open-loop control signals (i.e. dynamic prices). Using a regression-based baseline model, we define several Demand Response (DR) parameters, which characterize changes in electricity use on DR days, and then present a method for computing the error associated with DR parameter estimates. In addition to analyzing the magnitude of DR parameter error, we develop a metric to determine how much observed DR parameter variability is attributable to real event-to-event variability versus simply baseline model error. Using data from 38 C&I facilities that participated in an automated DR program in California, we find that DR parameter errors are large. For most facilities, observed DR parameter variability is likely explained by baseline model error, not real DR parameter variability; however, a number of facilities exhibit real DR parameter variability. In some cases, the aggregate population of C&I facilities exhibits real DR parameter variability, resulting in implications for the system operator with respect to both resource planning and system stability.
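The variability metric described above can be illustrated with a toy calculation (all names and numbers invented, not the authors' method): compare the observed variance of an estimated demand-response parameter across events against the variance implied by baseline-model error alone.

```python
import numpy as np

rng = np.random.default_rng(1)

baseline_error_var = 4.0   # kW^2, assumed known from baseline regression residuals
true_dr = 12.0             # kW, an assumed constant real response

# Estimated DR on 20 event days = constant true response + baseline noise.
estimates = true_dr + rng.normal(0.0, np.sqrt(baseline_error_var), size=20)

observed_var = estimates.var(ddof=1)
# Excess variance not explained by baseline error; values near zero
# suggest the facility's response is actually stable event to event.
real_variability = max(0.0, observed_var - baseline_error_var)
print(f"observed={observed_var:.2f}  model error={baseline_error_var:.2f}  "
      f"excess={real_variability:.2f}")
```

With a truly constant response, the excess term stays near zero, which is the paper's signature of variability attributable to baseline error rather than real behavior.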
Forecasting the market for SO{sub 2} emission allowances under uncertainty
Hanson, D.; Molburg, J.; Fisher, R.; Boyd, G.; Pandola, G.; Lurie, G.; Taxon, T.
1991-01-01
This paper deals with the effects of uncertainty and risk aversion on market outcomes for SO{sub 2} emission allowance prices and on electric utility compliance choices. The 1990 Clean Air Act Amendments (CAAA), which are briefly reviewed here, provide for about twice as many SO{sub 2} allowances to be issued per year in Phase 1 (1995--1999) as in Phase 2. Considering the scrubber incentives in Phase 1, there is likely to be substantial emission banking for use in Phase 2. Allowance prices are expected to increase over time at a rate less than the return on alternative investments, so utilities that are risk neutral, and potential speculators in the allowance market, are not expected to bank allowances; the allowances will instead be banked by utilities that are risk averse. The Argonne Utility Simulation Model (ARGUS2) is being revised to incorporate the provisions of the CAAA acid rain title and to simulate SO{sub 2} allowance prices, compliance choices, capacity expansion, system dispatch, fuel use, and emissions using a unit-level data base and alternative scenario assumptions.
Optimal Control of Distributed Energy Resources and Demand Response under Uncertainty
Siddiqui, Afzal; Stadler, Michael; Marnay, Chris; Lai, Judy
2010-06-01
We take the perspective of a microgrid that has installed distributed energy resources (DER) in the form of distributed generation with combined heat and power applications. Given uncertain electricity and fuel prices, the microgrid minimizes its expected annual energy bill for various capacity sizes. In almost all cases, there is an economic and environmental advantage to using DER in conjunction with demand response (DR): the expected annualized energy bill is reduced by 9% while CO2 emissions decline by 25%. Furthermore, the microgrid's risk is diminished as DER may be deployed depending on prevailing market conditions and local demand. In order to test a policy measure that would place a weight on CO2 emissions, we use a multi-criteria objective function that minimizes a weighted average of expected costs and emissions. We find that greater emphasis on CO2 emissions has a beneficial environmental impact only if DR is available and enough reserve generation capacity exists. Finally, greater uncertainty results in higher expected costs and risk exposure, the effects of which may be mitigated by selecting a larger capacity.
Atmospheric Carbon Dioxide and the Global Carbon Cycle: The Key Uncertainties
DOE R&D Accomplishments [OSTI]
Peng, T. H.; Post, W. M.; DeAngelis, D. L.; Dale, V. H.; Farrell, M. P.
1987-12-01
The biogeochemical cycling of carbon between its sources and sinks determines the rate of increase in atmospheric CO{sub 2} concentrations. The observed increase in atmospheric CO{sub 2} content is less than the estimated release from fossil fuel consumption and deforestation. This discrepancy can be explained by interactions between the atmosphere and other global carbon reservoirs such as the oceans and the terrestrial biosphere, including soils. Undoubtedly, the oceans have been the most important sinks for CO{sub 2} produced by man, but the physical, chemical, and biological processes of oceans are complex; therefore, credible estimates of CO{sub 2} uptake can probably only come from mathematical models. Unfortunately, one- and two-dimensional ocean models do not allow for enough CO{sub 2} uptake to accurately account for known releases. Thus, they produce higher concentrations of atmospheric CO{sub 2} than was historically the case. More complex three-dimensional models, while currently being developed, may make better use of existing tracer data than do one- and two-dimensional models and will also incorporate climate feedback effects to provide a more realistic view of ocean dynamics and CO{sub 2} fluxes. The inability of current models to accurately estimate oceanic uptake of CO{sub 2} creates one of the key uncertainties in predictions of atmospheric CO{sub 2} increases and climate responses over the next 100 to 200 years.
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Guinta, Anthony A.; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
McVeigh, J.; Lausten, M.; Eugeni, E.; Soni, A.
2010-11-01
The U.S. Department of Energy (DOE) Solar Energy Technologies Program (SETP) conducted a 2009 Technical Risk and Uncertainty Analysis to better assess its cost goals for concentrating solar power (CSP) and photovoltaic (PV) systems, and to potentially rebalance its R&D portfolio. This report details the methodology, schedule, and results of this technical risk and uncertainty analysis.
Reducing uncertainty in high-resolution sea ice models.
Peterson, Kara J.; Bochev, Pavel Blagoveston
2013-07-01
Arctic sea ice is an important component of the global climate system, reflecting a significant amount of solar radiation, insulating the ocean from the atmosphere and influencing ocean circulation by modifying the salinity of the upper ocean. The thickness and extent of Arctic sea ice have shown a significant decline in recent decades with implications for global climate as well as regional geopolitics. Increasing interest in exploration as well as climate feedback effects make predictive mathematical modeling of sea ice a task of tremendous practical import. Satellite data obtained over the last few decades have provided a wealth of information on sea ice motion and deformation. The data clearly show that ice deformation is focused along narrow linear features and this type of deformation is not well-represented in existing models. To improve sea ice dynamics we have incorporated an anisotropic rheology into the Los Alamos National Laboratory global sea ice model, CICE. Sensitivity analyses were performed using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) to determine the impact of material parameters on sea ice response functions. Two material strength parameters that exhibited the most significant impact on responses were further analyzed to evaluate their influence on quantitative comparisons between model output and data. The sensitivity analysis along with ten year model runs indicate that while the anisotropic rheology provides some benefit in velocity predictions, additional improvements are required to make this material model a viable alternative for global sea ice simulations.
Not Available
1993-08-01
Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.
Shaffer, C.J. [Science and Engineering Associates, Albuquerque, NM (United States); Miller, L.A.; Payne, A.C. Jr.
1992-10-01
A Level III Probabilistic Risk Assessment (PRA) has been performed for LaSalle Unit 2 under the Risk Methods Integration and Evaluation Program (RMIEP) and the Phenomenology and Risk Uncertainty Evaluation Program (PRUEP). This report documents the phenomenological calculations and sources of uncertainty in the calculations performed with MELCOR in support of the Level II portion of the PRA. These calculations are an integral part of the Level II analysis since they provide quantitative input to the Accident Progression Event Tree (APET) and Source Term Model (LASSOR). However, the uncertainty associated with the code results must be considered in the use of the results. The MELCOR calculations performed include four integrated calculations: (1) a high-pressure short-term station blackout, (2) a low-pressure short-term station blackout, (3) an intermediate-term station blackout, and (4) a long-term station blackout. Several sensitivity studies investigating the effect of variations in containment failure size and location, as well as hydrogen ignition concentration, are also documented.
PUFF-III: A Code for Processing ENDF Uncertainty Data Into Multigroup Covariance Matrices
Dunn, M.E.
2000-06-01
PUFF-III is an extension of the previous PUFF-II code that was developed in the 1970s and early 1980s. The PUFF codes process the Evaluated Nuclear Data File (ENDF) covariance data and generate multigroup covariance matrices on a user-specified energy grid structure. Unlike its predecessor, PUFF-III can process the new ENDF/B-VI data formats. In particular, PUFF-III has the capability to process the spontaneous fission covariances for fission neutron multiplicity. With regard to the covariance data in File 33 of the ENDF system, PUFF-III has the capability to process short-range variance formats, as well as the lumped reaction covariance data formats that were introduced in ENDF/B-V. In addition to the new ENDF formats, a new directory feature is now available that allows the user to obtain a detailed directory of the uncertainty information in the data files without visually inspecting the ENDF data. Following the correlation matrix calculation, PUFF-III also evaluates the eigenvalues of each correlation matrix and tests each matrix for positive definiteness. Additional new features are discussed in the manual. PUFF-III has been developed for implementation in the AMPX code system, and several modifications were incorporated to improve memory allocation tasks and input/output operations. Consequently, the resulting code has a structure that is similar to other modules in the AMPX code system. With the release of PUFF-III, a new and improved covariance processing code is available to process ENDF covariance formats through Version VI.
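The eigenvalue test described above is straightforward to sketch (this is an assumed illustration, not PUFF-III's actual routine): reduce a covariance matrix to a correlation matrix, then check that all eigenvalues are positive.

```python
import numpy as np

def correlation_from_covariance(cov):
    # Divide each element by the product of the row/column standard deviations.
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

def is_positive_definite(corr, tol=1e-12):
    # eigvalsh is appropriate for symmetric matrices and returns real eigenvalues.
    return bool(np.linalg.eigvalsh(corr).min() > tol)

# A small, valid (positive definite) covariance matrix as a check case.
cov = np.array([[4.0, 1.2, 0.4],
                [1.2, 1.0, 0.3],
                [0.4, 0.3, 0.25]])
corr = correlation_from_covariance(cov)
print(is_positive_definite(corr))  # prints True for this valid covariance
```

A matrix failing this test would indicate inconsistent evaluated covariance data, which is what the code's diagnostic is meant to surface.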
Ensslin, Torsten A.; Frommert, Mona [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power-spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loeve and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
Budnitz, R.J.; Apostolakis, G.; Boore, D.M.
1997-04-01
Probabilistic Seismic Hazard Analysis (PSHA) is a methodology that estimates the likelihood that various levels of earthquake-caused ground motion will be exceeded at a given location in a given future time period. Due to large uncertainties in all the geosciences data and in their modeling, multiple model interpretations are often possible. This leads to disagreement among experts, which in the past has led to disagreement on the selection of ground motion for design at a given site. In order to review the present state-of-the-art and improve on the overall stability of the PSHA process, the U.S. Nuclear Regulatory Commission (NRC), the U.S. Department of Energy (DOE), and the Electric Power Research Institute (EPRI) co-sponsored a project to provide methodological guidance on how to perform a PSHA. The project has been carried out by a seven-member Senior Seismic Hazard Analysis Committee (SSHAC) supported by a large number of other experts. The SSHAC reviewed past studies, including the Lawrence Livermore National Laboratory and the EPRI landmark PSHA studies of the 1980s, and examined ways to improve on the present state-of-the-art. The Committee's most important conclusion is that differences in PSHA results are due to procedural rather than technical differences. Thus, in addition to providing a detailed documentation on state-of-the-art elements of a PSHA, this report provides a series of procedural recommendations. The role of experts is analyzed in detail. Two entities are formally defined, the Technical Integrator (TI) and the Technical Facilitator Integrator (TFI), to account for the various levels of complexity in the technical issues and different levels of efforts needed in a given study.
M dwarf metallicities and giant planet occurrence: Ironing out uncertainties and systematics
Gaidos, Eric; Mann, Andrew W.
2014-08-10
Comparisons between the planet populations around solar-type stars and those orbiting M dwarfs shed light on the possible dependence of planet formation and evolution on stellar mass. However, such analyses must control for other factors, i.e., metallicity, a stellar parameter that strongly influences the occurrence of gas giant planets. We obtained infrared spectra of 121 M dwarf stars monitored by the California Planet Search and determined metallicities with an accuracy of 0.08 dex. The mean and standard deviation of the sample are 0.05 and 0.20 dex, respectively. We parameterized the metallicity dependence of the occurrence of giant planets on orbits with a period less than two years around solar-type stars and applied this to our M dwarf sample to estimate the expected number of giant planets. The number of detected planets (3) is lower than the predicted number (6.4), but the difference is not very significant (12% probability of finding as many or fewer planets). The three M dwarf planet hosts are not especially metal rich and the most likely value of the power-law index relating planet occurrence to metallicity is 1.06 dex per dex for M dwarfs compared to 1.80 for solar-type stars; this difference, however, is comparable to uncertainties. Giant planet occurrence around both types of stars allows, but does not necessarily require, a mass dependence of ~1 dex per dex. The actual planet-mass-metallicity relation may be complex, and elucidating it will require larger surveys like those to be conducted by ground-based infrared spectrographs and the Gaia space astrometry mission.
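The expected-planet-count calculation described above can be sketched as follows. Only the two power-law indices (1.06 and 1.80 dex per dex) and the sample metallicity statistics come from the abstract; the normalization `f0` and the sampled metallicities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample metallicities matching the abstract's mean 0.05 dex, sigma 0.20 dex.
feh = rng.normal(0.05, 0.20, size=121)

def expected_planets(feh, f0, beta):
    # Occurrence rate per star scales as f0 * 10**(beta * [Fe/H]);
    # summing over stars gives the expected number of detections.
    return float(np.sum(f0 * 10.0 ** (beta * feh)))

n_solar_law = expected_planets(feh, f0=0.04, beta=1.80)   # solar-type index
n_mdwarf_law = expected_planets(feh, f0=0.04, beta=1.06)  # M dwarf best fit
print(f"expected detections: {n_solar_law:.1f} (beta=1.80) vs {n_mdwarf_law:.1f} (beta=1.06)")
```

Because the scaling is exponential in [Fe/H], the steeper solar-type index predicts more planets for the same metal-rich tail of the sample.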
Denman, Matthew R.; Brooks, Dusty Marie
2015-08-01
Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi unit 1 (1F1) accident progression with the MELCOR code. Volume I of the 1F1 UA discusses the physical modeling details and time history results of the UA. Volume II of the 1F1 UA discusses the statistical viewpoint. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). The goal of this work was to perform a focused evaluation of uncertainty in core damage progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, fraction of intact fuel, vessel lower head failure) and in doing so assess the applicability of traditional sensitivity analysis techniques.
Scott, Michael J.; Daly, Don S.; Hathaway, John E.; Lansing, Carina S.; Liu, Ying; McJeon, Haewon C.; Moss, Richard H.; Patel, Pralit L.; Peterson, Marty J.; Rice, Jennie S.; Zhou, Yuyu
2015-10-01
In this paper, an integrated assessment model (IAM) uses a newly-developed Monte Carlo analysis capability to analyze the impacts of more aggressive U.S. residential and commercial building-energy codes and equipment standards on energy consumption and energy service costs at the state level, explicitly recognizing uncertainty in technology effectiveness and cost, socioeconomics, presence or absence of carbon prices, and climate impacts on energy demand. The paper finds that aggressive building-energy codes and equipment standards are an effective, cost-saving way to reduce energy consumption in buildings and greenhouse gas emissions in U.S. states. This conclusion is robust to significant uncertainties in population, economic activity, climate, carbon prices, and technology performance and costs.
Pace, D. C. Fisher, R. K.; Van Zeeland, M. A.; Pipes, R.
2014-11-15
New phase space mapping and uncertainty analysis of energetic ion loss data in the DIII-D tokamak provides experimental results that serve as valuable constraints in first-principles simulations of energetic ion transport. Beam ion losses are measured by the fast ion loss detector (FILD) diagnostic system consisting of two magnetic spectrometers placed independently along the outer wall. Monte Carlo simulations of mono-energetic and single-pitch ions reaching the FILDs are used to determine the expected uncertainty in the measurements. Modeling shows that the variation in gyrophase of 80 keV beam ions at the FILD aperture can produce an apparent measured energy signature spanning across 50-140 keV. These calculations compare favorably with experiments in which neutral beam prompt loss provides a well known energy and pitch distribution.
Domino, Stefan Paul; Figueroa, Victor G.; Romero, Vicente Jose; Glaze, David Jason; Sherman, Martin P.; Luketa-Hanlin, Anay Josephine
2009-12-01
The objective of this work is to perform an uncertainty quantification (UQ) and model validation analysis of simulations of tests in the cross-wind test facility (XTF) at Sandia National Laboratories. In these tests, a calorimeter was subjected to a fire and the thermal response was measured via thermocouples. The UQ and validation analysis pertains to the experimental and predicted thermal response of the calorimeter. The calculations were performed using Sierra/Fuego/Syrinx/Calore, an Advanced Simulation and Computing (ASC) code capable of predicting object thermal response to a fire environment. Based on the validation results at eight diversely representative thermocouple locations on the calorimeter, the predicted calorimeter temperatures effectively bound the experimental temperatures. This post-validates Sandia's first integrated use of fire modeling with thermal response modeling and associated uncertainty estimates in an abnormal-thermal QMU analysis.
Heath, Garvin; Warner, Ethan; Steinberg, Daniel; Brandt, Adam
2015-11-19
Presentation summarizing key findings of a Joint Institute for Strategic Energy Analysis Report at an Environmental Protection Agency workshop: 'Stakeholder Workshop on EPA GHG Data on Petroleum and Natural Gas Systems' on November 19, 2015. For additional information see the JISEA report, 'Estimating U.S. Methane Emissions from the Natural Gas Supply Chain: Approaches, Uncertainties, Current Estimates, and Future Studies' NREL/TP-6A50-62820.
Slayzak, S.J.; Ryan, J.P.
1998-04-01
As part of the US Department of Energy's Advanced Desiccant Technology Program, the National Renewable Energy Laboratory (NREL) is characterizing the state-of-the-art in desiccant dehumidifiers, the key component of desiccant cooling systems. The experimental data will provide industry and end users with independent performance evaluation and help researchers assess the energy savings potential of the technology. Accurate determination of humidity ratio is critical to this work and an understanding of the capabilities of the available instrumentation is central to its proper application. This paper compares the minimum theoretical random error in humidity ratio calculation for three common measurement methods to give a sense of the relative maximum accuracy possible for each method assuming systematic errors can be made negligible. A series of experiments conducted also illustrate the capabilities of relative humidity sensors as compared to dewpoint sensors in measuring the grain depression of desiccant dehumidifiers. These tests support the results of the uncertainty analysis. At generally available instrument accuracies, uncertainty in calculated humidity ratio for dewpoint sensors is determined to be constant at approximately 2%. Wet-bulb sensors range between 2% and 6% above 10 g/kg (4%--15% below), and relative humidity sensors vary between 4% above 90% rh and 15% at 20% rh. Below 20% rh, uncertainty for rh sensors increases dramatically. Highest currently attainable accuracies bring dewpoint instruments down to 1% uncertainty, wet bulb to a range of 1%--3% above 10 g/kg (1.5%--8% below), and rh sensors between 1% and 5%.
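The near-constant relative uncertainty for dewpoint sensors can be illustrated with a small propagation exercise (a hedged sketch, not NREL's analysis): compute humidity ratio from dewpoint via the Magnus saturation-pressure approximation and propagate an assumed sensor accuracy by finite differences.

```python
import math

def sat_vapor_pressure_hpa(t_c):
    # Magnus approximation over water (an assumed correlation for this sketch).
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def humidity_ratio(t_dew_c, p_hpa=1013.25):
    # Humidity ratio in kg water vapor per kg dry air.
    e = sat_vapor_pressure_hpa(t_dew_c)
    return 0.622 * e / (p_hpa - e)

def relative_uncertainty(t_dew_c, sigma_t=0.2):
    # Central-difference sensitivity dw/dT times an assumed 0.2 C accuracy.
    dw = (humidity_ratio(t_dew_c + 1e-3) - humidity_ratio(t_dew_c - 1e-3)) / 2e-3
    return abs(dw) * sigma_t / humidity_ratio(t_dew_c)

# Roughly 1-1.5% across typical dewpoints, i.e. nearly flat, which is
# consistent in character with the abstract's ~2% constant figure.
for td in (5.0, 15.0, 25.0):
    print(f"{td:>5.1f} C: {100 * relative_uncertainty(td):.2f}%")
```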
Chen, J.C.; Chun, R.C.; Goudreau, G.L.; Maslenikov, O.R.; Johnson, J.J.
1984-01-01
This paper summarizes the results of the dynamic response analysis of the Zion reactor containment building using three different soil-structure interaction (SSI) analytical procedures which are: the substructure method, CLASSI; the equivalent linear finite element approach, ALUSH; and the nonlinear finite element procedure, DYNA3D. Uncertainties in analyzing a soil-structure system due to SSI analysis procedures were investigated. Responses at selected locations in the structure were compared through peak accelerations and response spectra.
Office of Environmental Management (EM)
15 November 2007 Generic Technical Issue Discussion on Sensitivity and Uncertainty Analysis and Model Support Attendees: Representatives from Department of Energy-Headquarters (DOE-HQ) and the U.S. Nuclear Regulatory Commission (NRC) met at the DOE offices in Germantown, Maryland on 15 November 2007. Representatives from Department of Energy-Savannah River (DOE-SR) and the South Carolina Department of Health and Environmental Control (SCDHEC) participated in the meeting via a teleconference link.
Eça, L.; Hoekstra, M.
2014-04-01
This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well-known Grid Convergence Index. Examples of application of the procedure are included.
Highlights:
• Estimation of the numerical uncertainty of any integral or local flow quantity.
• Least-squares fits to power series expansions to handle noisy data.
• Excellent results obtained for manufactured solutions.
• Consistent results obtained for practical CFD calculations.
• Reduces to the well-known Grid Convergence Index for well-behaved data sets.
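The core fitting step can be sketched as follows (details assumed, not the paper's code): fit phi_i = phi_0 + alpha*h_i**p to solutions on refined grids by scanning the observed order p and solving the remaining linear least-squares problem, then apply a GCI-style safety factor to convert the error estimate into an uncertainty.

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25, 0.125])         # relative cell sizes
phi = np.array([1.230, 1.058, 1.014, 1.003])  # made-up grid solutions

best = None
for p in np.linspace(0.5, 4.0, 701):          # scan the observed order
    A = np.column_stack([np.ones_like(h), h ** p])
    coef, *_ = np.linalg.lstsq(A, phi, rcond=None)
    sse = float(np.sum((A @ coef - phi) ** 2))
    if best is None or sse < best[0]:
        best = (sse, p, coef[0], coef[1])

sse, p, phi0, alpha = best
error = alpha * h[-1] ** p                    # error estimate on finest grid
# Safety factor in the spirit of the Grid Convergence Index: small when
# the observed order is near the expected value, larger otherwise.
safety = 1.25 if 0.95 <= p <= 2.05 else 3.0
uncertainty = safety * abs(error)
print(f"p={p:.2f}  phi0={phi0:.4f}  U={uncertainty:.4f}")
```

For this second-order made-up data set the scan recovers p near 2 and an extrapolated value phi0 near 1.0.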
Younkin, J.M.; Rushton, J.E.
1980-02-05
A program is under way to design an effective International Atomic Energy Agency (IAEA) safeguards system that could be applied to the Portsmouth Gas Centrifuge Enrichment Plant (GCEP). This system would integrate nuclear material accountability with containment and surveillance. Uncertainties in material balances due to errors in the measurements of the declared uranium streams have been projected on a yearly basis for GCEP under such a system in a previous study. Because of the large uranium flows, the projected balance uncertainties were, in some cases, greater than the IAEA goal quantity of 75 kg of U-235 contained in low-enriched uranium. Therefore, it was decided to investigate the benefits of material balance periods of less than a year in order to improve the sensitivity and timeliness of the nuclear material accountability system. An analysis has been made of projected uranium measurement uncertainties for various short-term material balance periods. To simplify this analysis, only a material balance around the process area is considered and only the major UF{sub 6} stream measurements are included. That is, storage areas are not considered and uranium waste streams are ignored. It is also assumed that variations in the cascade inventory are negligible compared to other terms in the balance so that the results obtained in this study are independent of the absolute cascade inventory. This study is intended to provide information that will serve as the basis for the future design of a dynamic materials accounting component of the IAEA safeguards system for GCEP.
Gauntt, Randall O.; Mattie, Patrick D.
2016-01-01
Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi unit 1 (1F1) accident progression with the MELCOR code. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). That study focused on reconstructing the accident progressions, as postulated by the limited plant data. This work was a focused evaluation of uncertainty in core damage progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, reactor damage state, fraction of intact fuel, vessel lower head failure). The primary intent of this study was to characterize the range of predicted damage states in the 1F1 reactor considering state of knowledge uncertainties associated with MELCOR modeling of core damage progression and to generate information that may be useful in informing the decommissioning activities that will be employed to defuel the damaged reactors at the Fukushima Daiichi Nuclear Power Plant. Additionally, core damage progression variability inherent in MELCOR modeling numerics is investigated.
A joint analysis of Planck and BICEP2 B modes including dust polarization uncertainty
Mortonson, Michael J.; Seljak, Uroš E-mail: useljak@berkeley.edu
2014-10-01
We analyze BICEP2 and Planck data using a model that includes CMB lensing, gravity waves, and polarized dust. Recently published Planck dust polarization maps have highlighted the difficulty of estimating the amount of dust polarization in low intensity regions, suggesting that the polarization fractions have considerable uncertainties and may be significantly higher than previous predictions. In this paper, we start by assuming nothing about the dust polarization except for the power spectrum shape, which we take to be C_l^{BB,dust} ∝ l^{-2.42}. The resulting joint BICEP2+Planck analysis favors solutions without gravity waves, and the upper limit on the tensor-to-scalar ratio is r<0.11, a slight improvement relative to the Planck analysis alone which gives r<0.13 (95% c.l.). The estimated amplitude of the dust polarization power spectrum agrees with expectations for this field based on both HI column density and Planck polarization measurements at 353 GHz in the BICEP2 field. Including the latter constraint on the dust spectrum amplitude in our analysis improves the limit further to r<0.09, placing strong constraints on theories of inflation (e.g., models with r>0.14 are excluded with 99.5% confidence). We address the cross-correlation analysis of BICEP2 at 150 GHz with BICEP1 at 100 GHz as a test of foreground contamination. We find that the null hypothesis of dust and lensing with r=0 gives Δχ²<2 relative to the hypothesis of no dust, so the frequency analysis does not strongly favor either model over the other. We also discuss how more accurate dust polarization maps may improve our constraints. If the dust polarization is measured perfectly, the limit can reach r<0.05 (or the corresponding detection significance if the observed dust signal plus the expected lensing signal is below the BICEP2 observations), but this degrades quickly to almost no improvement if the dust calibration error is 20% or larger or if the dust maps are not
SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy
Liu, C; Kumarasiri, A; Chetvertkov, M; Gordon, J; Chetty, I; Siddiqui, F; Kim, J
2014-06-01
Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for the dose of the day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken, were selected. Furthermore, a small FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42mm more from the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the average numbers from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (mean ± SD) at the superior and inferior borders were 0.8 ± 0.5mm and 3.0 ± 1.5mm respectively, and increased to 2.7 ± 2.2mm and 5.9 ± 1.9mm at 4.2cm away. The mean error within the CBCT borders was 1.16 ± 0.54mm. The overall errors within the 4.2cm error expansion were 2.0 ± 1.2mm (sup) and 4.5 ± 1.6mm (inf). Conclusion: The overall error in the inferior direction is larger due to larger, unpredictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.
Alonso, Juan J.; Iaccarino, Gianluca
2013-08-25
The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A
Meyer, Philip D.; Ye, Ming; Rockhold, Mark L.; Neuman, Shlomo P.; Cantrell, Kirk J.
2007-07-30
This report to the Nuclear Regulatory Commission (NRC) describes the development and application of a methodology to systematically and quantitatively assess predictive uncertainty in groundwater flow and transport modeling that considers the combined impact of hydrogeologic uncertainties associated with the conceptual-mathematical basis of a model, model parameters, and the scenario to which the model is applied. The methodology is based on an extension of a Maximum Likelihood implementation of Bayesian Model Averaging. Model uncertainty is represented by postulating a discrete set of alternative conceptual models for a site with associated prior model probabilities that reflect a belief about the relative plausibility of each model based on its apparent consistency with available knowledge and data. Posterior model probabilities are computed and parameter uncertainty is estimated by calibrating each model to observed system behavior; prior parameter estimates are optionally included. Scenario uncertainty is represented as a discrete set of alternative future conditions affecting boundary conditions, source/sink terms, or other aspects of the models, with associated prior scenario probabilities. A joint assessment of uncertainty results from combining model predictions computed under each scenario using as weight the posterior model and prior scenario probabilities. The uncertainty methodology was applied to modeling of groundwater flow and uranium transport at the Hanford Site 300 Area. Eight alternative models representing uncertainty in the hydrogeologic and geochemical properties as well as the temporal variability were considered. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. The scenario alternatives were implemented in the models through the boundary conditions. Results demonstrate the feasibility of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow
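The model-averaging step described above can be illustrated with a minimal sketch. Assuming posterior model weights derived from a model-selection criterion (e.g., KIC) times prior probabilities, the averaged prediction combines within-model variance with between-model spread; the criterion values and predictions below are invented for illustration:

```python
import math

def posterior_weights(kic, priors):
    """Posterior model probabilities from criterion values (e.g., KIC),
    in the spirit of Maximum Likelihood Bayesian Model Averaging."""
    d = [k - min(kic) for k in kic]                 # shift for stability
    raw = [p * math.exp(-0.5 * dk) for p, dk in zip(priors, d)]
    z = sum(raw)
    return [r / z for r in raw]

def bma_prediction(means, variances, weights):
    """Weighted mean, plus within-model and between-model variance."""
    mu = sum(w * m for w, m in zip(weights, means))
    var = sum(w * (v + (m - mu) ** 2)
              for w, m, v in zip(weights, means, variances))
    return mu, var

# Three hypothetical conceptual models with equal priors.
w = posterior_weights([100.0, 104.0, 110.0], [1/3, 1/3, 1/3])
mu, var = bma_prediction([2.0, 2.5, 3.0], [0.1, 0.1, 0.2], w)
```

Note how the combined variance exceeds the best single model's variance whenever the models disagree, which is the point of the joint assessment.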
Gray, Genetha Anne; Watson, Jean-Paul; Silva Monroy, Cesar Augusto; Gramacy, Robert B.
2013-09-01
This report summarizes findings and results of the Quantifiably Secure Power Grid Operation, Management, and Evolution LDRD. The focus of the LDRD was to develop decision-support technologies to enable rational and quantifiable risk management for two key grid operational timescales: scheduling (day-ahead) and planning (month-to-year-ahead). Risk or resiliency metrics are foundational in this effort. The 2003 Northeast Blackout investigative report stressed the criticality of enforceable metrics for system resiliency - the grid's ability to satisfy demands subject to perturbation. However, we neither have well-defined risk metrics for addressing the pervasive uncertainties in a renewable energy era, nor decision-support tools for their enforcement, which severely impacts efforts to rationally improve grid security. For day-ahead unit commitment, decision-support tools must account for topological security constraints, loss-of-load (economic) costs, and supply and demand variability - especially given high renewables penetration. For long-term planning, transmission and generation expansion must ensure realized demand is satisfied for various projected technological, climate, and growth scenarios. The decision-support tools investigated in this project paid particular attention to tail-oriented risk metrics for explicitly addressing high-consequence events. Historically, decision-support tools for the grid consider expected cost minimization, largely ignoring risk and instead penalizing loss-of-load through artificial parameters. The technical focus of this work was the development of scalable solvers for enforcing risk metrics. Advanced stochastic programming solvers were developed to address generation and transmission expansion and unit commitment, minimizing cost subject to pre-specified risk thresholds. Particular attention was paid to renewables where security critically depends on production and demand prediction accuracy. To address this concern, powerful
J. Zhu; K. Pohlmann; J. Chapman; C. Russell; R.W.H. Carroll; D. Shafer
2009-09-10
Yucca Mountain (YM), Nevada, has been proposed by the U.S. Department of Energy as the nation’s first permanent geologic repository for spent nuclear fuel and high-level radioactive waste. In this study, the potential for groundwater advective pathways from underground nuclear testing areas on the Nevada Test Site (NTS) to intercept the subsurface of the proposed land withdrawal area for the repository is investigated. The timeframe for advective travel and its uncertainty for possible radionuclide movement along these flow pathways is estimated as a result of effective-porosity value uncertainty for the hydrogeologic units (HGUs) along the flow paths. Furthermore, sensitivity analysis is conducted to determine the most influential HGUs on the advective radionuclide travel times from the NTS to the YM area. Groundwater pathways are obtained using the particle tracking package MODPATH and flow results from the Death Valley regional groundwater flow system (DVRFS) model developed by the U.S. Geological Survey (USGS). Effective-porosity values for HGUs along these pathways are one of several parameters that determine possible radionuclide travel times between the NTS and proposed YM withdrawal areas. Values and uncertainties of HGU porosities are quantified through evaluation of existing site effective-porosity data and expert professional judgment and are incorporated in the model through Monte Carlo simulations to estimate mean travel times and uncertainties. The simulations are based on two steady-state flow scenarios, the pre-pumping (the initial stress period of the DVRFS model), and the 1998 pumping (assuming steady-state conditions resulting from pumping in the last stress period of the DVRFS model) scenarios for the purpose of long-term prediction and monitoring. The pumping scenario accounts for groundwater withdrawal activities in the Amargosa Desert and other areas downgradient of YM. Considering each detonation in a clustered region around Pahute Mesa (in
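The Monte Carlo propagation of effective-porosity uncertainty into travel times can be sketched as follows. For a path made of segments through different HGUs, the advective time of a segment is length times effective porosity divided by Darcy flux (pore velocity v = q/n_e); the segment lengths, fluxes, and porosity statistics below are invented for illustration:

```python
import random, statistics

random.seed(7)

# Hypothetical flow-path segments: (length m, Darcy flux m/yr,
# effective-porosity mean and std for that hydrogeologic unit).
segments = [(5000.0, 2.0, 0.15, 0.05),
            (3000.0, 1.0, 0.25, 0.08)]

def travel_time(segs):
    """One Monte Carlo realization of advective travel time along the path."""
    t = 0.0
    for length, q, n_mu, n_sd in segs:
        n_e = max(random.gauss(n_mu, n_sd), 0.01)  # truncate at a small floor
        t += length * n_e / q                      # t = L * n_e / q
    return t

times = [travel_time(segments) for _ in range(5000)]
mean_t, sd_t = statistics.mean(times), statistics.stdev(times)
```

Sensitivity to each HGU could then be read off by perturbing one unit's porosity statistics at a time and re-running the loop.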
Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Reinert, Rhonda K.; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.
2010-04-01
Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts of climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean or average risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.
Hagos, Samson M.; Leung, Lai-Yung R.; Xue, Yongkang; Boone, Aaron; de Sales, Fernando; Neupane, Naresh; Huang, Maoyi; Yoon, Jin-Ho
2014-02-22
Land use and land cover over Africa have changed substantially over the last sixty years and this change has been proposed to affect monsoon circulation and precipitation. This study examines the uncertainties on the effect of these changes on the African Monsoon system and Sahel precipitation using an ensemble of regional model simulations with different combinations of land surface and cumulus parameterization schemes. Although the magnitude of the response covers a broad range of values, most of the simulations show a decline in Sahel precipitation due to the expansion of pasture and croplands at the expense of trees and shrubs and an increase in surface air temperature.
Smith, P.J.; Eddings, E.G.; Ring, T.; Thornock, J.; Draper, T.; Isaac, B.; Rezeai, D.; Toth, P.; Wu, Y.; Kelly, K.
2014-08-01
The objective of this task is to produce predictive capability with quantified uncertainty bounds for the heat flux in commercial-scale, tangentially fired, oxy-coal boilers. Validation data came from the Alstom Boiler Simulation Facility (BSF) for tangentially fired, oxy-coal operation. This task brings together experimental data collected under Alstom’s DOE project for measuring oxy-firing performance parameters in the BSF with this University of Utah project for large eddy simulation (LES) and validation/uncertainty quantification (V/UQ). The Utah work includes V/UQ with measurements in the single-burner facility where advanced strategies for O2 injection can be more easily controlled and data more easily obtained. Highlights of the work include: • Simulations of Alstom’s 15 megawatt (MW) BSF, exploring the uncertainty in thermal boundary conditions. A V/UQ analysis showed consistency between experimental results and simulation results, identifying uncertainty bounds on the quantities of interest for this system (Subtask 9.1) • A simulation study of the University of Utah’s oxy-fuel combustor (OFC) focused on heat flux (Subtask 9.2). A V/UQ analysis was used to show consistency between experimental and simulation results. • Measurement of heat flux and temperature with new optical diagnostic techniques and comparison with conventional measurements (Subtask 9.3). Various optical diagnostics systems were created to provide experimental data to the simulation team. The final configuration utilized a mid-wave infrared (MWIR) camera to measure heat flux and temperature, which was synchronized with a high-speed, visible camera to utilize two-color pyrometry to measure temperature and soot concentration. • Collection of heat flux and temperature measurements in the University of Utah’s OFC for use in subtasks 9.2 and 9.3 (Subtask 9.4). Several replicates were carried out to better assess the experimental error. Experiments were specifically designed for the
Anderson, D.R.; Trauth, K.M.; Hora, S.C.
1991-01-01
Iterative, annual performance-assessment calculations are being performed for the Waste Isolation Pilot Plant (WIPP), a planned underground repository in southeastern New Mexico, USA for the disposal of transuranic waste. The performance-assessment calculations estimate the long-term radionuclide releases from the disposal system to the accessible environment. Because direct experimental data in some areas are presently of insufficient quantity to form the basis for the required distributions, expert judgment was used to estimate the concentrations of specific radionuclides in a brine exiting a repository room or drift as it migrates up an intruding borehole, and also the distribution coefficients that describe the retardation of radionuclides in the overlying Culebra Dolomite. The variables representing these concentrations and coefficients have been shown by 1990 sensitivity analyses to be among the set of parameters making the greatest contribution to the uncertainty in WIPP performance-assessment predictions. Utilizing available information, the experts (one expert panel addressed concentrations and a second panel addressed retardation) developed an understanding of the problem and were formally elicited to obtain probability distributions that characterize the uncertainty in fixed, but unknown, quantities. The probability distributions developed by the experts are being incorporated into the 1991 performance-assessment calculations. 16 refs., 4 tabs.
Griffin, Joshua D.; Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
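The sampling-based uncertainty quantification that toolkits of this kind provide can be illustrated with a minimal, stand-alone Latin hypercube sampler (this is a generic sketch, not DAKOTA's implementation or input syntax): each input dimension is split into equal-probability strata and exactly one point is drawn per stratum, giving better space coverage than plain random sampling for the same budget.

```python
import random

def latin_hypercube(n, bounds, seed=42):
    """Minimal Latin hypercube sample: split each dimension into n
    equal-probability strata, draw one point per stratum, and shuffle
    the strata order independently per dimension."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = list(range(n))
        rng.shuffle(strata)
        cols.append([lo + (hi - lo) * (s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]

# 10 samples over two hypothetical input ranges.
pts = latin_hypercube(10, [(0.0, 1.0), (-2.0, 2.0)])
```

Each sample would then be pushed through the simulation code via the toolkit's interface, and statistics of the responses collected.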
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Griffin, Joshua D.; Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Bao, Jie; Hou, Zhangshuan; Fang, Yilin; Ren, Huiying; Lin, Guang
2015-06-01
A series of numerical test cases reflecting broad and realistic ranges of geological formation and preexisting fault properties was developed to systematically evaluate the impacts of preexisting faults on pressure buildup and ground surface uplift during CO2 injection. Numerical test cases were conducted using a coupled hydro-geomechanical simulator, eSTOMP (extreme-scale Subsurface Transport over Multiple Phases). For efficient sensitivity analysis and reliable construction of a reduced-order model, a quasi-Monte Carlo sampling method was applied to effectively sample a high-dimensional input parameter space to explore uncertainties associated with hydrologic, geologic, and geomechanical properties. The uncertainty quantification results show that the impacts on geomechanical response from the pre-existing faults mainly depend on reservoir and fault permeability. When the fault permeability is two to three orders of magnitude smaller than the reservoir permeability, the fault can be considered as an impermeable block that resists fluid transport in the reservoir, which causes pressure increase near the fault. When the fault permeability is close to the reservoir permeability, or higher than 10?? m in this study, the fault can be considered as a conduit that penetrates the caprock, connecting the fluid flow between the reservoir and the upper rock.
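Quasi-Monte Carlo sampling of the kind used above replaces pseudo-random draws with low-discrepancy points that fill a high-dimensional unit cube more evenly. A minimal, self-contained example is the Halton sequence (one standard low-discrepancy construction; the study does not specify which sequence it used):

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: reflect the base-b digits of i
    about the radix point, giving a point in [0, 1)."""
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def halton(n, primes=(2, 3, 5)):
    """First n points of a Halton sequence in [0,1)^d, one coprime base
    (here the first few primes) per dimension."""
    return [tuple(radical_inverse(i, b) for b in primes)
            for i in range(1, n + 1)]

pts = halton(8)
```

Each point would then be rescaled to the physical ranges of the hydrologic, geologic, and geomechanical inputs before running the simulator.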
Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.
2011-12-01
The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.
Demartin, Federico; Mariani, Elisa [Dipartimento di Fisica, Universita di Milano, Via Celoria 16, I-20133 Milano (Italy); Forte, Stefano; Vicini, Alessandro [Dipartimento di Fisica, Universita di Milano, Via Celoria 16, I-20133 Milano (Italy); INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Rojo, Juan [INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy)
2010-07-01
We present a systematic study of uncertainties due to parton distributions (PDFs) and the strong coupling on the gluon-fusion production cross section of the standard model Higgs at the Tevatron and LHC colliders. We compare procedures and results when three recent sets of PDFs are used, CTEQ6.6, MSTW08, and NNPDF1.2, and we discuss specifically the way PDF and strong coupling uncertainties are combined. We find that results obtained from different PDF sets are in reasonable agreement if a common value of the strong coupling is adopted. We show that the addition in quadrature of PDF and α_s uncertainties provides an adequate approximation to the full result with exact error propagation. We discuss a simple recipe to determine a conservative PDF+α_s uncertainty from available global parton sets, and we use it to estimate this uncertainty on the given process to be about 10% at the Tevatron and 5% at the LHC for a light Higgs.
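The quadrature combination the abstract validates is simple enough to state directly; the relative uncertainties below are illustrative placeholders, not the paper's numbers:

```python
import math

def combined_uncertainty(sigma_pdf, sigma_alphas):
    """PDF and alpha_s uncertainties added in quadrature, the
    approximation the study finds adequate versus exact propagation."""
    return math.sqrt(sigma_pdf**2 + sigma_alphas**2)

# Hypothetical relative uncertainties on a cross section.
total = combined_uncertainty(0.04, 0.03)
```

The quadrature form is exact when the two error sources are independent and Gaussian, which is why it tracks the full propagation closely here.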
Martin, William R.; Lee, John C.; Baxter, Alan; Wemple, Chuck
2012-03-31
Information and measured data from the initial Fort St. Vrain (FSV) high temperature gas reactor core are used to develop a benchmark configuration to validate computational methods for analysis of a full-core, commercial HTR configuration. Large uncertainties in the geometry and composition data for the FSV fuel and core are identified, including: (1) the relative numbers of fuel particles for the four particle types, (2) the distribution of fuel kernel diameters for the four particle types, (3) the Th:U ratio in the initial FSV core, and (4) the buffer thickness for the fissile and fertile particles. Sensitivity studies were performed to assess each of these uncertainties. A number of methods were developed to assist in these studies, including: (1) the automation of MCNP5 input files for FSV using Python scripts, (2) a simple method to verify isotopic loadings in MCNP5 input files, (3) an automated procedure to conduct a coupled MCNP5-RELAP5 analysis for a full-core FSV configuration with thermal-hydraulic feedback, and (4) a methodology for sampling kernel diameters from arbitrary power law and Gaussian PDFs that preserved fuel loading and packing factor constraints. A reference FSV fuel configuration was developed based on having a single diameter kernel for each of the four particle types, preserving known uranium and thorium loadings and packing factor (58%). Three fuel models were developed, based on representing the fuel as a mixture of kernels with two diameters, four diameters, or a continuous range of diameters. The fuel particles were put into a fuel compact using either a lattice-based approach or a stochastic packing methodology from RPI, and simulated with MCNP5. The results of the sensitivity studies indicated that the uncertainties in the relative numbers and sizes of fissile and fertile kernels were not important nor were the distributions of kernel diameters within their diameter ranges. The uncertainty in the Th:U ratio in the initial FSV core was
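Sampling kernel diameters while preserving a fuel-loading constraint, as in method (4) above, can be sketched as rejection sampling from a truncated Gaussian followed by a uniform rescale of the diameters so the total kernel volume matches the fixed loading (a simplified scheme with invented parameters; the report's actual method may differ):

```python
import random, math

random.seed(1)

def sample_kernel_diameters(n, mu, sd, lo, hi, target_volume):
    """Draw kernel diameters from a Gaussian truncated to [lo, hi], then
    rescale all diameters so the total kernel volume equals a fixed fuel
    loading constraint (volume scales as diameter cubed)."""
    d = []
    while len(d) < n:                      # rejection sampling into [lo, hi]
        x = random.gauss(mu, sd)
        if lo <= x <= hi:
            d.append(x)
    vol = sum(math.pi * di**3 / 6 for di in d)
    scale = (target_volume / vol) ** (1.0 / 3.0)
    return [di * scale for di in d]

# Hypothetical kernel population: 500 kernels, ~350 um diameter.
diams = sample_kernel_diameters(500, 350e-6, 30e-6, 250e-6, 450e-6,
                                target_volume=1.2e-8)  # m^3, illustrative
```

The cube-root rescale preserves the shape of the sampled distribution while pinning the total fuel volume, which is what keeps loading and packing factor fixed across fuel models.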
Dougherty, T.; Maciuca, C.; McAssey, E.V. Jr.; Reddy, D.G.; Yang, B.W.
1990-05-01
In June 1988, Savannah River Laboratory requested that the Heat Transfer Research Facility modify the flow excursion program, which had been in progress since November 1987, to include testing of single tubes in vertical down-flow over a range of length to diameter (L/D) ratios of 100 to 500. The impetus for the request was the desire to obtain experimental data as quickly as possible for code development work. In July 1988, HTRF submitted a proposal to SRL indicating that by modifying a facility already under construction the data could be obtained within three to four months. In January 1990, HTRF issued report CU-HTRF-T4, part 1. This report contained the technical discussion of the results from the single tube uniformly heated tests. The present report is part 2 of CU-HTRF-T4, which contains further discussion of the uncertainty analysis and the complete set of data.
Redus, K. S.; Hampshire, G. J.; Patterson, J. E.; Perkins, A. B.
2002-02-25
The Waste Acceptance Criteria Forecasting and Analysis Capability System (WACFACS) is used to plan for, evaluate, and control the supply of approximately 1.8 million yd³ of low-level radioactive, TSCA, and RCRA hazardous wastes from over 60 environmental restoration projects from FY02 through FY10 to the Oak Ridge Environmental Management Waste Management Facility (EMWMF). WACFACS is a validated decision support tool that propagates uncertainties inherent in site-related contaminant characterization data, disposition volumes during EMWMF operations, and project schedules to quantitatively determine the confidence that risk-based performance standards are met. Trade-offs in schedule, volumes of waste lots, and allowable concentrations of contaminants are performed to optimize project waste disposition, regulatory compliance, and disposal cell management.
Hagos, Samson M.; Leung, Lai-Yung Ruby; Xue, Yongkang; Boone, Aaron; de Sales, Fernando; Neupane, Naresh; Huang, Maoyi; Yoon, Jin -Ho
2014-02-22
Land use and land cover over Africa have changed substantially over the last sixty years, and this change has been proposed to affect monsoon circulation and precipitation. This study examines the uncertainties in the effect of these changes on the African Monsoon system and Sahel precipitation using an ensemble of regional model simulations with different combinations of land surface and cumulus parameterization schemes. Although the magnitude of the response covers a broad range of values, most of the simulations show a decline in Sahel precipitation and an increase in surface air temperature due to the expansion of pasture and croplands at the expense of trees and shrubs.
R. W. Youngblood
2010-10-01
The concept of “margin” has a long history in nuclear licensing and in the codification of good engineering practices. However, some traditional applications of “margin” have been carried out for surrogate scenarios (such as design basis scenarios), without regard to the actual frequencies of those scenarios, and in a systematically conservative fashion. This means that the effectiveness of the application of the margin concept is determined in part by the original choice of surrogates, and is limited in any case by the degree of conservatism imposed on the evaluation. In the RISMC project, which is part of the Department of Energy’s “Light Water Reactor Sustainability Program” (LWRSP), we are developing a risk-informed characterization of safety margin. Beginning with the traditional discussion of “margin” in terms of a “load” (a physical challenge to system or component function) and a “capacity” (the capability of that system or component to accommodate the challenge), we are developing the capability to characterize probabilistic load and capacity spectra, reflecting both aleatory and epistemic uncertainty in system response. For example, the probabilistic load spectrum will reflect the frequency of challenges of a particular severity. Such a characterization is required if decision-making is to be optimally informed. However, in order to enable the quantification of probabilistic load spectra, existing analysis capability needs to be extended. Accordingly, the INL is working on a next-generation safety analysis capability whose design will allow for much more efficient parameter uncertainty analysis, and will enable a much better integration of reliability-related and phenomenology-related aspects of margin.
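The probabilistic load/capacity framing lends itself to a short Monte Carlo sketch. The sampler names and the lognormal/normal distributions below are illustrative assumptions, not the RISMC implementation:

```python
import random

def failure_probability(load_sampler, capacity_sampler, n=100_000, seed=1):
    """Monte Carlo estimate of P(load > capacity): the probability that a
    sampled challenge exceeds the sampled capability to withstand it."""
    rng = random.Random(seed)
    exceed = sum(
        1 for _ in range(n) if load_sampler(rng) > capacity_sampler(rng)
    )
    return exceed / n

# Hypothetical distributions (illustrative numbers only): a lognormal
# "load" spectrum against a normally distributed "capacity".
load = lambda rng: rng.lognormvariate(0.0, 0.5)
capacity = lambda rng: rng.gauss(2.0, 0.2)
p = failure_probability(load, capacity)
```

Separating aleatory from epistemic uncertainty would add an outer loop over epistemic parameter draws, yielding a distribution of such probabilities rather than a point value.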
Deng, Hailin; Dai, Zhenxue; Jiao, Zunsheng; Stauffer, Philip H.; Surdam, Ronald C.
2011-01-01
Many geological, geochemical, geomechanical and hydrogeological factors control CO₂ storage in the subsurface. Among them, heterogeneity in a saline aquifer can seriously influence the design of injection wells, the CO₂ injection rate, CO₂ plume migration, storage capacity, and potential leakage and risk assessment. This study applies indicator geostatistics, transition probability and Markov chain modeling at the Rock Springs Uplift, Wyoming, to generate facies-based heterogeneous fields for porosity and permeability in the target saline aquifer (Pennsylvanian Weber sandstone) and surrounding rocks (Phosphoria, Madison and cap-rock Chugwater). The multiphase flow simulator FEHM is then used to model injection of CO₂ into the target saline aquifer with field-scale heterogeneity. The results reveal that (1) CO₂ injection rates in different injection wells change significantly with local permeability distributions; (2) brine production rates in different pumping wells are also significantly affected by the spatial heterogeneity in permeability; (3) liquid pressure evolution during and after CO₂ injection in the saline aquifer varies greatly across realizations of the random permeability fields, with potentially important effects on hydraulic fracturing of the reservoir rock, reactivation of pre-existing faults and the integrity of the cap-rock; (4) the CO₂ storage capacity estimate for the Rock Springs Uplift is 6614 ± 256 Mt at the 95% confidence interval, about 36% of a previous estimate based on a homogeneous and isotropic storage formation; (5) density profiles show that the density of injected CO₂ below 3 km is close to that of the ambient brine for the given geothermal gradient and brine concentration, indicating that the CO₂ plume can sink to depth before reaching thermal equilibrium with the brine. Finally, we present uncertainty analysis of CO₂ leakage into overlying formations due to heterogeneity in both the target saline
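The transition-probability/Markov-chain facies modeling can be illustrated with a minimal 1-D sampler. The two-facies matrix `T` below is a hypothetical example, not the Rock Springs Uplift data, and real workflows are 3-D and conditioned to wells:

```python
import random

def simulate_facies(transitions, start, n, seed=0):
    """Generate a 1-D facies sequence from a first-order Markov chain
    defined by a transition-probability matrix (each row sums to 1)."""
    rng = random.Random(seed)
    states = list(transitions)
    seq = [start]
    for _ in range(n - 1):
        probs = transitions[seq[-1]]
        r, cum = rng.random(), 0.0
        for s in states:
            cum += probs[s]
            if r <= cum:
                seq.append(s)
                break
        else:  # guard against float rounding when a row sums just below 1
            seq.append(states[-1])
    return seq

# Hypothetical two-facies (sand/shale) transition probabilities.
T = {"sand": {"sand": 0.8, "shale": 0.2},
     "shale": {"sand": 0.3, "shale": 0.7}}
seq = simulate_facies(T, "sand", 500)
```

Porosity and permeability values would then be drawn per facies, giving the facies-based heterogeneous fields the abstract describes.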
Muriana, Cinzia
2015-07-15
Highlights: • Food recovery is seen as a suitable way to manage food near its expiry date. • The variability of product shelf life must be taken into account. • The paper addresses the mathematical modeling of the profit related to food recovery. • The optimal time to withdraw the products is decisive for food recovery. - Abstract: Food losses represent a significant issue affecting food supply chains. The possibility of recovering such products can be seen as an effective way to reduce this phenomenon, improve supply chain performance and ameliorate the conditions of undernourished people. The topic was investigated in a previous paper under the hypothesis of a deterministic and constant Shelf Life (SL) of products. However, that model cannot be properly extended to products whose SL is uncertain, as it does not take into account the deterioration costs and loss of profits due to the SL being exceeded within the cycle time. Thus the present paper presents an extension of the previous one under stochastic conditions of food quality. Differently from the previous publication, this work represents a general model applicable to all supply chains, especially those managing fresh products characterized by uncertain SL such as fruits and vegetables. The deterioration costs and loss of profits are included in the model, and the optimal time at which to withdraw the products from the shelves, as well as the quantities to be shipped to each alternative destination, have been determined. A comparison of the proposed model with that of the previous publication has been carried out to underline the impact of SL variability on the optimality conditions. The results show that the food recovery strategy in the presence of uncertain food quality is rewarding, even if the optimal profit is lower than in the deterministic case.
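The trade-off between shelf revenue and the risk of exceeding an uncertain shelf life can be sketched with a Monte Carlo profit estimate. Everything below is a hypothetical illustration, not the paper's model: the function, the Gaussian SL, and all economic parameters are assumptions.

```python
import random

def expected_profit(t_withdraw, sl_sampler, shelf_revenue, recovery_margin,
                    deterioration_cost, n=20_000, seed=0):
    """Monte Carlo estimate of expected per-unit profit when products are
    withdrawn from the shelf for recovery at time t_withdraw: a unit earns
    shelf revenue while on display, then a recovery margin if still within
    its (random) shelf life, or a deterioration cost if it has expired."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        sl = sl_sampler(rng)
        total += shelf_revenue * min(t_withdraw, sl)
        total += recovery_margin if sl > t_withdraw else -deterioration_cost
    return total / n

# Hypothetical Gaussian shelf life (days) and illustrative economics:
# waiting longer earns more shelf revenue but risks expiry losses.
sl = lambda rng: rng.gauss(10.0, 2.0)
best_t = max(range(1, 15),
             key=lambda t: expected_profit(t, sl, 0.1, 1.0, 0.5))
```

The interior optimum (`best_t`) arises exactly because SL is stochastic; with a deterministic SL the withdrawal time would sit just before expiry.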
Dayem, H.A.; Ostenak, C.A.; Gutmacher, R.G.; Kern, E.A.; Markin, J.T.; Martinez, D.P.; Thomas, C.C. Jr.
1982-07-01
This report describes the conceptual design of a materials accounting system for the feed preparation and chemical separations processes of a fast breeder reactor spent-fuel reprocessing facility. For the proposed accounting system, optimization techniques are used to calculate instrument measurement uncertainties that meet four different accounting performance goals while minimizing the total development cost of instrument systems. We identify instruments that require development to meet performance goals and measurement uncertainty components that dominate the materials balance variance. Materials accounting in the feed preparation process is complicated by large in-process inventories and spent-fuel assembly inputs that are difficult to measure. To meet abrupt (8 kg of plutonium) and protracted (40 kg of plutonium) loss-detection goals, materials accounting in the chemical separations process requires: process tank volume and concentration measurements having a precision less than or equal to 1%; accountability and plutonium sample tank volume measurements having a precision less than or equal to 0.3%, a short-term correlated error less than or equal to 0.04%, and a long-term correlated error less than or equal to 0.04%; and accountability and plutonium sample tank concentration measurements having a precision less than or equal to 0.4%, a short-term correlated error less than or equal to 0.1%, and a long-term correlated error less than or equal to 0.05%. The effects of process design on materials accounting are identified. Major areas of concern include the voloxidizer, the continuous dissolver, and the accountability tank.
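The way measurement precisions combine into a materials balance variance can be sketched with first-order error propagation. The function names and transfer values are hypothetical, and the correlated (short- and long-term) error terms listed in the abstract are omitted for brevity:

```python
def transfer_variance(volume, conc, rel_sd_volume, rel_sd_conc):
    """First-order propagation of random measurement error for one
    transfer measured as volume x concentration: relative variances
    of the two factors add."""
    mass = volume * conc
    return mass**2 * (rel_sd_volume**2 + rel_sd_conc**2)

def materials_balance_sd(transfers):
    """Standard deviation of a materials balance, treating each
    transfer's random error as independent."""
    return sum(transfer_variance(*t) for t in transfers) ** 0.5

# Hypothetical accountability-tank transfers:
# (volume L, concentration g/L, relative sd of volume, relative sd of conc)
transfers = [
    (2000.0, 2.5, 0.003, 0.004),   # input batch
    (1950.0, 2.4, 0.003, 0.004),   # output batch
]
sigma = materials_balance_sd(transfers)  # grams of plutonium
```

Correlated errors would add covariance terms across transfers sharing an instrument, which is why the report bounds them separately from precision.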
Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles
2014-03-01
A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities such as hydrogen generation or release of fission products under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter was observed when results were plotted against any of the uncertain parameters, with no parameter manifesting dominant effects on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, further reducing the predicted hydrogen uncertainty would require reducing all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
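The Latin Hypercube Sampling step can be illustrated with a minimal unit-hypercube sampler. This sketches only the sampling design (one draw per equal-probability stratum, randomly permuted across parameters), not the MELCOR coupling or the mapping to physical parameter distributions:

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Latin hypercube sample on the unit hypercube: each parameter's
    [0, 1) range is split into n_samples equal strata, exactly one value
    is drawn per stratum, and the strata are independently shuffled for
    each parameter so samples are spread over the whole space."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        columns.append([(s + rng.random()) / n_samples for s in strata])
    # Transpose columns into per-realization parameter vectors.
    return [[columns[j][i] for j in range(n_params)]
            for i in range(n_samples)]

# 40 realizations over 10 uncertain parameters, as in the study's design.
points = latin_hypercube(40, 10)
```

Each unit-interval coordinate would then be mapped through the inverse CDF of the corresponding user-specified likelihood distribution before being written into a MELCOR input deck.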
Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro
2015-05-01
This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Bechtel National, Inc. and the U.S. Department of Energy (DOE) have committed to testing at a larger scale to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).
Aad, G.
2015-01-15
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton–proton collision data with a centre-of-mass energy of \(\sqrt{s}=7\) TeV corresponding to an integrated luminosity of \(4.7\,\text{fb}^{-1}\). Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-\(k_{t}\) algorithm with distance parameters \(R=0.4\) or \(R=0.6\), and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transverse momentum balance between a jet and a reference object such as a photon or a \(Z\) boson, for \(20 \le p_{\mathrm{T}}^{\mathrm{jet}} < 1000~\mathrm{GeV}\) and pseudorapidities \(|\eta| < 4.5\). The effect of multiple proton–proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty of less than 1% is found in the central calorimeter region (\(|\eta| < 1.2\)) for jets with \(55 \le p_{\mathrm{T}}^{\mathrm{jet}} < 500~\mathrm{GeV}\). For central jets at lower \(p_{\mathrm{T}}\), the uncertainty is about 3%. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton–proton collisions and test-beam data, which also provide the estimate for \(p_{\mathrm{T}}^{\mathrm{jet}} > 1\) TeV. The calibration of forward jets is derived from dijet \(p_{\mathrm{T}}\) balance measurements. The resulting uncertainty reaches its largest value of 6% for low-\(p_{\mathrm{T}}\) jets at \(|\eta| = 4.5\). In addition, JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light