Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Sensitivity and Uncertainty Analysis
Broader source: Energy.gov [DOE]
Summary Notes from 15 November 2007 Generic Technical Issue Discussion on Sensitivity and Uncertainty Analysis and Model Support
Direct Aerosol Forcing Uncertainty
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Mccomiskey, Allison
2008-01-15
Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface, as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however, decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty, although comparable to uncertainty arising from some individual properties.
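A minimal sketch of the sensitivity-times-uncertainty bookkeeping the abstract describes. The sensitivity and uncertainty values below are invented for illustration (not from the study), and the independent-error, root-sum-square combination into a total is an assumption:

```python
import math

def total_drf_uncertainty(sensitivities, meas_uncertainties):
    """Combine per-property uncertainty contributions in quadrature.

    Each contribution is the product of the DRF sensitivity to a property
    (W m^-2 per unit of that property) and the measurement uncertainty in
    that property, assuming independent error sources.
    """
    contributions = {
        name: abs(sensitivities[name] * meas_uncertainties[name])
        for name in sensitivities
    }
    total = math.sqrt(sum(c ** 2 for c in contributions.values()))
    return total, contributions

# Illustrative (made-up) sensitivities and typical measurement uncertainties.
sens = {"aod": -25.0, "ssa": 40.0, "asym": 10.0, "surface_albedo": 5.0}
unc  = {"aod": 0.01,  "ssa": 0.03, "asym": 0.02, "surface_albedo": 0.02}
total, parts = total_drf_uncertainty(sens, unc)
```

With these invented numbers, single scattering albedo dominates the budget, mirroring the abstract's finding that it is usually the largest contributor.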
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Climate Uncertainty The uncertainty in climate change and in its impacts is of great concern to the international community. While the ever-growing body of scientific evidence substantiates present climate change, the driving concern about this issue lies in the consequences it poses to humanity. Policy makers will most likely need to make decisions about climate policy before climate scientists have quantified all relevant uncertainties about the impacts of climate change. Sandia scientists
Physical Uncertainty Bounds (PUB)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Measurement uncertainty relations
Busch, Paul; Lahti, Pekka; Werner, Reinhard F.
2014-04-15
Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
A High-Performance Embedded Hybrid Methodology for Uncertainty Quantification With Applications
Iaccarino, Gianluca
2014-04-01
Multiphysics processes modeled by a system of unsteady differential equations are naturally suited for partitioned (modular) solution strategies. We consider such a model where probabilistic uncertainties are present in each module of the system and represented as a set of random input parameters. A straightforward approach to quantifying uncertainties in the predicted solution would be to sample all the input parameters into a single set and treat the full system as a black box. Although this method is easily parallelizable and requires minimal modifications to the deterministic solver, it is blind to the modular structure of the underlying multiphysical model. On the other hand, using spectral representations such as polynomial chaos expansions (PCE) can provide richer structural information regarding the dynamics of these uncertainties as they propagate from the inputs to the predicted output, but can be prohibitively expensive to implement in the high-dimensional global space of uncertain parameters. Therefore, we investigated hybrid methodologies wherein each module has the flexibility of using sampling or PCE-based methods for capturing local uncertainties while maintaining accuracy in the global uncertainty analysis. For the latter case, we use a conditional PCE model which mitigates the curse of dimension associated with intrusive Galerkin or semi-intrusive pseudospectral methods. After formalizing the theoretical framework, we demonstrate our proposed method using a numerical viscous flow simulation and benchmark the performance against a solely Monte Carlo method and a solely spectral method.
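As a toy illustration of the spectral side of such an approach, here is a one-dimensional polynomial chaos projection onto probabilists' Hermite polynomials for a Gaussian input. The conditional, multi-module machinery of the paper is not reproduced; this only shows how PCE coefficients are obtained by quadrature:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(f, order, nquad=40):
    """Project f(X), X ~ N(0,1), onto probabilists' Hermite polynomials.

    c_n = E[f(X) He_n(X)] / n!, computed with Gauss-Hermite(e) quadrature.
    """
    x, w = hermegauss(nquad)
    w = w / math.sqrt(2.0 * math.pi)  # normalize to a probability measure
    coeffs = []
    for n in range(order + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0                # coefficient vector selecting He_n
        he_n = hermeval(x, basis)
        coeffs.append(float(np.sum(w * f(x) * he_n)) / math.factorial(n))
    return coeffs

# Example: f(x) = exp(x). The exact coefficients are c_n = e^{1/2} / n!,
# so the PCE mean c_0 recovers E[exp(X)] = e^{1/2}.
coeffs = pce_coefficients(np.exp, order=6)
```

Once the coefficients are in hand, moments of the output follow cheaply: the mean is c_0 and the variance is the weighted sum of squared higher-order coefficients, which is exactly the "richer structural information" a sampling-only treatment discards.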
Calibration Under Uncertainty.
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
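A schematic contrast between the two formulations, using a made-up exponential-decay model and made-up data. The Gaussian-likelihood weighting over a parameter grid is only a crude stand-in for the Bayesian CUU treatment the report discusses:

```python
import math

def model(theta, x):
    """Hypothetical computer model: a simple exponential decay."""
    return math.exp(-theta * x)

x_data = [0.0, 1.0, 2.0, 3.0]
y_data = [1.02, 0.60, 0.37, 0.22]   # invented "experimental" data
sigma = 0.05                          # assumed measurement error bar

def sse(theta):
    """Squared misfit between model predictions and data."""
    return sum((model(theta, x) - y) ** 2 for x, y in zip(x_data, y_data))

thetas = [i * 0.001 for i in range(1, 2001)]

# Classical calibration: a single best-fit parameter, no model error.
theta_ls = min(thetas, key=sse)

# CUU-style sketch: a Gaussian likelihood turns the same misfit into
# posterior weights, so parameter uncertainty is retained, not discarded.
weights = [math.exp(-sse(t) / (2.0 * sigma ** 2)) for t in thetas]
z = sum(weights)
theta_mean = sum(t * w for t, w in zip(thetas, weights)) / z
```

The point of the contrast: `theta_ls` is a single number, while the weighted grid also yields a spread, an error bar on the calibrated parameter consistent with the data's error bars.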
RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY
Salaymeh, S.; Ashley, W.; Jeffcoat, R.
2010-06-17
It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support: criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail suitable for subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations. This is because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing an understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice, and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives, and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice and furthermore motivate developers to revisit the treatment of measurement uncertainty.
Entropic uncertainty relations and entanglement
Guehne, Otfried; Lewenstein, Maciej
2004-08-01
We discuss the relationship between entropic uncertainty relations and entanglement. We present two methods for deriving separability criteria in terms of entropic uncertainty relations. In particular, we show how any entropic uncertainty relation on one part of the system results in a separability condition on the composite system. We investigate the resulting criteria using the Tsallis entropy for two and three qubits.
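For concreteness, here is a numerical check of a standard preparation entropic uncertainty relation (the Maassen-Uffink bound for mutually unbiased qubit bases), the kind of relation such separability criteria build on. The specific state below is arbitrary:

```python
import math

def shannon(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Qubit state |psi> = cos(a)|0> + sin(a)|1>, measured in the Z basis and
# in the X basis (mutually unbiased bases for a qubit).
a = 0.3
amp0, amp1 = math.cos(a), math.sin(a)
p_z = [amp0 ** 2, amp1 ** 2]
# X-basis outcome amplitudes: <+-|psi> = (amp0 +- amp1)/sqrt(2)
p_x = [((amp0 + amp1) / math.sqrt(2)) ** 2,
       ((amp0 - amp1) / math.sqrt(2)) ** 2]

# Maassen-Uffink: H(Z) + H(X) >= -2 log2(c) with c = max overlap.
# For mutually unbiased qubit bases c = 1/sqrt(2), so the bound is 1 bit.
entropy_sum = shannon(p_z) + shannon(p_x)
```

Any single qubit state obeys `entropy_sum >= 1`; the separability criteria in the paper exploit the fact that entangled composite states can violate analogous bounds on conditional entropies.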
Uncertainty quantification and multiscale mathematics. (Conference...
Office of Scientific and Technical Information (OSTI)
Uncertainty quantification and multiscale mathematics. Citation Details In-Document Search Title: Uncertainty quantification and multiscale mathematics. No abstract prepared. ...
Capturing the uncertainty in adversary attack simulations.
Darby, John L.; Brooks, Traci N.; Berry, Robert Bruce
2008-09-01
This work provides a comprehensive technique to evaluate uncertainty, resulting in a more realistic evaluation of PI, thereby requiring fewer resources to address scenarios and allowing resources to be used across more scenarios. For a given set of adversary resources, two types of uncertainty are associated with PI for a scenario: (1) aleatory (random) uncertainty for detection probabilities and time delays and (2) epistemic (state-of-knowledge) uncertainty for the adversary resources applied during an attack. Adversary resources consist of attributes (such as equipment and training) and knowledge about the security system; to date, most evaluations have assumed an adversary with very high resources, adding to the conservatism in the evaluation of PI. The aleatory uncertainty in PI is addressed by assigning probability distributions to detection probabilities and time delays. A numerical sampling technique is used to evaluate PI, addressing the repeated-variable dependence in the equation for PI.
Evaluation of machine learning algorithms for prediction of regions of high RANS uncertainty
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Ling, Julia; Templeton, Jeremy Alan
2015-08-04
Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
Rodriguez, E. A.; Pepin, J. E.; Thacker, B. H.; Riha, D. S.
2002-01-01
Los Alamos National Laboratory (LANL), in cooperation with Southwest Research Institute, has been developing capabilities to provide reliability-based structural evaluation techniques for performing weapon component and system reliability assessments. The development and applications of Probabilistic Structural Analysis Methods (PSAM) is an important ingredient in the overall weapon reliability assessments. Focus, herein, is placed on the uncertainty quantification associated with the structural response of a containment vessel for high-explosive (HE) experiments. The probabilistic dynamic response of the vessel is evaluated through the coupling of the probabilistic code NESSUS with the non-linear structural dynamics code, DYNA-3D. The probabilistic model includes variations in geometry and mechanical properties, such as Young's Modulus, yield strength, and material flow characteristics. Finally, the probability of exceeding a specified strain limit, which is related to vessel failure, is determined.
Report: Technical Uncertainty and Risk Reduction
Office of Environmental Management (EM)
TECHNICAL UNCERTAINTY AND RISK REDUCTION Background In FY 2007 EMAB was tasked to assess EM's ability to reduce risk and technical uncertainty. Board members explored this topic ...
Reducing Petroleum Dependence in California: Uncertainties About...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Petroleum Dependence in California: Uncertainties About Light-Duty Diesel Reducing Petroleum Dependence in California: Uncertainties About Light-Duty Diesel 2002 DEER Conference ...
Uncertainty Analysis Technique for OMEGA Dante Measurements ...
Office of Scientific and Technical Information (OSTI)
Uncertainty Analysis Technique for OMEGA Dante Measurements Citation Details In-Document Search Title: Uncertainty Analysis Technique for OMEGA Dante Measurements You are...
Statistics, Uncertainty, and Transmitted Variation
Wendelberger, Joanne Roth
2014-11-05
The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
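A small sketch of transmitted variation, comparing the first-order (delta-method) approximation of the transmitted standard deviation against a Monte Carlo estimate for an invented response function:

```python
import math
import random

random.seed(1)

def response(x):
    """Hypothetical response function y = f(x)."""
    return x ** 2 + 3.0 * x

mu, sd = 2.0, 0.1   # mean and standard deviation of the input variable

# Delta-method approximation: Var(y) ~= (f'(mu))^2 * Var(x).
slope = 2.0 * mu + 3.0
sd_y_approx = abs(slope) * sd

# Monte Carlo check of the variation transmitted through the response.
ys = [response(random.gauss(mu, sd)) for _ in range(200_000)]
mean_y = sum(ys) / len(ys)
sd_y_mc = math.sqrt(sum((y - mean_y) ** 2 for y in ys) / (len(ys) - 1))
```

For a nearly linear response over the input's range, the two estimates agree closely; strongly nonlinear responses are where the transmitted variation departs from the first-order picture and designed experiments help locate the sources.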
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Kreinovich, Vladik; Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
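The simplest case the report covers, bounds on the sample mean of interval-valued data, can be sketched directly. (Bounds on the variance and other statistics are harder, and in general computationally hard, which is part of the computability discussion summarized above.)

```python
def interval_mean(intervals):
    """Bounds on the sample mean when each observation is an interval.

    The mean is monotone in every observation, so its tightest bounds
    are attained at the endpoints:
    [mean of lower endpoints, mean of upper endpoints].
    """
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n
    hi = sum(b for _, b in intervals) / n
    return lo, hi

# Invented interval data, e.g. measurements with epistemic uncertainty.
data = [(1.0, 1.4), (2.1, 2.3), (0.8, 1.5), (1.9, 2.0)]
lo, hi = interval_mean(data)
```

The width of the resulting interval `[lo, hi]` directly exposes the tradeoff the report explores: tighter measurement intervals narrow the mean's bounds even at fixed sample size.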
Uncertainty quantification and multiscale mathematics. (Conference...
Office of Scientific and Technical Information (OSTI)
quantification and multiscale mathematics. Citation Details In-Document Search Title: Uncertainty quantification and multiscale mathematics. Authors: Trucano, Timothy Guy ...
Data Assimilation and Uncertainty Quantification
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Data Assimilation and Uncertainty Quantification Using VERA-CS for a Core Wide LWR Problem with Depletion L2:VMA.P12.01 Bassam A. Khuwaileh and Paul J. Turinsky North Carolina State University, Department of Nuclear Engineering, Raleigh, NC. This research was supported by the Consortium for Advanced Simulation of Light Water Reactors (www.casl.gov), an Energy Innovation Hub (http://www.energy.gov/hubs) for Modeling and Simulation of Nuclear Reactors under U.S. Department of Energy Contract No.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) play increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part, or has not differentiated verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also not efficient for performing sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification.
By knowing the relative sensitivity of time and space steps compared with other physical parameters of interest, the simulation can be run at optimized time and space steps without affecting confidence in the physical-parameter sensitivity results. The time- and space-step forward sensitivity analysis method can also replace the traditional time-step and grid convergence study at much lower computational cost. Several well-defined benchmark problems with manufactured solutions are used to demonstrate the extended forward sensitivity analysis method. All the physical solutions, the parameter sensitivity solutions, and even the time-step sensitivity in one case have analytical forms, which allows the verification to be performed in the strictest sense.
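A minimal sketch of forward sensitivity analysis on a scalar ODE. For dy/dt = -p*y, the sensitivity S = dy/dp obeys the auxiliary equation dS/dt = -y - p*S, integrated alongside the state (explicit Euler here; this is an illustration, not the authors' implementation):

```python
import math

def forward_sensitivity(p=0.5, y0=1.0, t_end=2.0, dt=1e-4):
    """Integrate dy/dt = -p*y together with its forward sensitivity
    S = dy/dp, which obeys dS/dt = -y - p*S (explicit Euler)."""
    y, s = y0, 0.0   # sensitivity starts at zero: y0 does not depend on p
    for _ in range(round(t_end / dt)):
        y, s = y + dt * (-p * y), s + dt * (-y - p * s)
    return y, s

y_num, s_num = forward_sensitivity()

# Exact solution for comparison: y(t) = exp(-p t), S(t) = -t exp(-p t),
# so at t = 2 with p = 0.5:
y_exact, s_exact = math.exp(-1.0), -2.0 * math.exp(-1.0)
```

The same pattern extends to treating dt itself as a sensitivity parameter, which is the paper's route to folding discretization error into the same framework as physical-parameter sensitivities.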
Microsoft Word - Price Uncertainty Supplement.doc
Annual Energy Outlook [U.S. Energy Information Administration (EIA)]
uncertainty in Europe and China, and robust oil demand in global energy ... to reports of monetary tightening in China, which could dampen energy demand in that ...
Uncertainty quantification for discrimination of nuclear events...
Office of Scientific and Technical Information (OSTI)
comprehensive nuclear-test-ban treaty Title: Uncertainty quantification for discrimination of nuclear events as violations of the comprehensive nuclear-test-ban treaty Authors: ...
Uncertainty Quantification for Nuclear Density Functional Theory...
Office of Scientific and Technical Information (OSTI)
Uncertainty Quantification for Nuclear Density Functional Theory and Information Content of New Measurements Citation Details In-Document Search This content will become publicly...
ARM - PI Product - Direct Aerosol Forcing Uncertainty
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
although comparable to uncertainty arising from some individual properties. Data Details Contact Allison Mccomiskey CIRES NOAA allison.mccomiskey@noaa.gov 303-497-6189 NOAA ...
Direct Aerosol Forcing Uncertainty (Dataset) | Data Explorer
Office of Scientific and Technical Information (OSTI)
comparable to uncertainty arising from some individual properties. less Authors: Mccomiskey, Allison Publication Date: 2008-01-15 OSTI Identifier: 1169526 DOE Contract ...
Uncertainty Quantification and Propagation in Nuclear Density...
Office of Scientific and Technical Information (OSTI)
and Propagation in Nuclear Density Functional Theory Citation Details In-Document Search Title: Uncertainty Quantification and Propagation in Nuclear Density Functional ...
From Deterministic Inversion to Uncertainty Quantification: Planning...
Office of Scientific and Technical Information (OSTI)
Planning a Long Journey in Ice Sheet Modeling. Citation Details In-Document Search Title: From Deterministic Inversion to Uncertainty Quantification: Planning a Long Journey in ...
Habte, Aron
2015-06-25
This presentation summarizes uncertainty estimation of radiometric data using the Guide to the Expression of Uncertainty in Measurement (GUM) method.
Sensitivity and Uncertainty Analysis Shell
Energy Science and Technology Software Center (OSTI)
1999-04-20
SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some processes for which input is uncertain and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using the Monte Carlo analysis. The implementation then requires that the user identify which input to the process model is to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample and the user-supplied process model analyses the sample. The SUNS post processor displays statistical results from any existing file that contains sampled input and output values.
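The loose-coupling workflow described above (sample generation, a user-supplied process model, then post-processing) can be sketched as follows. The process model and input distributions here are invented stand-ins:

```python
import random
import statistics

random.seed(42)

def process_model(inputs):
    """Stand-in for the user-supplied process model (a black box to the shell)."""
    k, q = inputs["k"], inputs["q"]
    return k * q ** 2

# Step 1: the shell generates the statistical sample for the uncertain inputs
# (here plain Monte Carlo; invented distributions for two inputs).
sample = [{"k": random.gauss(2.0, 0.2), "q": random.uniform(0.9, 1.1)}
          for _ in range(10_000)]

# Step 2: the process model analyses each realization of the sample.
outputs = [process_model(s) for s in sample]

# Step 3: the shell's post-processor summarizes the sampled outputs.
out_mean = statistics.fmean(outputs)
out_sd = statistics.stdev(outputs)
```

The key design point is the loose coupling: the shell only needs to know which inputs to perturb and where to read outputs, never the internals of the process model.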
Estimating the uncertainty in underresolved nonlinear dynamics
Chorin, Alexandre; Hald, Ole
2013-06-12
The Mori-Zwanzig formalism of statistical mechanics is used to estimate the uncertainty caused by underresolution in the solution of a nonlinear dynamical system. A general approach is outlined and applied to a simple example. The noise term that describes the uncertainty turns out to be neither Markovian nor Gaussian. It is argued that this is the general situation.
Uncertainty Analysis for Photovoltaic Degradation Rates (Poster)
Jordan, D.; Kurtz, S.; Hansen, C.
2014-04-01
Dependable and predictable energy production is the key to the long-term success of the PV industry. Over the lifetime of their exposure, PV systems show a gradual decline that depends on many different factors such as module technology, module type, mounting configuration, climate, etc. When degradation rates are determined from continuous data, the statistical uncertainty is easily calculated from the regression coefficients. However, total uncertainty, which includes measurement uncertainty and instrumentation drift, is far more difficult to determine. A Monte Carlo simulation approach was chosen to investigate a comprehensive uncertainty analysis. The most important consideration for degradation rates is to avoid instrumentation that changes over time in the field: for instance, a drifting irradiance sensor can lead to substantially erroneous degradation rates, which can be avoided through regular calibration. However, the accuracy of the irradiance sensor has negligible impact on degradation-rate uncertainty, emphasizing that precision (relative accuracy) is more important than absolute accuracy.
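A rough sketch of the Monte Carlo idea: propagate an uncertain sensor drift (an invented prior, not values from the study) through an ordinary least-squares degradation-rate fit and look at the spread of fitted rates:

```python
import random

random.seed(0)

def fit_slope(ts, ys):
    """Ordinary least-squares slope of performance vs. time (%/year)."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

years = [i / 12 for i in range(60)]   # 5 years of monthly data
true_rate = -0.8                       # assumed true degradation, %/year

# Monte Carlo over an uncertain irradiance-sensor drift (%/year, made up).
rates = []
for _ in range(2000):
    drift = random.gauss(0.0, 0.3)                       # drift realization
    ys = [100.0 + (true_rate + drift) * t + random.gauss(0.0, 0.2)
          for t in years]                                # synthetic record
    rates.append(fit_slope(years, ys))

mean_rate = sum(rates) / len(rates)
spread = (sum((r - mean_rate) ** 2 for r in rates) / len(rates)) ** 0.5
```

In this sketch the drift term dominates `spread`, echoing the abstract's point: a slowly drifting sensor biases the fitted rate in a way the regression statistics alone cannot reveal.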
Quantifying uncertainty in stable isotope mixing models
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing-fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
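A stripped-down version of the underlying probabilistic mixing calculation: two sources, one tracer, Monte Carlo over uncertain source compositions. All values are invented, and SIRS itself handles more sources and tracers than this sketch:

```python
import random

random.seed(7)

# Hypothetical two-source nitrate mixing on a single tracer (e.g. d15N).
d_sample = 6.0
src_a = (2.0, 0.5)    # assumed mean, sd of source A composition
src_b = (12.0, 1.0)   # assumed mean, sd of source B composition

fractions = []
for _ in range(50_000):
    a = random.gauss(*src_a)
    b = random.gauss(*src_b)
    f = (d_sample - b) / (a - b)   # mass-balance fraction of source A
    if 0.0 <= f <= 1.0:            # keep only physically feasible mixtures
        fractions.append(f)

f_mean = sum(fractions) / len(fractions)
```

The spread of `fractions` is the mixing-fraction uncertainty induced purely by uncertain source compositions; overlapping sources widen it dramatically, which is the core difficulty the three compared methods address.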
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods, SIAR [Parnell et al., 2010] a pure Monte Carlo technique (PMC), and Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (?^{15}N and ?^{18}O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened where any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors where solutions included sources that were not contributing to the sample. In Phase III some sources were eliminated based on assumed site knowledge and assumed nitrate concentrations, substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. 
The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
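The pure Monte Carlo (PMC) approach described above can be sketched in a few lines: draw source isotopic signatures from their uncertainty distributions, solve the mass-balance system for the mixing fractions, and keep only physically feasible (non-negative) solutions. The three-source signatures, uncertainties, and sample values below are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source signatures (d15N, d18O) with 1-sigma uncertainties.
mu = np.array([[ 5.0, -2.0],   # e.g. a soil-derived source
               [15.0,  0.0],   # e.g. a manure/septic source
               [ 2.0, 60.0]])  # e.g. atmospheric deposition
sd = np.array([[1.0, 1.5],
               [2.0, 1.0],
               [1.0, 5.0]])
sample = np.array([8.0, 12.0])  # measured mixture signature (invented)

keep = []
for _ in range(20000):
    src = rng.normal(mu, sd)            # perturb source compositions
    # Solve f @ src = sample together with sum(f) = 1 (3 unknowns, 3 equations).
    A = np.vstack([src.T, np.ones(3)])
    b = np.append(sample, 1.0)
    f = np.linalg.solve(A, b)
    if np.all(f >= 0):                  # keep only physical mixtures
        keep.append(f)

keep = np.array(keep)
print("mean fractions:", keep.mean(axis=0).round(3))
print("95% intervals:", np.percentile(keep, [2.5, 97.5], axis=0).round(3))
```

The spread of the retained fractions directly expresses how source-composition uncertainty inflates mixing-fraction uncertainty, which is the effect the study quantifies.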
Numerical uncertainty in computational engineering and physics
Hemez, Francois M
2009-01-01
Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty, including experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are reviewed to explain the relationship between the exact solution of the continuous equations, the solution of the modified equations and the discrete solutions computed by a code. The current state of the practice of code and solution verification activities is discussed. An example from hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds on solution uncertainty in cases where the exact solution of the continuous equations, or of its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
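The mesh-refinement reasoning above is commonly quantified with Richardson extrapolation: from solutions on three systematically refined meshes one can estimate the observed order of convergence and a discretization-error bound. A minimal sketch, with invented solution values on meshes of spacing h, h/2, and h/4:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order p from solutions on meshes refined by constant ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_error(f_medium, f_fine, r, p):
    """Estimated discretization error remaining in the fine-mesh solution."""
    return (f_fine - f_medium) / (r**p - 1.0)

# Hypothetical values of a quantity of interest on h, h/2, h/4 meshes.
f = [1.120, 1.030, 1.0075]
p = observed_order(f[0], f[1], f[2], r=2.0)
err = richardson_error(f[1], f[2], r=2.0, p=p)
print(f"observed order ~ {p:.2f}, fine-mesh error estimate ~ {err:.4f}")
```

Here the successive differences shrink by a factor of four under halving, so the observed order is 2; the signed error estimate is the contribution such a study would add to the uncertainty 'budget'.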
The Role of Uncertainty Quantification for Reactor Physics
Salvatores, Massimo; Palmiotti, Giuseppe; Aliberti, G.
2015-01-01
The quantification of uncertainties is a crucial step in design. Comparing a-priori uncertainties with target accuracies makes it possible to define needs and priorities for uncertainty reduction. In view of their impact, the uncertainty analysis requires a reliability assessment of the uncertainty data used. The choice of the appropriate approach and the consistency of different approaches are discussed.
An uncertainty principle for unimodular quantum groups
Crann, Jason; Kalantar, Mehrdad
2014-08-15
We present a generalization of Hirschman's entropic uncertainty principle for locally compact Abelian groups to unimodular locally compact quantum groups. As a corollary, we strengthen a well-known uncertainty principle for compact groups, and generalize the relation to compact quantum groups of Kac type. We also establish the complementarity of finite-dimensional quantum group algebras. In the non-unimodular setting, we obtain an uncertainty relation for arbitrary locally compact groups using the relative entropy with respect to the Haar weight as the measure of uncertainty. We also show that when restricted to q-traces of discrete quantum groups, the relative entropy with respect to the Haar weight reduces to the canonical entropy of the random walk generated by the state.
Efficient uncertainty propagation for network multiphysics systems.
Office of Scientific and Technical Information (OSTI)
Phipps, Eric T.; Wildey, Timothy Michael; Constantine, Paul G.
2013-01-01
Journal article; report number SAND2013-0423J; OSTI identifier 1063360; DOE contract AC04-94AL85000.
Cost Analysis: Technology, Competitiveness, Market Uncertainty
Office of Energy Efficiency and Renewable Energy (EERE)
As a basis for strategic planning, competitiveness analysis, and funding metrics and targets, SunShot supports analysis teams at national laboratories to assess technology costs, location-specific competitive advantages, and policy impacts on system financing, and to perform detailed levelized cost of energy (LCOE) analyses.
ENHANCED UNCERTAINTY ANALYSIS FOR SRS COMPOSITE ANALYSIS
Smith, F.; Phifer, M.
2011-06-30
The Composite Analysis (CA) performed for the Savannah River Site (SRS) in 2009 (SRS CA 2009) included a simplified uncertainty analysis. The uncertainty analysis in the CA (Smith et al. 2009b) was limited to considering at most five sources in a separate uncertainty calculation performed for each point of assessment (POA). To perform the uncertainty calculations in a reasonable amount of time, the analysis was limited to 400 realizations and 2,000 years of simulated transport time, and the time steps used for the uncertainty analysis were increased from those used in the CA base case analysis. As part of the CA maintenance plan, the Savannah River National Laboratory (SRNL) committed to improving the CA uncertainty/sensitivity analysis. The previous uncertainty analysis was constrained by the standard GoldSim licensing, which limits the user to running at most four Monte Carlo uncertainty calculations (also called realizations) simultaneously. Some of the limitations on the number of realizations that could be practically run and on the simulation time steps were removed by building a cluster of three HP ProLiant Windows servers with a total of 36 64-bit processors and by licensing the GoldSim DP-Plus distributed processing software. This allowed running as many as 35 realizations simultaneously (one processor is reserved as a master process that controls the realizations). These enhancements to SRNL computing capabilities made it feasible to run the uncertainty analysis with 1,000 realizations, the time steps employed in the base case CA calculations, more sources, and 10,000 years of simulated radionuclide transport. In addition, an importance screening analysis was performed to identify the class of stochastic variables that have the most significant impact on model uncertainty.
This analysis ran the uncertainty model separately, testing the response to variations in the following five sets of model parameters: (a) K_d values (72 parameters for the 36 CA elements in sand and clay); (b) dose parameters (34 parameters); (c) material properties (20 parameters); (d) surface water flows (6 parameters); and (e) vadose and aquifer flow (4 parameters). Results provided an assessment of which group of parameters is most significant for the dose uncertainty. It was found that the K_d and vadose/aquifer flow parameters, both of which affect transport timing, had the greatest impact on dose uncertainty. Dose parameters had an intermediate level of impact, while material properties and surface water flows had little impact on dose uncertainty. Results of the importance analysis are discussed further in Section 7 of this report. The objectives of this work were to address comments received during the CA review on the uncertainty analysis and to demonstrate an improved methodology for CA uncertainty calculations as part of CA maintenance. This report partially addresses the LFRG Review Team issue of producing an enhanced CA sensitivity and uncertainty analysis. This is described in Table 1-1, which provides specific responses to pertinent CA maintenance items extracted from Section 11 of the SRS CA (2009). As noted above, the original uncertainty analysis looked at each POA separately and only included the effects of at most the five sources giving the highest peak doses at each POA. Only 17 of the 152 CA sources were used in the original uncertainty analysis, and the simulation time was reduced from 10,000 to 2,000 years. A major constraint on the original uncertainty analysis was the limitation of being able to use at most four distributed processes. This work expanded the analysis to 10,000 years, used 39 of the CA sources, included cumulative dose effects at downstream POAs, and employed more realizations (1,000) and finer time steps.
This was accomplished by using the GoldSim DP-Plus module and the 36 processors available on a new Windows cluster. The last part of the work examined the contribution to overall uncertainty from the main categories of uncertainty variables: K_d values, dose parameters, flow parameters, and material properties.
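The grouped screening described above, varying one parameter set at a time while holding the others at nominal values, can be sketched as follows. The toy dose model, parameter groups, and uncertainty magnitudes are invented stand-ins, not the GoldSim CA model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dose model standing in for the transport calculation (hypothetical).
def dose(kd, flow, dosef, matl):
    return 100.0 * np.exp(-kd) * flow * dosef * (1.0 + 0.1 * matl)

groups = {  # group name: (nominal value, assumed relative 1-sigma)
    "Kd":       (1.0, 0.50),
    "flow":     (1.0, 0.30),
    "dose_par": (1.0, 0.15),
    "material": (1.0, 0.05),
}
nominal = {k: v[0] for k, v in groups.items()}

# Vary one group at a time; the others stay at their nominal values.
cv = {}
for name, (mu, rel) in groups.items():
    out = []
    for _ in range(2000):
        vals = dict(nominal)
        vals[name] = rng.normal(mu, rel * mu)
        out.append(dose(vals["Kd"], vals["flow"],
                        vals["dose_par"], vals["material"]))
    cv[name] = np.std(out) / np.mean(out)
    print(f"{name:9s} -> dose coefficient of variation = {cv[name]:.3f}")
```

Ranking the resulting coefficients of variation identifies which parameter group dominates the dose uncertainty, which is the purpose of the importance screening.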
Addressing Uncertainties in Design Inputs: A Case Study of Probabilistic Settlement Evaluations for Soft Zone Collapse at SWPF
Office of Environmental Management (EM)
Risk Analysis and Decision-Making Under Uncertainty: A Strategy and its Applications
Office of Environmental Management (EM)
Ming Ye
Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results
Clark, E.L.
1994-07-01
Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
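As a minimal sketch of the Taylor-series propagation the abstract describes, applied to the pressure coefficient Cp = (p - p∞) / (0.5 γ p∞ M∞²). The measured values and uncertainties below are illustrative assumptions, not values from the report; the partial derivatives are the sensitivity coefficients:

```python
import math

gamma = 1.4
p, u_p       = 90.0e3, 0.3e3    # local static pressure [Pa] (assumed)
pinf, u_pinf = 80.0e3, 0.2e3    # free-stream static pressure [Pa] (assumed)
M, u_M       = 2.0, 0.01        # free-stream Mach number (assumed)

q  = 0.5 * gamma * pinf * M**2          # free-stream dynamic pressure
Cp = (p - pinf) / q

# Sensitivity coefficients: analytic partial derivatives of Cp.
dCp_dp    = 1.0 / q
dCp_dpinf = -1.0 / q - Cp / pinf        # quotient rule, both appearances of pinf
dCp_dM    = -2.0 * Cp / M               # q scales with M**2

# First-order (root-sum-square) propagation of the measurement uncertainties.
u_Cp = math.sqrt((dCp_dp * u_p)**2 +
                 (dCp_dpinf * u_pinf)**2 +
                 (dCp_dM * u_M)**2)
print(f"Cp = {Cp:.4f} +/- {u_Cp:.4f}")
```

Comparing the three squared terms also shows which measurement (here the local pressure) limits the accuracy of Cp, mirroring the report's use of sensitivity coefficients.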
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; Mitchell, Stephen E.; Webb, Timothy J.
2015-06-03
One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or ‘spot’. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot’s spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic data-sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
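The Markov chain Monte Carlo step can be illustrated with a deliberately simplified one-parameter version: a Metropolis random walk under a normal likelihood with a non-negativity constraint, with the posterior mean and standard deviation computed from the samples. This is a toy stand-in, not the paper's smoothness-constrained image reconstruction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical problem: infer a non-negative spot width w from noisy
# measurements, normal likelihood, flat prior on w >= 0.
data = rng.normal(2.5, 0.4, size=50)     # synthetic width measurements
sigma = 0.4                              # assumed known noise level

def log_post(w):
    if w < 0:                            # non-negativity constraint
        return -np.inf
    return -0.5 * np.sum((data - w)**2) / sigma**2

w, samples = 1.0, []
lp = log_post(w)
for i in range(20000):
    w_new = w + rng.normal(0, 0.1)       # random-walk proposal
    lp_new = log_post(w_new)
    if np.log(rng.uniform()) < lp_new - lp:   # Metropolis accept/reject
        w, lp = w_new, lp_new
    if i >= 5000:                        # discard burn-in
        samples.append(w)

samples = np.array(samples)
print(f"posterior mean {samples.mean():.3f}, std {samples.std():.3f}")
```

The sample mean plays the role of the reconstruction and the sample spread the role of the uncertainty estimate, exactly as in the paper, just in one dimension instead of an image.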
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Hansen, Clifford W.; Martin, Curtis E.
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
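Propagating uncertainty by sampling empirical residual distributions through a model chain can be sketched as below. The toy PV model chain, residual magnitudes, and coefficients are invented for illustration and are not the models or data used in the report:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for empirical residual distributions of each model in the chain.
resid_poa  = rng.normal(0, 0.02, 500)    # plane-of-array model residuals
resid_temp = rng.normal(0, 0.01, 500)    # cell-temperature model residuals
resid_inv  = rng.normal(0, 0.005, 500)   # inverter model residuals

def pv_chain(ghi):
    """One draw through the model chain, resampling each residual set."""
    poa = 1.1 * ghi * (1 + rng.choice(resid_poa))        # POA irradiance
    tcell = 25 + 0.03 * poa * (1 + rng.choice(resid_temp))
    pdc = 0.2 * poa * (1 - 0.004 * (tcell - 25))         # temperature derate
    pac = 0.96 * pdc * (1 + rng.choice(resid_inv))       # DC-to-AC conversion
    return pac

energy = np.array([pv_chain(800.0) for _ in range(5000)])
rel_unc = energy.std() / energy.mean()
print(f"relative uncertainty in AC power ~ {100 * rel_unc:.2f}%")
```

Because the largest residuals were placed on the irradiance model, it dominates the output spread, mirroring the report's finding that POA and effective irradiance models dominate daily-energy uncertainty.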
Model development and data uncertainty integration
Swinhoe, Martyn Thomas
2015-12-02
The effect of data uncertainties is discussed, with the epithermal neutron multiplicity counter as an illustrative example. Simulation using MCNP6, cross-section perturbations and correlations are addressed, along with the effect of the ^{240}Pu spontaneous fission neutron spectrum, the effect of P(ν) for ^{240}Pu spontaneous fission, and the effect of spontaneous fission and (α,n) intensity. The effect of nuclear data is the product of the initial uncertainty and the sensitivity; both need to be estimated. In conclusion, a multi-parameter variation method has been demonstrated; the most significant parameters are the basic emission rates of the spontaneous fission and (α,n) processes, and both the uncertainties and the important data depend on the analysis technique chosen.
Improvements to Nuclear Data and Its Uncertainties by Theoretical Modeling
Office of Scientific and Technical Information (OSTI)
This project addresses three important gaps in existing evaluated nuclear data libraries that represent a significant hindrance against highly advanced modeling and simulation capabilities for the Advanced Fuel Cycle Initiative (AFCI).
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update (EIA)
Short-Term Energy Outlook: Energy Price Volatility and Forecast Uncertainty, January 12, 2010 release. Crude Oil Prices. West Texas Intermediate (WTI) crude oil spot prices averaged $74.50 per barrel in December 2009, about $3.50 per barrel lower than the prior month's average. The WTI spot price fell from $78 to $70 during the first 2 weeks of December, but colder-than-normal weather and U.S. crude oil and product inventory draws …
Coping with uncertainties of mercury regulation
Reich, K.
2006-09-15
The thermometer is rising as coal-fired plants cope with the uncertainties of mercury regulation. The paper deals with a diagnosis and a suggested cure. It describes the state of mercury emission rules in the different US states, many of which had laws or rules in place before the Clean Air Mercury Rule (CAMR) was promulgated.
Uncertainty in Simulating Wheat Yields Under Climate Change
Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J.W.; Hatfield, Jerry; Ruane, Alex; Boote, K. J.; Thorburn, Peter; Rotter, R.P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Aggarwal, P.K.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, AJ; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, Robert; Heng, L.; Hooker, J.; Hunt, L.A.; Ingwersen, J.; Izaurralde, Roberto C.; Kersebaum, K.C.; Mueller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.O.; Olesen, JE; Osborne, T.; Palosuo, T.; Priesack, E.; Ripoche, D.; Semenov, M.A.; Shcherbak, I.; Steduto, P.; Stockle, Claudio O.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Travasso, M.; Waha, K.; Wallach, D.; White, J.W.; Williams, J.R.; Wolf, J.
2013-09-01
Anticipating the impacts of climate change on crop yields is critical for assessing future food security. Process-based crop simulation models are the most commonly used tools in such assessments [1,2]. Analysis of uncertainties in future greenhouse gas emissions and their impacts on future climate change has been increasingly described in the literature [3,4], while assessments of the uncertainty in crop responses to climate change are very rare. Systematic and objective comparisons across impact studies are difficult, and thus have not been fully realized [5]. Here we present the largest coordinated and standardized crop model intercomparison for climate change impacts on wheat production to date. We found that several individual crop models are able to reproduce measured grain yields under current diverse environments, particularly if sufficient details are provided to execute them. However, simulated climate change impacts can vary across models due to differences in model structures and algorithms. The crop-model component of uncertainty in climate change impact assessments was considerably larger than the climate-model component from Global Climate Models (GCMs). Model responses to high temperatures and temperature-by-CO2 interactions are identified as major sources of simulated impact uncertainties. Significant reductions in impact uncertainties through model improvements in these areas, and improved quantification of uncertainty through multi-model ensembles, are urgently needed for a more reliable translation of climate change scenarios into agricultural impacts, in order to develop adaptation strategies and aid policymaking.
Uncertainty in BWR power during ATWS events
Diamond, D.J.
1986-01-01
A study was undertaken to improve our understanding of BWR conditions following the closure of main steam isolation valves and the failure of reactor trip. Of particular interest was the power during the period when the core had reached a quasi-equilibrium condition with a natural circulation flow rate determined by the water level in the downcomer. The uncertainty in the calculation of this power with sophisticated computer codes was quantified using a simple model which relates power to the principal thermal-hydraulic variables and reactivity coefficients, the latter representing the link between the thermal-hydraulics and the neutronics. Assumptions regarding the uncertainty in these variables and coefficients were then used to determine the uncertainty in power.
On solar geoengineering and climate uncertainty
MacMartin, Douglas; Kravitz, Benjamin S.; Rasch, Philip J.
2015-09-03
Uncertainty in the climate system response has been raised as a concern regarding solar geoengineering. Here we show that model projections of regional climate change outcomes may have greater agreement under solar geoengineering than with CO2 alone. We explore the effects of geoengineering on one source of climate system uncertainty by evaluating the inter-model spread across 12 climate models participating in the Geoengineering Model Intercomparison project (GeoMIP). The model spread in regional temperature and precipitation changes is reduced with CO2 and a solar reduction, in comparison to the case with increased CO2 alone. That is, the intermodel spread in predictions of climate change and the model spread in the response to solar geoengineering are not additive but rather partially cancel. Furthermore, differences in efficacy explain most of the differences between models in their temperature response to an increase in CO2 that is offset by a solar reduction. These conclusions are important for clarifying geoengineering risks.
Interpolation Uncertainties Across the ARM SGP Area
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
J. E. Christy, C. N. Long, and T. R. Shippert, Pacific Northwest National Laboratory, Richland, Washington. The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program operates a network of surface radiation measurement sites across north central Oklahoma and south central Kansas. This Southern Great Plains (SGP) network consists of 21 sites unevenly spaced from 95.5 to 99.5 …
Uncertainty estimates for derivatives and intercepts
Clark, E.L.
1994-09-01
Straight line least squares fits of experimental data are widely used in the analysis of test results to provide derivatives and intercepts. A method for evaluating the uncertainty in these parameters is described. The method utilizes conventional least squares results and is applicable to experiments where the independent variable is controlled, but not necessarily free of error. A Monte Carlo verification of the method is given.
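A hedged sketch of the approach described above: conventional least-squares standard errors for the slope and intercept, checked by a Monte Carlo repetition of the experiment. The true line, noise level, and abscissa values are invented for the synthetic data:

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_with_uncertainty(x, y):
    """Slope/intercept and their standard errors from ordinary least squares."""
    (m, b), cov = np.polyfit(x, y, 1, cov=True)
    return m, b, np.sqrt(cov[0, 0]), np.sqrt(cov[1, 1])

# Synthetic experiment: y = 2x + 1 with noise sigma = 0.5 (assumed).
x = np.linspace(0, 10, 20)

# Monte Carlo verification: repeat the experiment and fit many times.
m_mc, b_mc = [], []
for _ in range(2000):
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
    m, b, _, _ = fit_with_uncertainty(x, y)
    m_mc.append(m)
    b_mc.append(b)

# One realization's analytic standard errors, for comparison with the MC spread.
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
m, b, sm, sb = fit_with_uncertainty(x, y)
print(f"analytic: s_m = {sm:.4f}, s_b = {sb:.4f}")
print(f"MC check: s_m = {np.std(m_mc):.4f}, s_b = {np.std(b_mc):.4f}")
```

The scatter of the fitted slopes and intercepts over repeated synthetic experiments should agree with the analytic standard errors, which is the kind of Monte Carlo verification the abstract mentions.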
Representation of analysis results involving aleatory and epistemic uncertainty.
Johnson, Jay Dean; Helton, Jon Craig; Oberkampf, William Louis; Sallaberry, Cedric J.
2008-08-01
Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behavior of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary cumulative distribution functions (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (i.e., interval analysis, possibility theory, evidence theory, probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterizations of epistemic uncertainty.
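When epistemic uncertainty is itself represented with probability, the family of CCDFs described above is commonly produced with a double-loop (nested) sampling scheme: an outer loop over epistemic parameters and an inner loop over aleatory variability. The distributions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Outer loop: epistemic parameters (fixed but poorly known values).
# Inner loop: aleatory variability (inherent randomness of the system).
n_epi, n_ale = 20, 1000
grid = np.linspace(0, 10, 200)

ccdfs = []
for _ in range(n_epi):
    mu = rng.uniform(2.0, 4.0)          # poorly known (epistemic) mean
    y = rng.normal(mu, 1.0, n_ale)      # inherent (aleatory) randomness
    ccdfs.append([(y > g).mean() for g in grid])

ccdfs = np.array(ccdfs)                 # one CCDF per epistemic draw
lo, hi = ccdfs.min(axis=0), ccdfs.max(axis=0)
width = (hi - lo)[np.searchsorted(grid, 3.0)]
print(f"CCDF envelope width at y = 3: {width:.2f}")
```

Each row of `ccdfs` is one member of the family of CCDFs; the envelope between `lo` and `hi` is a simple graphical summary of the epistemic spread, in the spirit of the formats the report investigates.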
Uncertainty Analysis and Bayesian Calibration | Argonne National Laboratory
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Uncertainty Analysis and Bayesian Calibration Uncertainty Analysis and Bayesian Calibration Argonne researchers are working to understand the uncertainties involved in building energy modeling and the energy saving predictions and costs that are typically used when making decisions about a building during design, construction, and operation, or when considering a retrofit. This work is identifying and quantifying the uncertainty in different parameters in energy models as well as in the energy
Brown, J.; Goossens, L.H.J.; Kraan, B.C.P.
1997-06-01
This volume is the first of a two-volume document that summarizes a joint project conducted by the US Nuclear Regulatory Commission and the European Commission to assess uncertainties in the MACCS and COSYMA probabilistic accident consequence codes. These codes were developed primarily for estimating the risks presented by nuclear reactors based on postulated frequencies and magnitudes of potential accidents. This document reports on an ongoing project to assess uncertainty in the MACCS and COSYMA calculations for the offsite consequences of radionuclide releases by hypothetical nuclear power plant accidents. A panel of sixteen experts was formed to compile credible and traceable uncertainty distributions for food chain variables that affect calculations of offsite consequences. The expert judgment elicitation procedure and its outcomes are described in these volumes. Other panels were formed to consider uncertainty in other aspects of the codes. Their results are described in companion reports. Volume 1 contains background information and a complete description of the joint consequence uncertainty study. Volume 2 contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures for both panels, (3) the rationales and results for the panels on soil and plant transfer and animal transfer, (4) short biographies of the experts, and (5) the aggregated results of their responses.
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Entropic uncertainty relations in multidimensional position and momentum spaces
Huang Yichen
2011-05-15
Commutator-based entropic uncertainty relations in multidimensional position and momentum spaces are derived, twofold generalizing previous entropic uncertainty relations for one-mode states. They provide optimal lower bounds and imply the multidimensional variance-based uncertainty principle. The article concludes with an open conjecture.
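For position and momentum in n dimensions (with ħ = 1 and the differential-entropy convention below), the one-mode relation being generalized is the entropic uncertainty relation of Białynicki-Birula and Mycielski, which implies the variance-based principle via the maximum-entropy property of Gaussians:

```latex
% Entropic uncertainty relation for position/momentum in n dimensions
% (Bialynicki-Birula--Mycielski form, \hbar = 1):
\[
  H\!\left(|\psi|^2\right) + H\!\left(|\hat{\psi}|^2\right) \;\ge\; n\,(1 + \ln \pi),
\qquad
  H(\rho) = -\int \rho(\mathbf{x}) \ln \rho(\mathbf{x})\, d^n x .
\]
```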
Tutorial examples for uncertainty quantification methods.
De Bord, Sarah
2015-08-01
This report details the work accomplished during my 2015 SULI summer internship at Sandia National Laboratories in Livermore, CA. During this internship, I worked on multiple tasks with the common goal of making uncertainty quantification (UQ) methods more accessible to the general scientific community. As part of my work, I created a comprehensive numerical integration example to incorporate into the user manual of a UQ software package. Further, I developed examples involving heat transfer through a window to incorporate into tutorial lectures that serve as an introduction to UQ methods.
Microsoft Word - Price Uncertainty Supplement .docx
Gasoline and Diesel Fuel Update (EIA)
Short-Term Energy Outlook: Energy Price Volatility and Forecast Uncertainty, January 11, 2011 release. Crude Oil Prices. West Texas Intermediate (WTI) crude oil spot prices averaged over $89 per barrel in December, about $5 per barrel higher than the November average. Expectations of higher oil demand, combined with unusually cold weather in both Europe and the U.S. Northeast, contributed to prices. EIA has raised the first quarter 2011 WTI spot price forecast by $8 per barrel …
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update (EIA)
Short-Term Energy Outlook: Energy Price Volatility and Forecast Uncertainty, December 7, 2010 release. Crude Oil Prices. West Texas Intermediate (WTI) crude oil spot prices averaged over $84 per barrel in November, more than $2 per barrel higher than the October average. EIA has raised the average winter 2010-2011 period WTI spot price forecast by $1 per barrel from last month's Outlook to $84 per barrel. WTI spot prices rise to $89 per barrel by the end of next year, $2 per …
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update (EIA)
Short-Term Energy Outlook: Energy Price Volatility and Forecast Uncertainty, October 13, 2010 release. Crude Oil Prices. WTI oil prices averaged $75 per barrel in September but rose above $80 at the end of the month and into early October. EIA has raised the average fourth-quarter 2010 forecasted WTI spot price to $79 per barrel, compared with $77 per barrel in last month's Outlook. WTI spot prices are projected to rise to $85 per barrel by the fourth quarter of next year. …
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update (EIA)
Short-Term Energy Outlook: Energy Price Volatility and Forecast Uncertainty, September 8, 2010 release. Crude Oil Prices. West Texas Intermediate (WTI) crude oil spot prices averaged about $77 per barrel in August 2010, very close to the July average, but $3 per barrel lower than projected in last month's Outlook. WTI spot prices averaged almost $82 per barrel over the first 10 days of August but then fell by $9 per barrel over the next 2 weeks as the market reacted to a series …
Methodology for characterizing modeling and discretization uncertainties in computational simulation
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
Uncertainty Budget Analysis for Dimensional Inspection Processes (U)
Valdez, Lucas M.
2012-07-26
This paper is intended to provide guidance and describe how to prepare an uncertainty analysis of a dimensional inspection process through the utilization of an uncertainty budget analysis. The uncertainty analysis is stated in the same methodology as that of the ISO GUM standard for calibration and testing. There is a specific distinction between how Type A and Type B uncertainty analysis is used in a general and specific process. All theory and applications are utilized to represent both a generalized approach to estimating measurement uncertainty and how to report and present these estimations for dimensional measurements in a dimensional inspection process. The analysis of this uncertainty budget shows that a well-controlled dimensional inspection process produces a conservative process uncertainty, which can be attributed to the necessary assumptions in place for best possible results.
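The Type A / Type B distinction and the budget roll-up described above follow the GUM pattern, which can be sketched as follows. This is a minimal illustration with made-up dimensional readings and a hypothetical instrument resolution, not the paper's actual budget.

```python
import math

# Hypothetical dimensional-inspection budget (all values illustrative only).
# Type A: standard deviation of the mean from n repeat measurements.
repeats = [10.002, 10.001, 10.003, 10.000, 10.002]  # mm
n = len(repeats)
mean = sum(repeats) / n
s = math.sqrt(sum((x - mean) ** 2 for x in repeats) / (n - 1))
u_typeA = s / math.sqrt(n)

# Type B: e.g., instrument resolution of 0.001 mm, rectangular distribution.
u_typeB = 0.001 / math.sqrt(3)

# Combined standard uncertainty: root-sum-square of uncorrelated components.
u_c = math.sqrt(u_typeA**2 + u_typeB**2)
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2 (~95 %)
print(f"u_A={u_typeA:.6f} mm, u_B={u_typeB:.6f} mm, U={U:.6f} mm")
```

A real budget would enumerate every influence quantity (temperature, operator, fixturing) as its own Type B row before the quadrature sum.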
Efendiev, Yalchin; Datta-Gupta, Akhil; Jafarpour, Behnam; Mallick, Bani; Vassilevski, Panayot
2015-11-09
In this proposal, we have worked on Bayesian uncertainty quantification for predictions of flows in highly heterogeneous media. The research in this proposal is broad and includes: prior modeling for heterogeneous permeability fields; effective parametrization of heterogeneous spatial priors; efficient ensemble-level solution techniques; efficient multiscale approximation techniques; study of the regularity of complex posterior distributions and the error estimates due to parameter reduction; efficient sampling techniques; applications to multi-phase flow and transport. We list our publications below and describe some of our main research activities. Our multi-disciplinary team includes experts from the areas of multiscale modeling, multilevel solvers, Bayesian statistics, spatial permeability modeling, and the application domain.
Survey and Evaluate Uncertainty Quantification Methodologies
Lin, Guang; Engel, David W.; Eslinger, Paul W.
2012-02-01
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). 
The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon capture processes. As such, we will develop, as needed and beyond existing capabilities, a suite of robust and efficient computational tools for UQ to be integrated into a CCSI UQ software framework.
Lidar arc scan uncertainty reduction through scanning geometry optimization
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Wang, H.; Barthelmie, R. J.; Pryor, S. C.; Brown, G.
2015-10-07
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty is scaled with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30 % of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation when arc scans are used for wind resource assessment.
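The reported scaling (relative standard error of the 10 min mean wind speed roughly 30 % of the turbulence intensity) can be restated as a one-line estimator. This is only a paraphrase of the abstract's rule of thumb; the function name and example numbers are illustrative.

```python
# Rough sketch of the reported scaling: the relative standard error of the
# 10 min mean wind speed is about 30 % of the turbulence intensity (TI).
def arc_scan_speed_uncertainty(mean_speed_ms, turbulence_intensity, scale=0.3):
    """Standard error (m/s) of the arc-scan-retrieved mean wind speed."""
    return scale * turbulence_intensity * mean_speed_ms

# Example: 8 m/s mean wind at TI = 0.12 gives roughly a 0.29 m/s standard error.
print(arc_scan_speed_uncertainty(8.0, 0.12))
```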
Radiotherapy Dose Fractionation under Parameter Uncertainty
Davison, Matt; Kim, Daero; Keller, Harald
2011-11-30
In radiotherapy, radiation is directed to damage a tumor while avoiding surrounding healthy tissue. Tradeoffs ensue because the dose cannot be exactly shaped to the tumor. It is particularly important to ensure that sensitive biological structures near the tumor are not damaged more than a certain amount. Biological tissue is known to have a nonlinear response to incident radiation. The linear-quadratic dose response model, which requires the specification of two clinically and experimentally observed response coefficients, is commonly used to model this effect. This model yields an optimization problem giving two different types of optimal dose sequences (fractionation schedules). Which fractionation schedule is preferred depends on the response coefficients. These coefficients are uncertainly known and may differ from patient to patient. Because of this, not only the expected outcomes but also the uncertainty around those outcomes are important, and it might not be prudent to select the strategy with the best expected outcome.
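The linear-quadratic model referenced above is commonly summarized via the biologically effective dose, BED = n·d·(1 + d/(α/β)), where the uncertain α/β ratio is exactly the kind of response coefficient the abstract discusses. The sketch below shows how the preferred schedule flips with α/β; the numbers are illustrative, not clinical guidance.

```python
# Biologically effective dose under the linear-quadratic (LQ) model for
# n fractions of size d (Gy) and tissue-specific ratio alpha/beta (Gy).
def bed(n_fractions, dose_per_fraction, alpha_beta):
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# Which schedule delivers more biological effect depends on alpha/beta:
hypo = bed(5, 8.0, 3.0)    # hypofractionated: 5 x 8 Gy, late-responding tissue
conv = bed(30, 2.0, 3.0)   # conventional: 30 x 2 Gy, same tissue
print(hypo, conv)
```

Propagating a distribution over α/β through `bed` (rather than using a point estimate) is the kind of uncertainty-aware comparison the abstract argues for.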
Intrinsic Uncertainties in Modeling Complex Systems.
Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.
2014-09-01
Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements: This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.
A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report
Office of Scientific and Technical Information (OSTI)
The Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams,
Entanglement criteria via concave-function uncertainty relations
Huang Yichen
2010-07-15
A general theorem as a necessary condition for the separability of quantum states in both finite and infinite dimensional systems, based on concave-function uncertainty relations, is derived. Two special cases of the general theorem are stronger than two known entanglement criteria based on the Shannon entropic uncertainty relation and the Landau-Pollak uncertainty relation, respectively; other special cases are able to detect entanglement where some famous entanglement criteria fail.
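The Shannon entropic uncertainty relation that the abstract's criteria strengthen can be checked numerically for the simplest case: a qubit measured in two mutually unbiased bases, where the Maassen-Uffink bound H(Z) + H(X) ≥ 1 bit holds for every state. The state parameter below is arbitrary; this sketch shows only the known special case, not the paper's general theorem.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Pure qubit state cos(theta)|0> + sin(theta)|1>, measured in the Z basis
# and in the X basis (mutually unbiased, so the entropic bound is 1 bit).
theta = 0.3  # illustrative choice; the bound holds for any theta
p_z = [math.cos(theta) ** 2, math.sin(theta) ** 2]
amp_plus = (math.cos(theta) + math.sin(theta)) / math.sqrt(2)
amp_minus = (math.cos(theta) - math.sin(theta)) / math.sqrt(2)
p_x = [amp_plus ** 2, amp_minus ** 2]

total = shannon_entropy(p_z) + shannon_entropy(p_x)
print(total)  # >= 1 bit, per the Maassen-Uffink relation
```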
Uncertainty Reduction in Power Generation Forecast Using Coupled Wavelet-ARIMA
Office of Scientific and Technical Information (OSTI)
In this paper, we introduce a new approach that does not assume normality or stationarity of power generation forecast errors. In addition, it is desired to more accurately quantify the forecast uncertainty by reducing prediction intervals of
A Unified Approach for Reporting ARM Measurement Uncertainties
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report, E. Campos and D. L. Sisterson, October 2015.
October 16, 2014 Webinar- Decisional Analysis under Uncertainty
Broader source: Energy.gov [DOE]
Webinar October 16, 2014, 11 am – 12:40 pm EDT: Dr. Paul Black (Neptune, Inc), Decisional Analysis under Uncertainty
Computational Fluid Dynamics & Large-Scale Uncertainty Quantification...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for Wind Energy A team of Sandia experts in aerospace engineering, scientific computing, and mathematics ...
Estimation of uncertainty for contour method residual stress measurements
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; Hill, Michael R.
2014-12-03
This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources, including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces, are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).
Improvements of Nuclear Data and Its Uncertainties by Theoretical...
Office of Scientific and Technical Information (OSTI)
Improvements of Nuclear Data and Its Uncertainties by Theoretical Modeling. Talou, Patrick (Los Alamos National Laboratory); Nazarewicz, Witold (University of Tennessee, Knoxville), ...
Output-Based Error Estimation and Adaptation for Uncertainty...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Output-Based Error Estimation and Adaptation for Uncertainty Quantification Isaac M. Asher and Krzysztof J. Fidkowski University of Michigan US National Congress on Computational...
Uncertainty Reduction in Power Generation Forecast Using Coupled...
Office of Scientific and Technical Information (OSTI)
quantify the forecast uncertainty by reducing prediction intervals of forecasts. ... means, e.g., using weather-based models, and reduce forecast errors prediction intervals. ...
Comparison of Uncertainty of Two Precipitation Prediction Models...
Office of Scientific and Technical Information (OSTI)
Prediction Models at Los Alamos National Lab Technical Area 54 Citation Details In-Document Search Title: Comparison of Uncertainty of Two Precipitation Prediction Models ...
Characterizing Uncertainty for Regional Climate Change Mitigation and Adaptation Decisions
Unwin, Stephen D.; Moss, Richard H.; Rice, Jennie S.; Scott, Michael J.
2011-09-30
This white paper describes the results of new research to develop an uncertainty characterization process to help address the challenges of regional climate change mitigation and adaptation decisions.
Measurement uncertainty analysis techniques applied to PV performance measurements
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. We define valid data as data having known and documented paths of origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.
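The "interval about a measured value" described above can be illustrated with a minimal post-test calculation: the mean of repeat readings, the standard uncertainty of that mean, and an expanded interval with a coverage factor. The module-power readings below are made up for illustration.

```python
import statistics as st

# Repeat measurements of a PV module's power output (W); values are invented.
readings_w = [100.2, 99.8, 100.5, 100.1, 99.9, 100.3]

mean = st.mean(readings_w)
u = st.stdev(readings_w) / len(readings_w) ** 0.5  # standard uncertainty of mean
k = 2                                              # coverage factor, ~95 %

# The result is reported as an interval believed to contain the true value.
print(f"P = {mean:.2f} W +/- {k * u:.2f} W (k={k})")
```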
Uncertainty quantification in fission cross section measurements at LANSCE
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tovesson, F.
2015-01-09
Neutron-induced fission cross sections have been measured for several isotopes of uranium and plutonium at the Los Alamos Neutron Science Center (LANSCE) over a wide range of incident neutron energies. The total uncertainties in these measurements are in the range 3-5% above 100 keV of incident neutron energy, resulting from uncertainties in the target, neutron source, and detector system. The individual sources of uncertainty are assumed to be uncorrelated; however, correlations in the cross section across neutron energy bins are considered. The quantification of the uncertainty contributions is described here.
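Under the stated assumption that the individual sources are uncorrelated, the total relative uncertainty is their quadrature sum. The component values below are invented placeholders chosen only to land in the quoted 3-5% range, not the paper's actual budget.

```python
import math

# Illustrative relative uncertainty components (uncorrelated by assumption).
components = {"target": 0.02, "neutron source": 0.015, "detector": 0.025}

# Quadrature (root-sum-square) combination of uncorrelated relative errors.
total = math.sqrt(sum(u**2 for u in components.values()))
print(f"total relative uncertainty: {total:.3%}")
```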
Nonproliferation Uncertainties, a Major Barrier to Used Nuclear...
Office of Scientific and Technical Information (OSTI)
Barrier to Used Nuclear Fuel Recycle in the United States Citation Details In-Document Search Title: Nonproliferation Uncertainties, a Major Barrier to Used Nuclear Fuel Recycle in ...
THE EFFECT OF UNCERTAINTY IN MODELING COEFFICIENTS USED TO PREDICT...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
UNCERTAINTY IN MODELING COEFFICIENTS USED TO PREDICT ENERGY PRODUCTION USING THE SANDIA ARRAY ... relating voltage and current to solar irradiance, for crystalline silicon modules. ...
Measuring Thermal Conductivity with Raman: Capability Uncertainty and Strain Effects...
Office of Scientific and Technical Information (OSTI)
Title: Measuring Thermal Conductivity with Raman: Capability Uncertainty and Strain Effects. Abstract not provided. Authors: Beechem III, Thomas Edwin; Yates, Luke. Publication ...
Uncertainties in Air Exchange using Continuous-Injection, Long...
Office of Scientific and Technical Information (OSTI)
people to minimize experimental costs. In this article we will conduct a first-principles error analysis to estimate the uncertainties and then compare that analysis to CILTS...
Asymptotic and uncertainty analyses of a phase field model for...
Office of Scientific and Technical Information (OSTI)
We perform asymptotic analysis and uncertainty quantification of a phase field model for void formation and evolution in materials subject to irradiation. The parameters of the ...
Harper, F.T.; Young, M.L.; Miller, L.A.; Hora, S.C.; Lui, C.H.; Goossens, L.H.J.; Cooke, R.M.; Paesler-Sauer, J.; Helton, J.C.
1995-01-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The ultimate objective of the joint effort was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. Experts developed their distributions independently. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. To validate the distributions generated for the dispersion code input variables, samples were taken from the distributions and propagated through the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the first of a three-volume document describing the project.
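The validation step described above, sampling from elicited input distributions and propagating the samples through the code model, is a standard Monte Carlo pattern. The sketch below uses a toy stand-in function and assumed input distributions; it is not the MACCS/COSYMA plume model or the project's actual elicited data.

```python
import random
import statistics as st

random.seed(1)

# Toy stand-in for a consequence-code model (NOT the actual Gaussian plume
# model in MACCS/COSYMA): concentration falls off with plume spread.
def toy_dispersion(release_rate, sigma_y):
    return release_rate / sigma_y

# Sample from (assumed) elicited input distributions and propagate.
samples = []
for _ in range(10_000):
    release = random.lognormvariate(0.0, 0.5)  # hypothetical elicited prior
    sigma_y = random.uniform(50.0, 150.0)      # hypothetical elicited prior
    samples.append(toy_dispersion(release, sigma_y))

# Summary statistics of the propagated output distribution, which would then
# be compared against the aggregated elicited distribution for validation.
print(st.median(samples), st.quantiles(samples, n=20)[18])  # median, ~95th pct
```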
Harper, F.T.; Young, M.L.; Miller, L.A.; Hora, S.C.; Lui, C.H.; Goossens, L.H.J.; Cooke, R.M.; Paesler-Sauer, J.; Helton, J.C.
1995-01-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990; these codes estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters; experts developed their distributions independently. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data along with short biographies of the 16 experts who participated in the project.
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.; Harrison, J.D.; Harper, F.T.; Hora, S.C.
1998-04-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distribution for variables related to the MACCS and COSYMA internal dosimetry models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on internal dosimetry, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Investigation of uncertainty components in Coulomb blockade thermometry
Hahtela, O. M.; Heinonen, M.; Manninen, A.; Meschke, M.; Savin, A.; Pekola, J. P.; Gunnarsson, D.; Prunnila, M.; Penttilä, J. S.; Roschier, L.
2013-09-11
Coulomb blockade thermometry (CBT) has proven to be a feasible method for primary thermometry in everyday laboratory use at cryogenic temperatures from ca. 10 mK to a few tens of kelvins. The operation of CBT is based on single electron charging effects in normal metal tunnel junctions. In this paper, we discuss the typical error sources and uncertainty components that limit the present absolute accuracy of CBT measurements to the level of about 1 % in the optimum temperature range. Identifying the influence of different uncertainty sources is a good starting point for improving the measurement accuracy to the level that would allow CBT to be more widely used in high-precision low temperature metrological applications and for realizing thermodynamic temperature in accordance with the upcoming new definition of the kelvin.
Size exclusion deep bed filtration: Experimental and modelling uncertainties
Badalyan, Alexander; You, Zhenjiang; Aji, Kaiser; Bedrikovetsky, Pavel; Carageorgos, Themis; Zeinijahromi, Abbas
2014-01-15
A detailed uncertainty analysis associated with carboxyl-modified latex particle capture in glass-bead-formed porous media enabled verification of two theoretical stochastic models for prediction of particle retention due to size exclusion. At the beginning of this analysis it is established that size exclusion is the dominant particle capture mechanism in the present study: the calculated significant repulsive Derjaguin-Landau-Verwey-Overbeek potential between latex particles and glass beads is an indication of their mutual repulsion, thus fulfilling the necessary condition for size exclusion. Applying the linear uncertainty propagation method in the form of a truncated Taylor series expansion, combined standard uncertainties (CSUs) in normalised suspended particle concentrations are calculated using CSUs in experimentally determined parameters such as the inlet volumetric flowrate of suspension, particle number in suspensions, particle concentrations in inlet and outlet streams, and particle and pore throat size distributions. Weathering of glass beads in highly alkaline solutions does not appreciably change the particle size distribution and is therefore not considered an additional contributor to the weighted mean particle radius and corresponding weighted mean standard deviation. The weighted mean particle radius and log-normal mean pore throat radius are characterised by the highest CSUs among all experimental parameters, translating to a high CSU in the jamming ratio factor (dimensionless particle size). Normalised suspended particle concentrations calculated via the two theoretical models are characterised by higher CSUs than those for experimental data. The model accounting for the fraction of inaccessible flow as a function of latex particle radius predicts normalised suspended particle concentrations excellently for the whole range of jamming ratios. The presented uncertainty analysis can also be used for comparison of intra- and inter-laboratory particle size exclusion data.
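Linear propagation via a truncated Taylor series, as used above, computes u_y² = Σ (∂f/∂xᵢ)² u_xᵢ² for uncorrelated inputs. The generic helper below approximates the partials by central differences; the jamming-ratio function and its input values are illustrative stand-ins, not the paper's data.

```python
import math

# First-order (truncated Taylor series) uncertainty propagation for
# y = f(x1, ..., xn) with uncorrelated inputs: u_y^2 = sum_i (df/dxi * u_i)^2.
def propagate(f, x, u, h=1e-6):
    """Combined standard uncertainty of f at point x, given input CSUs u."""
    var = 0.0
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        dfdx = (f(*xp) - f(*xm)) / (2 * h)  # central-difference partial
        var += (dfdx * u[i]) ** 2
    return math.sqrt(var)

# Illustrative: jamming ratio j = r_particle / r_pore, with CSUs in both radii.
j = lambda r_particle, r_pore: r_particle / r_pore
print(propagate(j, [1.5, 2.0], [0.1, 0.15]))  # microns; values made up
```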
Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-09-28
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
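The localized straight-line fit near short-circuit current can be sketched with ordinary least squares, a simpler stand-in for the objective Bayesian regression the preprint uses: Isc is the fitted current at V = 0 and its standard error comes from the regression residuals. The I-V points below are synthetic.

```python
import math

# Synthetic I-V points near short circuit (volts, amps); illustrative only.
V = [0.00, 0.02, 0.04, 0.06, 0.08, 0.10]
I = [5.001, 4.999, 4.997, 4.996, 4.993, 4.992]

# Ordinary least-squares straight-line fit I = isc + slope * V.
n = len(V)
vbar = sum(V) / n
ibar = sum(I) / n
sxx = sum((v - vbar) ** 2 for v in V)
slope = sum((v - vbar) * (i - ibar) for v, i in zip(V, I)) / sxx
isc = ibar - slope * vbar  # intercept = fitted current at V = 0

# Standard error of the intercept from the residual variance.
resid = [i - (isc + slope * v) for v, i in zip(V, I)]
s2 = sum(r * r for r in resid) / (n - 2)
u_isc = math.sqrt(s2 * (1 / n + vbar**2 / sxx))
print(f"Isc = {isc:.4f} A +/- {u_isc:.4f} A")
```

As the abstract notes, this fit uncertainty shrinks as points are added and can miss model discrepancy; choosing the data window by comparing model evidence is the preprint's remedy.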
Helton, Jon Craig; Sallaberry, Cedric M.; Hansen, Clifford W.
2010-05-01
Extensive work has been carried out by the U.S. Department of Energy (DOE) in the development of a proposed geologic repository at Yucca Mountain (YM), Nevada, for the disposal of high-level radioactive waste. As part of this development, an extensive performance assessment (PA) for the YM repository was completed in 2008 [1] and supported a license application by the DOE to the U.S. Nuclear Regulatory Commission (NRC) for the construction of the YM repository [2]. This presentation provides an overview of the conceptual and computational structure of the indicated PA (hereafter referred to as the 2008 YM PA) and the roles that uncertainty analysis and sensitivity analysis play in this structure.
Avoiding climate change uncertainties in Strategic Environmental Assessment
Larsen, Sanne Vammen; Kørnøv, Lone; Driscoll, Patrick
2013-11-15
This article is concerned with how Strategic Environmental Assessment (SEA) practice handles climate change uncertainties within the Danish planning system. First, a hypothetical model is set up for how uncertainty is handled and not handled in decision-making. The model incorporates the strategies reduction and resilience, denying, ignoring and postponing. Second, 151 Danish SEAs are analysed with a focus on the extent to which climate change uncertainties are acknowledged and presented, and the empirical findings are discussed in relation to the model. The findings indicate that despite incentives to do so, climate change uncertainties were systematically avoided or downplayed in all but 5 of the 151 SEAs that were reviewed. Finally, two possible explanatory mechanisms are proposed to explain this: conflict avoidance and a need to quantify uncertainty.
Error and uncertainty in Raman thermal conductivity measurements
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Thomas Edwin Beechem; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Díez, C.J.; Cabellos, O.; Martínez, J.S.
2015-01-15
Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation in burn-up calculations. One proposed approach is the Hybrid Method, where uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary and, hence, their collapsed one-group uncertainties. This approach has been applied successfully in several advanced reactor systems, such as EFIT (an ADS-like reactor) and ESFR (a sodium fast reactor), to assess uncertainties in the isotopic composition. However, a comparison against multi-group energy structures was not carried out, and it must be performed in order to analyse the limitations of using one-group uncertainties.
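The one-group quantities referred to here are conventionally obtained by flux-weighted collapsing, with the collapsed uncertainty following from the sandwich rule applied to the multi-group covariance. A hedged sketch under those standard definitions (names and numbers are illustrative, not taken from the paper):

```python
import numpy as np

def collapse_one_group(sigma_g, phi_g):
    """Flux-weighted one-group collapse:
    sigma_1g = sum_g sigma_g * phi_g / sum_g phi_g."""
    sigma_g, phi_g = np.asarray(sigma_g, float), np.asarray(phi_g, float)
    return float(np.sum(sigma_g * phi_g) / np.sum(phi_g))

def collapsed_variance(cov_g, phi_g):
    """Sandwich rule for the collapsed value, u^2 = w^T C w, with flux
    weights w_g = phi_g / sum(phi) (fixed-flux approximation, i.e. the
    flux itself is treated as exactly known)."""
    w = np.asarray(phi_g, float)
    w = w / w.sum()
    return float(w @ np.asarray(cov_g, float) @ w)
```

The fixed-flux approximation is itself one of the limitations a multi-group comparison would expose, since perturbing the cross sections in reality also perturbs the flux spectrum.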
Measurement uncertainty analysis techniques applied to PV performance measurements
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Valid data are defined as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.
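The interval quoted by such an analysis is conventionally built from the GUM combined standard uncertainty for uncorrelated inputs, u_c = sqrt(sum_i (c_i · u_i)²), expanded by a coverage factor k. A minimal sketch of that arithmetic (not code from the presentation; the sensitivity coefficients c_i and input uncertainties u_i are whatever the measurement model supplies):

```python
import math

def combined_standard_uncertainty(sensitivities, uncertainties):
    """GUM combination for uncorrelated inputs: u_c = sqrt(sum (c_i*u_i)^2)."""
    return math.sqrt(sum((c * u) ** 2
                         for c, u in zip(sensitivities, uncertainties)))

def expanded_uncertainty(u_c, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives roughly 95% coverage
    when the result is approximately normally distributed."""
    return k * u_c
```

The pre-test analysis mentioned above amounts to evaluating these expressions with anticipated rather than measured input uncertainties, to see whether the planned experiment can meet its stated objective.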
Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint
Habte, A.; Sengupta, M.; Reda, I.; Andreas, A.; Konings, J.
2014-11-01
Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements by radiometers using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty (GUM). Standardized analysis based on these procedures ensures that the uncertainty quoted is well documented.
PDF uncertainties at large x and gauge boson production
Accardi, Alberto
2012-10-01
I discuss how global QCD fits of parton distribution functions can make the somewhat separated fields of high-energy particle physics and lower energy hadronic and nuclear physics interact to the benefit of both. In particular, I will argue that large rapidity gauge boson production at the Tevatron and the LHC has the highest short-term potential to constrain the theoretical nuclear corrections to DIS data on deuteron targets necessary for up/down flavor separation. This in turn can considerably reduce the PDF uncertainty on cross section calculations of heavy mass particles such as W' and Z' bosons.
Preliminary Results on Uncertainty Quantification for Pattern Analytics
Stracuzzi, David John; Brost, Randolph; Chen, Maximillian Gene; Malinas, Rebecca; Peterson, Matthew Gregor; Phillips, Cynthia A.; Robinson, David G.; Woodbridge, Diane
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
Uncertainty analysis of multi-rate kinetics of uranium desorption...
Office of Scientific and Technical Information (OSTI)
in the multi-rate model to simulate U(VI) desorption; 3) however, long-term prediction and its uncertainty may be significantly biased by the lognormal assumption for the ...
Dasymetric Modeling and Uncertainty
Office of Scientific and Technical Information (OSTI)
Journal Article. DOE Contract Number: DE-AC05-00OR22725.
Problem Solving Environment for Uncertainty Analysis and Design Exploration
Energy Science and Technology Software Center (OSTI)
2011-10-26
PSUADE is a software system used to study the relationships between the inputs and outputs of general simulation models for the purpose of performing uncertainty and sensitivity analyses on those models.
Integration of Uncertainty Information into Power System Operations
Makarov, Yuri V.; Lu, Shuai; Samaan, Nader A.; Huang, Zhenyu; Subbarao, Krishnappa; Etingov, Pavel V.; Ma, Jian; Hafen, Ryan P.; Diao, Ruisheng; Lu, Ning
2011-10-10
Contemporary power systems face uncertainties coming from multiple sources, including forecast errors of load, wind and solar generation, uninstructed deviation and forced outage of traditional generators, loss of transmission lines, and others. With increasing amounts of wind and solar generation being integrated into the system, these uncertainties have been growing significantly. It is critically important to build knowledge of the major sources of uncertainty, learn how to simulate them, and then incorporate this information into decision-making processes and power system operations for better reliability and efficiency. This paper gives a comprehensive view of the sources of uncertainty in power systems, their important characteristics, available models, and ways of integrating them into system operations. It is primarily based on previous works conducted at the Pacific Northwest National Laboratory (PNNL).
Estimation and Uncertainty Analysis of Impacts of Future Heat Waves on Mortality in the Eastern United States
Office of Scientific and Technical Information (OSTI)
From Deterministic Inversion to Uncertainty Quantification: Planning a Long Journey in Ice Sheet Modeling
Office of Scientific and Technical Information (OSTI)
Eldred, Michael S.; Heimbach, Patrick; Jackson, Charles; Jakeman, John Davis; Perego, Mauro; Price, Stephen; Salinger, Andrew
Conference; abstract not provided.
Improvements of Nuclear Data and Its Uncertainties by Theoretical Modeling
Office of Scientific and Technical Information (OSTI)
Talou, Patrick (Los Alamos National Laboratory); Nazarewicz, Witold (University of Tennessee, Knoxville, TN, USA); Prinja, Anil (University of New Mexico, USA); Danon, Yaron (Rensselaer Polytechnic Institute, USA)
Technical Report.
Lab RFP: Validation and Uncertainty Characterization
Office of Energy Efficiency and Renewable Energy (EERE)
LBNL's FLEXLAB test facility includes four test cells, each split into two half-cells to enable side-by-side comparative experiments. The cells have one active, reconfigurable facade and individual, reconfigurable single-zone HVAC systems; the cell facing the camera sits on a 270-degree turntable. Photo credit: LBNL. Bottom: ORNL's two-story flexible research platform test building.
Measuring Thermal Conductivity with Raman: Capability, Uncertainty and Strain Effects
Office of Scientific and Technical Information (OSTI)
Beechem III, Thomas Edwin; Yates, Luke
2012-11-01
Conference; abstract not provided. Report Number(s): SAND2012-10198C. DOE Contract Number: AC04-94AL85000.
Modeling Correlations in Prompt Neutron Fission Spectra Uncertainties
Office of Scientific and Technical Information (OSTI)
White, Morgan C.; Rising, Michael E.; Talou, Patrick (Los Alamos National Laboratory)
2012-10-22
Conference. Report Number(s): LA-UR-12-25665. DOE Contract Number: AC52-06NA25396.
A density-matching approach for optimization under uncertainty
Office of Scientific and Technical Information (OSTI)
Seshadri, Pranay; Constantine, Paul; Iaccarino, Gianluca; Parks, Geoffrey
2016-06-01
An Efficient Surrogate Modeling Approach in Bayesian Uncertainty Analysis
Office of Scientific and Technical Information (OSTI)
Zhang, Guannan (ORNL); Gunzburger, Max D. (ORNL); Lu, Dan (ORNL); Webster, Clayton G. (ORNL); Ye, Ming (Florida State University, Tallahassee)
2013-01-01
Research Portfolio Report Ultra-Deepwater: Geologic Uncertainty
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
DOE/NETL-2015/1694. Prepared by Mari Nichols-Haining, Jennifer Funk, Kathy Bruner, John Oelfke, and Christine Rueter, KeyLogic Systems, Inc., for the National Energy Technology Laboratory (NETL). Contact: James Ammer, james.ammer@netl.doe.gov. Cover image: 3D visualization of directionally drilled boreholes in the Gulf of Mexico, field MC109, showing NETL's interpretation of two reservoir sand intervals.
Nonproliferation Uncertainties, a Major Barrier to Used Nuclear Fuel Recycle in the United States
Office of Scientific and Technical Information (OSTI)
A study and comparison of the goals and understandings of nonproliferation authorities with those of used nuclear fuel (UNF) recycle advocates have
Assessment of Uncertainty in Cloud Radiative Effects and Heating Rates through Retrieval Algorithm Differences: Analysis Using 3 Years of ARM Data at Darwin, Australia
Office of Scientific and Technical Information (OSTI)
Comparison of Uncertainty of Two Precipitation Prediction Models at Los Alamos National Lab Technical Area 54
Office of Scientific and Technical Information (OSTI)
Meteorological inputs are an important part of subsurface flow and transport modeling. The choice of source for meteorological data used as inputs has
Analysis of the Uncertainty in Wind Measurements from the Atmospheric Radiation Measurement Doppler Lidar during XPIA: Field Campaign Report
Office of Scientific and Technical Information (OSTI)
Determining Best Estimates and Uncertainties in Cloud Microphysical Parameters from ARM Field Data: Implications for Models, Retrieval Schemes and Aerosol-Cloud-Radiation Interactions
Office of Scientific and Technical Information (OSTI)
Gas Exploration Software for Reducing Uncertainty in Gas Concentration Estimates
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Lawrence Berkeley National Laboratory. Estimating reservoir parameters for gas exploration from geophysical data is subject to a large degree of uncertainty. Seismic imaging techniques, such as seismic amplitude versus angle (AVA) analysis, can
Uncertainty Quantification and Propagation in Nuclear Density Functional Theory
Office of Scientific and Technical Information (OSTI)
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root
Uncertainty quantification for evaluating impacts of caprock and reservoir properties on pressure buildup and ground surface displacement during geological CO2 sequestration
Office of Scientific and Technical Information (OSTI)
Uncertainty quantification for evaluating the impacts of fracture zone on pressure build-up and ground surface uplift during geological CO₂ sequestration
Office of Scientific and Technical Information (OSTI)
Uncertainty quantification methodologies development for storage and transportation of used nuclear fuel: Pilot study on stress corrosion cracking of canister welds
Office of Scientific and Technical Information (OSTI)
Uncertainty quantification of US Southwest climate from IPCC projections
Office of Scientific and Technical Information (OSTI)
The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) made extensive use of coordinated simulations by 18 international modeling groups using a variety of coupled general circulation models (GCMs) with different
Uncertainties in the Anti-neutrino Production at Nuclear Reactors
Djurcic, Zelimir; Detwiler, Jason A.; Piepke, Andreas; Foster Jr., Vince R.; Miller, Lester; Gratta, Giorgio
2008-08-06
Anti-neutrino emission rates from nuclear reactors are determined from thermal power measurements and fission rate calculations. The uncertainties in these quantities for commercial power plants and their impact on the calculated interaction rates in ν̄e detectors are examined. We discuss reactor-to-reactor correlations between the leading uncertainties and their relevance to reactor ν̄e experiments.
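The determination described here rests on a simple identity: the fission rate follows from the thermal power divided by the fission-fraction-weighted energy release per fission, and the antineutrino emission rate multiplies that by the average yield per fission. A sketch under that identity (the fractions, energies, and yields passed in below are placeholders, not the paper's inputs):

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV (exact, from the SI value of e)

def fission_rate(p_th_watts, fractions, e_per_fission_mev):
    """R_f = P_th / sum_i(f_i * E_i), in fissions per second, where f_i are
    fission fractions by isotope and E_i the energy release per fission."""
    e_avg = sum(f * e for f, e in zip(fractions, e_per_fission_mev))
    return p_th_watts / (e_avg * MEV_TO_J)

def antineutrino_rate(p_th_watts, fractions, e_per_fission_mev, nus_per_fission):
    """Emission rate = fission rate * average antineutrinos per fission."""
    n_avg = sum(f * n for f, n in zip(fractions, nus_per_fission))
    return fission_rate(p_th_watts, fractions, e_per_fission_mev) * n_avg
```

The thermal-power and fission-fraction uncertainties the paper examines enter directly through `p_th_watts` and `fractions`, which is why they are correlated across reactors sharing the same calibration methods.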
How incorporating more data reduces uncertainty in recovery predictions
Campozana, F.P.; Lake, L.W.; Sepehrnoori, K.
1997-08-01
From the discovery to the abandonment of a petroleum reservoir, there are many decisions that involve economic risks because of uncertainty in the production forecast. This uncertainty may be quantified by performing stochastic reservoir modeling (SRM); however, it is not practical to apply SRM every time the model is updated to account for new data. This paper suggests a novel procedure to estimate reservoir uncertainty (and its reduction) as a function of the amount and type of data used in the reservoir modeling. Two types of data are analyzed: conditioning data and well-test data. However, the same procedure can be applied to any other data type. Three performance parameters are suggested to quantify uncertainty. SRM is performed for the following typical stages: discovery, primary production, secondary production, and infill drilling. From those results, a set of curves is generated that can be used to estimate (1) the uncertainty for any other situation and (2) the uncertainty reduction caused by the introduction of new wells (with and without well-test data) into the description.
Fuzzy-probabilistic calculations of water-balance uncertainty
Faybishenko, B.
2009-10-01
Hydrogeological systems are often characterized by imprecise, vague, inconsistent, incomplete, or subjective information, which may limit the application of conventional stochastic methods in predicting hydrogeologic conditions and associated uncertainty. Instead, predictions and uncertainty analysis can be made using uncertain input parameters expressed as probability boxes, intervals, and fuzzy numbers. The objective of this paper is to present the theory for, and a case study as an application of, the fuzzy-probabilistic approach, combining probability and possibility theory for simulating soil water balance and assessing associated uncertainty in the components of a simple water-balance equation. The application of this approach is demonstrated using calculations with the RAMAS Risk Calc code to assess the propagation of uncertainty in calculating potential evapotranspiration, actual evapotranspiration, and infiltration in a case study at the Hanford site, Washington, USA. Propagation of uncertainty into the results of water-balance calculations was evaluated by changing the types of models of uncertainty incorporated into various input parameters. The results of these fuzzy-probabilistic calculations are compared to the conventional Monte Carlo simulation approach and estimates from field observations at the Hanford site.
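One way to picture the hybrid representation is a crude probability box: a probabilistic input (precipitation samples) combined with an interval-valued input (evapotranspiration) yields two sample sets that bound the distribution of the result. This is only a toy sketch of the idea, not the RAMAS Risk Calc computation, and all names here are illustrative:

```python
def infiltration_bounds(precip_samples, et_interval):
    """Bound the distribution of infiltration I = P - ET when P is given as
    probabilistic samples and ET is known only as an interval [lo, hi].
    Returns sorted sample sets whose empirical CDFs bracket the true one."""
    et_lo, et_hi = et_interval
    lower = sorted(p - et_hi for p in precip_samples)  # pessimistic edge
    upper = sorted(p - et_lo for p in precip_samples)  # optimistic edge
    return lower, upper
```

A pure Monte Carlo treatment would instead force a (possibly unjustified) distribution onto ET; the interval treatment keeps the ignorance explicit, which is the contrast the paper evaluates.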
Uncertainty Estimation Improves Energy Measurement and Verification Procedures
Walter, Travis; Price, Phillip N.; Sohn, Michael D.
2014-05-14
Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement, so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy use to how much energy would have been used in the absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates, which can provide actionable decision-making information for investing in energy conservation measures.
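The cross-validation idea can be sketched with a simple linear baseline model: out-of-fold residuals, rather than in-sample residuals, estimate the prediction uncertainty. A minimal illustration (the straight-line model and fold count are assumptions for the sketch, not the paper's regression):

```python
import numpy as np

def cv_prediction_uncertainty(x, y, k=5, seed=0):
    """Estimate baseline-prediction uncertainty as the standard deviation
    of out-of-fold residuals from k-fold cross-validation of a line fit
    (e.g. energy use vs. outdoor temperature)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    residuals = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)      # all indices outside this fold
        slope, intercept = np.polyfit(x[train], y[train], 1)
        residuals.extend(y[fold] - (slope * x[fold] + intercept))
    return float(np.std(residuals))
```

Because each residual comes from data the fold's model never saw, the resulting spread reflects genuine predictive uncertainty rather than goodness of fit.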
Uncertainty and sensitivity analysis for photovoltaic system modeling.
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprised of a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
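The propagation scheme described, sampling each model's empirical residual distribution and pushing the samples through the chain, can be sketched as follows (the model chain and residual pools below are stand-ins, not the report's irradiance, temperature, and power models):

```python
import numpy as np

def propagate_chain(x0, models, residual_pools, n=1000, seed=0):
    """Monte Carlo propagation through a sequence of models: after each
    stage, add a residual drawn at random from that model's empirical
    residual pool. Returns n samples of the final output."""
    rng = np.random.default_rng(seed)
    out = np.full(n, float(x0))
    for model, pool in zip(models, residual_pools):
        out = model(out) + rng.choice(np.asarray(pool, float), size=n)
    return out
```

The empirical distribution of `out` then plays the role of the system-output distribution from which the ~1% daily-energy uncertainty figure is read off.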
Habte, A.; Sengupta, M.; Reda, I.
2015-03-01
Radiometric data with known and traceable uncertainty are essential for climate change studies to better understand cloud-radiation interactions and the Earth radiation budget. Further, adopting a known and traceable method of estimating uncertainty with respect to SI ensures that the uncertainty quoted for radiometric measurements can be compared based on documented methods of derivation. Therefore, statements about the overall measurement uncertainty can only be made on an individual basis, taking all relevant factors into account. This poster provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements from radiometers. The approach follows the Guide to the Expression of Uncertainty in Measurement (GUM).
Investment and Upgrade in Distributed Generation under Uncertainty
Siddiqui, Afzal; Maribu, Karl
2008-08-18
The ongoing deregulation of electricity industries worldwide is providing incentives for microgrids to use small-scale distributed generation (DG) and combined heat and power (CHP) applications via heat exchangers (HXs) to meet local energy loads. Although the electric-only efficiency of DG is lower than that of central-station production, relatively high tariff rates and the potential for CHP applications increase the attraction of on-site generation. Nevertheless, a microgrid contemplating the installation of gas-fired DG has to be aware of the uncertainty in the natural gas price. Treatment of uncertainty via real options increases the value of the investment opportunity, which then delays the adoption decision as the opportunity cost of exercising the investment option increases as well. In this paper, we take the perspective of a microgrid that can proceed in a sequential manner with DG capacity and HX investment in order to reduce its exposure to risk from natural gas price volatility. In particular, with the availability of the HX, the microgrid faces a tradeoff between reducing its exposure to the natural gas price and maximising its cost savings. By varying the volatility parameter, we find that the microgrid prefers a direct investment strategy for low levels of volatility and a sequential one for higher levels of volatility.
Systematic uncertainties from halo asphericity in dark matter searches
Bernal, Nicolás; Forero-Romero, Jaime E.; Garani, Raghuveer; Palomares-Ruiz, Sergio
2014-09-01
Although commonly assumed to be spherical, dark matter halos are predicted to be non-spherical by N-body simulations, and their asphericity has a potential impact on the systematic uncertainties in dark matter searches. The evaluation of these uncertainties is the main aim of this work, where we study the impact of aspherical dark matter density distributions in Milky-Way-like halos on direct and indirect searches. Using data from the large N-body cosmological simulation Bolshoi, we perform a statistical analysis and quantify the systematic uncertainties on the determination of the local dark matter density and the so-called J factors for dark matter annihilations and decays from the galactic center. We find that, due to our ignorance about the extent of the non-sphericity of the Milky Way dark matter halo, systematic uncertainties can be as large as 35%, within the 95% most probable region, for a spherically averaged value for the local density of 0.3-0.4 GeV/cm^3. Similarly, systematic uncertainties on the J factors evaluated around the galactic center can be as large as 10% and 15%, within the 95% most probable region, for dark matter annihilations and decays, respectively.
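The J factor quoted above is a line-of-sight integral of the dark matter density (squared, for annihilation). A minimal sketch for a spherical NFW halo, with hypothetical halo parameters RHO_S, R_S, and R_SUN chosen purely for illustration:

```python
import numpy as np

# Hypothetical NFW parameters: rho_s in GeV/cm^3, r_s and distances in kpc.
RHO_S, R_S, R_SUN = 0.26, 20.0, 8.0
KPC_CM = 3.086e21  # kpc in cm

def rho_nfw(r):
    # Spherical NFW density profile.
    x = r / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def j_factor(psi, n=20000, s_max=100.0):
    """Line-of-sight integral of rho^2 (the annihilation J factor) at
    angle psi from the galactic centre, for a spherical NFW halo,
    using a simple trapezoid rule over distance s along the line."""
    s = np.linspace(1e-3, s_max, n)
    r = np.sqrt(R_SUN**2 + s**2 - 2.0 * R_SUN * s * np.cos(psi))
    f = rho_nfw(r) ** 2
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)) * KPC_CM  # GeV^2 cm^-5

# J falls off away from the galactic centre.
print(j_factor(np.radians(1.0)), j_factor(np.radians(30.0)))
```

An aspherical halo would make rho depend on direction as well as radius, which is precisely what drives the 10-15% systematic uncertainty quoted in the abstract.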
Fuel cycle cost uncertainty from nuclear fuel cycle comparison
Li, J.; McNelis, D.; Yim, M.S.
2013-07-01
This paper examines the uncertainty in fuel cycle cost (FCC) calculations by considering both model and parameter uncertainty. Four fuel cycle options were compared: the once-through cycle (OT), the DUPIC cycle, the MOX cycle, and a closed fuel cycle with fast reactors (FR). Model uncertainty was addressed by using three different FCC modeling approaches, with and without consideration of the time value of money. The relative ratios of FCC in comparison to OT did not change much across modeling approaches, an observation consistent with the results of the sensitivity study for the discount rate. Two different sets of data with uncertainty ranges for the unit costs were used to address the parameter uncertainty of the FCC calculation. The sensitivity study showed that the dominating contributor to the total variance of FCC is the uranium price. In general, the FCC of OT was found to be the lowest, followed by FR, MOX, and DUPIC; depending on the uranium price, however, the FR cycle can have a lower FCC than OT. The reprocessing cost was also found to have a major impact on FCC.
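The role of the time value of money in an FCC calculation can be sketched by levelizing each fuel cycle payment to a common reference time. The unit costs and lead times below are purely illustrative placeholders, not the paper's data:

```python
# Hypothetical once-through unit costs (USD/kgHM) and lead times (years
# before the mid-point of irradiation); all values are illustrative.
OT = [("uranium", 2600, 2.0), ("conversion", 200, 1.5),
      ("enrichment", 1100, 1.0), ("fabrication", 300, 0.5)]

def fcc_per_kg(components, r=0.0):
    """Levelize each payment to the irradiation mid-point: a payment made
    t years earlier accrues (1 + r)**t of carrying charge."""
    return sum(c * (1.0 + r) ** t for _, c, t in components)

print(fcc_per_kg(OT))        # no time value of money: 4200.0
print(fcc_per_kg(OT, 0.05))  # 5% discount rate: ~4544
```

Because the uranium entry dominates the total, a given relative uncertainty in the uranium price moves the FCC more than the same relative uncertainty in any other component, consistent with the sensitivity result above.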
PIV Uncertainty Methodologies for CFD Code Validation at the MIR Facility
Piyush Sabharwall; Richard Skifton; Carl Stoots; Eung Soo Kim; Thomas Conder
2013-12-01
Currently, computational fluid dynamics (CFD) is widely used in the nuclear thermal hydraulics field for design and safety analyses. To validate CFD codes, high-quality multi-dimensional flow field data are essential. The Matched Index of Refraction (MIR) Flow Facility at Idaho National Laboratory has a unique capability to contribute to the development of validated CFD codes through the use of Particle Image Velocimetry (PIV). The significance of the MIR facility is that it permits non-intrusive velocity measurement techniques, such as PIV, through complex models without requiring probes and other instrumentation that disturb the flow. At the heart of any PIV calculation is the cross-correlation, which is used to estimate the displacement of particles in some small part of the image over the time span between two images; this displacement is indicated by the location of the largest correlation peak. In the MIR facility, uncertainty quantification is a challenging task due to the use of optical measurement techniques. The main objective of this study is to develop a well-established uncertainty quantification method for the MIR Flow Facility, which involves many complicated uncertainty factors, together with a computer code that automatically analyzes the uncertainty and sensitivity of the measured data. The uncertainty sources are resolved in depth by categorizing them into uncertainties from the MIR flow loop and from the PIV system (including particle motion, image distortion, and data processing). Each uncertainty source is then mathematically modeled or adequately defined. Finally, the study will provide a method and procedure to quantify the experimental uncertainty in the MIR Flow Facility, with sample test results.
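The cross-correlation step described above can be sketched with an FFT-based correlation of two interrogation windows. Real PIV codes add sub-pixel peak fitting (e.g., a three-point Gaussian fit), which this integer-pixel sketch omits:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel particle displacement between two
    interrogation windows via FFT-based circular cross-correlation:
    the location of the tallest correlation peak gives the shift."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic check: shift a random particle image by (3, 5) pixels.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, 5), axis=(0, 1))
print(piv_displacement(img, shifted))  # (3, 5)
```

Uncertainty sources in the abstract's taxonomy (particle motion, image distortion, data processing) all ultimately perturb the shape and location of this correlation peak.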
Uncertainty quantification and validation of combined hydrological and macroeconomic analyses.
Hernandez, Jacquelynne; Parks, Mancel Jordan; Jennings, Barbara Joan; Kaplan, Paul Garry; Brown, Theresa Jean; Conrad, Stephen Hamilton
2010-09-01
Changes in climate can lead to instabilities in physical and economic systems, particularly in regions with marginal resources. Global climate models indicate increasing global mean temperatures over the decades to come and uncertainty in the local to national impacts means perceived risks will drive planning decisions. Agent-based models provide one of the few ways to evaluate the potential changes in behavior in coupled social-physical systems and to quantify and compare risks. The current generation of climate impact analyses provides estimates of the economic cost of climate change for a limited set of climate scenarios that account for a small subset of the dynamics and uncertainties. To better understand the risk to national security, the next generation of risk assessment models must represent global stresses, population vulnerability to those stresses, and the uncertainty in population responses and outcomes that could have a significant impact on U.S. national security.
Hybrid processing of stochastic and subjective uncertainty data
Cooper, J.A.; Ferson, S.; Ginzburg, L.
1995-11-01
Uncertainty analyses typically recognize separate stochastic and subjective sources of uncertainty but do not systematically combine the two, although a large amount of data used in analyses is partly stochastic and partly subjective. We have developed a methodology for mathematically combining stochastic and subjective data uncertainty, based on new "hybrid number" approaches. The methodology can be utilized in conjunction with various traditional techniques, such as PRA (probabilistic risk assessment) and risk analysis decision support. Hybrid numbers have been previously examined as a potential method to represent combinations of stochastic and subjective information, but mathematical processing has been impeded by the requirements inherent in the structure of the numbers; e.g., there was no known way to multiply hybrids. In this paper, we demonstrate methods for calculating with hybrid numbers that avoid these difficulties. By formulating a hybrid number as a probability distribution that is only fuzzily known, or alternatively as a random distribution of fuzzy numbers, methods are demonstrated for the full suite of arithmetic operations, permitting complex mathematical calculations. It is shown how information about relative subjectivity (the ratio of subjective to stochastic knowledge about a particular datum) can be incorporated. Techniques are also developed for conveying uncertainty information visually, so that the stochastic and subjective constituents of the uncertainty, as well as the ratio of knowledge about the two, are readily apparent. The techniques demonstrated can process uncertainty information for independent, uncorrelated data and for some types of dependent and correlated data. Example applications are suggested, illustrative problems are worked, and graphical results are given.
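One simplified reading of such a hybrid number (not the paper's exact construction) treats each datum as a Gaussian stochastic part whose location is known only as an interval, i.e., a single alpha-cut of a fuzzy number. Arithmetic then propagates both constituents, and the stochastic and subjective parts remain distinguishable in the result. The function names and parameters below are all hypothetical:

```python
import random

def hybrid_samples(lo, hi, sd, n=10000, seed=1):
    """A simplified 'hybrid number': Gaussian stochastic scatter around a
    location that is only known to lie in the interval [lo, hi] (one
    alpha-cut of a fuzzy number). Each sample is itself an interval."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        e = rng.gauss(0.0, sd)        # stochastic part, shared by both ends
        out.append((lo + e, hi + e))  # subjective part kept as an interval
    return out

def hybrid_add(xs, ys):
    # Sample-wise interval addition (assumes the two inputs are independent).
    return [(a + c, b + d) for (a, b), (c, d) in zip(xs, ys)]

x = hybrid_samples(1.0, 1.2, 0.1)
y = hybrid_samples(2.0, 2.5, 0.2, seed=2)
z = hybrid_add(x, y)
width = sum(b - a for a, b in z) / len(z)
print(round(width, 3))  # subjective (interval) widths add: 0.2 + 0.5 = 0.7
```

The sample spread carries the stochastic uncertainty while the interval width carries the subjective part, so the "relative subjectivity" of the result stays visible after the operation.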
Uncertainty quantification in lattice QCD calculations for nuclear physics
Beane, Silas R.; Detmold, William; Orginos, Kostas; Savage, Martin J.
2015-02-05
The numerical technique of Lattice QCD holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic, can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. We review the sources of uncertainty inherent in Lattice QCD calculations for nuclear physics and discuss how each is quantified in current efforts.
Uncertainty Analysis Technique for OMEGA Dante Measurements (Conference)
Office of Scientific and Technical Information (OSTI)
The Dante is an 18-channel, X-ray-filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics.
Distributed Generation Investment by a Microgrid under Uncertainty
Siddiqui, Afzal; Marnay, Chris
2008-08-11
This paper examines a California-based microgrid's decision to invest in a distributed generation (DG) unit fuelled by natural gas. While the long-term natural gas generation cost is stochastic, we initially assume that the microgrid may purchase electricity at a fixed retail rate from its utility. Using the real options approach, we find a natural gas generation cost threshold that triggers DG investment. Furthermore, the consideration of operational flexibility by the microgrid increases DG investment, while the option to disconnect from the utility is not attractive. By allowing the electricity price to be stochastic, we next determine an investment threshold boundary and find that high electricity price volatility relative to that of the natural gas generation cost delays investment while simultaneously increasing the value of the investment. We conclude by using this result to find the implicit option value of the DG unit when two sources of uncertainty exist.
IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases
Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov
2012-11-01
Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Gas-cooled Reactor (HTGR) designs has not yet been attempted. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results are included here, as the HTGR UAM benchmark was only formally launched in April 2012 and the specification is still under development.
Price Uncertainty Supplement
Annual Energy Outlook [U.S. Energy Information Administration (EIA)]
Prices and volatility started at opposite ends of the spectrum during July - the former starting low and ending higher, the latter starting high and ending lower (Figures 1 and 2). ...
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Harp, D. R.; Atchley, A. L.; Painter, S. L.; Coon, E. T.; Wilson, C. J.; Romanovsky, V. E.; Rowland, J. C.
2015-06-29
The effect of soil property uncertainties on permafrost thaw projections is studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year-to-year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness (ALT) and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume- and time-integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation.
By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.
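The Stefan number used above contrasts sensible heat with latent heat of phase change. A minimal sketch of one common volumetric form (the study's exact definition may differ in detail); all soil values are illustrative:

```python
def stefan_number(c_vol, delta_t, latent_vol, porosity, saturation):
    """Simplified Stefan number St = C * dT / (L * phi * s_ice): C is the
    volumetric heat capacity (J/m^3/K), dT the seasonal surface temperature
    amplitude (K), L the volumetric latent heat of fusion of ice (J/m^3),
    phi the porosity, and s_ice the ice saturation of the pore space."""
    return c_vol * delta_t / (latent_vol * porosity * saturation)

# Illustrative values: C ~ 2e6 J/m^3/K, dT ~ 8 K, L_ice ~ 3.06e8 J/m^3.
print(stefan_number(2e6, 8.0, 3.06e8, porosity=0.5, saturation=0.9))  # ~0.116
```

A value well below 1, as here, means latent heat dominates heat conduction, which is the regime the abstract says future climates move further into as the integrated Stefan number decreases.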
Reducing uncertainty in geostatistical description with well testing pressure data
Reynolds, A.C.; He, Nanqun; Oliver, D.S.
1997-08-01
Geostatistics has proven to be an effective tool for generating realizations of reservoir properties conditioned to static data, e.g., core and log data and geologic knowledge. Due to the lack of closely spaced data in the lateral directions, there will be significant variability in reservoir descriptions generated by geostatistical simulation, i.e., significant uncertainty in the reservoir descriptions. In past work, we have presented procedures based on inverse problem theory for generating reservoir descriptions (rock property fields) conditioned to pressure data and to geostatistical information represented as prior means for log-permeability and porosity and as variograms. Although we have shown that the incorporation of pressure data reduces the uncertainty below the level contained in the geostatistical model based only on static information (the prior model), our previous results did not explicitly account for uncertainties in the prior means and in the parameters defining the variogram model. In this work, we investigate how pressure data can help detect errors in the prior means. If errors in the prior means are large and are not taken into account, realizations conditioned to pressure data represent incorrect samples of the a posteriori probability density function for the rock property fields, whereas if the uncertainty in the prior mean is incorporated properly into the model, one obtains realistic realizations of the rock property fields.
Improved lower bound on the entropic uncertainty relation
Jafarpour, Mojtaba; Sabour, Abbass
2011-09-15
We present a lower bound on the entropic uncertainty relation for the distinguished measurements of two observables in a d-dimensional Hilbert space for d up to 5. This bound improves on the best one previously available. The feasibility of extending the improvement to higher dimensions is also discussed.
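For context, the best-known general bound of this type is the Maassen-Uffink relation H(A) + H(B) >= -2*log2(c), where c is the largest overlap between the two measurement bases; any improved bound must beat it. A quick check for a qubit measured in two mutually unbiased bases:

```python
import numpy as np

def maassen_uffink_bound(basis_a, basis_b):
    """Maassen-Uffink entropic uncertainty bound: H(A) + H(B) >= -2*log2(c),
    where c is the largest overlap |<a_i|b_j>| between the two measurement
    bases (given as the columns of unitary matrices)."""
    overlaps = np.abs(basis_a.conj().T @ basis_b)
    return -2.0 * np.log2(overlaps.max())

comp = np.eye(2)                                   # computational basis
had = np.array([[1, 1], [1, -1]]) / np.sqrt(2.0)   # Hadamard basis
print(maassen_uffink_bound(comp, had))  # mutually unbiased: analytic value 1.0
```

For mutually unbiased bases in dimension d the overlap is 1/sqrt(d), so the bound is log2(d); improved bounds such as the one in this record tighten the inequality for bases that are not mutually unbiased.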
Reducing the uncertainties in particle therapy
Oancea, C.; Shipulin, K. N.; Mytsin, G. V.; Luchin, Y. I.
2015-02-24
The use of fundamental Nuclear Physics in Nuclear Medicine has a significant impact in the fight against cancer. Hadrontherapy is an innovative cancer radiotherapy method using nuclear particles (protons, neutrons and ions) for the treatment of early and advanced tumors. The main goal of proton therapy is to deliver high radiation doses to the tumor volume with minimal damage to healthy tissues and organs. The purpose of this work was to investigate the dosimetric errors in clinical proton therapy dose calculation due to the presence of metallic implants in the treatment plan, and to determine the impact of the errors. The results indicate that the errors introduced by the treatment planning systems are higher than 10% in the prediction of the dose at isocenter when the proton beam is passing directly through a metallic titanium alloy implant. In conclusion, we recommend that pencil-beam algorithms not be used when planning treatment for patients with titanium alloy implants, and to consider implementing methods to mitigate the effects of the implants.
River meander modeling and confronting uncertainty.
Posner, Ari J.
2011-05-01
This study examines the meandering phenomenon as it occurs in media throughout terrestrial, glacial, atmospheric, and aquatic environments. Analysis of the minimum energy principle, along with theories of Coriolis forces (and random walks) to explain the meandering phenomenon, found that these theories apply at different temporal and spatial scales. Coriolis forces might induce topological changes resulting in meandering planforms, and the minimum energy principle might explain how these forces combine to limit sinuosity to the depth and width ratios that are common throughout various media. The study then compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johannesson and Parker (1989b). The linear bank erosion model of Ikeda et al. was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g., cohesiveness, stratigraphy, or vegetation density). The developed model was used to predict the evolution of meandering planforms, and the modeling results were analyzed and compared to observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model.
Strydom, Gerhard; Bostelmann, Friederike; Ivanov, Kostadin
2014-10-01
The continued development of High Temperature Gas-cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. One way to address the uncertainties in HTGR analysis tools is to assess the sensitivity of critical parameters (such as the calculated maximum fuel temperature during loss of coolant accidents) to a few important input uncertainties. The input parameters were identified by engineering judgement in the past but are today typically based on a Phenomena Identification and Ranking Table (PIRT) process. The input parameters can also be derived from sensitivity studies and are then varied in the analysis to find the spread in the parameter of importance. However, there is often no easy way to compensate for these uncertainties. In engineering system design, a common approach for addressing performance uncertainties is to add compensating margins to the system, but with passive safety properties credited it is not so clear how to apply this in the case of the modular HTGR heat removal path. Other, more sophisticated uncertainty modelling approaches, including Monte Carlo analysis, have also been proposed and applied. Ideally one wishes to apply a more fundamental approach to determine the predictive capability and accuracy of the coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is broader acceptance of the use of uncertainty analysis even in safety studies, and in some cases regulators have accepted it as a replacement for the traditional conservative analysis. Some safety analysis calculations may therefore use a mixture of these approaches for different parameters, depending upon the particular requirements of the analysis problem involved.
Sensitivity analysis can, for example, be used to provide information as part of an uncertainty analysis to determine best-estimate-plus-uncertainty results to the required confidence level. To address uncertainty propagation in analysis methods in the HTGR community, the IAEA initiated a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) that officially started in 2013. Although this project focuses specifically on the peculiarities of HTGR designs and their simulation requirements, many lessons can be learned from the LWR community and the significant progress already made towards a consistent uncertainty analysis methodology. In the case of LWRs, the NRC amended 10 CFR 50.46 as early as 1988 to allow best-estimate (plus uncertainties) calculations of emergency core cooling system performance. The Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD) also established an Expert Group on "Uncertainty Analysis in Modelling", which led to the definition of the "Benchmark for Uncertainty Analysis in Modelling (UAM) for Design, Operation and Safety Analysis of LWRs". The CRP on HTGR UAM will follow the ongoing OECD Light Water Reactor UAM benchmark activity as far as possible.
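A standard ingredient of such best-estimate-plus-uncertainty analyses is Wilks' formula for the number of code runs needed to establish a one-sided tolerance limit at a given coverage and confidence. This is general background, not a result of the CRP; only the first-order form is sketched:

```python
import math

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest n such that the largest output of n random code runs bounds
    the `coverage` quantile with probability `confidence` (first-order,
    one-sided Wilks formula: 1 - coverage**n >= confidence)."""
    if order != 1:
        raise NotImplementedError("only the first-order formula is sketched")
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_sample_size())  # the classic 95/95 answer: 59
```

This is why 59 (or 93 for the second-order variant) code runs recur throughout LWR and HTGR uncertainty benchmarks: the sample size depends only on the coverage/confidence targets, not on the number of uncertain input parameters.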
Giant dipole resonance parameters with uncertainties from photonuclear cross sections
Plujko, V.A.; Capote, R.; Gorbachenko, O.M.
2011-09-15
Updated values and corresponding uncertainties of isovector giant dipole resonance (IVGDR or GDR) model parameters are presented, obtained by least-squares fitting of theoretical photoabsorption cross sections to experimental data. The theoretical photoabsorption cross section is taken as the sum of a component corresponding to excitation of the GDR and a quasideuteron contribution. The present compilation covers experimental data as of January 2010. Highlights: experimental sigma(gamma, abs) or a sum of partial cross sections is taken as input to the fitting; data include contributions from photoproton reactions; standard (SLO) or modified (SMLO) Lorentzian approaches are used for formulating GDR models; spherical or axially deformed nuclear shapes are used in the GDR least-squares fit; values and uncertainties of the SLO and SMLO GDR model parameters are tabulated.
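The SLO (standard Lorentzian) shape referred to above is sigma(E) = sigma_r * E^2 * Gamma_r^2 / ((E^2 - E_r^2)^2 + E^2 * Gamma_r^2). A sketch of a least-squares recovery of the resonance energy from synthetic data; the GDR parameter values are hypothetical, and a brute-force grid search stands in for the paper's full fitting procedure:

```python
import numpy as np

def sigma_slo(e, e_r, gamma_r, sigma_r):
    """Standard Lorentzian (SLO) GDR photoabsorption cross section."""
    return sigma_r * e**2 * gamma_r**2 / ((e**2 - e_r**2) ** 2 + e**2 * gamma_r**2)

# Synthetic "measurement" with hypothetical GDR parameters, then a
# brute-force least-squares recovery of the resonance energy E_r.
e = np.linspace(8.0, 22.0, 200)                            # photon energy, MeV
data = sigma_slo(e, e_r=14.0, gamma_r=4.5, sigma_r=350.0)  # cross section, mb
grid = np.arange(10.0, 18.0, 0.05)
sse = [np.sum((data - sigma_slo(e, g, 4.5, 350.0)) ** 2) for g in grid]
print(grid[int(np.argmin(sse))])  # recovers E_r ~ 14.0
```

In the actual compilation all three parameters (and a deformation splitting, where applicable) are fitted jointly, and the covariance of the fit supplies the quoted parameter uncertainties.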
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1981-02-11
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2, varying between two input terminals and representative of a series of single events where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal-selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1983-10-04
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2, varying between two input terminals and representative of a series of single events where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal-selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.
Reduction in maximum time uncertainty of paired time signals
Theodosiou, George E.; Dawson, John W.
1983-01-01
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2, varying between two input terminals and representative of a series of single events where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal-selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.
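A behavioral sketch of one reading of the circuit described in these records, modeling an OR gate as firing on the earliest input edge (min) and an AND gate on the latest (max), with a fixed delay pushing the earlier signal toward t_max. The gate-level details of the patented circuit are not reproduced, and the delay choice below is illustrative:

```python
def stage(t1, t2, delay):
    """One selection stage, modeled behaviorally: an OR gate fires on the
    earliest input edge (min), an AND gate on the latest (max); the
    earlier signal is first pushed later by a fixed delay."""
    a = t1 + delay
    return min(a, t2), max(a, t2)

# Event series with t1 + t2 = 100 (constant sum) and t1 in [30, 50].
events = [(t1, 100 - t1) for t1 in range(30, 51)]

def spread(evts):
    # Maximum time uncertainty across the series: latest max minus earliest min.
    return max(b for _, b in evts) - min(a for a, _ in evts)

print(spread(events))  # 40
staged = [stage(t1, t2, delay=20) for t1, t2 in events]
print(spread(staged))  # 20: one stage halves the maximum uncertainty
```

Because min(a, t2) + max(a, t2) = t1 + t2 + delay, the constant-sum property is preserved (shifted by the delay), so stages can be cascaded; repeated halving of the spread is consistent with the quoted reduction factors of 20 to 800 for a plurality of stages.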
Uncertainties in nuclear transition matrix elements of neutrinoless ββ decay
Rath, P. K.
2013-12-30
To estimate the uncertainties associated with the nuclear transition matrix elements M^(K) (K = 0ν/0N) for the 0+ → 0+ transitions of electron- and positron-emitting modes of neutrinoless ββ decay, a statistical analysis has been performed by calculating sets of eight (twelve) different nuclear transition matrix elements M^(K) in the PHFB model, employing four different parameterizations of a Hamiltonian with pairing plus multipolar effective two-body interaction and two (three) different parameterizations of Jastrow short-range correlations. The averages, in conjunction with their standard deviations, provide an estimate of the uncertainties associated with the nuclear transition matrix elements M^(K) calculated within the PHFB model, the maximum of which turns out to be 13% and 19% for the exchange of light and heavy Majorana neutrinos, respectively.
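The statistical estimate described above, averaging matrix elements over model parameterizations and quoting the standard deviation as the uncertainty, amounts to the following; the matrix element values are hypothetical placeholders for one nuclide's set:

```python
import statistics

# Hypothetical set of PHFB matrix elements M^(K) for one nuclide, computed
# with different interaction and short-range-correlation choices.
m_values = [3.1, 3.4, 2.9, 3.3, 3.6, 3.0, 3.2, 3.5]

mean = statistics.mean(m_values)
sd = statistics.stdev(m_values)                 # sample standard deviation
print(round(mean, 3), round(100.0 * sd / mean, 1))  # 3.25 7.5 (percent spread)
```

The quoted 13% and 19% maxima in the abstract are exactly this kind of relative spread, taken over the eight (or twelve) parameterization combinations per nuclide.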
Neutron reactions and climate uncertainties earn Los Alamos scientists DOE Early Career awards
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Marian Jandel and Nathan Urban are among the 61 national recipients of the Energy Department's Early Career Research Program awards for 2013 (announced May 10, 2013).
Comment on ''Improved bounds on entropic uncertainty relations''
Bosyk, G. M.; Portesi, M.; Plastino, A.; Zozor, S.
2011-11-15
We provide an analytical proof of the entropic uncertainty relations presented by J. I. de Vicente and J. Sanchez-Ruiz [Phys. Rev. A 77, 042110 (2008)] and also show that the replacement of Eq. (27) by Eq. (29) in that reference introduces solutions that do not take fully into account the constraints of the problem, which in turn lead to some mistakes in their treatment.
Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for Wind Energy (Sandia Energy)
Uncertainty quantification for evaluating impacts of caprock and reservoir properties on pressure buildup and ground surface displacement during geological CO2 sequestration (Journal Article)
Office of Scientific and Technical Information (OSTI)
Composite Multilinearity, Epistemic Uncertainty and Risk Achievement Worth
E. Borgonovo; C. L. Smith
2012-10-01
Risk Achievement Worth (RAW) is one of the most widely utilized importance measures. RAW is defined as the ratio of the risk metric value attained when a component has failed to the base case value of the risk metric. Traditionally, both the numerator and denominator are point estimates. Relevant literature has shown that inclusion of epistemic uncertainty (i) induces notable variability in the point estimate ranking and (ii) causes the expected value of the risk metric to differ from its nominal value. We obtain the conditions under which equality holds between the nominal and expected values of a reliability risk metric. Among these conditions, separability and state-of-knowledge independence emerge. We then study how the presence of epistemic uncertainty affects RAW and the associated ranking. We propose an extension of RAW (called ERAW) which allows one to obtain a ranking robust to epistemic uncertainty. We discuss the properties of ERAW and the conditions under which it coincides with RAW. We apply our findings to a probabilistic risk assessment model developed for the safety analysis of NASA lunar space missions.
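The RAW definition can be illustrated in a few lines. The series-parallel risk model and failure probabilities below are hypothetical, and the epistemic part simply compares the expected risk under sampled inputs with the nominal risk, a simplification of the paper's treatment rather than its exact ERAW:

```python
import random

def risk(p1, p2, p3):
    # Hypothetical risk metric: component 1 in series with the
    # parallel pair (2, 3): R = 1 - (1 - p1) * (1 - p2 * p3)
    return 1 - (1 - p1) * (1 - p2 * p3)

base = dict(p1=0.01, p2=0.05, p3=0.05)

def raw(component, probs):
    # RAW: risk with the component assumed failed over the base-case risk
    failed = dict(probs, **{component: 1.0})
    return risk(**failed) / risk(**probs)

point_ranking = {c: raw(c, base) for c in base}

# Epistemic uncertainty: sample the failure probabilities and compare the
# expected risk with the nominal point estimate risk(**base)
rng = random.Random(0)
sampled = [risk(p1=rng.uniform(0.005, 0.015),
                p2=rng.uniform(0.025, 0.075),
                p3=rng.uniform(0.025, 0.075)) for _ in range(20000)]
expected_base = sum(sampled) / len(sampled)
```

In this toy system the series component dominates the point-estimate ranking, which is the pattern RAW is designed to expose.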
Perko, Z.; Gilli, L.; Lathouwers, D.; Kloosterman, J. L.
2013-07-01
Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years, however, polynomial chaos expansion has become a popular alternative, providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and proves advantageous from both an accuracy and a computational point of view. As a demonstration, the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)
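A minimal non-intrusive polynomial chaos example, assuming a single standard-normal input and a cheap stand-in response function; the paper's adaptive sparse-grid and adaptive-basis machinery is far more general than this one-dimensional spectral projection:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Non-intrusive spectral projection: Y = f(X), X ~ N(0, 1), expanded in
# probabilists' Hermite polynomials He_n.  f stands in for an expensive code.
def f(x):
    return np.exp(0.3 * x)

order = 4
nodes, weights = He.hermegauss(order + 1)   # Gauss-Hermite rule, weight e^(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)      # normalize to the N(0,1) measure

# c_n = E[f(X) He_n(X)] / E[He_n(X)^2], with E[He_n(X)^2] = n!
coeffs = [np.sum(weights * f(nodes) * He.hermeval(nodes, [0] * n + [1]))
          / math.factorial(n) for n in range(order + 1)]

# Mean and variance fall out of the coefficients directly
mean_pce = coeffs[0]
var_pce = sum(math.factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1))
```

Five model evaluations reproduce the mean and variance that direct Monte Carlo would need thousands of samples to match, which is the convergence advantage the abstract reports.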
Achieving Robustness to Uncertainty for Financial Decision-making
Barnum, George M.; Van Buren, Kendra L.; Hemez, Francois M.; Song, Peter
2014-01-10
This report investigates the concept of robustness analysis to support financial decision-making. Financial models that forecast future stock returns or market conditions depend on assumptions that might be unwarranted and variables that might exhibit large fluctuations from their last-known values. The analysis of robustness explores these sources of uncertainty and recommends model settings such that the forecasts used for decision-making are as insensitive as possible to the uncertainty. A proof-of-concept is presented with the Capital Asset Pricing Model. The robustness of model predictions is assessed using info-gap decision theory. Info-gaps are models of uncertainty that express the “distance,” or gap of information, between what is known and what needs to be known in order to support the decision. The analysis yields a description of worst-case stock returns as a function of increasing gaps in our knowledge. The analyst can then decide on the best course of action by trading off worst-case performance with “risk”, which is how much uncertainty they think needs to be accommodated in the future. The report also discusses the Graphical User Interface, developed using the MATLAB® programming environment, such that the user can control the analysis through an easy-to-navigate interface. Three directions of future work are identified to enhance the present software. First, the code should be re-written in Python. This change will achieve greater cross-platform compatibility and better portability, allow for a more professional appearance, and render the software independent from a commercial license, which MATLAB® requires. Second, a capability should be developed to allow users to quickly implement and analyze their own models. This will facilitate application of the software to the evaluation of proprietary financial models. The third enhancement proposed is to add the ability to evaluate multiple models simultaneously.
When two models reflect past data with similar accuracy, the more robust of the two is preferable for decision-making because its predictions are, by definition, less sensitive to the uncertainty.
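The info-gap logic above can be sketched with a simple fractional-error uncertainty model; the nominal return and requirement levels are hypothetical:

```python
# Info-gap sketch: nominal forecast return r0 with uncertainty set
# U(h) = { r : |r - r0| <= h * |r0| } at horizon of uncertainty h.
def worst_case_return(r0, h):
    # Worst outcome inside the uncertainty set at horizon h
    return r0 * (1 - h)

def robustness(r0, r_req):
    # Largest horizon h at which the worst case still meets the requirement:
    # solve r0 * (1 - h) = r_req  =>  h = 1 - r_req / r0
    return max(0.0, 1 - r_req / r0)

r0 = 0.08   # nominal annual return (hypothetical)
curve = {r_req: robustness(r0, r_req) for r_req in (0.06, 0.07, 0.08)}
```

The trade-off is explicit: demanding the full nominal return leaves zero robustness, while relaxing the requirement buys immunity to larger forecast errors.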
Strydom, Gerhard; Bostelmann, F.
2015-09-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, and modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained.) SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties), extensive sensitivity and uncertainty studies are needed to quantify the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies.
In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on the HTGR Uncertainty Analysis in Modelling (UAM) be implemented. This CRP is a continuation of the previous IAEA and Organization for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) international activities on Verification and Validation (V&V) of available analytical capabilities for HTGR simulation for design and safety evaluations. Within the framework of these activities different numerical and experimental benchmark problems were performed and insight was gained about specific physics phenomena and the adequacy of analysis methods.
Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie; Eckert-Gallup, Aubrey Celia; Mattie, Patrick D.; Ghosh, S. Tina
2014-02-01
This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards, a ‘high’ source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of “zero” results).
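A minimal sketch contrasting Latin Hypercube and simple random sampling on a toy response; the stratified construction below is a standard textbook LHS, not the MACCS2 implementation, and the response function is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def lhs(n, d, rng):
    # One uniform draw per equal-probability stratum, independently
    # permuted in each dimension (standard Latin hypercube construction)
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

def srs(n, d, rng):
    # Simple random sampling: unstratified uniform draws
    return rng.random((n, d))

def f(x):
    return np.sum(x ** 2, axis=1)   # toy response; true mean in d=2 is 2/3

# Three replicates of size 1,000 per scheme, mirroring the convergence check
lhs_means = [f(lhs(1000, 2, rng)).mean() for _ in range(3)]
srs_means = [f(srs(1000, 2, rng)).mean() for _ in range(3)]
```

Comparing replicate-to-replicate scatter of the two sets of means is the same style of convergence evidence the SOARCA analysis uses, with LHS typically showing the smaller spread for smooth responses.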
Harper, F.T.; Young, M.L.; Miller, L.A.
1995-01-01
Two new probabilistic accident consequence codes, MACCS and COSYMA, whose development was completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the third of a three-volume document describing the project and contains descriptions of the probability assessment principles; the expert identification and selection process; the weighting methods used; the inverse modeling methods; case structures; and summaries of the consequence codes.
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; Novascone, Stephen R.; Perez, Danielle M.; Spencer, Benjamin W.; Luzzi, Lelio; Uffelen, Paul Van; Williamson, Richard L.
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
Donald M. McEligot; Hugh M. McIlroy, Jr.; Ryan C. Johnson
2007-11-01
The purpose of the fluid dynamics experiments in the MIR (Matched-Index-of-Refraction) flow system at Idaho National Laboratory (INL) is to develop benchmark databases for the assessment of Computational Fluid Dynamics (CFD) solutions of the momentum equations, scalar mixing, and turbulence models for typical Very High Temperature Reactor (VHTR) plenum geometries in the limiting case of negligible buoyancy and constant fluid properties. The experiments use optical techniques, primarily particle image velocimetry (PIV), in the INL MIR flow system. The benefit of the MIR technique is that it permits optical measurements of flow characteristics in passages and around objects without locating a disturbing transducer in the flow field and without distortion of the optical paths. The objective of the present report is to develop an understanding of the magnitudes of experimental uncertainties in the results to be obtained in such experiments. Unheated MIR experiments are first steps when the geometry is complicated; one does not want to use a computational technique that cannot even handle constant properties properly. This report addresses the general background, requirements for benchmark databases, estimation of experimental uncertainties in mean velocities and turbulence quantities, the MIR experiment, PIV uncertainties, positioning uncertainties, and other contributing measurement uncertainties.
Zettlemoyer, M.D.
1990-01-01
The Air Force Toxic Chemical Dispersion (AFTOX) model is a Gaussian puff dispersion model that predicts plumes, concentrations, and hazard distances of toxic chemical spills. A measurement uncertainty propagation formula derived by Freeman et al. (1986) is used within AFTOX to estimate the concentration uncertainties that result from data input uncertainties in wind speed, spill height, emission rate, and the horizontal and vertical Gaussian dispersion parameters; the results are compared to true uncertainties as estimated by standard deviations computed from Monte Carlo simulations. The measurement uncertainty propagation formula was found to overestimate the uncertainty in AFTOX-calculated concentrations by at least 350 percent, with overestimates worsening with increasing stability and/or increasing measurement uncertainty.
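The comparison the abstract describes can be sketched for a simple plume-like model: first-order propagation of relative input uncertainties versus a Monte Carlo estimate. The model form and all numbers are illustrative, not AFTOX's:

```python
import math
import random

# Centerline Gaussian-plume concentration (illustrative form):
# C = Q / (pi * u * sy * sz)
def conc(Q, u, sy, sz):
    return Q / (math.pi * u * sy * sz)

nominal = dict(Q=1.0, u=3.0, sy=30.0, sz=15.0)
rel_sigma = dict(Q=0.10, u=0.15, sy=0.20, sz=0.20)   # 1-sigma input fractions

# First-order propagation: for a pure product/quotient model the relative
# variances add in quadrature
prop = math.sqrt(sum(r ** 2 for r in rel_sigma.values()))

# Monte Carlo reference: perturb each input and look at the output spread
rng = random.Random(1)
samples = []
for _ in range(20000):
    perturbed = {k: v * rng.gauss(1.0, rel_sigma[k]) for k, v in nominal.items()}
    samples.append(conc(**perturbed))
mean = sum(samples) / len(samples)
std = math.sqrt(sum((c - mean) ** 2 for c in samples) / (len(samples) - 1))
mc = std / mean
```

For this multiplicative model the two estimates stay reasonably close; the large discrepancies Zettlemoyer reports arise because the plume's exponential terms make the true response strongly nonlinear in its inputs.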
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Office of Scientific and Technical Information (OSTI)
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating
Traceability Chain of Radiometric Measurements
Radiometric data with known and traceable uncertainty is essential for climate change studies to better understand cloud radiation interactions and the earth radiation budget. Further, adopting a known and traceable method of estimating uncertainty with respect to SI ensures that the uncertainty quoted for radiometric measurements can be compared based on documented methods of derivation. Currently, most radiometric data users rely on
Uncertainty Estimation of Radiometric Data Using a Guide to the Expression of Uncertainty in Measurement (GUM) Method
ICEM Conference, Boulder, CO, 06-25-2015. Presenter: Aron Habte, NREL. (D00-64984: NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, operated by the Alliance for Sustainable Energy, LLC.)
Motivation: to develop a consensus standard to estimate radiometric measurement uncertainty. At present the tendency is to look
Ideas underlying quantification of margins and uncertainties(QMU): a white paper.
Helton, Jon Craig; Trucano, Timothy Guy; Pilch, Martin M.
2006-09-01
This report describes key ideas underlying the application of Quantification of Margins and Uncertainties (QMU) to nuclear weapons stockpile lifecycle decisions at Sandia National Laboratories. While QMU is a broad process and methodology for generating critical technical information to be used in stockpile management, this paper emphasizes one component, which is information produced by computational modeling and simulation. In particular, we discuss the key principles of developing QMU information in the form of Best Estimate Plus Uncertainty, the need to separate aleatory and epistemic uncertainty in QMU, and the risk-informed decision making that is best suited for decisive application of QMU. The paper is written at a high level, but provides a systematic bibliography of useful papers for the interested reader to deepen their understanding of these ideas.
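The separation of aleatory and epistemic uncertainty that QMU requires is commonly realized with a two-loop sampling scheme. The sketch below uses illustrative numbers and a deliberately crude margin-over-uncertainty ratio, not the Sandia methodology itself:

```python
import random
import statistics

rng = random.Random(0)

# Outer loop: epistemic uncertainty (a distribution parameter we do not know).
# Inner loop: aleatory variability (irreducible randomness).  Keeping the loops
# separate yields a family of response distributions rather than one mixture.
threshold = 100.0                          # performance gate (illustrative)
outer = []
for _ in range(50):                        # epistemic loop
    mu = rng.uniform(78.0, 82.0)           # uncertain-but-fixed parameter
    inner = [rng.gauss(mu, 5.0) for _ in range(2000)]   # aleatory loop
    outer.append(statistics.mean(inner))

best_estimate = statistics.mean(outer)
margin = threshold - best_estimate
# Crude combined band: epistemic spread of the family plus the aleatory width
uncertainty = 2 * statistics.stdev(outer) + 2 * 5.0
confidence_ratio = margin / uncertainty    # QMU asks for M/U comfortably > 1
```

The point of the structure, rather than of the particular numbers, is that collapsing both loops into one sample would hide how much of the band is reducible by better knowledge.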
TRITIUM UNCERTAINTY ANALYSIS FOR SURFACE WATER SAMPLES AT THE SAVANNAH RIVER SITE
Atkinson, R.
2012-07-31
Radiochemical analyses of surface water samples, in the framework of Environmental Monitoring, have associated uncertainties for the radioisotopic results reported. These uncertainty analyses pertain to the tritium results from surface water samples collected at five locations on the Savannah River near the U.S. Department of Energy's Savannah River Site (SRS). Uncertainties can result from the field-sampling routine, can be incurred during transport due to the physical properties of the sample, from equipment limitations, and from the measurement instrumentation used. The uncertainty reported by the SRS in their Annual Site Environmental Report currently considers only the counting uncertainty in the measurements, which is the standard reporting protocol for radioanalytical chemistry results. The focus of this work is to provide an overview of all uncertainty components associated with SRS tritium measurements, estimate the total uncertainty according to ISO 17025, and to propose additional experiments to verify some of the estimated uncertainties. The main uncertainty components discovered and investigated in this paper are tritium absorption or desorption in the sample container, HTO/H{sub 2}O isotopic effect during distillation, pipette volume, and tritium standard uncertainty. The goal is to quantify these uncertainties and to establish a combined uncertainty in order to increase the scientific depth of the SRS Annual Site Environmental Report.
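A GUM-style combination of the uncertainty components named above adds independent standard uncertainties in quadrature. The component magnitudes here are illustrative, not the SRS values:

```python
import math

# Independent standard-uncertainty components (percent, 1-sigma; hypothetical)
components = {
    "counting statistics":         2.0,
    "tritium standard":            1.5,
    "pipette volume":              0.5,
    "container sorption":          1.0,
    "distillation isotope effect": 0.8,
}

# Combined standard uncertainty: root sum of squares
combined = math.sqrt(sum(u ** 2 for u in components.values()))
expanded = 2.0 * combined     # coverage factor k = 2 (~95 % confidence)
```

Note how the 2% counting term dominates but does not exhaust the budget, which is exactly the gap between the current counting-only reporting and the ISO 17025 combined uncertainty the abstract argues for.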
Entropic uncertainty relations for the ground state of a coupled system
Santhanam, M.S.
2004-04-01
There is renewed interest in the uncertainty principle reformulated from the information-theoretic point of view, in the form of entropic uncertainty relations. They have been studied for various integrable systems as a function of their quantum numbers. In this work, focusing on the ground state of a nonlinear, coupled Hamiltonian system, we show that approximate eigenstates can be constructed within the framework of adiabatic theory. Using the adiabatic eigenstates, we estimate the information entropies and their sum as a function of the nonlinearity parameter. We also briefly look at the information entropies for the highly excited states in the system.
Molecular nonlinear dynamics and protein thermal uncertainty quantification
Xia, Kelin (Department of Mathematics, Michigan State University, Michigan 48824, United States); Wei, Guo-Wei, E-mail: wei@math.msu.edu (Departments of Mathematics, Electrical and Computer Engineering, and Biochemistry and Molecular Biology, Michigan State University, Michigan 48824, United States)
2014-03-15
This work introduces molecular nonlinear dynamics (MND) as a new approach for describing protein folding and aggregation. By using a mode system, we show that the MND of disordered proteins is chaotic while that of folded proteins exhibits intrinsically low dimensional manifolds (ILDMs). The stability of ILDMs is found to strongly correlate with protein energies. We propose a novel method for protein thermal uncertainty quantification based on persistently invariant ILDMs. Extensive comparison with experimental data and the state-of-the-art methods in the field validate the proposed new method for protein B-factor prediction.
PROBABILISTIC SENSITIVITY AND UNCERTAINTY ANALYSIS WORKSHOP SUMMARY REPORT
Seitz, R
2008-06-25
Stochastic or probabilistic modeling approaches are being applied more frequently in the United States and globally to quantify uncertainty and enhance understanding of model response in performance assessments for disposal of radioactive waste. This increased use has resulted in global interest in sharing results of research and applied studies that have been completed to date. This technical report reflects the results of a workshop that was held to share results of research and applied work related to performance assessments conducted at United States Department of Energy sites. Key findings of this research and applied work are discussed and recommendations for future activities are provided.
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. 
These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
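The posterior-weighted averaging described above can be sketched as follows; the predictions, kriging variances, and model probabilities are hypothetical stand-ins for the four retained variogram models:

```python
# Each retained model supplies a prediction (e.g. kriged log permeability)
# and a within-model variance; weights are the updated model probabilities.
predictions = [(-13.2, 0.40), (-13.6, 0.35), (-12.9, 0.50), (-13.4, 0.30)]
post_prob = [0.40, 0.30, 0.20, 0.10]     # posterior model probabilities, sum to 1

# Model-averaged prediction
mean_avg = sum(p * m for p, (m, _) in zip(post_prob, predictions))

# Total predictive variance = expected within-model variance
# plus between-model variance of the means
var_avg = sum(p * (v + (m - mean_avg) ** 2)
              for p, (m, v) in zip(post_prob, predictions))
```

The between-model term is what single-model selection discards, which is why the averaged result can be more honest about uncertainty than any individual geostatistical model.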
Identifying and bounding uncertainties in nuclear reactor thermal power calculations
Phillips, J.; Hauser, E.; Estrada, H.
2012-07-01
Determination of the thermal power generated in the reactor core of a nuclear power plant is a critical element in the safe and economic operation of the plant. Direct measurement of the reactor core thermal power is made using neutron flux instrumentation; however, this instrumentation requires frequent calibration due to changes in the measured flux caused by fuel burn-up, flux pattern changes, and instrumentation drift. To calibrate the nuclear instruments, steam plant calorimetry, a process of performing a heat balance around the nuclear steam supply system, is used. There are four basic elements involved in the calculation of thermal power based on steam plant calorimetry: the mass flow of the feedwater from the power conversion system, the specific enthalpy of that feedwater, the specific enthalpy of the steam delivered to the power conversion system, and other cycle gains and losses. Of these elements, the accuracy of the feedwater mass flow and the feedwater enthalpy, as determined from its temperature and pressure, are typically the largest contributors to the calorimetric calculation uncertainty. Historically, plants have been required to include a margin of 2% in the calculation of the reactor thermal power for the licensed maximum plant output to account for instrumentation uncertainty. The margin is intended to ensure a cushion between operating power and the power for which safety analyses are performed. Use of approved chordal ultrasonic transit-time technology to make the feedwater flow and temperature measurements (in place of traditional differential-pressure-based instruments and resistance temperature detectors [RTDs]) allows for nuclear plant thermal power calculations accurate to 0.3%-0.4% of plant rated power. This improvement in measurement accuracy has allowed many plant operators in the U.S.
and around the world to increase plant power output through Measurement Uncertainty Recapture (MUR) up-rates of up to 1.7% of rated power, while also decreasing the probability of significant over-power events. This paper will examine the basic elements involved in calculation of thermal power using ultrasonic transit-time technology and will discuss the criteria for bounding uncertainties associated with each element in order to achieve reactor thermal power calculations to within 0.3% to 0.4%. (authors)
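A minimal sketch of the heat balance and the quadrature combination of the dominant measurement uncertainties; the enthalpies, flow rate, and uncertainty magnitudes are illustrative, not plant data:

```python
import math

# Simplified steam-plant calorimetry: core thermal power ~ feedwater flow
# times the steam/feedwater enthalpy rise (other cycle gains/losses omitted)
m_fw    = 1500.0     # feedwater mass flow, kg/s (hypothetical)
h_steam = 2770.0     # main steam specific enthalpy, kJ/kg (hypothetical)
h_fw    = 950.0      # feedwater specific enthalpy, kJ/kg (hypothetical)

power_mwt = m_fw * (h_steam - h_fw) / 1000.0     # thermal power, MWt

# Dominant 1-sigma relative uncertainties, combined in quadrature
u_flow     = 0.002   # chordal ultrasonic flow measurement (illustrative)
u_enthalpy = 0.003   # from feedwater temperature/pressure (illustrative)
u_power = math.sqrt(u_flow ** 2 + u_enthalpy ** 2)
```

With component uncertainties of this order, the combined figure lands in the 0.3%-0.4% band the paper cites, versus the historical 2% margin for differential-pressure instruments.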
Energy Price Volatility and Forecast Uncertainty: February 2010 Price Uncertainty Supplement
Gasoline and Diesel Fuel Update (EIA)
February 2010 Short-Term Energy Outlook, Energy Price Volatility and Forecast Uncertainty, February 12, 2010 Release. Crude Oil Prices. WTI crude oil spot prices averaged $78.33 per barrel in January 2010, almost $4 per barrel higher than the prior month's average and matching the $78-per-barrel forecast in last month's Outlook. The WTI spot price peaked at $83.12 on January 6 and then fell to $72.85 on January 29 as the weather turned warm and concerns about the strength of world economic
Uncertainty Analysis of RELAP5-3D
Alexandra E Gertman; Dr. George L Mesina
2012-07-01
As world-wide energy consumption continues to increase, so does the demand for alternative energy sources such as nuclear energy. Nuclear power plants currently supply over 370 gigawatts of electricity, and more than 60 new nuclear reactors have been commissioned by 15 different countries. The primary concern for nuclear power plant operation and licensing has been safety. The safety of nuclear power plant operation is no simple matter: it involves the training of operators and the design of the reactor, as well as equipment and design upgrades throughout the lifetime of the reactor. To safely design, operate, and understand nuclear power plants, industry and government alike have relied upon the use of best-estimate simulation codes, which allow an accurate model of any given plant to be created with well-defined margins of safety. The most widely used of these best-estimate simulation codes in the nuclear power industry is RELAP5-3D. Our project focused on improving the modeling capabilities of RELAP5-3D by developing uncertainty estimates for its calculations. This work involved analyzing high-, medium-, and low-ranked phenomena from an INL PIRT on a small-break Loss-Of-Coolant Accident, as well as an analysis of a large-break Loss-Of-Coolant Accident. Statistical analyses were performed using correlation coefficients. To perform the studies, computer programs were written that modify a template RELAP5 input deck to produce one deck for each combination of key input parameters. Python scripting enabled the running of the generated input files with RELAP5-3D on INL's massively parallel cluster system. Data from the studies were collected and analyzed with SAS. A summary of the results of our studies is presented.
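The deck-generation step, one input file per combination of key parameters, can be sketched as below. The template fragment and parameter names are hypothetical; real RELAP5 decks use card-number syntax, and this only illustrates the substitution idea.

```python
from itertools import product
from string import Template

# Hypothetical fragment of a template input deck (not real RELAP5 card syntax).
deck_template = Template("* break area = $area ft2\n* discharge coeff = $cd\n")

areas = [0.01, 0.02]          # assumed values of one key input parameter
cds = [0.8, 1.0, 1.2]         # assumed values of another

# One deck per combination of the key input parameters: 2 * 3 = 6 decks.
decks = [deck_template.substitute(area=a, cd=c) for a, c in product(areas, cds)]
```

Each generated deck would then be submitted as a separate RELAP5-3D run, with the outputs collected for the correlation-coefficient analysis.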
Design Features and Technology Uncertainties for the Next Generation Nuclear Plant
John M. Ryskamp; Phil Hildebrandt; Osamu Baba; Ron Ballinger; Robert Brodsky; Hans-Wolfgang Chi; Dennis Crutchfield; Herb Estrada; Jeane-Claude Garnier; Gerald Gordon; Richard Hobbins; Dan Keuter; Marilyn Kray; Philippe Martin; Steve Melancon; Christian Simon; Henry Stone; Robert Varrin; Werner von Lensa
2004-06-01
This report presents the conclusions, observations, and recommendations of the Independent Technology Review Group (ITRG) regarding design features and important technology uncertainties associated with very-high-temperature nuclear system concepts for the Next Generation Nuclear Plant (NGNP). The ITRG performed its reviews during the period November 2003 through April 2004.
Analysis and Reduction of Complex Networks Under Uncertainty.
Ghanem, Roger G
2014-07-31
This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University), and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of 1) methodology and algorithms to address the eigenvalue problem, a problem of significance for the stability of networks under stochastic perturbations; 2) methodology and algorithms to characterize probability measures on graph structures with random flows, an important problem in characterizing random demand (encountered in smart grids) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov chains (with ubiquitous relevance!); and 3) methodology and algorithms for treating inequalities in uncertain systems, an important problem in the context of models for material failure and network flows under uncertainty, where conditions of failure or flow are described in the form of inequalities between the state variables.
The Uncertainty in the Local Seismic Response Analysis
Pasculli, A.; Pugliese, A.; Romeo, R. W.; Sano, T.
2008-07-08
This paper shows how accounting for dispersion and uncertainty in the seismic input, as well as in the dynamic properties of soils, influences local seismic response analysis. As a first step, a 1D numerical model is developed accounting for both the aleatory nature of the input motion and the stochastic variability of the dynamic properties of soils. The seismic input is introduced in a non-conventional way through a power spectral density, for which an elastic response spectrum, derived for instance from a conventional seismic hazard analysis, is required with an appropriate level of reliability. The uncertainty in the geotechnical properties of soils is instead investigated through a well-known simulation technique (the Monte Carlo method) for the construction of statistical ensembles. The result of a conventional local seismic response analysis, given by a deterministic elastic response spectrum, is replaced in our approach by a set of statistical elastic response spectra, each one characterized by an appropriate probability of being reached or exceeded. The analyses have been carried out for a well-documented real case study. Lastly, we outline a planned 2D numerical analysis to also investigate the spatial variability of soil properties.
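The Monte Carlo step, sampling uncertain soil properties and reading off percentile response levels, can be sketched as follows. The toy amplification model and the lognormal scatter on shear-wave velocity are assumptions for illustration, not the paper's model.

```python
import math
import random
import statistics

def amplification(vs):
    """Hypothetical 1D site amplification: softer soil (lower Vs) amplifies more."""
    return 2000.0 / vs

random.seed(0)
# Lognormal scatter on shear-wave velocity around 400 m/s (assumed sigma_lnVs = 0.2).
samples = sorted(amplification(random.lognormvariate(math.log(400.0), 0.2))
                 for _ in range(2000))

median = statistics.median(samples)
p84 = samples[int(0.84 * len(samples))]   # 84th-percentile spectral ordinate
# Each percentile plays the role of a statistical response spectrum with a
# given probability of being reached or exceeded.
```

In the full analysis each Monte Carlo realization yields an entire response spectrum, and the ensemble is summarized ordinate by ordinate in the same way.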
Quantifying uncertainty in material damage from vibrational data
Butler, T.; Huhtala, A.; Juntunen, M.
2015-02-15
The response of a vibrating beam to a force depends on many physical parameters including those determined by material properties. Damage caused by fatigue or cracks results in local reductions in stiffness parameters and may drastically alter the response of the beam. Data obtained from the vibrating beam are often subject to uncertainties and/or errors typically modeled using probability densities. The goal of this paper is to estimate and quantify the uncertainty in damage modeled as a local reduction in stiffness using uncertain data. We present various frameworks and methods for solving this parameter determination problem. We also describe a mathematical analysis to determine and compute useful output data for each method. We apply the various methods in a specified sequence that allows us to interface the various inputs and outputs of these methods in order to enhance the inferences drawn from the numerical results obtained from each method. Numerical results are presented using both simulated and experimentally obtained data from physically damaged beams.
Data Filtering Impact on PV Degradation Rates and Uncertainty (Poster)
Jordan, D. C.; Kurtz, S. R.
2012-03-01
To sustain the commercial success of photovoltaics (PV), it becomes vital to know how power output decreases with time. In order to predict power delivery, degradation rates must be determined accurately. Data filtering, i.e., any treatment of the data prior to assessing long-term field behavior, is discussed as part of a more comprehensive uncertainty analysis; it can be one of the greatest sources of uncertainty in long-term performance studies. Several distinct filtering methods, such as outlier removal and inclusion of only sunny days, were examined on several different metrics: PVUSA, performance ratio, and the DC power to plane-of-array irradiance ratio, both uncorrected and temperature-corrected. PVUSA showed the highest sensitivity, while the temperature-corrected power over irradiance ratio was found to be the least sensitive to data filtering conditions. Using this ratio, it is demonstrated that quantification of degradation rates with a statistical accuracy of +/- 0.2%/year within 4 years of field data is possible on two crystalline silicon and two thin-film systems.
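The degradation-rate extraction itself is typically a least-squares slope of a performance metric versus time, expressed in %/year. This is a minimal sketch with made-up monthly data, not the paper's filtered field data.

```python
def degradation_rate_pct_per_year(t_years, perf):
    """Least-squares slope of performance vs. time, as percent of initial value per year."""
    n = len(t_years)
    tbar = sum(t_years) / n
    pbar = sum(perf) / n
    slope = sum((t - tbar) * (p - pbar) for t, p in zip(t_years, perf)) / \
            sum((t - tbar) ** 2 for t in t_years)
    return 100.0 * slope / perf[0]

t = [i / 12 for i in range(48)]            # 4 years of monthly data points
perf = [1.0 - 0.005 * ti for ti in t]      # synthetic ratio decaying at 0.5%/year
rate = degradation_rate_pct_per_year(t, perf)
```

Data filtering changes which (t, perf) pairs enter this fit, which is exactly why it can dominate the uncertainty of the extracted rate.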
WE-B-19A-01: SRT II: Uncertainties in SRT
Dieterich, S; Schlesinger, D; Geneser, S
2014-06-15
SRS delivery has undergone major technical changes in the last decade, transitioning from predominantly frame-based treatment delivery to image-guided, frameless SRS. It is important for medical physicists working in SRS to understand the magnitude and sources of uncertainty involved in delivering SRS treatments across a multitude of technologies (Gamma Knife, CyberKnife, linac-based SRS, and protons). Sources of SRS planning and delivery uncertainty include dose calculation, image fusion, and intra- and inter-fraction motion. Dose calculations for small fields are particularly difficult because of the lack of electronic equilibrium and the greater effect of inhomogeneities within and near the PTV. Going frameless introduces greater setup uncertainties and allows for potentially increased intra- and inter-fraction motion. The increased use of multiple imaging modalities to determine the tumor volume necessitates (deformable) image and contour fusion, and the resulting uncertainties introduced in the image registration process further contribute to overall treatment planning uncertainties. Each of these uncertainties must be quantified and their impact on treatment delivery accuracy understood. If necessary, the uncertainties may then be accounted for during treatment planning, either through techniques that make the uncertainty explicit or by the appropriate addition of PTV margins. Further complicating matters, the statistics of 1-5 fraction SRS treatments differ from traditional margin recipes relying on Poisson statistics. In this session, we will discuss uncertainties introduced during each step of the SRS treatment planning and delivery process and present margin recipes to appropriately account for such uncertainties. Learning Objectives: To understand the major contributors to the total delivery uncertainty in SRS for Gamma Knife, CyberKnife, and linac-based SRS. 
Learn the various uncertainties introduced by image fusion, deformable image registration, and contouring variation. Learn a variety of strategies for dealing with uncertainty, including margin recipes and explicit visualization of uncertainty. Understand how the assessment of PTV margins differs from regular fractionation (van Herk recipe) for 1-5 fraction deliveries.
Quantification of initial-data uncertainty on a shock-accelerated gas cylinder
Tritschler, V. K.; Avdonin, A.; Hickel, S.; Hu, X. Y.; Adams, N. A.
2014-02-15
We quantify initial-data uncertainties on a shock-accelerated heavy-gas cylinder by two-dimensional well-resolved direct numerical simulations. A high-resolution compressible multicomponent flow simulation model is coupled with a polynomial chaos expansion to propagate the initial-data uncertainties to the output quantities of interest. The initial flow configuration follows previous experimental and numerical works on the shock-accelerated heavy-gas cylinder. We investigate three main initial-data uncertainties: (i) shock Mach number, (ii) contamination of SF6 with acetone, and (iii) initial deviations of the heavy-gas region from a perfect cylindrical shape. The impact of initial-data uncertainties on the mixing process is examined. The results suggest that the mixing process is highly sensitive to input variations of shock Mach number and acetone contamination. Additionally, our results indicate that the measured shock Mach number in the experiment of Tomkins et al. [An experimental investigation of mixing mechanisms in shock-accelerated flow, J. Fluid Mech. 611, 131 (2008)] and the estimated contamination of the SF6 region with acetone [S. K. Shankar, S. Kawai, and S. K. Lele, Two-dimensional viscous flow simulation of a shock accelerated heavy gas cylinder, Phys. Fluids 23, 024102 (2011)] exhibit deviations from those that lead to best agreement between our simulations and the experiment in terms of overall flow evolution.
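The polynomial chaos propagation step can be shown in miniature. The quadratic response model and the single standard-normal input below are illustrative assumptions, not the paper's flow model: the output is projected onto probabilists' Hermite polynomials using a 3-point Gauss-Hermite rule, and the mean and variance are read directly from the chaos coefficients.

```python
import math

# 3-point Gauss-Hermite rule for the standard normal weight (exact to degree 5).
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def pce_coeffs(f):
    """Project f(xi), xi ~ N(0,1), onto He0 = 1, He1 = x, He2 = x^2 - 1."""
    he = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]
    norms = [1.0, 1.0, 2.0]          # E[He_k^2] = k!
    return [sum(w * f(x) * he[k](x) for w, x in zip(weights, nodes)) / norms[k]
            for k in range(3)]

# Hypothetical quantity of interest: f(xi) = 1 + 2*xi + 3*xi^2.
c = pce_coeffs(lambda x: 1.0 + 2.0 * x + 3.0 * x * x)
mean = c[0]                                   # = 4
var = c[1] ** 2 * 1.0 + c[2] ** 2 * 2.0       # sum of c_k^2 * E[He_k^2] = 22
```

In the actual study the inputs (Mach number, contamination, shape deviation) span a multi-dimensional random space and f is a full flow simulation, but the projection structure is the same.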
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; Novascone, Stephen R.; Perez, Danielle M.; Spencer, Benjamin W.; Luzzi, Lelio; Uffelen, Paul Van; Williamson, Richard L.
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
Science based stockpile stewardship, uncertainty quantification, and fission fragment beams
Stoyer, M A; McNabb, D; Burke, J; Bernstein, L A; Wu, C Y
2009-09-14
Stewardship of this nation's nuclear weapons is predicated on developing a fundamental scientific understanding of the physics and chemistry required to describe weapon performance without the need to resort to underground nuclear testing, and to predict expected future performance as a result of intended or unintended modifications. In order to construct more reliable models, underground nuclear test data are being reanalyzed in novel ways. The extent to which underground experimental data can be matched with simulations is one measure of the credibility of our capability to predict weapon performance. To improve the interpretation of these experiments with quantified uncertainties, improved nuclear data are required. As an example, the fission yield of a device was often determined by measuring fission products. Conversion of the measured fission products to yield was accomplished through explosion code calculations (models) and a good set of nuclear reaction cross-sections. Because of the unique high-fluence environment of an exploding nuclear weapon, many reactions occurred on radioactive nuclides, for which only theoretically calculated cross-sections are available. Inverse kinematics reactions at CARIBU offer the opportunity to measure cross-sections on unstable neutron-rich fission fragments and thus improve the quality of the nuclear reaction cross-section sets. One of the fission products measured was 95Zr, the accumulation of all mass-95 fission products of Y, Sr, Rb, and Kr (see Fig. 1). Subsequent neutron-induced reactions on these short-lived fission products were assumed to cancel out; in other words, the destruction of mass-95 nuclides was more or less equal to the production of mass-95 nuclides. If a 95Sr was destroyed by an (n,2n) reaction, it was also produced by (n,2n) reactions on 96Sr, for example. 
However, since these nuclides all have fairly short half-lives (seconds to minutes or even less), no experimental nuclear reaction cross-sections exist, and only theoretically modeled cross-sections are available. Inverse kinematics reactions at CARIBU offer the opportunity, should the beam intensity be sufficient, to measure cross-sections on a few important nuclides in order to benchmark the theoretical calculations and significantly improve the nuclear data. The nuclides in Fig. 1 are prioritized by importance factor and displayed in stoplight colors, green the highest and red the lowest priority.
Reda, I.
2011-07-01
The uncertainty of measuring solar irradiance is fundamentally important for solar energy and atmospheric science applications. Without an uncertainty statement, the quality of a result, model, or testing method cannot be quantified, the chain of traceability is broken, and confidence cannot be maintained in the measurement. Measurement results are incomplete and meaningless without a statement of the estimated uncertainty with traceability to the International System of Units (SI) or to another internationally recognized standard. This report explains how to use the international Guide to the Expression of Uncertainty in Measurement (GUM) to calculate such uncertainty. The report also shows that without appropriate corrections to solar measuring instruments (solar radiometers), the uncertainty of measuring shortwave solar irradiance can exceed 4% using present state-of-the-art pyranometers and 2.7% using present state-of-the-art pyrheliometers. Finally, the report demonstrates that by applying the appropriate corrections, uncertainties may be reduced by at least 50%. The uncertainties, with or without the appropriate corrections, might not be compatible with the needs of solar energy and atmospheric science applications; yet this report may shed some light on the sources of uncertainty and the means to reduce overall uncertainty in measuring solar irradiance.
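The GUM propagation that such a report relies on combines input uncertainties through sensitivity coefficients. A minimal sketch, with finite-difference sensitivities and a hypothetical two-input irradiance example (a detector signal times a calibration factor), assuming uncorrelated inputs:

```python
import math

def combined_uncertainty(f, x, u, h=1e-6):
    """GUM-style u_c(y) = sqrt(sum_i (df/dx_i * u_i)^2) for uncorrelated inputs,
    with sensitivity coefficients estimated by central finite differences."""
    uc2 = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        ci = (f(xp) - f(xm)) / (2 * h)   # sensitivity coefficient dy/dx_i
        uc2 += (ci * u[i]) ** 2
    return math.sqrt(uc2)

# Hypothetical example: irradiance y = signal * calibration_factor,
# with u(signal) = 1.0 and u(factor) = 0.01 in the respective units.
uc = combined_uncertainty(lambda v: v[0] * v[1], [100.0, 0.95], [1.0, 0.01])
```

For a product like this, the result reduces to the familiar quadrature sum of relative uncertainties; correlated inputs would add covariance terms the sketch omits.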
Long-term energy generation planning under uncertainty
Escudero, L.F.; Paradinas, I.; Salmeron, J.; Sanchez, M.
1998-07-01
In this work the authors deal with the hydro-thermal coordination problem under uncertainty in generator availability, fuel costs, exogenous water inflow, and energy demand. The objective is to minimize the system operating cost. The decision variables are the fuel procurement for each thermal generation site, the energy generated by each thermal and hydro generator, and the released and spilled water from reservoirs. Control variables are the stored water in reservoirs and the stored fuel in thermal plants at the end of each time period. The main contribution of the proposed approach is the simultaneous inclusion of the hydro-network and thermal-generation constraints, together with the stochastic treatment of the aforementioned parameters. The authors report their computational experience on real problems drawn from the Spanish hydro-thermal generation system. One test case includes 85 generators (42 thermal plants with a global 27,084-MW capacity) and 57 reservoirs.
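The deterministic core of such a coordination model is an economic dispatch: meet demand at least cost subject to unit capacities. The merit-order sketch below, with made-up units and costs, ignores the network, storage, and stochastic aspects that are the paper's actual contribution; it only illustrates the cost-minimizing dispatch idea.

```python
# Hypothetical units: (name, capacity_MW, marginal cost per MWh).
units = [
    ("hydro", 500.0, 0.0),    # stored water treated as free at the margin here
    ("coal", 800.0, 30.0),
    ("gas", 600.0, 60.0),
]

def dispatch(demand_mw):
    """Greedy merit-order dispatch: cheapest available energy first."""
    schedule, cost = {}, 0.0
    for name, cap, c in sorted(units, key=lambda u: u[2]):
        g = min(cap, demand_mw)
        schedule[name] = g
        cost += g * c
        demand_mw -= g
    return schedule, cost

sched, cost = dispatch(1200.0)
```

In the stochastic setting, the "free" water carries an opportunity cost (its value in future periods under uncertain inflows), which is what the reservoir storage variables capture.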
Uncertainty of silicon 1-MeV damage function
Danjaji, M.B.; Griffin, P.J.
1997-02-01
The electronics radiation hardness-testing community uses the ASTM E722-93 Standard Practice to define the energy dependence of the nonionizing neutron damage to silicon semiconductors. This neutron displacement damage response function is defined to be equal to the silicon displacement kerma as calculated from the ORNL Si cross-section evaluation. Experimental work has shown that observed damage ratios at various test facilities agree with the defined response function to within 5%. Here, a covariance matrix for the silicon 1-MeV neutron displacement damage function is developed. This uncertainty data will support the electronic radiation hardness-testing community and will permit silicon displacement damage sensors to be used in least squares spectrum adjustment codes.
Uncertainty in terahertz time-domain spectroscopy measurement
Withayachumnankul, Withawat; Fischer, Bernd M.; Lin, Hungyen; Abbott, Derek
2008-06-15
Measurements of optical constants at terahertz (T-ray) frequencies have been performed extensively using terahertz time-domain spectroscopy (THz-TDS). Spectrometers, together with physical models explaining the interaction between a sample and T-ray radiation, are progressively being developed. Nevertheless, measurement errors in the optical constants have so far not been systematically analyzed. This situation calls for a comprehensive analysis of measurement uncertainty in THz-TDS systems. The sources of error existing in a terahertz spectrometer and throughout the parameter estimation process are identified. The analysis herein quantifies the impact of each source on the output optical constants. The resulting analytical model is evaluated against experimental THz-TDS data.
Luis, Alfredo
2011-09-15
We show within a very simple framework that different measures of fluctuations lead to uncertainty relations with contradictory conclusions. More specifically, we focus on Tsallis and Renyi entropic uncertainty relations and find that the minimum joint uncertainty states for some fluctuation measures are the maximum joint uncertainty states for other fluctuation measures, and vice versa.
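The underlying point, that the ordering of "uncertainty" depends on the chosen measure, can be seen with plain probability distributions. In the hypothetical example below (not from the paper), Shannon entropy (the Renyi limit alpha = 1) ranks q as more uncertain than p, while Renyi entropy with alpha = 2 ranks p as more uncertain.

```python
import math

def renyi(p, alpha):
    """Renyi entropy in bits; alpha = 1 is taken as the Shannon limit."""
    if alpha == 1:
        return -sum(x * math.log2(x) for x in p if x > 0)
    return math.log2(sum(x ** alpha for x in p)) / (1 - alpha)

p = [0.5, 0.25, 0.25]          # few outcomes, fairly even
q = [0.6] + [0.04] * 10        # one dominant outcome plus a long thin tail

# Shannon: H1(q) > H1(p) (the long tail adds entropy).
# Renyi alpha=2: H2(p) > H2(q) (collision probability is dominated by the spike).
```

Since alpha = 2 weights the largest probabilities most heavily, the spike in q dominates its collision entropy while the tail that inflated its Shannon entropy barely counts.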
Asymptotic and uncertainty analyses of a phase field model for void formation under irradiation
Office of Scientific and Technical Information (OSTI)
We perform asymptotic analysis and uncertainty quantification of a phase field model for void formation and evolution in materials subject to irradiation.
Principles and applications of measurement and uncertainty analysis in research and calibration
Wells, C.V.
1992-11-01
Interest in Measurement Uncertainty Analysis has grown in the past several years as it has spread to new fields of application, and research and development of uncertainty methodologies have continued. This paper discusses the subject from the perspectives of both research and calibration environments. It presents a history of the development and an overview of the principles of uncertainty analysis embodied in the United States National Standard, ANSI/ASME PTC 19.1-1985, Measurement Uncertainty. Examples are presented in which uncertainty analysis was utilized or is needed to gain further knowledge of a particular measurement process and to characterize final results. Measurement uncertainty analysis provides a quantitative estimate of the interval about a measured value or an experiment result within which the true value of that quantity is expected to lie. Years ago, Harry Ku of the United States National Bureau of Standards stated that "The informational content of the statement of uncertainty determines, to a large extent, the worth of the calibrated value." Today, that statement is just as true about calibration or research results as it was in 1968. Why is that true? What kind of information should we include in a statement of uncertainty accompanying a calibrated value? How and where do we get the information to include in an uncertainty statement? How should we interpret and use measurement uncertainty information? This discussion will provide answers to these and other questions about uncertainty in research and in calibration. The methodology to be described has been developed by national and international groups over the past nearly thirty years, and individuals were publishing information even earlier. Yet the work is largely unknown in many science and engineering arenas. I will illustrate various aspects of uncertainty analysis with some examples drawn from the radiometry measurement and calibration discipline from research activities.
Final Report: DOE Project DE-SC-0005399: Linking the uncertainty of low frequency variability in tropical forcing in regional climate change
Office of Scientific and Technical Information (OSTI)
Mass Measurement Uncertainty for Plutonium Aliquots Assayed by Controlled-Potential Coulometry
Holland, M.; Cordaro, J.
2009-03-18
Minimizing plutonium measurement uncertainty is essential to nuclear material control and international safeguards. In 2005, the International Organization for Standardization (ISO) published ISO 12183, 'Controlled-potential coulometric assay of plutonium', 2nd edition. ISO 12183:2005 recommends a target of ±0.01% for the mass of original sample in the aliquot because it is a critical assay variable. Mass measurements in radiological containment were evaluated and uncertainties estimated. The uncertainty estimate for the mass measurement also includes the uncertainty in correcting for buoyancy effects, from air acting as a fluid and from the decreased pressure of air heated by the specific heat of the plutonium isotopes.
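The conventional air-buoyancy correction mentioned above can be sketched as follows. The densities are illustrative assumptions (standard air, steel calibration weights, a dense plutonium-bearing sample), not values from the paper.

```python
def buoyancy_corrected_mass(reading_g, rho_air=0.0012, rho_weights=8.0, rho_sample=19.8):
    """Standard air-buoyancy correction, densities in g/cm^3: the balance is
    calibrated against reference weights of density rho_weights, so a sample of
    different density displaces a different volume of air than the weights do."""
    return reading_g * (1 - rho_air / rho_weights) / (1 - rho_air / rho_sample)

m = buoyancy_corrected_mass(1.000000)   # corrected mass for a 1 g reading
```

For these assumed densities the correction is on the order of 0.01% of the reading, i.e., the same order as the ISO 12183:2005 target, which is why it cannot be neglected in the uncertainty budget.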
Wheeler, Timothy A.; Wyss, Gregory D.; Harper, Frederick T.
2000-11-01
Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.
CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements
Bergman, Rolf; Paget, Maria L.; Richman, Eric E.
2011-03-31
With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, the Lighting Facts Label, ENERGY STAR energy-efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement, in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements of light emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, developed in close coordination with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. 
The steps of the procedure are described and a spreadsheet format adapted for integrating sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate values and, finally, obtaining expanded uncertainties. Using this basis and examining each step of the photometric measurement and calibration methods, mathematical uncertainty models are developed. Determination of estimated values of input variables is discussed. Guidance is provided for the evaluation of the standard uncertainties of each input estimate, covariances associated with input estimates and the calculation of the result measurements. With this basis, the combined uncertainty of the measurement results and finally, the expanded uncertainty can be determined.
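The final roll-up step of such a spreadsheet can be sketched in a few lines: combine the component standard uncertainties in quadrature (assuming they are uncorrelated), then multiply by a coverage factor to obtain the expanded uncertainty. The component names and values below are illustrative, not the CALiPER spreadsheet entries.

```python
import math

# Hypothetical standard-uncertainty components, in percent of luminous flux.
components_pct = {
    "calibration standard": 0.5,
    "sphere spatial non-uniformity": 0.3,
    "detector linearity": 0.1,
    "repeatability": 0.2,
}

# Combined standard uncertainty: quadrature sum of uncorrelated components.
u_c = math.sqrt(sum(u ** 2 for u in components_pct.values()))

# Expanded uncertainty with coverage factor k = 2 (roughly 95% coverage).
U = 2.0 * u_c
```

Covariances between input estimates, which the report's full procedure accounts for, would add cross terms to the quadrature sum.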
Nguyen, J.; Moteabbed, M.; Paganetti, H.
2015-01-15
Purpose: Theoretical dose–response models offer the possibility to assess second cancer induction risks after external beam therapy. The parameters used in these models are determined with limited data from epidemiological studies. Risk estimations are thus associated with considerable uncertainties. This study aims to illustrate these uncertainties when predicting the risk for organ-specific second cancers in the primary radiation field, using selected treatment plans for brain cancer patients. Methods: A widely used risk model was considered in this study. The uncertainties of the model parameters were estimated with reported data of second cancer incidences for various organs. Standard error propagation was then applied to assess the uncertainty in the risk model. Next, second cancer risks of five pediatric patients treated for cancer in the head and neck regions were calculated. For each case, treatment plans for proton and photon therapy were designed to estimate the uncertainties (a) in the lifetime attributable risk (LAR) for a given treatment modality and (b) when comparing risks of two different treatment modalities. Results: Uncertainties in excess of 100% of the risk were found for almost all organs considered. When applied to treatment plans, the calculated LAR values have uncertainties of the same magnitude. A comparison between cancer risks of different treatment modalities, however, does allow statistically significant conclusions. In the studied cases, the patient-averaged LAR ratio of proton and photon treatments was 0.35, 0.56, and 0.59 for brain carcinoma, brain sarcoma, and bone sarcoma, respectively. Their corresponding uncertainties were estimated to be potentially below 5%, depending on uncertainties in dosimetry. Conclusions: The uncertainty in the dose–response curve in cancer risk models makes it currently impractical to predict the risk for an individual external beam treatment. 
On the other hand, the ratio of absolute risks between two modalities is less sensitive to the uncertainties in the risk model and can provide statistically significant estimates.
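The abstract's central point, that absolute risks carry the full model uncertainty while the ratio between modalities largely cancels it, can be illustrated with a toy Monte Carlo sketch. The one-parameter linear risk model and all numbers below are assumptions for illustration, not the paper's actual dose-response model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical one-parameter risk model: LAR = k * organ dose. The slope k
# carries >100% relative uncertainty and is shared between modalities.
k = rng.lognormal(mean=0.0, sigma=0.8, size=n)
dose_proton, dose_photon = 2.0, 5.0        # illustrative mean organ doses (Gy)

lar_proton = k * dose_proton
lar_photon = k * dose_photon

# Absolute risks inherit the full model uncertainty...
rel_unc_abs = lar_proton.std() / lar_proton.mean()
# ...while the modality ratio cancels the shared slope entirely.
ratio = lar_proton / lar_photon
print(rel_unc_abs, ratio.mean(), ratio.std())
```

In this sketch the relative uncertainty of the absolute LAR is near 100%, while the proton/photon ratio is essentially exact, mirroring the paper's qualitative conclusion.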
A Preliminary Study to Assess Model Uncertainties in Fluid Flows
Delchini, Marc Oliver; Ragusa, Jean C.
2009-09-01
The goal of this study is to assess the impact of various flow models for a simplified primary coolant loop of a light water nuclear reactor. The fluid flow models are based on the Euler equations with additional friction, gravity, momentum-source, and energy-source terms. The geometric model is purposefully kept simple and consists of a one-dimensional (1D) loop system in order to focus the study on the validity of the various fluid flow approximations. The 1D loop system is represented by a rectangle; the fluid is heated up along one of the vertical legs and cooled down along the opposite leg. A pressurizer and a pump are included in the horizontal legs. The amounts of energy transferred to and removed from the system are equal in absolute value along the two vertical legs. The fluid flow approximations compared are compressible vs. incompressible, and the complete momentum equation vs. Darcy's approximation. The ultimate goal is to compute the fluid flow models' uncertainties and, if possible, to generate validity ranges for these models when applied to reactor analysis. We limit this study to single-phase flows with low Mach numbers; as a result, sound waves carry a very small amount of energy in this particular case. A standard finite volume method is used for the spatial discretization of the system.
Mesh refinement for uncertainty quantification through model reduction
Li, Jing; Stinis, Panos
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive. The reason is that for discontinuous problems, the expansion converges very slowly. An alternative to using higher terms in the expansion is to divide the random space into smaller elements where a lower degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random space element, the cascade of activity to higher degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational resources to the areas of random space where they are more needed. For the Kraichnan–Orszag system, the prototypical system to study discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.
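A refinement criterion of the kind described, monitoring the cascade of activity into the highest-degree chaos terms within a random-space element, might be sketched as follows. The coefficient spectra and the 20%/10% thresholds are illustrative assumptions, not the paper's reduced-model criterion:

```python
import numpy as np

def needs_refinement(coeffs, top_frac=0.1):
    """Flag a random-space element for splitting when the highest-degree
    gPC terms still hold a non-negligible share of the element variance,
    i.e. when the expansion is not converging locally."""
    c = np.asarray(coeffs, dtype=float)
    var = np.sum(c[1:] ** 2)        # variance for an orthonormal basis
    if var == 0.0:
        return False
    n_top = max(1, len(c) // 5)     # monitor the top ~20% of degrees
    tail = np.sum(c[-n_top:] ** 2)
    return bool(tail / var > top_frac)

# Smoothly decaying spectrum (solution is locally smooth): keep the element.
print(needs_refinement([1.0, 0.5, 0.1, 0.01, 0.001]))
# Flat spectrum, typical near a discontinuity in random space: refine.
print(needs_refinement([1.0, 0.5, 0.4, 0.45, 0.5]))
```

Elements flagged by such a test would be split, and the expansion recomputed on each child element with a lower-degree basis.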
Srinivasan, Sanjay
2014-09-30
In-depth understanding of the long-term fate of CO₂ in the subsurface requires study and analysis of the reservoir formation, the overlying caprock formation, and adjacent faults. Because there is significant uncertainty in predicting the location and extent of geologic heterogeneity that can impact the future migration of CO₂ in the subsurface, there is a need to develop algorithms that can reliably quantify this uncertainty in plume migration. This project is focused on the development of a model selection algorithm that refines an initial suite of subsurface models representing the prior uncertainty to create a posterior set of subsurface models that reflect injection performance consistent with that observed. Such posterior models can be used to represent uncertainty in the future migration of the CO₂ plume. Because only injection data are required, the approach provides a very inexpensive way to map the migration of the plume and the associated uncertainty in migration paths. The model selection method developed as part of this project consists of assessing the connectivity/dynamic characteristics of a large prior ensemble of models, grouping the models on the basis of their expected dynamic response, selecting the subgroup of models whose dynamic responses are closest to the observed dynamic data, and finally quantifying the uncertainty in plume migration using the selected subset of models. The main accomplishment of the project is the development of a software module within the SGEMS earth modeling software package that implements the model selection methodology. This software module was subsequently applied to analyze CO₂ plume migration in two field projects – the In Salah CO₂ Injection project in Algeria and CO₂ injection into the Utsira formation in Norway.
These applications of the software revealed that the proxies developed in this project for quickly assessing the dynamic characteristics of the reservoir were highly efficient and yielded accurate grouping of reservoir models. The plume migration paths probabilistically assessed by the method were confirmed by field observations and auxiliary data. The report also documents the application of the software to answer practical questions such as the optimum location of monitoring wells for reliably assessing the migration of the CO₂ plume, the effect of CO₂-rock interactions on plume migration and the ability to detect the plume under those conditions, and the effect of a slow, unresolved leak on the predictions of plume migration.
Kim, Alex G.; Miquel, Ramon
2005-09-26
We present a new technique to extract the cosmological information from high-redshift supernova data in the presence of calibration errors and extinction due to dust. While in the traditional technique the distance modulus of each supernova is determined separately, in our approach we determine all distance moduli at once, in a process that achieves a significant degree of self-calibration. The result is a much reduced sensitivity of the cosmological parameters to the calibration uncertainties. As an example, for a strawman mission similar to that outlined in the SNAP satellite proposal, the increased precision obtained with the new approach is roughly equivalent to a factor of five decrease in the calibration uncertainty.
Helton, Jon Craig; Sallaberry, Cedric M.; Hansen, Clifford W.
2010-10-01
The 2008 performance assessment (PA) for the proposed repository for high-level radioactive waste at Yucca Mountain (YM), Nevada, illustrates the conceptual structure of risk assessments for complex systems. The 2008 YM PA is based on the following three conceptual entities: a probability space that characterizes aleatory uncertainty; a function that predicts consequences for individual elements of the sample space for aleatory uncertainty; and a probability space that characterizes epistemic uncertainty. These entities and their use in the characterization, propagation and analysis of aleatory and epistemic uncertainty are described and illustrated with results from the 2008 YM PA.
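The interplay of the three entities described, an aleatory probability space, a consequence function, and an epistemic probability space, is commonly implemented as a nested two-loop sampling scheme. A minimal sketch, with a toy consequence function and illustrative distributions standing in for the actual PA models:

```python
import numpy as np

rng = np.random.default_rng(1)

def consequence(a, e):
    # Toy consequence model: outcome depends on an aleatory event
    # magnitude `a` and an epistemically uncertain coefficient `e`.
    return e * a**2

n_epistemic, n_aleatory = 200, 1000
expected_consequences = []
for _ in range(n_epistemic):                 # outer loop: epistemic uncertainty
    e = rng.uniform(0.5, 1.5)                # uncertain model parameter
    a = rng.exponential(1.0, n_aleatory)     # inner loop: aleatory events
    # One expected consequence per epistemic realization; the spread of
    # these values across the outer loop expresses epistemic uncertainty.
    expected_consequences.append(consequence(a, e).mean())

ec = np.array(expected_consequences)
print(ec.mean(), np.percentile(ec, [5, 95]))
```

Each outer-loop realization yields one expected-consequence curve; the family of such curves is the standard way the two kinds of uncertainty are kept separate in displays of PA results.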
Uncertainty in Resilience to Climate Change in India and Indian States
Malone, Elizabeth L.; Brenkert, Antoinette L.
2008-10-03
This study builds on an earlier analysis of resilience of India and Indian states to climate change. The previous study (Brenkert and Malone 2005) assessed current resilience; this research uses the Vulnerability-Resilience Indicators Model (VRIM) to project resilience to 2095 and to perform an uncertainty analysis on the deterministic results. Projections utilized two SRES-based scenarios, one with fast-and-high growth, one with delayed growth. A detailed comparison of two states, the Punjab and Orissa, points to the kinds of insights that can be obtained using the VRIM. The scenarios differ most significantly in the timing of the uncertainty in economic prosperity (represented by GDP per capita) as a major factor in explaining the uncertainty in the resilience index. In the fast-and-high growth scenario the states differ most markedly regarding the role of ecosystem sensitivity, land use and water availability. The uncertainty analysis shows, for example, that resilience in the Punjab might be enhanced, especially in the delayed growth scenario, if early attention is paid to the impact of ecosystems sensitivity on environmental well-being of the state. By the same token, later in the century land-use pressures might be avoided if land is managed through intensification rather than extensification of agricultural land. Thus, this methodology illustrates how a policy maker can be informed about where to focus attention on specific issues, by understanding the potential changes at a specific location and time and, thus, what might yield desired outcomes. Model results can point to further analyses of the potential for resilience-building.
TOTAL MEASUREMENT UNCERTAINTY IN HOLDUP MEASUREMENTS AT THE PLUTONIUM FINISHING PLANT (PFP)
KEELE, B.D.
2007-07-05
An approach to determine the total measurement uncertainty (TMU) associated with Generalized Geometry Holdup (GGH) [1,2,3] measurements was developed and implemented in 2004 and 2005 [4]. This paper describes a condensed version of the TMU calculational model, including recent developments. Recent modifications to the TMU calculation model include a change in the attenuation uncertainty, clarifying the definition of the forward background uncertainty, reducing conservatism in the random uncertainty by selecting either a propagation of counting statistics or the standard deviation of the mean, and considering uncertainty in the width and height as a part of the self attenuation uncertainty. In addition, a detection limit is calculated for point sources using equations derived from summary equations contained in Chapter 20 of MARLAP [5]. The Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 2007-1 to the Secretary of Energy identified a lack of requirements and a lack of standardization for performing measurements across the U.S. Department of Energy (DOE) complex. The DNFSB also recommended that guidance be developed for a consistent application of uncertainty values. As such, the recent modifications to the TMU calculational model described in this paper have not yet been implemented. The Plutonium Finishing Plant (PFP) is continuing to perform uncertainty calculations as per Reference 4. Publication at this time is so that these concepts can be considered in developing a consensus methodology across the complex.
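A TMU of this kind is typically a quadrature combination of a random (counting-statistics) component with independent systematic components. A minimal sketch with illustrative component names and values, not the actual PFP budget:

```python
import math

def total_measurement_uncertainty(random_u, systematic_us):
    """Combine a random (counting) component with independent systematic
    components in quadrature, as in a holdup-style TMU budget."""
    return math.sqrt(random_u**2 + sum(u**2 for u in systematic_us))

# Illustrative 1-sigma components (grams): counting statistics, plus
# attenuation, forward background, and self-attenuation/geometry terms.
tmu = total_measurement_uncertainty(1.2, [0.8, 0.5, 0.9])
print(round(tmu, 3))
```

The quadrature sum assumes the components are independent; correlated components would instead require covariance terms.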
Users manual for the FORSS sensitivity and uncertainty analysis code system
Lucius, J.L.; Weisbin, C.R.; Marable, J.H.; Drischler, J.D.; Wright, R.Q.; White, J.E.
1981-01-01
FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions and associated uncertainties. This report describes the computing environment and the modules currently used to implement FORSS Sensitivity and Uncertainty Methodology.
Using Uncertainty Analysis to Guide the Development of Accelerated Stress Tests (Presentation)
Kempe, M.
2014-03-01
Extrapolation of accelerated testing to the long-term results expected in the field has uncertainty associated with the acceleration factors and the range of possible stresses in the field. When multiple stresses (such as temperature and humidity) can be used to increase the acceleration, the uncertainty may be reduced according to which stress factors are used to accelerate the degradation.
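For combined temperature and humidity stress, the acceleration factor is often written in a Peck-style form, AF = (RH_test/RH_field)^n * exp((Ea/k_B)(1/T_field - 1/T_test)); the activation energy Ea and humidity exponent n are precisely the uncertain quantities that drive the extrapolation uncertainty. A sketch with illustrative parameter values (this specific model is an assumption, not necessarily the one used in the presentation):

```python
import math

def peck_acceleration(T_test, RH_test, T_field, RH_field,
                      Ea=0.7, n=2.0, k_B=8.617e-5):
    """Peck-style acceleration factor for combined temperature/humidity
    stress. Ea (eV) and the humidity exponent n are illustrative values;
    their uncertainty is what limits the field-life extrapolation."""
    arrhenius = math.exp((Ea / k_B) * (1.0 / T_field - 1.0 / T_test))
    humidity = (RH_test / RH_field) ** n
    return humidity * arrhenius

# 85C/85%RH damp-heat chamber vs. a 25C/50%RH field environment:
af = peck_acceleration(T_test=358.15, RH_test=85.0,
                       T_field=298.15, RH_field=50.0)
print(round(af))
```

Propagating plausible ranges of Ea and n through this function (rather than using point values) is one simple way to quantify the extrapolation uncertainty the abstract refers to.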
TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE
Rearden, Bradley T; Mueller, Don; Bowman, Stephen M; Busch, Robert D.; Emerson, Scott
2009-01-01
This primer presents examples in the application of the SCALE/TSUNAMI tools to generate k_eff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed k_eff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; Lu, C.; Mansoor, K.; Carroll, S. A.
2012-12-20
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the most sensitive parameters to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
Climate uncertainty and implications for U.S. state-level risk assessment through 2050.
Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Kelic, Andjelka; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.
2009-10-01
Decisions for climate policy will need to take place in advance of climate science resolving all relevant uncertainties. Further, if the concern of policy is to reduce risk, then the best estimate of climate change impacts may not be as important as the currently understood uncertainty associated with realizable conditions having high consequence. This study focuses on one of the most uncertain aspects of future climate change - precipitation - to understand the implications of uncertainty on risk and the near-term justification for interventions to mitigate the course of climate change. We show that the mean risk of damage to the economy from climate change, at the national level, is on the order of one trillion dollars over the next 40 years, with employment impacts of nearly 7 million labor-years. At a 1% exceedance probability, the impact is over twice the mean-risk value. Impacts at the level of individual U.S. states are then typically in the range of multiple tens of billions of dollars, with employment losses exceeding hundreds of thousands of labor-years. We used results of the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) climate-model ensemble as the referent for climate uncertainty over the next 40 years, mapped the simulated weather hydrologically to the county level to determine the physical consequence to economic activity at the state level, and then performed a detailed, seventy-industry analysis of economic impact among the interacting lower-48 states. We determined industry GDP and employment impacts at the state level, as well as interstate population migration, effects on personal income, and the consequences for the U.S. trade balance.
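Mean risk and a 1% exceedance-probability impact of the kind quoted above are straightforward to extract from an ensemble of outcomes. A sketch with a synthetic ensemble (the lognormal damage distribution is an illustrative assumption, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ensemble of 40-year cumulative damage estimates (arbitrary
# units), standing in for hydrologically mapped climate-model results.
damages = rng.lognormal(mean=0.0, sigma=0.7, size=10_000)

mean_risk = damages.mean()
# Impact exceeded with only 1% probability (the 99th percentile):
p01 = np.percentile(damages, 99)
print(mean_risk, p01, p01 / mean_risk)
```

For any right-skewed damage distribution like this one, the 1% exceedance impact sits well above the mean, which is why risk-oriented policy analysis reports both.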
Uncertainty evaluation for the matrix 'solidified state' of fissionable elements
Iliescu, Elena; Iancso, Georgeta
2012-09-06
In the analysis of radioactive liquid samples, no matter which relative physical analysis method is used, two impediments arise, both related to the time evolution of the dispersion state of the liquid samples to be analyzed and of the standard used in the analysis. The first concerns the state of the sample at the moment of sampling, which alters during the time elapsed between sampling and analysis. The second is the natural change of the dispersion state of the standard radioactive solutions, due to the occurrence and evolution in time of radiocolloidal and pseudo-radiocolloidal states. These radiocolloidal states are states of aggregation and lead to the destruction of the homogeneity of the solutions. Taking into consideration the advantages offered by relative physical methods of analysis over chemical or radiochemical ones, different ways of eliminating these impediments have been tried. We eliminated them by processing the liquid reference materials (solutions calibrated in the radionuclides of interest) immediately after preparation. This processing converts the liquid reference materials into a 'solidified state'. Through this procedure the dispersion states of the samples can practically no longer be modified in time, and a uniform distribution of the radionuclides of interest in the elemental matrix of the 'solidified state' samples is ensured. The homogeneity of the distribution of the radionuclide atoms in the 'solidified state' samples was checked with the alpha-particle track micromapping technique. In this technique, micromaps of alpha tracks are obtained in chemically etched track detectors kept in direct contact with the sample for a determined period of time, the alpha exposure time of the detectors.
These micromaps reproduce, through tracks, the distribution of the atoms of fissionable elements (e.g. Thorium) whose naturally emitted alpha radiation was registered in CR-39 track detectors. The alpha track density of the obtained micromaps was studied by common optical microscopy, counting the tracks on equal areas at different measurement points. For the foils prepared in this work, the studied area was 4.9 mm2, formed of 10 fields of 0.49 mm2 each. The uncertainty was estimated for all quantities measured in this work, whether they enter directly or indirectly into the estimate of the uncertainty of the homogeneity of the Thorium atom distribution in the 'solidified state' foils of the standard solution calibrated in Thorium: i) the weighed masses, ii) the dropped volumes of solution, iii) the alpha exposure duration of the detectors, iv) the area studied on the surface of the micromap, and v) the alpha track densities. The proposed procedure allowed us to conclude that the homogeneity of the alpha track distribution, over the surface and in thickness, is within 3.1%.
McDonnell, J. D.; Schunck, N.; Higdon, D.; Sarich, J.; Wild, S. M.; Nazarewicz, W.
2015-03-24
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. In addition, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
Uncertainty Quantification of Hypothesis Testing for the Integrated Knowledge Engine
Cuellar, Leticia
2012-05-31
The Integrated Knowledge Engine (IKE) is a tool for Bayesian analysis, based on Bayesian Belief Networks, or Bayesian networks for short. A Bayesian network is a graphical model (a directed acyclic graph) that represents the probabilistic structure of many variables under a localized type of dependency called the Markov property. The Markov property in this setting makes any node (random variable) independent of its non-descendant nodes given information about its parents. A direct consequence of this property is that it is relatively easy to incorporate new evidence and derive the appropriate consequences, which in general is not an easy or feasible task. Typically we use Bayesian networks as predictive models for a small subset of the variables, either the leaf nodes or the root nodes. In IKE, since most applications deal with diagnostics, we are interested in predicting the likelihood of the root nodes given new observations on any of the child nodes. The root nodes represent the various possible outcomes of the analysis, and an important problem is to determine when we have gathered enough evidence to lean toward one of these particular outcomes. This document presents criteria to decide when the evidence gathered is sufficient to draw a particular conclusion or decide in favor of a particular outcome, by quantifying the uncertainty in the conclusions that are drawn from the data. The material in this document is organized as follows: Section 2 briefly presents a forensics Bayesian network, and we explore evaluating the information provided by new evidence by looking first at the posterior distribution of the nodes of interest, and then at the corresponding posterior odds ratios. Section 3 presents a third alternative: Bayes factors.
In Section 4 we show the relation between posterior odds ratios and Bayes factors and give examples of these cases, and in Section 5 we conclude by providing clear guidelines on how to use these measures for the type of Bayesian networks used in IKE.
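For a binary hypothesis, the relationship between posterior odds ratios and Bayes factors reduces to the odds form of Bayes' rule. A minimal numeric sketch (all probabilities illustrative):

```python
# Odds form of Bayes' rule for a binary hypothesis H1 vs H0.
# All probabilities below are illustrative.
prior_h1, prior_h0 = 0.5, 0.5
like_h1, like_h0 = 0.9, 0.1     # likelihood of the evidence under each

bayes_factor = like_h1 / like_h0                        # evidence alone
posterior_odds = (prior_h1 / prior_h0) * bayes_factor   # prior odds x BF
posterior_h1 = posterior_odds / (1.0 + posterior_odds)

print(bayes_factor, posterior_odds, posterior_h1)
# With equal priors the posterior odds equal the Bayes factor; a common
# rule of thumb treats a factor above ~10 as strong evidence for H1.
```

The Bayes factor isolates the weight of the evidence itself, which is why it is useful as a stopping criterion independent of the priors placed on the root nodes.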
Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E
2011-03-31
An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data that are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, often increasing the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment that provides the data needed for uncertainty reduction, such that a reactor design concept can meet its target accuracies or realize savings by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can itself be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs.
The representativity of the experiment to the design concept is quantitatively determined. A technique is then established to assimilate this data and produce posteriori uncertainties on key attributes and responses of the design concept. Several experiment perturbations based on engineering judgment are used to demonstrate these methods and also serve as an initial generation of the optimization problem. Finally, an optimization technique is developed which will simultaneously arrive at an optimized experiment to produce an optimized reactor design. Solution of this problem is made possible by the use of the simulated annealing algorithm for solution of optimization problems. The optimization examined in this work is based on maximizing the reactor cost savings associated with the modified design made possible by using the design margin gained through reduced basic nuclear data uncertainties. Cost values for experiment design specifications and reactor design specifications are established and used to compute a total savings by comparing the posteriori reactor cost to the a priori cost plus the cost of the experiment. The optimized solution arrives at a maximized cost savings.
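The simulated annealing loop used to solve the coupled experiment/design optimization can be sketched generically as follows. The one-dimensional quadratic objective, step size, and cooling schedule are illustrative assumptions, not the report's actual cost model:

```python
import math, random

random.seed(0)

def cost(x):
    # Stand-in objective: negative total cost savings as a function of one
    # design specification; the real objective couples data assimilation
    # with reactor re-optimization.
    return (x - 3.0) ** 2 + 2.0

x, T = 10.0, 5.0
for _ in range(5000):
    candidate = x + random.gauss(0.0, 0.5)   # perturb the specification
    dE = cost(candidate) - cost(x)
    # Always accept improvements; accept worse moves with prob. exp(-dE/T).
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = candidate
    T *= 0.999                               # geometric cooling schedule

print(round(x, 2))
```

The occasional acceptance of worse moves at high temperature is what lets the algorithm escape local optima in a coupled design space, the property that motivates its use for this problem.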
Results for Phase I of the IAEA Coordinated Research Program on HTGR Uncertainties
Strydom, Gerhard; Bostelmann, Friederike; Yoon, Su Jong
2015-01-01
The quantification of uncertainties in the design and safety analysis of reactors is today not only broadly accepted but has in many cases become the preferred replacement for traditional conservative analysis in safety and licensing work. The use of a more fundamental methodology is also consistent with the reliable high-fidelity physics models and robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High Temperature Gas-cooled Reactors (HTGRs) have their own peculiarities: coated particle fuel, large graphite quantities, different materials, and high temperatures that impose additional simulation requirements. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modeling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the HTR-PM (INET, China). This report summarizes the contributions of the HTGR Methods Simulation group at Idaho National Laboratory (INL) up to this point of the CRP. The activities at INL have so far focused on creating the problem specifications for the prismatic design, as well as providing reference solutions for the exercises defined for Phase I. An overview is provided of the HTGR UAM objectives and scope, and the detailed specifications for Exercises I-1, I-2, I-3 and I-4 are also included here for completeness. The main focus of the report is the compilation and discussion of reference results for Phase I (i.e. for input parameters at their nominal or best-estimate values), which is defined as the first step of the uncertainty quantification process.
These reference results can be used by other CRP participants for comparison with other codes or their own reference results. The status of the Monte Carlo modeling of the experimental VHTRC facility is also discussed. Reference results were obtained for the neutronics stand-alone cases (Ex. I-1 and Ex. I-2) using the (relatively new) Monte Carlo code Serpent, and comparisons were performed with the more established Monte Carlo codes MCNP and KENO-VI. For the thermal-fluids stand-alone cases (Ex. I-3 and I-4), the commercial CFD code CFX was utilized to obtain reference results that can be compared with lower-fidelity tools.
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
2014-06-15
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20–40% or even more. It is therefore necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps showing the dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameters for which an uncertainty reduction is most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact.
The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
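The variance-based sensitivity described above can be sketched with a pick-freeze style Monte Carlo estimator. The toy two-parameter model below stands in for an RBE/EQD2 evaluation; the model, the input distributions, and the sample size are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def model(x1, x2):
    # Hypothetical stand-in for e.g. an RBE/EQD2 evaluation
    return x1 + 0.2 * x2

# Sample every input twice (pick-freeze Sobol estimator)
a1, a2 = rng.standard_normal(N), rng.standard_normal(N)
b1, b2 = rng.standard_normal(N), rng.standard_normal(N)

y = model(a1, a2)
var_y = y.var()

def first_order(y_ref, y_fixed):
    # S_i = Cov(y, y with x_i frozen and the others resampled) / Var(y)
    return np.cov(y_ref, y_fixed)[0, 1] / var_y

S1 = first_order(y, model(a1, b2))   # freeze x1, resample x2
S2 = first_order(y, model(b1, a2))   # freeze x2, resample x1
print(round(S1, 2), round(S2, 2))    # analytically 1/1.04 and 0.04/1.04
```

A ranking of the resulting S values identifies the input whose uncertainty reduction is most rewarding, as in the abstract.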
A Two-Step Approach to Uncertainty Quantification of Core Simulators
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Yankov, Artem; Collins, Benjamin; Klein, Markus; Jessee, Matthew A.; Zwermann, Winfried; Velkov, Kiril; Pautz, Andreas; Downar, Thomas
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of Phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
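The stochastic-sampling idea behind the XSUSA approach can be sketched in a few lines: sample the cross sections from their assumed uncertainty distributions, run the (here, toy) model for each sample, and read the output uncertainty off the resulting distribution. The one-group k-infinity formula and the 1%/1.5% relative uncertainties below are illustrative assumptions, not a real core simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

# Toy one-group model: k_inf = nu * sigma_f / sigma_a (illustrative only,
# not an actual XSUSA/core-simulator calculation)
nu = 2.43
sig_f = rng.normal(0.0050, 0.0050 * 0.010, N)   # 1.0% relative uncertainty
sig_a = rng.normal(0.0120, 0.0120 * 0.015, N)   # 1.5% relative uncertainty

k = nu * sig_f / sig_a
print(f"k_inf = {k.mean():.4f} +/- {k.std():.4f}")
```

For uncorrelated inputs the relative output uncertainty is roughly the quadrature sum of the input ones, which the sample statistics reproduce.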
Statistical Assessment of Proton Treatment Plans Under Setup and Range Uncertainties
Park, Peter C.; Cheung, Joey P.; Zhu, X. Ronald; Lee, Andrew K.; Sahoo, Narayan; Tucker, Susan L.; Liu, Wei; Li, Heng; Mohan, Radhe; Court, Laurence E.; Dong, Lei
2013-08-01
Purpose: To evaluate a method for quantifying the effect of setup errors and range uncertainties on dose distribution and dose-volume histogram using statistical parameters, and to assess existing planning practice in selected treatment sites under setup and range uncertainties. Methods and Materials: Twenty passively scattered proton lung cancer plans, 10 prostate plans, and 1 brain cancer scanning-beam proton plan were analyzed. To account for the dose under uncertainties, we performed a comprehensive simulation in which the dose was recalculated 600 times per given plan under the influence of random and systematic setup errors and proton range errors. On the basis of simulation results, we determined the probability of dose variations and calculated the expected values and standard deviations of dose-volume histograms. The uncertainties in dose were spatially visualized on the planning CT as a probability map of failure to target coverage or overdose of critical structures. Results: The expected value of target coverage under the uncertainties was consistently lower than the nominal value determined from the clinical target volume coverage without setup error or range uncertainty, with a mean difference of -1.1% (-0.9% for breath-hold), -0.3%, and -2.2% for the lung, prostate, and brain cases, respectively. The organs whose dose was most sensitive to uncertainties were the esophagus and spinal cord for lung, the rectum for prostate, and the brain stem for brain cancer. Conclusions: A clinically feasible robustness plan analysis tool based on direct dose calculation and statistical simulation has been developed. Both the expectation value and the standard deviation are useful for evaluating the impact of uncertainties. The existing proton beam planning method used in this institution appears adequate in terms of target coverage. However, structures that are small in volume or located near the target area showed greater sensitivity to uncertainties.
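The statistical procedure (repeated dose recalculation under sampled errors, then expectation value, standard deviation, and a coverage-failure probability) can be sketched as follows. The 1-D dose profile, the error magnitudes, and the 95% coverage criterion are illustrative assumptions, not the clinical code:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sim = 600                      # same number of recalculations as the paper
x = np.linspace(-50, 50, 201)    # mm, 1-D stand-in for the planning CT grid

def dose(shift):
    # Idealized target dose with a sigmoid penumbra; a real plan would be
    # recomputed by the treatment planning system for each error scenario
    return 1.0 / (1.0 + np.exp((np.abs(x - shift) - 35.0) / 2.0))

# Simplified: systematic and random setup errors both sampled per scenario
doses = np.array([dose(rng.normal(0, 2) + rng.normal(0, 3))
                  for _ in range(n_sim)])

expected = doses.mean(axis=0)            # expectation value of the dose
sd = doses.std(axis=0)                   # standard-deviation map
target = np.abs(x) < 25                  # CTV stand-in
p_fail = (doses[:, target].min(axis=1) < 0.95).mean()
print(f"P(coverage failure) = {p_fail:.2f}")
```

The per-voxel failure probabilities can be painted back onto the CT as the probability map the abstract describes.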
A Probabilistic Framework for Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh
2015-12-28
Quantification and propagation of uncertainties in cyber attacker payoffs is a key aspect of multiplayer, stochastic security games. These payoffs may represent penalties or rewards associated with player actions and are subject to various sources of uncertainty, including: (1) cyber-system state, (2) attacker type, (3) choice of player actions, and (4) cyber-system state transitions over time. Past research has primarily focused on representing defender beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as uniform and Gaussian probability distributions, and as mathematical intervals. For cyber-systems, probability distributions may help address statistical (aleatory) uncertainties, where the defender may assume inherent variability or randomness in the factors contributing to the attacker payoffs. However, systematic (epistemic) uncertainties may exist, where the defender does not have sufficient knowledge or there is insufficient information about the attacker's payoff generation mechanism. Such epistemic uncertainties are more suitably represented as generalizations of probability boxes. This paper explores the mathematical treatment of such mixed payoff uncertainties. A conditional probabilistic reasoning approach is adopted to organize the dependencies between a cyber-system's state, attacker type, player actions, and state transitions. This also enables the application of probabilistic theories to propagate various uncertainties in the attacker payoffs. An example implementation of this probabilistic framework and the resulting attacker payoff distributions are discussed. A goal of this paper is also to highlight this uncertainty quantification problem space to the cyber security research community and encourage further advancements in this area.
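A probability box for a payoff whose aleatory part is Gaussian but whose mean is only known to an epistemic interval can be sketched as below. The interval and distribution parameters are invented for illustration; the paper's framework is more general:

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # Standard closed-form Gaussian CDF
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Aleatory part: payoff ~ N(mu, 2); epistemic part: mu only bounded in [1, 3]
mu_lo, mu_hi, sigma = 1.0, 3.0, 2.0

# The p-box is the envelope of all admissible CDFs over the mu interval
xs = np.linspace(-6.0, 10.0, 161)
cdf_upper = np.array([normal_cdf(x, mu_lo, sigma) for x in xs])  # left edge
cdf_lower = np.array([normal_cdf(x, mu_hi, sigma) for x in xs])  # right edge

# Defender's bounds on the probability of a non-positive payoff
p_lo = normal_cdf(0.0, mu_hi, sigma)
p_hi = normal_cdf(0.0, mu_lo, sigma)
print(f"P(payoff <= 0) in [{p_lo:.3f}, {p_hi:.3f}]")
```

The defender then reasons with the interval of probabilities rather than a single point estimate, which is the essence of the mixed aleatory/epistemic treatment.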
Balance Calibration: A Method for Assigning a Direct-Reading Uncertainty to an Electronic Balance
Mike Stears
2010-07-01
Paper Title: Balance Calibration: A method for assigning a direct-reading uncertainty to an electronic balance. Intended Audience: Those who calibrate or use electronic balances. Abstract: As a calibration facility, we provide on-site (at the customer's location) calibrations of electronic balances for customers within our company. In our experience, most of our customers are not using their balance as a comparator, but are simply putting an unknown quantity on the balance and reading the displayed mass value. Manufacturer's specifications for balances typically include readability, repeatability, linearity, and sensitivity temperature drift, but what does all this mean when the balance user simply reads the displayed mass value and accepts the reading as the true value? This paper discusses a method for assigning a direct-reading uncertainty to a balance based upon the observed calibration data and the environment where the balance is being used. The method requires input from the customer regarding the environment where the balance is used and encourages discussion with the customer regarding sources of uncertainty and possible means for improvement; the calibration process becomes an educational opportunity for the balance user as well as for calibration personnel. This paper covers the uncertainty analysis applied to the calibration weights used for the field calibration of balances; the uncertainty is calculated over the range of environmental conditions typically encountered in the field and the resulting range of air density. The temperature stability in the area of the balance is discussed with the customer, and the temperature range over which the balance calibration is valid is decided upon; the decision is based upon the uncertainty needs of the customer and the desired rigor in monitoring by the customer.
Once the environmental limitations are decided, the calibration is performed and the measurement data are entered into a custom spreadsheet. The spreadsheet uses the measurement results, along with the manufacturer's specifications, to assign a direct-read measurement uncertainty to the balance. The fact that the assigned uncertainty is a best-case uncertainty is discussed with the customer; the assigned uncertainty contains no allowance for contributions associated with the unknown weighing sample, such as density, static charges, magnetism, etc. The attendee will learn uncertainty considerations associated with balance calibrations, along with one method for assigning an uncertainty to a balance used for non-comparison measurements.
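Combining such components into a single direct-reading uncertainty is conventionally a root-sum-square of standard uncertainties, with rectangular (specification-type) terms divided by sqrt(3) and the result expanded with k = 2, per the GUM. The specification values below are hypothetical, not taken from the paper's spreadsheet:

```python
import math

# Hypothetical manufacturer's specs and calibration data for a 200 g balance
readability = 0.0001       # g, display resolution
repeatability = 0.0002     # g, standard deviation observed at calibration
linearity = 0.0003         # g, spec limit, treated as rectangular
temp_coeff = 2e-6          # per K, sensitivity temperature drift
temp_range = 3.0           # K, agreed-on excursion in the customer's lab
load = 100.0               # g, reading at which the uncertainty is evaluated
cal_weight_unc = 0.00015   # g, standard uncertainty of the field weights

components = [
    readability / (2 * math.sqrt(3)),            # resolution: half-width, rectangular
    repeatability,                               # already a standard deviation
    linearity / math.sqrt(3),                    # rectangular
    temp_coeff * temp_range * load / math.sqrt(3),
    cal_weight_unc,
]
u_c = math.sqrt(sum(u**2 for u in components))   # combined standard uncertainty
U = 2 * u_c                                      # expanded uncertainty, k=2 (~95 %)
print(f"direct-reading uncertainty U = {U * 1000:.2f} mg")
```

As the abstract stresses, this is a best-case figure: it carries no allowance for the unknown weighing sample itself.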
Dakota uncertainty quantification methods applied to the NEK-5000 SAHEX model.
Weirs, V. Gregory
2014-03-01
This report summarizes the results of a NEAMS project focused on the use of uncertainty and sensitivity analysis methods within the NEK-5000 and Dakota software framework for assessing failure probabilities as part of probabilistic risk assessment. NEK-5000 is a software tool under development at Argonne National Laboratory to perform computational fluid dynamics calculations for applications such as thermohydraulics of nuclear reactor cores. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. The goal of this work is to demonstrate the use of uncertainty quantification methods in Dakota with NEK-5000.
Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes
Gerhard Strydom
2010-06-01
The need for a defendable and systematic uncertainty and sensitivity analysis approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.
Uncertainty Analysis Framework - Hanford Site-Wide Groundwater Flow and Transport Model
Cole, Charles R.; Bergeron, Marcel P.; Murray, Christopher J.; Thorne, Paul D.; Wurstner, Signe K.; Rogers, Phillip M.
2001-11-09
Pacific Northwest National Laboratory (PNNL) embarked on a new initiative to strengthen the technical defensibility of the predictions being made with a site-wide groundwater flow and transport model at the U.S. Department of Energy Hanford Site in southeastern Washington State. In FY 2000, the focus of the initiative was on the characterization of major uncertainties in the current conceptual model that would affect model predictions. The long-term goals of the initiative are the development and implementation of an uncertainty estimation methodology in future assessments and analyses using the site-wide model. This report focuses on the development and implementation of an uncertainty analysis framework.
A simplified analysis of uncertainty propagation in inherently controlled ATWS events
Wade, D.C.
1987-01-01
The quasi-static approach can be used to provide useful insight concerning the propagation of uncertainties in the inherent response to ATWS events. At issue is how uncertainties in the reactivity coefficients and in the thermal-hydraulics and materials properties propagate to yield uncertainties in the asymptotic temperatures attained upon inherent shutdown. The basic notion to be quantified is that many of the same physical phenomena contribute to both the reactivity increase from power reduction and the reactivity decrease from core temperature rise. Since these reactivities cancel by definition, a good deal of uncertainty cancellation must also occur of necessity. For example, if the Doppler coefficient is overpredicted, too large a positive reactivity insertion is predicted upon power reduction and collapse of the ΔT across the fuel pin. However, too large a negative reactivity is also predicted upon the compensating increase in the isothermal core average temperature, which includes the fuel Doppler effect.
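The cancellation argument can be illustrated numerically: sample the feedback coefficients with large uncertainties and observe that the asymptotic temperature rise, which depends mainly on a ratio of coefficients, is far less uncertain than the inputs. The balance below is a simplified toy with invented numbers, not the paper's quasi-static model:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Toy reactivity balance: the reactivity gained when the pin DT collapses on
# power reduction, -alpha_D * dT_pin, is cancelled by the temperature-rise
# feedback (alpha_D + alpha_exp) * dT_core. Both terms share alpha_D.
alpha_D = rng.normal(-1.0e-5, 0.2e-5, N)    # Doppler coefficient, 20% uncertain
alpha_exp = rng.normal(-0.2e-5, 0.04e-5, N) # other expansion feedbacks, 1/K
dT_pin = 400.0                              # K, collapsed across the fuel pin

dT_core = alpha_D * dT_pin / (alpha_D + alpha_exp)  # asymptotic core rise

rel = dT_core.std() / abs(dT_core.mean())
print(f"core dT = {dT_core.mean():.0f} K, relative uncertainty = {rel:.1%}")
```

Even with 20% input uncertainties, the output uncertainty is only a few percent, because overpredicting the Doppler coefficient inflates both the inserted and the compensating reactivity.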
The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation
Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.; Peterson, Joshua L.; Johnson, Seth R.
2015-12-01
Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact the SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate the SDDR uncertainty.
Cardoni, Jeffrey N.; Kalinich, Donald A.
2014-02-01
Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) of the Fukushima Daiichi Unit 1 (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). However, that study only examined a handful of model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
Pruet, J
2007-06-23
This report describes Kiwi, a program developed at Livermore to enable mature studies of the relation between imperfectly known nuclear physics and uncertainties in simulations of complicated systems. Kiwi includes a library of evaluated nuclear data uncertainties, tools for modifying data according to these uncertainties, and a simple interface for generating processed data used by transport codes. Kiwi also provides access to calculations of k-eigenvalues for critical assemblies. This allows the user to check implications of data modifications against integral experiments for multiplying systems. Kiwi is written in Python. The uncertainty library has the same format and directory structure as the native ENDL used at Livermore. Calculations for critical assemblies rely on deterministic and Monte Carlo codes developed by B Division.
Sig Drellack, Lance Prothro
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty, including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. With multiple alternative flow models advanced, the sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations.
The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
… and Uncertainty Quantification in CASL. Michael Pernice, Center for Advanced Modeling and Simulation, Idaho National Laboratory. SAMSI Uncertainty Quantification Transition Workshop, May 21-23, 2012 (CASL-U-2012-0108-000). What is CASL? The Consortium for Advanced Simulation of LWRs, an Energy Innovation Hub whose objective is predictive simulation of light water reactors: reduce capital and operating costs (power uprates, lifetime extension), reduce nuclear waste (higher fuel burnup), and enhance operational …
Estimation and Uncertainty Analysis of Impacts of Future Heat Waves on Mortality in the Eastern United States (Journal Article)
Office of Scientific and Technical Information (OSTI)
Background: It is anticipated that climate change will influence heat-related mortality in the future. However, the estimation of excess mortality …
CANTALOUB, M.G.
2002-01-02
The Waste Receiving and Processing (WRAP) facility, located on the Hanford Site in southeast Washington, is a key link in the certification of Hanford's transuranic (TRU) waste for shipment to the Waste Isolation Pilot Plant (WIPP). Waste characterization is one of the vital functions performed at WRAP, and nondestructive assay (NDA) measurement of TRU waste containers is one of the methods used for waste characterization. Various programs exist to ensure the validity of waste characterization data; all of these cite the need for clearly defined knowledge of the uncertainties associated with any measurements performed. All measurements have an inherent uncertainty associated with them. The combined effect of all uncertainties associated with a measurement is referred to as the Total Measurement Uncertainty (TMU). The NDA measurement uncertainties can be numerous and complex. In addition to system-induced measurement uncertainty, other factors contribute to the TMU, each associated with a particular measurement. The NDA measurements at WRAP are based on processes (radioactive decay and induced fission) which are statistical in nature. As a result, the proper statistical summation of the various uncertainty components is essential. This report examines the contributing factors to NDA measurement uncertainty at WRAP. The significance of each factor to the TMU is analyzed, and a final method is given for determining the TMU for NDA measurements at WRAP. A brief description of the data flow paths for the analytical process is also included in this report. As more data become available and WRAP gains operational experience, this report will be reviewed semi-annually and updated as necessary.
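The statistical summation called for above is conventionally a quadrature (root-sum-square) combination of relative uncertainty components, with the counting statistics entering as a Poisson term. The component values below are hypothetical, not WRAP's actual TMU budget:

```python
import math

# Hypothetical NDA result for one drum: counting statistics are Poisson,
# the other components are taken as relative standard uncertainties.
counts = 40_000                                 # net counts in the assay peak
u_counting = math.sqrt(counts) / counts         # Poisson: sqrt(N)/N
u_calibration = 0.05                            # calibration standards
u_matrix = 0.08                                 # matrix/density correction
u_geometry = 0.03                               # source position in the drum

tmu = math.sqrt(u_counting**2 + u_calibration**2
                + u_matrix**2 + u_geometry**2)  # combined relative TMU
print(f"TMU = {tmu:.1%} relative (k=1)")
```

Note how the counting term becomes negligible at high count rates, so the systematic components dominate the combined TMU.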
Uncertainty Estimates for SIRS, SKYRAD, & GNDRAD Data and Reprocessing the Pyrgeometer Data (Presentation) (Technical Report)
Office of Scientific and Technical Information (OSTI)
The National Renewable Energy Laboratory (NREL) and the Atmospheric Radiation Measurement (ARM) Climate Research Facility work together in providing data from …
Uncertainty analysis of multi-rate kinetics of uranium desorption from sediments (Journal Article)
Office of Scientific and Technical Information (OSTI)
A multi-rate expression for uranyl [U(VI)] surface complexation reactions has been proposed to describe diffusion-limited U(VI) sorption/desorption in heterogeneous subsurface sediments. An …
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
Arbanas, Goran; Williams, Mark L; Leal, Luiz C; Dunn, Michael E; Khuwaileh, Bassam A.; Wang, C; Abdel-Khalik, Hany
2015-01-01
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX system [1]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with the uncertainties, needed to yield a given nuclear application's target response uncertainty at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method and have implemented it in the INSURE module. We show how the IS/UQ method can be applied to systematic and statistical uncertainties in a self-consistent way, and how it can be used to optimize uncertainties of IBEs and differential cross section data simultaneously.
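The generalized linear least-squares step that folds IBE data in can be sketched as a standard Bayesian update of the cross-section parameters. The sensitivities, covariances, and benchmark value below are toy numbers, not INSURE's data:

```python
import numpy as np

# Toy GLLS adjustment: two cross-section parameters, one integral benchmark.
x0 = np.array([1.0, 1.0])               # prior (normalized) cross sections
C = np.diag([0.04**2, 0.10**2])         # prior covariance of the parameters
S = np.array([[0.8, 0.3]])              # sensitivity of the response to each
m = np.array([1.002])                   # measured benchmark response
V = np.array([[0.001**2]])              # benchmark (IBE) uncertainty

r = m - S @ x0                                   # calculated-vs-measured gap
K = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)     # GLLS gain
x1 = x0 + (K @ r).ravel()                        # adjusted parameters
C1 = C - K @ S @ C                               # reduced posterior covariance

print(np.sqrt(np.diag(C1)))   # posterior uncertainties shrink below the prior
```

The shrinkage of the posterior diagonal is exactly the uncertainty reduction the IBEs buy; the same machinery, run in reverse, prioritizes which measurements are worth improving.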
Position-momentum uncertainty relations in the presence of quantum memory
Furrer, Fabian; Berta, Mario; Tomamichel, Marco; Scholz, Volkher B.; Christandl, Matthias
2014-12-15
A prominent formulation of the uncertainty principle identifies the fundamental quantum feature that no particle may be prepared with certain outcomes for both position and momentum measurements. Often the statistical uncertainties are thereby measured in terms of entropies, providing a clear operational interpretation in information theory and cryptography. Recently, entropic uncertainty relations have been used to show that the uncertainty can be reduced in the presence of entanglement and to prove security of quantum cryptographic tasks. However, much of this recent progress has been focused on observables with only a finite number of outcomes, not including Heisenberg's original setting of position and momentum observables. Here, we show entropic uncertainty relations for general observables with discrete but infinite or continuous spectrum that take into account the power of an entangled observer. As an illustration, we evaluate the uncertainty relations for position and momentum measurements, which is operationally significant in that it implies security of a quantum key distribution scheme based on homodyne detection of squeezed Gaussian states.
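For reference, the finite-outcome entropic uncertainty relation with quantum memory (Berta et al.) that this work generalizes to infinite and continuous spectra reads:

```latex
H(X|B) + H(Z|B) \;\ge\; \log_2 \frac{1}{c} + H(A|B),
\qquad
c = \max_{j,k} \bigl|\langle \psi_j | \phi_k \rangle\bigr|^2 ,
```

where $c$ is the maximal overlap of the two measurement eigenbases and the conditional entropy $H(A|B)$ can be negative for entangled states, which is precisely how entanglement with the observer's memory $B$ reduces the uncertainty bound.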
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
Perkó, Zoltán; Gilli, Luca; Lathouwers, Danny; Kloosterman, Jan Leen
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods such as first order perturbation theory or Monte Carlo sampling, Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time retaining a similar accuracy to the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems.
These tests show consistently good performance, both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15-20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation make such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
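The NISP step itself, projecting a response onto an orthogonal polynomial basis by quadrature and reading the mean and variance off the coefficients, can be sketched in a few lines. The toy response and the non-adaptive full-grid quadrature below are illustrative assumptions; this is not the FANISP algorithm:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# NISP for a toy response y = exp(0.3*xi) with xi ~ N(0,1), expanded in
# probabilists' Hermite polynomials He_k (orthogonal w.r.t. the Gaussian).
def f(xi):
    return np.exp(0.3 * xi)

order = 8
nodes, weights = hermegauss(32)            # Gauss quadrature, weight e^{-x^2/2}
weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the standard Gaussian

coeffs = []
for k in range(order + 1):
    c = np.zeros(k + 1)
    c[k] = 1.0                             # select the single basis vector He_k
    Hk = hermeval(nodes, c)
    # c_k = E[f * He_k] / E[He_k^2], with E[He_k^2] = k!
    coeffs.append(np.sum(weights * f(nodes) * Hk) / math.factorial(k))

mean = coeffs[0]                           # PCE mean is the zeroth coefficient
var = sum(ck**2 * math.factorial(k)
          for k, ck in enumerate(coeffs[1:], start=1))
# Exact values for comparison: exp(0.045) and exp(0.09)*(exp(0.09)-1)
print(mean, var)
```

Adaptivity, in both the grid and the basis, then amounts to adding quadrature levels and basis vectors only where the coefficients warrant it.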
Qin, A; Yan, D
2014-06-15
Purpose: To evaluate uncertainties of organ specific Deformable Image Registration (DIR) for H and N cancer Adaptive Radiation Therapy (ART). Methods: A commercial DIR evaluation tool, which includes a digital phantom library of 8 patients, and the corresponding “Ground truth Deformable Vector Field” (GT-DVF), was used in the study. Each patient in the phantom library includes the GT-DVF created from a pair of CT images acquired prior to and at the end of the treatment course. Five DIR tools, including 2 commercial tools (CMT1, CMT2), 2 in-house (IH-FFD1, IH-FFD2), and a classic DEMON algorithm, were applied on the patient images. The resulting DVF was compared to the GT-DVF voxel by voxel. Organ specific DVF uncertainty was calculated for 10 ROIs: Whole Body, Brain, Brain Stem, Cord, Lips, Mandible, Parotid (left and right), Esophagus and Submandibular Gland. Registration error-volume histograms were constructed for comparison. Results: The uncertainty is relatively small for brain stem, cord and lips, while large in parotid and submandibular gland. CMT1 achieved the best overall accuracy (on whole body, mean vector error of 8 patients: 0.98±0.29 mm). For brain, mandible, parotid right, parotid left and submandibular gland, the classic DEMON algorithm achieved the lowest uncertainty (0.49±0.09, 0.51±0.16, 0.46±0.11, 0.50±0.11 and 0.69±0.47 mm respectively). For brain stem, cord and lips, the DVF from CMT1 has the best accuracy (0.28±0.07, 0.22±0.08 and 0.27±0.12 mm respectively). All algorithms show the largest right-parotid uncertainty on patient #7, whose images contain an artifact caused by a tooth implant. Conclusion: The uncertainty of deformable CT image registration depends strongly on the registration algorithm and is organ specific. Large uncertainty most likely appears at the location of soft-tissue organs far from bony structures. Among all 5 DIR methods, the classic DEMON and CMT1 seem to be the best at limiting the uncertainty to within 2 mm for all OARs.
Partially supported by research grant from Elekta.
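The voxel-by-voxel DVF comparison and error-volume histogram described in this abstract can be sketched as follows. The volumes and displacement fields here are synthetic stand-ins (random arrays on a toy grid), not patient data; only the evaluation logic is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (20, 20, 20)                     # toy voxel grid standing in for a CT volume
gt_dvf = rng.normal(0.0, 1.0, size=shape + (3,))             # "ground-truth" DVF (mm)
test_dvf = gt_dvf + rng.normal(0.0, 0.3, size=shape + (3,))  # an algorithm's DVF

# Voxel-by-voxel vector error magnitude, as in the abstract.
err = np.linalg.norm(test_dvf - gt_dvf, axis=-1)
mean_vector_error = err.mean()

# Registration error-volume histogram: fraction of the ROI volume whose
# error is at or below each threshold (analogous to a dose-volume histogram).
thresholds = np.linspace(0.0, 2.0, 21)
evh = [(err <= t).mean() for t in thresholds]
```

In practice `err` would be restricted to the voxels of each ROI mask (brain stem, cord, parotid, ...) to obtain the organ specific uncertainties reported above.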
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
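The covariance-based variance propagation SENSIT performs can be illustrated with the first-order "sandwich rule" from generalized perturbation theory. The sensitivity profile and covariance values below are invented placeholders, not SENSIT data:

```python
import numpy as np

# First-order uncertainty propagation: var(R) = S^T C S, where S holds the
# relative sensitivities of the integral response R to each multigroup cross
# section and C is their relative covariance matrix (illustrative numbers).
S = np.array([0.8, -0.3, 0.1])           # sensitivity profile, (dR/R)/(dx/x)
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])       # relative covariance of the cross sections

var_R = S @ C @ S                        # relative variance of the response
std_R = np.sqrt(var_R)                   # estimated relative standard deviation
```

The same sandwich form applies to each of the three data-uncertainty types the code handles, with the appropriate sensitivity vector and covariance (or integral SED uncertainty) substituted.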
WESTSIK, G.A.
2001-06-06
This report presents the results of an evaluation of the Total Measurement Uncertainty (TMU) for the Canberra manufactured Segmented Gamma Scanner Assay System (SGSAS) as employed at the Hanford Plutonium Finishing Plant (PFP). In this document, TMU embodies the combined uncertainties due to all of the individual random and systematic sources of measurement uncertainty. It includes uncertainties arising from corrections and factors applied to the analysis of transuranic waste to compensate for inhomogeneities and interferences from the waste matrix and radioactive components. These include uncertainty components for any assumptions contained in the calibration of the system or computation of the data. Uncertainties are propagated at 1 sigma. The final total measurement uncertainty value is reported at the 95% confidence level. The SGSAS is a gamma assay system that is used to assay plutonium and uranium waste. The SGSAS system can be used in a stand-alone mode to perform the NDA characterization of a container, particularly for low to medium density (0-2.5 g/cc) container matrices. The SGSAS system provides a full gamma characterization of the container content. This document is an edited version of the Rocky Flats TMU Report for the Can Scan Segment Gamma Scanners, which are in use for the plutonium residues projects at the Rocky Flats plant. The can scan segmented gamma scanners at Rocky Flats are the same design as the PFP SGSAS system and use the same software (with the exception of the plutonium isotopics software). Therefore, all performance characteristics are expected to be similar. Modifications in this document reflect minor differences in the system configuration, container packaging, calibration technique, etc. These results are supported by the Quality Assurance Objective (QAO) counts, safeguards test data, calibration data, etc. for the PFP SGSAS system. 
Other parts of the TMU analysis utilize various modeling techniques such as Monte Carlo N-Particle (MCNP) and In Situ Object Counting Software (ISOCS).
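The TMU convention described above (propagate individual components at 1 sigma, report the combined value at the 95% confidence level) amounts to a quadrature sum followed by a coverage factor. The component names and magnitudes below are made up for illustration:

```python
import math

# Independent 1-sigma relative uncertainty components (illustrative values).
components = {"counting": 0.04, "matrix_correction": 0.07, "calibration": 0.03}

# Combine in quadrature to a 1-sigma total, then scale by the coverage
# factor k = 1.96 to report at approximately 95% confidence.
combined_1sigma = math.sqrt(sum(u ** 2 for u in components.values()))
tmu_95 = 1.96 * combined_1sigma
```

Correlated components would instead require the full covariance form rather than a simple root-sum-of-squares.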
The ends of uncertainty: Air quality science and planning in Central California
Fine, James
2003-09-01
Air quality planning in Central California is complicated and controversial despite millions of dollars invested to improve scientific understanding. This research describes and critiques the use of photochemical air quality simulation modeling studies in planning to attain standards for ground-level ozone in the San Francisco Bay Area and the San Joaquin Valley during the 1990s. Data are gathered through documents and interviews with planners, modelers, and policy-makers at public agencies and with representatives from the regulated and environmental communities. Interactions among organizations are diagrammed to identify significant nodes of interaction. Dominant policy coalitions are described through narratives distinguished by their uses of and responses to uncertainty, their exposures to risks, and their responses to the principles of conservatism, civil duty, and caution. Policy narratives are delineated using aggregated respondent statements to describe and understand advocacy coalitions. I found that models impacted the planning process significantly, but not purely through their scientific capabilities. Modeling results provided justification for decisions based on other constraints and political considerations. Uncertainties were utilized opportunistically by stakeholders instead of managed explicitly. Ultimately, the process supported the partisan views of those in control of the modeling. Based on these findings, as well as a review of model uncertainty analysis capabilities, I recommend modifying the planning process to allow for the development and incorporation of uncertainty information, while addressing the need for inclusive and meaningful public participation. By documenting an actual air quality planning process, these findings provide insights about the potential for using new scientific information and understanding to achieve environmental goals, most notably the analysis of uncertainties in modeling applications.
Concurrently, needed uncertainty information is identified and capabilities to produce it are assessed. Practices to facilitate incorporation of uncertainty information are suggested based on research findings, as well as theory from the literatures of the policy sciences, decision sciences, science and technology studies, consensus-based and communicative planning, and modeling.
State-of-the-Art Solar Simulator Reduces Measurement Time and Uncertainty (Fact Sheet)
Not Available
2012-04-01
One-Sun Multisource Solar Simulator (OSMSS) brings accurate energy-rating predictions that account for the nonlinear behavior of multijunction photovoltaic devices. The National Renewable Energy Laboratory (NREL) is one of only a few International Organization for Standardization (ISO)-accredited calibration labs in the world for primary and secondary reference cells and modules. As such, it is critical to seek new horizons in developing simulators and measurement methods. Current solar simulators are not well suited for accurately measuring multijunction devices. To set the electrical current to each junction independently, simulators must precisely tune the spectral content with no overlap between the wavelength regions. Current simulators do not have this capability, and the overlaps lead to large measurement uncertainties of ±6%. In collaboration with LabSphere, NREL scientists have designed and implemented the One-Sun Multisource Solar Simulator (OSMSS), which enables automatic spectral adjustment with nine independent wavelength regions. This fiber-optic simulator allows researchers and developers to set the current to each junction independently, reducing errors relating to spectral effects. NREL also developed proprietary software that allows this fully automated simulator to rapidly 'build' a spectrum under which all junctions of a multijunction device are current matched and behave as they would under a reference spectrum. The OSMSS will reduce the measurement uncertainty for multijunction devices, while significantly reducing the current-voltage measurement time from several days to minutes. These features will enable highly accurate energy-rating predictions that take into account the nonlinear behavior of multijunction photovoltaic devices.
Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov
2012-10-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTR analysis tools are today typically assessed with sensitivity analysis, and then a few important input uncertainties (typically based on a PIRT process) are varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracies of coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is a broader acceptance of the use of uncertainty analysis even in safety studies, and it has been accepted by regulators in some cases to replace the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project is built on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined: the prismatic type design is represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included.
The paper presents more detail on the benchmark cases, the specific phases and tasks, and the latest status and plans.
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.; Ross, Kyle; Cardoni, Jeffrey N; Kalinich, Donald A.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie; Ghosh, S. Tina
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
Uncertainty Analysis for a Virtual Flow Meter Using an Air-Handling Unit Chilled Water Valve
Song, Li; Wang, Gang; Brambley, Michael R.
2013-04-28
A virtual water flow meter is developed that uses the chilled water control valve on an air-handling unit as a measurement device. The flow rate of water through the valve is calculated using the differential pressure across the valve and its associated coil, the valve command, and an empirically determined valve characteristic curve. Thus, the probability of error in the measurements is significantly greater than for conventionally manufactured flow meters. In this paper, mathematical models are developed and used to conduct uncertainty analysis for the virtual flow meter, and the results from the virtual meter are compared to measurements made with an ultrasonic flow meter. Theoretical uncertainty analysis shows that the total uncertainty in flow rates from the virtual flow meter is 1.46% with 95% confidence; comparison of virtual flow meter results with measurements from an ultrasonic flow meter yielded an uncertainty of 1.46% with 99% confidence. The comparable results from the theoretical uncertainty analysis and empirical comparison with the ultrasonic flow meter corroborate each other, and tend to validate the approach to computationally estimating uncertainty for virtual sensors introduced in this study.
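A virtual flow meter of this kind can be sketched with a simple valve model and first-order uncertainty propagation. The characteristic curve, operating point, and component uncertainties below are hypothetical, not the paper's empirically fitted values:

```python
import math

# Hypothetical equal-percentage valve characteristic: flow coefficient
# as a function of valve command in [0, 1] (rangeability 50 assumed).
def cv(cmd):
    return 10.0 * 50.0 ** (cmd - 1.0)

# Virtual meter model: Q = Cv(cmd) * sqrt(dP).
def flow(cmd, dp):
    return cv(cmd) * math.sqrt(dp)

cmd, dp = 0.7, 40.0              # valve command and differential pressure (kPa)
u_cv_rel, u_dp_rel = 0.01, 0.02  # assumed relative standard uncertainties

q = flow(cmd, dp)
# For Q = Cv * dP^0.5, first-order propagation gives
# (u_Q/Q)^2 = (u_Cv/Cv)^2 + (0.5 * u_dP/dP)^2.
u_q_rel = math.sqrt(u_cv_rel ** 2 + (0.5 * u_dp_rel) ** 2)
```

The square-root dependence on differential pressure halves the sensitivity to pressure-measurement error, which is why the curve-fit uncertainty can dominate the total.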
Petruzzi, A.; D'Auria, F.; Giannotti, W.; Ivanov, K.
2005-02-15
The best-estimate calculation results from complex system codes are affected by approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. The code with (the capability of) internal assessment of uncertainty (CIAU) has been previously proposed by the University of Pisa to realize the integration between a qualified system code and an uncertainty methodology and to supply proper uncertainty bands each time a nuclear power plant (NPP) transient scenario is calculated. The derivation of the methodology and the results achieved by the use of CIAU are discussed to demonstrate the main features and capabilities of the method. In a joint effort between the University of Pisa and The Pennsylvania State University, the CIAU method has been recently extended to evaluate the uncertainty of coupled three-dimensional neutronics/thermal-hydraulics calculations. The result is CIAU-TN. The feasibility of the approach has been demonstrated, and sample results related to the turbine trip transient in the Peach Bottom NPP are shown. Notwithstanding that the full implementation and use of the procedure requires a database of errors not available at the moment, the results give an idea of the errors expected from the present computational tools.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.; Najm, Habib N.; Debusschere, Bert
2015-08-11
We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.
2011-12-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B.
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
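Steps (2), (3), and (5) of the sampling-based workflow surveyed above can be sketched with Latin hypercube sampling and rank-transformed correlations. The two-input toy model below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Step (2): one-dimensional Latin hypercube samples: one point per stratum.
def lhs(n, rng):
    return (rng.permutation(n) + rng.random(n)) / n

x1 = lhs(n, rng)                    # dominant uncertain input
x2 = lhs(n, rng)                    # weak uncertain input

# Step (3): propagate the sampled inputs through a toy "analysis".
y = 5.0 * x1 + 0.2 * x2 + rng.normal(0.0, 0.1, n)

# Step (5): rank-transformed (Spearman) correlation as a sensitivity measure,
# robust to monotone nonlinearity in the input-output relationship.
def rank_corr(a, b):
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

s1, s2 = rank_corr(x1, y), rank_corr(x2, y)
```

The survey's other procedures (partial correlation, nonparametric regression, variance decomposition) refine this basic pattern when inputs are correlated or effects are non-monotone.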
Uncertainty quantification of a radionuclide release model using an adaptive spectral technique
Gilli, L.; Hoogwerf, C.; Lathouwers, D.; Kloosterman, J. L.
2013-07-01
In this paper we present the application of a non-intrusive spectral technique we recently developed for the evaluation of the uncertainties associated with a radionuclide migration problem. Spectral techniques can be used to reconstruct stochastic quantities of interest by means of a Fourier-like expansion. Their application to uncertainty propagation problems can be performed by evaluating a set of realizations which are chosen adaptively; this work presents the main details of how this is done. The uncertainty quantification problem we deal with was first solved in a recent work in which the authors used a spectral technique based on an intrusive approach. In this paper we reproduce the results of that reference work, compare them, and discuss the main numerical aspects.
Eslick, John C.; Ng, Brenda; Gao, Qianwen; Tong, Charles H.; Sahinidis, Nikolaos V.; Miller, David C.
2014-12-31
Under the auspices of the U.S. Department of Energy's Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
Uncertainty Analysis of Spectral Irradiance Reference Standards Used for NREL Calibrations
Habte, A.; Andreas, A.; Reda, I.; Campanelli, M.; Stoffel, T.
2013-05-01
Spectral irradiance produced by lamp standards such as the National Institute of Standards and Technology (NIST) FEL-type tungsten halogen lamps are used to calibrate spectroradiometers at the National Renewable Energy Laboratory. Spectroradiometers are often used to characterize spectral irradiance of solar simulators, which in turn are used to characterize photovoltaic device performance, e.g., power output and spectral response. Therefore, quantifying the calibration uncertainty of spectroradiometers is critical to understanding photovoltaic system performance. In this study, we attempted to reproduce the NIST-reported input variables, including the calibration uncertainty in spectral irradiance for a standard NIST lamp, and quantify uncertainty for measurement setup at the Optical Metrology Laboratory at the National Renewable Energy Laboratory.
Franco, Guillermo; Shen-Tu, Bing Ming; Bazzurro, Paolo; Goretti, Agostino; Valensise, Gianluca
2008-07-08
Increasing sophistication in the insurance and reinsurance market is stimulating the move towards catastrophe models that offer a greater degree of flexibility in the definition of model parameters and model assumptions. This study explores the impact of uncertainty in the input parameters on the loss estimates by departing from the exclusive usage of mean values to establish the earthquake event mechanism, the ground motion fields, or the damageability of the building stock. Here the potential losses due to a repeat of the 1908 Messina-Reggio Calabria event are calculated using different plausible alternatives found in the literature that encompass 12 event scenarios, 2 different ground motion prediction equations, and 16 combinations of damage functions for the building stock, a total of 384 loss scenarios. These results constitute the basis for a sensitivity analysis of the different assumptions on the loss estimates that allows the model user to estimate the impact of the uncertainty on input parameters and the potential spread of the model results. For the event under scrutiny, average losses would amount today to about 9,000 to 10,000 million Euros. The uncertainty in the model parameters is reflected in the high coefficient of variation of this loss, reaching approximately 45%. The choice of ground motion prediction equations and vulnerability functions of the building stock contribute the most to the uncertainty in loss estimates. This indicates that the application of non-local-specific information has a great impact on the spread of potential catastrophic losses. In order to close this uncertainty gap, more exhaustive documentation practices in insurance portfolios will have to go hand in hand with greater flexibility in the model input parameters.
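The scenario grid described above (12 event scenarios × 2 ground motion prediction equations × 16 damage-function combinations = 384 loss scenarios) and the coefficient-of-variation summary can be sketched as follows; the per-scenario losses here are randomly generated stand-ins, not the study's model outputs:

```python
import itertools
import random
import statistics

# Enumerate the full factorial grid of modeling assumptions.
events = range(12)            # 12 event scenarios
gmpes = range(2)              # 2 ground motion prediction equations
damage_sets = range(16)       # 16 damage-function combinations
scenarios = list(itertools.product(events, gmpes, damage_sets))
assert len(scenarios) == 384  # inline check of the combination count

# Hypothetical loss per scenario (million EUR); real values would come
# from running the catastrophe model once per assumption combination.
random.seed(0)
losses = [random.lognormvariate(9.1, 0.45) for _ in scenarios]

mean_loss = statistics.mean(losses)
cv = statistics.stdev(losses) / mean_loss  # coefficient of variation
```

Summarizing the spread of `losses` by its coefficient of variation is what yields the roughly 45% figure quoted in the abstract.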
Uncertainty Quantification of Composite Laminate Damage with the Generalized Information Theory
J. Lucero; F. Hemez; T. Ross; K. Kline; J. Hundhausen; T. Tippetts
2006-05-01
This work presents a survey of five theories to assess the uncertainty of projectile impact induced damage on multi-layered carbon-epoxy composite plates. Because the types of uncertainty dealt with in this application are multiple (variability, ambiguity, and conflict) and because the data sets collected are sparse, characterizing the amount of delamination damage with probability theory alone is possible but incomplete. This motivates the exploration of methods contained within a broad Generalized Information Theory (GIT) that rely on less restrictive assumptions than probability theory. Probability, fuzzy sets, possibility, and imprecise probability (probability boxes (p-boxes) and Dempster-Shafer) are used to assess the uncertainty in composite plate damage. Furthermore, this work highlights the usefulness of each theory. The purpose of the study is not to compare directly the different GIT methods but to show that they can be deployed on a practical application and to compare the assumptions upon which these theories are based. The data sets consist of experimental measurements and finite element predictions of the amount of delamination and fiber splitting damage as multilayered composite plates are impacted by a projectile at various velocities. The physical experiments consist of using a gas gun to impact suspended plates with a projectile accelerated to prescribed velocities, then, taking ultrasound images of the resulting delamination. The nonlinear, multiple length-scale numerical simulations couple local crack propagation implemented through cohesive zone modeling to global stress-displacement finite element analysis. The assessment of damage uncertainty is performed in three steps by, first, considering the test data only; then, considering the simulation data only; finally, performing an assessment of total uncertainty where test and simulation data sets are combined. 
This study leads to practical recommendations for reducing the uncertainty and improving the prediction accuracy of the damage modeling and finite element simulation.
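Of the Generalized Information Theory methods named above, the probability-box construction can be sketched very compactly: interval-valued measurements induce lower and upper bounds on the CDF (a Dempster-Shafer belief/plausibility pair). The interval data below are invented for illustration:

```python
import numpy as np

# Each measurement is an interval [lo, hi], e.g. a delamination size whose
# exact value is ambiguous (hypothetical values, not the study's data).
intervals = [(1.0, 1.4), (0.8, 1.1), (1.2, 1.9), (0.9, 1.3), (1.5, 2.0)]
los = np.sort([a for a, _ in intervals])
his = np.sort([b for _, b in intervals])

def cdf_bounds(x):
    """Bounds on P(X <= x) consistent with the interval data.

    Upper bound: fraction of intervals that could lie at or below x
    (plausibility); lower bound: fraction that must (belief).
    """
    upper = np.mean(los <= x)
    lower = np.mean(his <= x)
    return lower, upper
```

Every probability distribution consistent with the interval data has a CDF lying between these two step functions, which is what makes p-boxes suited to sparse, ambiguous data sets like the ones described here.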
Hu, Jianwei; Gauld, Ian C.
2014-12-01
The U.S. Department of Energy’s Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focusing on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modelling and simulation is expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions and instrument responses. Nuclear data is an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties existing in the nuclear data used in these codes affect the accuracies of such analysis. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of the NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.
Robustness of Decision Insights under Alternative Aleatory/Epistemic Uncertainty Classifications
Unwin, Stephen D.; Eslinger, Paul W.; Johnson, Kenneth I.
2013-09-22
The Risk-Informed Safety Margin Characterization (RISMC) pathway is a set of activities defined under the U.S. Department of Energy Light Water Reactor Sustainability Program. The overarching objective of RISMC is to support plant life-extension decision-making by providing a state-of-knowledge characterization of safety margins in key systems, structures, and components (SSCs). A key technical challenge is to establish the conceptual and technical feasibility of analyzing safety margin in a risk-informed way, which, unlike conventionally defined deterministic margin analysis, would be founded on probabilistic characterizations of SSC performance. Evaluation of probabilistic safety margins will in general entail the uncertainty characterization both of the prospective challenge to the performance of an SSC ("load") and of its "capacity" to withstand that challenge. The RISMC framework contrasts sharply with the traditional probabilistic risk assessment (PRA) structure in that the underlying models are not inherently aleatory. Rather, they are largely deterministic physical/engineering models with ambiguities about the appropriate uncertainty classification of many model parameters. The current analysis demonstrates that if the distinction between epistemic and aleatory uncertainties is to be preserved in a RISMC-like modeling environment, then it is unlikely that analysis insights supporting decision-making will in general be robust under recategorization of input uncertainties. If it is believed there is a true conceptual distinction between epistemic and aleatory uncertainty (as opposed to the distinction being primarily a legacy of the PRA paradigm) then a consistent and defensible basis must be established by which to categorize input uncertainties.
Quantification of margins and uncertainty for risk-informed decision analysis.
Alvin, Kenneth Fredrick
2010-09-01
QMU stands for 'Quantification of Margins and Uncertainties'. QMU is a basic framework for consistency in integrating simulation, data, and/or subject matter expertise to provide input into a risk-informed decision-making process. QMU is being applied to a wide range of NNSA stockpile issues, from performance to safety. The implementation of QMU varies with lab and application focus. The Advanced Simulation and Computing (ASC) Program develops validated computational simulation tools to be applied in the context of QMU. QMU provides input into a risk-informed decision making process. The completeness aspect of QMU can benefit from the structured methodology and discipline of quantitative risk assessment (QRA)/probabilistic risk assessment (PRA). In characterizing uncertainties it is important to pay attention to the distinction between those arising from incomplete knowledge ('epistemic' or systematic), and those arising from device-to-device variation ('aleatory' or random). The national security labs should investigate the utility of a probability of frequency (PoF) approach in presenting uncertainties in the stockpile. A QMU methodology is connected if the interactions between failure modes are included. The design labs should continue to focus attention on quantifying uncertainties that arise from epistemic uncertainties such as poorly-modeled phenomena, numerical errors, coding errors, and systematic uncertainties in experiment. The NNSA and design labs should ensure that the certification plan for any RRW is supported by strong, timely peer review and by an ongoing, transparent QMU-based documentation and analysis in order to permit a confidence level necessary for eventual certification.
Seitz, R.
2011-03-02
It is widely recognized that the results of safety assessment calculations provide an important contribution to the safety arguments for a disposal facility, but cannot in themselves adequately demonstrate the safety of the disposal system. The safety assessment and a broader range of arguments and activities need to be considered holistically to justify radioactive waste disposal at any particular site. Many programs are therefore moving towards the production of what has become known as a Safety Case, which includes all of the different activities that are conducted to demonstrate the safety of a disposal concept. Recognizing the growing interest in the concept of a Safety Case, the International Atomic Energy Agency (IAEA) is undertaking an intercomparison and harmonization project called PRISM (Practical Illustration and use of the Safety Case Concept in the Management of Near-surface Disposal). The PRISM project is organized into four Task Groups that address key aspects of the Safety Case concept: Task Group 1 - Understanding the Safety Case; Task Group 2 - Disposal facility design; Task Group 3 - Managing waste acceptance; and Task Group 4 - Managing uncertainty. This paper addresses the work of Task Group 4, which is investigating approaches for managing the uncertainties associated with near-surface disposal of radioactive waste and their consideration in the context of the Safety Case. Emphasis is placed on identifying a wide variety of approaches that can and have been used to manage different types of uncertainties, especially non-quantitative approaches that have not received as much attention in previous IAEA projects. This paper includes discussions of the current results of work on the task on managing uncertainty, including: the different circumstances being considered, the sources/types of uncertainties being addressed and some initial proposals for approaches that can be used to manage different types of uncertainties.
On The short-term uncertainty in performance of a point absorber wave energy converter
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Lance Manuel and Jarred Canning (University of Texas at Austin, Austin, TX, USA) and Ryan G. Coe and Carlos Michelen (Sandia National Laboratories, Albuquerque, NM, USA); corresponding author: lmanuel@mail.utexas.edu. INTRODUCTION: Of interest in this study is the quantification of uncertainty in the performance of a two-body wave point absorber (Reference Model 3 or RM3), which serves as a wave energy converter
Constantinescu, E. M; Zavala, V. M.; Rocklin, M.; Lee, S.; Anitescu, M.
2011-02-01
We present a computational framework for integrating a state-of-the-art numerical weather prediction (NWP) model in stochastic unit commitment/economic dispatch formulations that account for wind power uncertainty. We first enhance the NWP model with an ensemble-based uncertainty quantification strategy implemented in a distributed-memory parallel computing architecture. We discuss computational issues arising in the implementation of the framework and validate the model using real wind-speed data obtained from a set of meteorological stations. We build a simulated power system to demonstrate the developments.
Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows
Templeton, Jeremy Alan; Blaylock, Myra L.; Domino, Stefan P.; Hewson, John C.; Kumar, Pritvi Raj; Ling, Julia; Najm, Habib N.; Ruiz, Anthony; Safta, Cosmin; Sargsyan, Khachik; Stewart, Alessia; Wagner, Gregory
2015-09-01
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost versus accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers
Office of Scientific and Technical Information (OSTI)
In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in
Reda, I.; Stoffel, T.; Habte, A.
2014-03-01
The National Renewable Energy Laboratory (NREL) and the Atmospheric Radiation Measurement (ARM) Climate Research Facility work together in providing data from strategically located in situ measurement observatories around the world. Both work together in improving and developing new technologies that assist in acquiring high quality radiometric data. In this presentation we summarize the uncertainty estimates of the ARM data collected at the ARM Solar Infrared Radiation Station (SIRS), Sky Radiometers on Stand for Downwelling Radiation (SKYRAD), and Ground Radiometers on Stand for Upwelling Radiation (GNDRAD), which ultimately improve the existing radiometric data. Three studies are also included to show the difference between calibrating pyrgeometers (e.g., Eppley PIR) using the manufacturer blackbody versus the interim World Infrared Standard Group (WISG), a pyrgeometer aging study, and the sampling rate effect of correcting historical data.
Rapidity gap survival in central exclusive diffraction: Dynamical mechanisms and uncertainties
Strikman, Mark; Weiss, Christian
2009-01-01
We summarize our understanding of the dynamical mechanisms governing rapidity gap survival in central exclusive diffraction, pp -> p + H + p (H = high-mass system), and discuss the uncertainties in present estimates of the survival probability. The main suppression of diffractive scattering is due to inelastic soft spectator interactions at small pp impact parameters and can be described in a mean-field approximation (independent hard and soft interactions). Moderate extra suppression results from fluctuations of the partonic configurations of the colliding protons. At LHC energies absorptive interactions of hard spectator partons associated with the gg -> H process reach the black-disk regime and cause substantial additional suppression, pushing the survival probability below 0.01.
Which Models Matter: Uncertainty and Sensitivity Analysis for
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Influence of Advanced Fuel Cycles on Uncertainty in the Performance...
Office of Scientific and Technical Information (OSTI)
Resource Relation: Conference: Proposed for presentation at the International High-Level Radioactive Waste Management Conference held April 28 - May 2, 2013 in Albuquerque, NM...
Keller, Dustin M.
2013-11-01
A comprehensive investigation into the measurement uncertainty in polarization produced by Dynamic Nuclear Polarization is outlined. The polarization data taken during Jefferson Lab experiment E08-007 are used to obtain error estimates and to develop an algorithm to minimize the uncertainty of the polarization measurement in irradiated ¹⁴NH₃ targets, which is readily applied to other materials. The target polarization and corresponding uncertainties for E08-007 are reported. The resulting relative uncertainty in the target polarization is determined to be less than or equal to 3.9%.
Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)
2006-10-01
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
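As a toy illustration of the evidence-theory quantities this abstract refers to, the sketch below computes belief and plausibility of an interval from a small set of focal elements (intervals carrying basic probability assignments). The focal elements, masses, and function names are invented for illustration and are not from the study.

```python
# Hypothetical sketch: belief and plausibility of an interval [lo, hi]
# given focal elements ((a, b), mass). Values are illustrative only.

def belief(focal_elements, lo, hi):
    """Sum of masses of focal elements entirely contained in [lo, hi]."""
    return sum(m for (a, b), m in focal_elements if lo <= a and b <= hi)

def plausibility(focal_elements, lo, hi):
    """Sum of masses of focal elements that intersect [lo, hi]."""
    return sum(m for (a, b), m in focal_elements if a <= hi and b >= lo)

# Three invented focal elements whose masses sum to 1.
focal = [((0.0, 2.0), 0.3), ((1.0, 3.0), 0.5), ((2.5, 4.0), 0.2)]
bel = belief(focal, 0.0, 3.0)        # first two elements lie inside [0, 3]
pl = plausibility(focal, 0.0, 3.0)   # all three elements intersect [0, 3]
```

Belief never exceeds plausibility, which is the "less restrictive" interval-valued specification of uncertainty the abstract contrasts with a single probability.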
Wildermann, R.; Beittel, R.
1993-01-01
The Minerals Management Service (MMS) of the US Department of the Interior prepares an environmental impact statement (EIS) for each proposal to lease a portion of the Outer Continental Shelf (OCS) for oil and gas exploration and development. The nature, magnitude, and timing of the activities that would ultimately result from leasing are subject to wide speculation, primarily because of uncertainties about the locations and amounts of petroleum hydrocarbons that exist on most potential leases. These uncertainties create challenges in preparing EIS's that meet National Environmental Policy Act requirements and provide information useful to decision-makers. This paper examines the constraints that uncertainty places on the detail and reliability of assessments of impacts from potential OCS development. It further describes how the MMS accounts for uncertainty in developing reasonable scenarios of future events that can be evaluated in the EIS. A process for incorporating the risk of accidental oil spills into assessments of expected impacts is also presented. Finally, the paper demonstrates through examination of case studies how a balance can be achieved between the need for an EIS to present impacts in sufficient detail to allow a meaningful comparison of alternatives and the tendency to push the analysis beyond credible limits.
Measuring Cross-Section and Estimating Uncertainties with the fissionTPC
Bowden, N.; Manning, B.; Sangiorgio, S.; Seilhan, B.
2015-01-30
The purpose of this document is to outline the prescription for measuring fission cross-sections with the NIFFTE fissionTPC and estimating the associated uncertainties. As such it will serve as a work planning guide for NIFFTE collaboration members and facilitate clear communication of the procedures used to the broader community.
Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment
Greg J. Shott, Vefa Yucel, Lloyd Desotell; Non-NSTec Authors: G. Pyles and Jon Carilli
2007-06-01
Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
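The Monte Carlo step described above, sampling inputs by Latin hypercube and propagating them through a flux model, can be sketched as follows. This is a minimal sketch, assuming a made-up multiplicative `flux_model` and illustrative parameter ranges; it is not the Rn-222 model analyzed in the study.

```python
# Hedged sketch: Latin hypercube sampling and forward propagation of
# input uncertainty through a placeholder response function.
import random

def latin_hypercube(n_samples, n_params, rng):
    """Return LHS samples in [0, 1): one stratified column per parameter."""
    cols = []
    for _ in range(n_params):
        # One sample per equal-probability stratum, then shuffle the strata.
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))  # rows are sample points

def flux_model(diffusion, emanation, inventory):
    # Placeholder response for illustration only; not the study's model.
    return diffusion * emanation * inventory

rng = random.Random(42)
samples = latin_hypercube(1000, 3, rng)
# Map unit-interval samples onto illustrative (invented) parameter ranges.
fluxes = [flux_model(0.5 + 4.5 * u, 0.1 + 0.4 * v, 50.0 + 100.0 * w)
          for u, v, w in samples]
mean = sum(fluxes) / len(fluxes)
```

Latin hypercube sampling places exactly one sample in each equal-probability stratum per input, which is why it covers the input space more evenly than naive Monte Carlo for the same sample count.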
ASSESSMENT OF UNCERTAINTY IN THE RADIATION DOSES FOR THE TECHA RIVER DOSIMETRY SYSTEM
Napier, Bruce A.; Degteva, M. O.; Anspaugh, L. R.; Shagina, N. B.
2009-10-23
In order to provide more accurate and precise estimates of individual dose (and thus more precise estimates of radiation risk) for the members of the ETRC, a new dosimetric calculation system, the Techa River Dosimetry System-2009 (TRDS-2009), has been prepared. The deterministic version of the improved dosimetry system, TRDS-2009D, was essentially completed in April 2009. Recent developments in the evaluation of dose-response models in light of uncertain dose have highlighted the importance of different types of uncertainties in the development of individual dose estimates. These include uncertain parameters that may be either shared or unshared within the dosimetric cohort, as well as the nature of the uncertainty as aleatory or epistemic and either classical or Berkson. This report identifies the nature of the various input parameters and calculational methods incorporated in the Techa River Dosimetry System (based on the TRDS-2009D implementation), with the intention of preparing a stochastic version to estimate the uncertainties in the dose estimates. This report reviews the equations, databases, and input parameters, and then identifies the authors' interpretations of their general nature. It presents the approach selected so that the stochastic (Monte Carlo) implementation of the dosimetry system, TRDS-2009MC, will provide useful information regarding the uncertainties of the doses.
Zhang, J.; Hodge, B.; Miettinen, J.; Holttinen, H.; Gomez-Lozaro, E.; Cutululis, N.; Litong-Palima, M.; Sorensen, P.; Lovholm, A.; Berge, E.; Dobschinski, J.
2013-10-01
This presentation summarizes the work to investigate the uncertainty in wind forecasting at different times of year and compare wind forecast errors in different power systems using large-scale wind power prediction data from six countries: the United States, Finland, Spain, Denmark, Norway, and Germany.
Distributed Generation Investment by a Microgrid Under Uncertainty
Siddiqui, Afzal; Marnay, Chris
2006-06-16
This paper examines a California-based microgrid's decision to invest in a distributed generation (DG) unit that operates on natural gas. While the long-term natural gas generation cost is stochastic, we initially assume that the microgrid may purchase electricity at a fixed retail rate from its utility. Using the real options approach, we find natural gas generating cost thresholds that trigger DG investment. Furthermore, the consideration of operational flexibility by the microgrid accelerates DG investment, while the option to disconnect entirely from the utility is not attractive. By allowing the electricity price to be stochastic, we next determine an investment threshold boundary and find that high electricity price volatility relative to that of natural gas generating cost delays investment while simultaneously increasing the value of the investment. We conclude by using this result to find the implicit option value of the DG unit.
Hurdling barriers through market uncertainty: Case studies in innovative technology adoption
Payne, Christopher T.; Radspieler Jr., Anthony; Payne, Jack
2002-08-18
The crisis atmosphere surrounding electricity availability in California during the summer of 2001 produced two distinct phenomena in commercial energy consumption decision-making: desires to guarantee energy availability while blackouts were still widely anticipated, and desires to avoid or mitigate significant price increases when higher commercial electricity tariffs took effect. The climate of increased consideration of these factors seems to have led, in some cases, to greater willingness on the part of business decision-makers to consider highly innovative technologies. This paper examines three case studies of innovative technology adoption: retrofit of time-and-temperature signs on an office building; installation of fuel cells to supply power, heating, and cooling to the same building; and installation of a gas-fired heat pump at a microbrewery. We examine the decision process that led to adoption of these technologies. In each case, specific constraints had made more conventional energy-efficient technologies inapplicable. We examine how these barriers to technology adoption developed over time, how the California energy decision-making climate combined with the characteristics of these innovative technologies to overcome the barriers, and what the implications of hurdling these barriers are for future energy decisions within the firms.
Implementation of a Bayesian Engine for Uncertainty Analysis
Leng Vang; Curtis Smith; Steven Prescott
2014-08-01
In probabilistic risk assessment, it is important to have an environment where analysts have access to shared and secured high-performance computing and a statistical analysis tool package. As part of the advanced small modular reactor probabilistic risk analysis framework implementation, we have identified the need for advanced Bayesian computations. However, in order to make this technology available to non-specialists, there is also a need for a simplified tool that allows users to author models and evaluate them within this framework. As a proof of concept, we have implemented an advanced open source Bayesian inference tool, OpenBUGS, within the browser-based cloud risk analysis framework under development at the Idaho National Laboratory. This development, the “OpenBUGS Scripter”, has been implemented as a client-side, visual, web-based and integrated development environment for creating OpenBUGS language scripts. It depends on the shared server environment to execute the generated scripts and to transmit results back to the user. The visual models are in the form of linked diagrams, from which we automatically create the applicable OpenBUGS script that matches the diagram. These diagrams can be saved locally or stored on the server environment to be shared with other users.
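As a minimal sketch of the kind of Bayesian computation such a tool automates, the example below performs a conjugate Beta-Binomial update for a component failure probability. This is an illustration only: the prior parameters and data are invented, and real PRA models would be richer and sampled with MCMC (as OpenBUGS actually does) rather than updated in closed form.

```python
# Hedged sketch: conjugate Bayesian update of a failure probability.
# Prior Beta(alpha, beta) + binomial data (failures out of trials)
# yields posterior Beta(alpha + failures, beta + trials - failures).

def beta_binomial_posterior(alpha, beta, failures, trials):
    """Return the posterior Beta parameters after observing the data."""
    return alpha + failures, beta + (trials - failures)

a0, b0 = 1.0, 9.0          # invented prior: mean failure probability 0.1
a1, b1 = beta_binomial_posterior(a0, b0, failures=2, trials=50)
posterior_mean = a1 / (a1 + b1)   # posterior mean failure probability
```

Observing only 2 failures in 50 trials pulls the posterior mean below the prior mean, which is exactly the data-driven updating a Bayesian engine provides to risk analysts.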
SU-D-16A-06: Modeling Biological Effects of Residual Uncertainties For Stereotactic Radiosurgery
Ma, L; Larson, D; McDermott, M; Sneed, P; Sahgal, A
2014-06-01
Purpose: Residual uncertainties on the order of 1-2 mm are frequently observed when delivering stereotactic radiosurgery via on-line imaging guidance with a relocatable frame. In this study, a predictive model was developed to evaluate potential late radiation effects associated with such uncertainties. Methods: A mathematical model was first developed to correlate the peripheral isodose volume with the internal and/or setup margins for a radiosurgical target. This model was then integrated with a previously published logistic regression normal tissue complication model to determine the symptomatic radiation necrosis rate at various target sizes and prescription dose levels. The model was tested on a cohort of 15 brain tumor and tumor resection cavity patient cases, and model-predicted results were compared with the clinical results reported in the literature. Results: A normalized target diameter (D_0), defined as D_0 = 6V/S, where V is the volume of a radiosurgical target and S is the surface area of the target, was found to correlate excellently with the peripheral isodose volume for a radiosurgical delivery (logarithmic regression R^2 > 0.99). The peripheral isodose volumes were found to increase rapidly with increasing uncertainty levels. In general, a 1-mm residual uncertainty was calculated to result in approximately 0.5%, 1%, and 3% increases in the symptomatic radiation necrosis rate for D_0 = 1 cm, 2 cm, and 3 cm, based on the prescription guideline of RTOG 9005, i.e., 21 Gy to a lesion 1 cm in diameter, 18 Gy to a lesion 2 cm in diameter, and 15 Gy to a lesion 3 cm in diameter, respectively. Conclusion: The results of this study suggest that more stringent criteria on residual uncertainties are needed when treating a large target such as D_0 ≤ 3 cm with stereotactic radiosurgery. Dr. Ma and Dr. Sahgal are currently serving on the board of the International Society of Stereotactic Radiosurgery (ISRS).
Use of Quantitative Uncertainty Analysis to Support M&V Decisions in ESPCs
Mathew, Paul A.; Koehling, Erick; Kumar, Satish
2005-05-11
Measurement and Verification (M&V) is a critical element of an Energy Savings Performance Contract (ESPC) - without M&V, there is no way to confirm that the projected savings in an ESPC are in fact being realized. For any given energy conservation measure in an ESPC, there are usually several M&V choices, which will vary in terms of measurement uncertainty, cost, and technical feasibility. Typically, M&V decisions are made almost solely based on engineering judgment and experience, with little, if any, quantitative uncertainty analysis (QUA). This paper describes the results of a pilot project initiated by the Department of Energy's Federal Energy Management Program to explore the use of Monte-Carlo simulation to assess savings uncertainty and thereby augment the M&V decision-making process in ESPCs. The intent was to use QUA selectively in combination with heuristic knowledge, in order to obtain quantitative estimates of the savings uncertainty without the burden of a comprehensive "bottoms-up" QUA. This approach was used to analyze the savings uncertainty in an ESPC for a large federal agency. The QUA was seamlessly integrated into the ESPC development process and the incremental effort was relatively small with user-friendly tools that are commercially available. As the case study illustrates, in some cases the QUA simply confirms intuitive or qualitative information, while in other cases, it provides insight that suggests revisiting the M&V plan. The case study also showed that M&V decisions should be informed by the portfolio risk diversification. By providing quantitative uncertainty information, QUA can effectively augment the M&V decision-making process as well as the overall ESPC financial analysis.
SU-E-T-573: The Robustness of a Combined Margin Recipe for Uncertainties During Radiotherapy
Stroom, J; Vieira, S; Greco, C [Champalimaud Foundation, Lisbon (Portugal)]
2014-06-01
Purpose: To investigate the variability of a safety margin recipe that combines CTV and PTV margins quadratically, with several tumor, treatment, and user related factors. Methods: Margin recipes were calculated by Monte Carlo simulations in 5 steps. (1) A spherical tumor, with or without isotropic microscopic disease, was irradiated with a 5-field dose plan. (2) PTV: Geometric uncertainties were introduced using systematic (Sgeo) and random (sgeo) standard deviations. CTV: The microscopic disease distribution was modelled by a semi-Gaussian (Smicro) with a varying number of islets (Ni). (3) For a specific uncertainty set (Sgeo, sgeo, Smicro(Ni)), margins were varied until a pre-defined decrease in TCP or dose coverage was fulfilled. (4) First, margin recipes were calculated for each of the three uncertainties separately. CTV and PTV recipes were then combined quadratically to yield a final recipe M(Sgeo, sgeo, Smicro(Ni)). (5) The final M was verified by simultaneous simulations of the uncertainties. M was then calculated for various changing parameters such as margin criteria, penumbra steepness, islet radio-sensitivity, dose conformity, and number of fractions. We subsequently investigated (A) whether the combined recipe still holds in all these situations, and (B) what the margin variation was in all these cases. Results: We found that the accuracy of the combined margin recipes remains on average within 1 mm for all situations, confirming the correctness of the quadratic addition. Depending on the specific parameter, margin factors could change such that margins change by over 50%. Margin recipes based on TCP criteria are especially sensitive to more parameters than those based on purely geometric Dmin criteria. Interestingly, measures taken to minimize treatment field sizes (e.g., by optimizing dose conformity) are counteracted by the requirement of larger margins to achieve the same tumor coverage.
Conclusion: Margin recipes combining geometric and microscopic uncertainties quadratically are accurate under varying circumstances. However, margins can change by up to 50% for different situations.
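The quadratic (root-sum-square) combination of CTV and PTV margins described above can be sketched as follows. The van Herk-style coefficients and the input standard deviations below are illustrative assumptions, not the paper's fitted values.

```python
# Hedged sketch: combining a geometric (PTV) margin and a
# microscopic-disease (CTV) margin in quadrature. All numbers invented.
import math

def ptv_margin(sigma_sys, sigma_rand, a=2.5, b=0.7):
    """Van Herk-style geometric margin a*Sigma + b*sigma (coefficients illustrative)."""
    return a * sigma_sys + b * sigma_rand

def combined_margin(m_ctv, m_ptv):
    """Quadratic (root-sum-square) combination of CTV and PTV margins."""
    return math.sqrt(m_ctv ** 2 + m_ptv ** 2)

m_geo = ptv_margin(2.0, 3.0)   # mm, from assumed Sgeo = 2 mm, sgeo = 3 mm
m_micro = 4.0                  # mm, assumed CTV margin for microscopic spread
m_total = combined_margin(m_micro, m_geo)
```

Adding in quadrature rather than linearly reflects the assumption that the geometric and microscopic uncertainties are independent, which is why the combined margin is smaller than the simple sum.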
Gomez, J. C.; Glatzmaier, G. C.; Mehos, M.
2012-09-01
The main objective of this study was to calculate the uncertainty at 95% confidence for the experimental values of heat capacity of the eutectic mixture of biphenyl/diphenyl ether (Therminol VP-1) determined from 300 to 370 degrees C. Twenty-five samples were evaluated using differential scanning calorimetry (DSC) to obtain the sample heat flow as a function of temperature. The ASTM E-1269-05 standard was used to determine the heat capacity from the DSC evaluations. High-pressure crucibles were employed to contain the sample in the liquid state without vaporizing. Sample handling has a significant impact on the random uncertainty. It was determined that the fluid is difficult to handle, and a high variability of the data was produced. The heat capacity of Therminol VP-1 between 300 and 370 degrees C was measured to be 0.0025T + 0.8672 J/g.K, with an uncertainty of +/- 0.074 J/g.K (3.09%) at 95% confidence, where T is the temperature in Kelvin.
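A hedged sketch of the 95%-confidence uncertainty calculation described above: the half-width of the confidence interval of a mean from replicate measurements, using a Student-t coverage factor. The readings below are invented for illustration and are not the study's DSC data.

```python
# Hedged sketch: 95% confidence half-width of a mean from replicates,
# computed as t * s / sqrt(n). All readings are invented.
import math
import statistics

def ci95_halfwidth(values, t_factor):
    """Half-width of the 95% CI of the mean: t * s / sqrt(n)."""
    s = statistics.stdev(values)          # sample standard deviation
    return t_factor * s / math.sqrt(len(values))

# 25 invented replicate heat-capacity readings (J/g.K) at one temperature.
readings = [1.62, 1.58, 1.65, 1.60, 1.63, 1.59, 1.61, 1.64, 1.57, 1.66,
            1.60, 1.62, 1.63, 1.58, 1.61, 1.65, 1.59, 1.62, 1.60, 1.64,
            1.61, 1.63, 1.58, 1.62, 1.60]
t_95_24dof = 2.064   # Student-t coverage factor for 24 degrees of freedom
halfwidth = ci95_halfwidth(readings, t_95_24dof)
mean_cp = statistics.mean(readings)
```

With 25 samples the half-width shrinks by a factor of five relative to the sample scatter itself, which is why replicate handling variability, not instrument resolution, can dominate the reported uncertainty.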
Gerhard Strydom; Su-Jong Yoon
2014-04-01
Computational Fluid Dynamics (CFD) evaluation of homogeneous and heterogeneous fuel models was performed as part of the Phase I calculations of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on High Temperature Reactor (HTR) Uncertainties in Modeling (UAM). This study focused on the nominal localized stand-alone fuel thermal response, as defined in Ex. I-3 and I-4 of the HTR UAM. The aim of the stand-alone thermal unit-cell simulation is to isolate the effect of material and boundary input uncertainties on a very simplified problem, before propagation of these uncertainties is performed in subsequent coupled neutronics/thermal fluids phases of the benchmark. In many previous studies of high temperature gas cooled reactors, a volume-averaged homogeneous mixture model of a single fuel compact has been applied. In the homogeneous model, the Tristructural Isotropic (TRISO) fuel particles in the fuel compact are not modeled directly, and an effective thermal conductivity is employed for the thermo-physical properties of the fuel compact. In the heterogeneous model, by contrast, the uranium carbide (UCO), inner and outer pyrolytic carbon (IPyC/OPyC), and silicon carbide (SiC) layers of the TRISO fuel particles are explicitly modeled, and the fuel compact is modeled as a heterogeneous mixture of TRISO fuel kernels embedded in H-451 matrix graphite. In this study, steady-state and transient CFD simulations were performed with both homogeneous and heterogeneous models to compare their thermal characteristics. The nominal values of the input parameters were used for this CFD analysis. In a future study, the effects of input uncertainties in the material properties and boundary parameters will be investigated and reported.
El Hanandeh, Ali; El-Zein, Abbas
2010-05-15
This paper describes the development and application of the Stochastic Integrated Waste Management Simulator (SIWMS) model. SIWMS provides a detailed view of the environmental impacts and associated costs of municipal solid waste (MSW) management alternatives under conditions of uncertainty. The model follows a life-cycle inventory approach extended with compensatory systems to provide more equitable bases for comparing different alternatives. Economic performance is measured by the net present value. The model is verified against four publicly available models under deterministic conditions and then used to study the impact of uncertainty on Sydney's MSW management 'best practices'. Uncertainty has a significant effect on all impact categories. The greatest effect is observed in the global warming category where a reversal of impact direction is predicted. The reliability of the system is most sensitive to uncertainties in the waste processing and disposal. The results highlight the importance of incorporating uncertainty at all stages to better understand the behaviour of the MSW system.
Propagation of Isotopic Bias and Uncertainty to Criticality Safety Analyses of PWR Waste Packages
Radulescu, Georgeta
2010-06-01
Burnup credit methodology is economically advantageous because significantly higher loading capacity may be achieved for spent nuclear fuel (SNF) casks based on this methodology as compared to the loading capacity based on a fresh-fuel assumption. However, the criticality safety analysis for establishing the loading curve based on burnup credit becomes increasingly complex as more parameters accounting for spent fuel isotopic compositions are introduced to the safety analysis. The safety analysis requires validation of both the depletion and criticality calculation methods. Validation of a neutronic-depletion code consists of quantifying the bias, and the uncertainty associated with the bias, in predicted SNF compositions caused by cross-section data uncertainty and by approximations in the calculational method. The validation is based on comparison between radiochemical assay (RCA) data and calculated isotopic concentrations for fuel samples representative of the SNF inventory. The criticality analysis methodology for commercial SNF disposal allows burnup credit for 14 actinides and 15 fission product isotopes in SNF compositions. The neutronic-depletion method for disposal criticality analysis employing burnup credit is the two-dimensional (2-D) depletion sequence TRITON (Transport Rigor Implemented with Time-dependent Operation for Neutronic depletion)/NEWT (New ESC-based Weighting Transport code) with the 44GROUPNDF5 cross-section library in the Standardized Computer Analysis for Licensing Evaluation (SCALE 5.1) code system. The SCALE 44GROUPNDF5 cross-section library is based on the Evaluated Nuclear Data File/B Version V (ENDF/B-V) library. The criticality calculation code for disposal criticality analysis employing burnup credit is the Monte Carlo N-Particle (MCNP) transport code.
The purpose of this calculation report is to determine the bias on the calculated effective neutron multiplication factor, k_eff, due to the bias and bias uncertainty associated with predicted spent fuel compositions (i.e., to determine the penalty in reactivity due to isotopic composition bias and uncertainty) for use in disposal criticality analysis employing burnup credit. The method used in this calculation to propagate the isotopic bias and bias-uncertainty values to k_eff is the Monte Carlo uncertainty sampling method. The development of this report is consistent with the 'Test Plan for: Isotopic Validation for Postclosure Criticality of Commercial Spent Nuclear Fuel'. This calculation report has been developed in support of burnup credit activities for the proposed repository at Yucca Mountain, Nevada, and provides a methodology that can be applied to other criticality safety applications employing burnup credit.
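A minimal sketch of the Monte Carlo uncertainty sampling idea, assuming a generic k_eff evaluator (here a toy surrogate with invented coefficients; a real analysis would call a criticality code such as MCNP):

```python
import random

def sample_keff(n_samples, nominal_conc, bias, rel_sd, keff_model, seed=0):
    # Perturb each isotope's predicted concentration by its bias plus a
    # normally distributed relative uncertainty, then collect k_eff values.
    rng = random.Random(seed)
    keffs = []
    for _ in range(n_samples):
        perturbed = {iso: c * (1.0 + bias[iso] + rng.gauss(0.0, rel_sd[iso]))
                     for iso, c in nominal_conc.items()}
        keffs.append(keff_model(perturbed))
    return sorted(keffs)

# Toy surrogate: k_eff rises with fissile content, falls with an absorber
toy = lambda c: 0.9 + 0.2 * c["U235"] - 0.05 * c["Sm149"]
keffs = sample_keff(1000, {"U235": 0.5, "Sm149": 0.1},
                    {"U235": 0.01, "Sm149": -0.02},
                    {"U235": 0.02, "Sm149": 0.05}, toy)
upper_keff = keffs[int(0.95 * len(keffs))]  # 95th-percentile k_eff
```

The spread of the sampled k_eff values relative to the nominal case gives the reactivity penalty attributable to isotopic bias and uncertainty.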
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; Lu, C.; Mansoor, K.; Carroll, S. A.
2012-12-20
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in the caprock, and a shallow aquifer, each modeled with sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of the reservoir, caprock, and aquifer, an uncertain fault location, and the injection rate as a decision variable. The simulated pressure and CO2/brine saturation are passed to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are in turn imposed on the aquifer model as a source term. Uncertainties are thereby propagated from the deep reservoir model to the fault-leakage model and eventually to the geochemical model in the shallow aquifer, contributing to the risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to the sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to identify the parameters to which the risk profiles are most sensitive. The resulting uncertainty in the pH- and TDS-defined aquifer volume impacted by CO2 and brine leakage stems mainly from the uncertainty in the fault permeability. Finally, high-resolution, reduced-order models of the risk profiles are developed as functions of all the decision variables and uncertain parameters in the three subsystems.
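The cascade of uncertainty through the three subsystems can be caricatured with toy stand-ins for each model; everything below (distributions, coefficients, threshold) is invented for illustration only.

```python
import random

def propagate(n, seed=1):
    # Sampling-based propagation through three chained toy sub-models:
    # reservoir pressure -> fault leakage flux -> aquifer pH change.
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        log_perm = rng.uniform(-14.0, -12.0)     # uncertain fault permeability
        pressure = 20.0 + rng.gauss(0.0, 1.0)    # reservoir model output (MPa)
        flux = pressure * 10 ** (log_perm + 13)  # fault-leakage model
        delta_ph = -0.1 * flux                   # aquifer geochemistry model
        outcomes.append(delta_ph)
    return outcomes

samples = propagate(2000)
risk = sum(1 for d in samples if d < -5.0) / len(samples)  # exceedance probability
```

Each stage consumes the previous stage's output plus its own uncertain parameter, so the outcome distribution reflects the full chain, which is the essence of the nested sampling scheme described above.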
Scolnic, D.; Riess, A.; Brout, D.; Rodney, S. [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Rest, A. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Huber, M. E.; Tonry, J. L. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Foley, R. J.; Chornock, R.; Berger, E.; Soderberg, A. M.; Stubbs, C. W.; Kirshner, R. P.; Challis, P.; Czekala, I.; Drout, M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Narayan, G. [Department of Physics, Harvard University, 17 Oxford Street, Cambridge, MA 02138 (United States); Smartt, S. J.; Botticella, M. T. [Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN (United Kingdom); Schlafly, E. [Max Planck Institute for Astronomy, Konigstuhl 17, D-69117 Heidelberg (Germany); and others
2014-11-01
We probe the systematic uncertainties from the 113 Type Ia supernovae (SN Ia) in the Pan-STARRS1 (PS1) sample along with 197 SN Ia from a combination of low-redshift surveys. The companion paper by Rest et al. describes the photometric measurements and cosmological inferences from the PS1 sample. The largest systematic uncertainty stems from the photometric calibration of the PS1 and low-z samples. We increase the sample of observed Calspec standards from 7 to 10 used to define the PS1 calibration system. The PS1 and SDSS-II calibration systems are compared and discrepancies up to ~0.02 mag are recovered. We find uncertainties in the proper way to treat intrinsic colors and reddening produce differences in the recovered value of w up to 3%. We estimate masses of host galaxies of PS1 supernovae and detect an insignificant difference in distance residuals of the full sample of 0.037 +/- 0.031 mag between host galaxies with high and low masses. Assuming flatness and including systematic uncertainties in our analysis of only the SNe measurements, we find w = -1.120^{+0.360}_{-0.206} (Stat) ^{+0.269}_{-0.291} (Sys). With additional constraints from baryon acoustic oscillation (BAO), cosmic microwave background (CMB; Planck), and H_0 measurements, we find w = -1.166^{+0.072}_{-0.069} and Omega_m = 0.280^{+0.013}_{-0.012} (statistical and systematic errors added in quadrature). The significance of the inconsistency with w = -1 depends on whether we use Planck or Wilkinson Microwave Anisotropy Probe measurements of the CMB: w_{BAO+H0+SN+WMAP} = -1.124^{+0.083}_{-0.065}.
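The parenthetical "added in quadrature" is the standard root-sum-square combination of independent error components; a one-liner makes it concrete (the numbers below simply reuse the separately quoted statistical and systematic upper errors on w for illustration).

```python
import math

def quadrature(*errors):
    # Root-sum-square combination of independent error components.
    return math.sqrt(sum(e * e for e in errors))

total_up = quadrature(0.360, 0.269)  # combined upper error, ~0.45
```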
Cardoso, Goncalo; Stadler, Michael; Bozchalui, Mohammed C.; Sharma, Ratnesh; Marnay, Chris; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-12-06
The large scale penetration of electric vehicles (EVs) will introduce technical challenges to the distribution grid, but also carries the potential for vehicle-to-grid services. Namely, if available in large enough numbers, EVs can be used as a distributed energy resource (DER) and their presence can influence optimal DER investment and scheduling decisions in microgrids. In this work, a novel EV fleet aggregator model is introduced in a stochastic formulation of DER-CAM [1], an optimization tool used to address DER investment and scheduling problems. This is used to assess the impact of EV interconnections on optimal DER solutions considering uncertainty in EV driving schedules. Optimization results indicate that EVs can have a significant impact on DER investments, particularly if considering short payback periods. Furthermore, results suggest that uncertainty in driving schedules carries little significance to total energy costs, which is corroborated by results obtained using the stochastic formulation of the problem.
Constantinescu, E. M.; Zavala, V. M.; Rocklin, M.; Lee, S.; Anitescu, M.
2009-10-09
We present a computational framework for integrating the state-of-the-art Weather Research and Forecasting (WRF) model in stochastic unit commitment/energy dispatch formulations that account for wind power uncertainty. We first enhance the WRF model with adjoint sensitivity analysis capabilities and a sampling technique implemented in a distributed-memory parallel computing architecture. We use these capabilities through an ensemble approach to model the uncertainty of the forecast errors. The wind power realizations are exploited through a closed-loop stochastic unit commitment/energy dispatch formulation. We discuss computational issues arising in the implementation of the framework. In addition, we validate the framework using real wind speed data obtained from a set of meteorological stations. We also build a simulated power system to demonstrate the developments.
Martinez-Canales, Monica L.; Heaphy, Robert; Gramacy, Robert B.; Taddy, Matt; Chiesa, Michael L.; Thomas, Stephen W.; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Trucano, Timothy Guy; Gray, Genetha Anne
2006-11-01
This project focused on research and algorithmic development for optimization under uncertainty (OUU) problems driven by earth penetrator (EP) designs. While taking uncertainty into account, we addressed three challenges in current simulation-based engineering design and analysis processes. The first challenge required leveraging small local samples, already constructed by optimization algorithms, to build effective surrogate models. We used Gaussian Process (GP) models to construct these surrogates, and developed two OUU algorithms using 'local' GPs (OUU-LGP) and one OUU algorithm using 'global' GPs (OUU-GGP) that appear competitive with or better than current methods. The second challenge was to develop a methodical design process based on multi-resolution, multi-fidelity models; here we developed a Multi-Fidelity Bayesian Auto-regressive process (MF-BAP). The third challenge involved the development of tools that are computationally feasible and accessible. We created MATLAB(R) and initial DAKOTA implementations of our algorithms.
Reducing Uncertainty in the Seismic Design Basis for the Waste Treatment Plant, Hanford, Washington
Brouns, Thomas M.; Rohay, Alan C.; Reidel, Steve; Gardner, Martin G.
2007-02-27
The seismic design basis for the Waste Treatment Plant (WTP) at the Department of Energy's (DOE) Hanford Site near Richland was re-evaluated in 2005, resulting in an increase of up to 40% in the seismic design basis. The original seismic design basis for the WTP was established in 1999 based on a probabilistic seismic hazard analysis completed in 1996. The 2005 analysis was performed to address questions raised by the Defense Nuclear Facilities Safety Board (DNFSB) about the assumptions used in developing the original seismic criteria and the adequacy of the site geotechnical surveys. The updated seismic response analysis used existing and newly acquired seismic velocity data, statistical analysis, expert elicitation, and ground motion simulation to develop interim design ground motion response spectra that enveloped the remaining uncertainties. The uncertainties in these response spectra were enveloped at approximately the 84th percentile to produce conservative design spectra, which contributed significantly to the increase in the seismic design basis.
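Enveloping response spectra at roughly the 84th percentile amounts to taking a per-frequency order statistic over the realizations; a minimal sketch with toy spectra (not Hanford data):

```python
def percentile_envelope(spectra, q=0.84):
    # Per-frequency q-th percentile over a set of equal-length response
    # spectrum realizations (lists of spectral ordinates).
    n = len(spectra)
    envelope = []
    for i in range(len(spectra[0])):
        ordinates = sorted(s[i] for s in spectra)
        envelope.append(ordinates[min(int(q * n), n - 1)])
    return envelope

env = percentile_envelope([[0.2, 0.5], [0.3, 0.6], [0.25, 0.55], [0.4, 0.7]])
```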
Stewart, G.; Lackner, M.; Haid, L.; Matha, D.; Jonkman, J.; Robertson, A.
2013-07-01
With the push towards siting wind turbines farther offshore due to higher wind quality and less visibility, floating offshore wind turbines, which can be located in deep water, are becoming an economically attractive option. The International Electrotechnical Commission's (IEC) 61400-3 design standard covers fixed-bottom offshore wind turbines, but there are a number of new research questions that need to be answered to modify these standards so that they are applicable to floating wind turbines. One issue is the appropriate simulation length needed for floating turbines. This paper will discuss the results from a study assessing the impact of simulation length on the ultimate and fatigue loads of the structure, and will address uncertainties associated with changing the simulation length for the analyzed floating platform. Recommendations of required simulation length based on load uncertainty will be made and compared to current simulation length requirements.
Integration of Wind Generation and Load Forecast Uncertainties into Power Grid Operations
Makarov, Yuri V.; Etingov, Pavel V.; Huang, Zhenyu; Ma, Jian; Chakrabarti, Bhujanga B.; Subbarao, Krishnappa; Loutan, Clyde; Guttromson, Ross T.
2010-04-20
In this paper, a new approach is presented for evaluating the uncertainty ranges of the required generation performance envelope, including the balancing capacity, ramping capability, and ramp duration. The approach includes three stages: statistical and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence intervals. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis incorporating all sources of uncertainty and parameters of both a continuous (wind forecast and load forecast errors) and discrete (forced generator outages and failures to start up) nature. Preliminary simulations using California Independent System Operator (CAISO) real-life data have shown the effectiveness and efficiency of the proposed approach.
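A stripped-down version of the histogram-based idea, using synthetic forecast-error series in place of CAISO data, reads off the balancing capacity needed at a given confidence level:

```python
import random

def capacity_requirement(wind_err, load_err, confidence=0.95):
    # Combine retrospective wind- and load-forecast errors (MW) into a
    # net-imbalance sample and take its empirical confidence-level quantile.
    net = sorted(w + l for w, l in zip(wind_err, load_err))
    return net[min(int(confidence * len(net)), len(net) - 1)]

rng = random.Random(42)
wind = [rng.gauss(0.0, 50.0) for _ in range(5000)]  # synthetic error series
load = [rng.gauss(0.0, 30.0) for _ in range(5000)]
cap = capacity_requirement(wind, load)  # roughly 1.645 * sqrt(50^2 + 30^2) MW
```

The same quantile read-off extends to ramping requirements by differencing consecutive errors before sorting.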
Archibald, Richard K; Chakoumakos, Madison; Zhuang, Zibo
2012-01-01
Understanding and characterizing sources of uncertainty in climate modeling is an important task. Because of the ever-increasing sophistication and resolution of climate models, it is increasingly important to develop uncertainty quantification methods that minimize the additional computational cost they impose on climate modeling. This research explores the application of sparse stochastic collocation with polynomial edge detection to characterize portions of the probability space associated with the Earth's radiative budget in the Community Earth System Model (CESM). Specifically, we develop surrogate models, with error estimates, that predict statistical values of the Earth's radiative budget as derived from the CESM simulation over a range of acceptable input parameters. We extend these results in resolution from T31 to T42, and in parameter space by increasing the degrees of freedom from two to three.
Miller, David C.; Ng, Brenda; Eslick, John
2014-01-01
Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and universities that is developing, demonstrating, and deploying a suite of multi-scale modeling and simulation tools. One significant computational tool is FOQUS, a Framework for Optimization and Quantification of Uncertainty and Sensitivity, which enables basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to rapidly synthesize and optimize a process and determine the level of uncertainty associated with the resulting process. The overall approach of CCSI is described with a more detailed discussion of FOQUS and its application to carbon capture systems.
Uncertainty Quantification Techniques for Sensor Calibration Monitoring in Nuclear Power Plants
Ramuhalli, Pradeep; Lin, Guang; Crawford, Susan L.; Konomi, Bledar A.; Coble, Jamie B.; Shumaker, Brent; Hashemian, Hash
2014-04-30
This report describes research towards the development of advanced algorithms for online calibration monitoring. The objective of this research is to develop the next generation of online monitoring technologies for sensor calibration interval extension and signal validation in operating and new reactors. These advances are expected to improve the safety and reliability of current and planned nuclear power systems as a result of higher accuracies and increased reliability of sensors used to monitor key parameters. The focus of this report is on documenting the outcomes of the first phase of R&D under this project, which addressed approaches to uncertainty quantification (UQ) in online monitoring that are data-driven, and can therefore adjust estimates of uncertainty as measurement conditions change. Such data-driven approaches to UQ are necessary to address changing plant conditions, for example, as nuclear power plants experience transients, or as next-generation small modular reactors (SMR) operate in load-following conditions.
Uncertainty Quantification Techniques for Sensor Calibration Monitoring in Nuclear Power Plants
Ramuhalli, Pradeep; Lin, Guang; Crawford, Susan L.; Konomi, Bledar A.; Braatz, Brett G.; Coble, Jamie B.; Shumaker, Brent; Hashemian, Hash
2013-09-01
This report describes the status of ongoing research towards the development of advanced algorithms for online calibration monitoring. The objective of this research is to develop the next generation of online monitoring technologies for sensor calibration interval extension and signal validation in operating and new reactors. These advances are expected to improve the safety and reliability of current and planned nuclear power systems as a result of higher accuracies and increased reliability of sensors used to monitor key parameters. The focus of this report is on documenting the outcomes of the first phase of R&D under this project, which addressed approaches to uncertainty quantification (UQ) in online monitoring that are data-driven, and can therefore adjust estimates of uncertainty as measurement conditions change. Such data-driven approaches to UQ are necessary to address changing plant conditions, for example, as nuclear power plants experience transients, or as next-generation small modular reactors (SMR) operate in load-following conditions.
Idealization, uncertainty and heterogeneity : game frameworks defined with formal concept analysis.
Racovitan, M. T.; Sallach, D. L.; Decision and Information Sciences; Northern Illinois Univ.
2006-01-01
The present study begins with Formal Concept Analysis and undertakes to demonstrate how a succession of game frameworks may, by design, address increasingly complex and interesting social phenomena. We develop a series of multi-agent exchange games, each of which incorporates an additional dimension of complexity. All games are based on coalition patterns in exchanges where diverse cultural markers provide a basis for trust and reciprocity. The first game is characterized by an idealized concept of trust. A second game framework introduces uncertainty regarding the reciprocity of prospective transactions. A third game framework retains idealized trust and uncertainty, and adds agent heterogeneity: cultural markers are not equally salient in conferring or withholding trust, and the result is a richer transactional process.
Office of Scientific and Technical Information (OSTI)
From Deterministic Inversion to Uncertainty Quantification: Planning a Long Journey in Ice Sheet Modeling. M. Eldred, J. Jakeman, M. Perego, A. Salinger, I. K. Tezaur (SNL); P. Heimbach, C. Jackson (UT Austin); M. Hoffman, S. Price (LANL); G. Stadler (Courant). QUEST Workshop, Los Angeles, April 2, 2015.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Munoz, Francisco D.; Watson, Jean -Paul; Hobbs, Benjamin F.
2015-06-04
The anticipated magnitude of needed investments in new transmission infrastructure in the U.S. requires that they be allocated in a way that maximizes the likelihood of achieving society's goals for power system operation. The use of state-of-the-art optimization tools can identify cost-effective investment alternatives, extract more benefits from transmission expansion portfolios, and account for the huge economic, technology, and policy uncertainties that the power sector faces over the next several decades.
SWEPP PAN assay system uncertainty analysis: Passive mode measurements of graphite waste
Blackwood, L.G.; Harker, Y.D.; Meachum, T.R.; Yoon, Woo Y.
1997-07-01
The Idaho National Engineering and Environmental Laboratory is being used as a temporary storage facility for transuranic waste generated by the U.S. Nuclear Weapons program at the Rocky Flats Plant (RFP) in Golden, Colorado. Currently, there is a large effort in progress to prepare to ship this waste to the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. In order to meet the TRU Waste Characterization Quality Assurance Program Plan nondestructive assay compliance requirements and quality assurance objectives, it is necessary to determine the total uncertainty of the radioassay results produced by the Stored Waste Examination Pilot Plant (SWEPP) Passive Active Neutron (PAN) radioassay system. To this end a modified statistical sampling and verification approach has been developed to determine the total uncertainty of a PAN measurement. In this approach the total performance of the PAN nondestructive assay system is simulated using computer models of the assay system and the resultant output is compared with the known input to assess the total uncertainty. This paper is one of a series of reports quantifying the results of the uncertainty analysis of the PAN system measurements for specific waste types and measurement modes. In particular this report covers passive mode measurements of weapons grade plutonium-contaminated graphite molds contained in 208 liter drums (waste code 300). The validity of the simulation approach is verified by comparing simulated output against results from measurements using known plutonium sources and a surrogate graphite waste form drum. For actual graphite waste form conditions, a set of 50 cases covering a statistical sampling of the conditions exhibited in graphite wastes was compiled using a Latin hypercube statistical sampling approach.
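Latin hypercube sampling, used above to build the 50-case set, stratifies each parameter's range into equal-probability bins and pairs the bins randomly across parameters; the sketch below is a generic implementation of the technique, not SWEPP code.

```python
import random

def latin_hypercube(n, bounds, seed=0):
    # Draw n samples: each parameter's range is split into n equal strata,
    # one point per stratum, with strata randomly paired across parameters.
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n)]
    for j, (lo, hi) in enumerate(bounds):
        strata = list(range(n))
        rng.shuffle(strata)
        for i in range(n):
            u = (strata[i] + rng.random()) / n  # uniform point in the stratum
            samples[i][j] = lo + u * (hi - lo)
    return samples

cases = latin_hypercube(50, [(0.0, 1.0), (10.0, 100.0)])
```

Compared with plain random sampling, this guarantees that each parameter's range is covered evenly even with only 50 cases.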
Final Report-Optimization Under Uncertainty and Nonconvexity: Algorithms and Software
Jeff Linderoth
2008-10-10
The goal of this research was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problems classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state-of-the-art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems.
Michael Pernice
2012-10-01
Grid-to-rod fretting is the leading cause of fuel failures in pressurized water reactors, and is one of the challenge problems being addressed by the Consortium for Advanced Simulation of Light Water Reactors to guide its efforts to develop a virtual reactor environment. Prior and current efforts in modeling and simulation of grid-to-rod fretting are discussed. Sources of uncertainty in grid-to-rod fretting are also described.
A flexible uncertainty quantification method for linearly coupled multi-physics systems
Chen, Xiao; Ng, Brenda; Sun, Yunwei; Tong, Charles
2013-09-01
Highlights: We propose a modular hybrid UQ methodology suitable for independent development of module-based multi-physics simulations. Our algorithmic framework allows each module to have its own UQ method (either intrusive or non-intrusive). Information from each module is combined systematically to propagate global uncertainty. The approach allows new methods to be swapped in for any module without the need to address incompatibilities. We demonstrate the proposed framework on a practical application involving a multi-species reactive transport model. Abstract: This paper presents a novel approach to building an integrated uncertainty quantification (UQ) methodology suited to the modern component-based approach to multi-physics simulation development. Our hybrid UQ methodology supports independent development of the most suitable UQ method, intrusive or non-intrusive, for each physics module by providing an algorithmic framework that couples these stochastic modules to propagate global uncertainties. We address the algorithmic and computational issues associated with the construction of this hybrid framework and demonstrate its utility on a practical application involving a linearly coupled multi-species reactive transport model.
Sungyeol Choi; Jaeyeong Park; Robert O. Hoover; Supathorn Phongikaroon; Michael F. Simpson; Kwang-Rag Kim; Il Soon Hwang
2011-09-01
This study examines how much the cell potential changes under five different assumptions for the real anode surface area. Determining the real anode surface area is a significant issue to be resolved for precise modeling of molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with an experimental cell potential variation over 80 hours of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. We achieved good agreement with the overall trend of the experimental data with appropriate selection of a model for the real anode surface area, but there are still local inconsistencies between the theoretical calculations and the experimental observations. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that had to be considered further in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty towards the latter period of electrorefining for a given batch of fuel. The benchmark results indicate that anode materials would be dissolved in both the axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bonding was dissolved.
Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 34 imprecisely known input variables on the following reactor accident consequences are studied: number of early fatalities, number of cases of prodromal vomiting, population dose within 10 mi of the reactor, population dose within 1000 mi of the reactor, individual early fatality probability within 1 mi of the reactor, and maximum early fatality distance. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: scaling factor for horizontal dispersion, dry deposition velocity, inhalation protection factor for nonevacuees, groundshine shielding factor for nonevacuees, early fatality hazard function alpha value for bone marrow exposure, and scaling factor for vertical dispersion.
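The screening step, ranking uncertain inputs by their association with a consequence measure, can be sketched as follows; this is a simplified correlation-based stand-in for the partial-correlation and stepwise-regression machinery described above, with invented variable names.

```python
import random

def correlation(x, y):
    # Sample Pearson correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def rank_inputs(inputs, output):
    # Order uncertain inputs by |correlation| with the consequence measure.
    scores = {k: abs(correlation(v, output)) for k, v in inputs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Synthetic example: the output depends strongly on one input, weakly on another
rng = random.Random(7)
x1 = [rng.random() for _ in range(200)]
x2 = [rng.random() for _ in range(200)]
dose = [3.0 * a + 0.1 * b for a, b in zip(x1, x2)]
order = rank_inputs({"dispersion": x1, "deposition": x2}, dose)
```

A full analysis would use rank-transformed data and regress out co-varying inputs before ranking, which is what partial correlation adds over this sketch.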
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Burr, Tom; Croft, Stephen; Jarman, Kenneth D.
2015-09-05
The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without "error bars," which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of "random" and "systematic" components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed, and achievable, using modern statistical methods. To this end, we describe the extent to which the Guide to the Expression of Uncertainty in Measurement (GUM) can be used for NDA. We also propose improvements over the GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
Salloum, Maher N.; Gharagozloo, Patricia E.
2013-10-01
Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Second, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during the decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed occurring during the decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede the hydrogen extraction.
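An empirical kinetics fit of the kind described can be sketched as an Arrhenius regression whose residual scatter yields a parameter uncertainty. The rate values below are synthetic, standing in for the scattered UH3 literature data; only the method (least squares on ln k vs 1/T, covariance from residual variance) is the point.

```python
import numpy as np

# hypothetical scattered rate data (k in 1/s) at several temperatures (K)
T = np.array([450., 500., 550., 600., 650., 700.])
k = np.array([2.1e-4, 3.0e-3, 2.4e-2, 1.5e-1, 6.0e-1, 2.2e0])

R = 8.314  # J/(mol K)
x = 1.0 / T
y = np.log(k)

# linear least squares: ln k = ln A - (Ea/R) * (1/T)
A_mat = np.vstack([np.ones_like(x), x]).T
coef, res, *_ = np.linalg.lstsq(A_mat, y, rcond=None)
lnA, slope = coef
Ea = -slope * R  # activation energy, J/mol

# parameter covariance from the residual variance: the data scatter
# becomes the uncertainty on the fitted kinetics
dof = len(T) - 2
sigma2 = res[0] / dof if res.size else 0.0
cov = sigma2 * np.linalg.inv(A_mat.T @ A_mat)
Ea_sigma = np.sqrt(cov[1, 1]) * R
print(f"Ea = {Ea / 1000:.1f} +/- {Ea_sigma / 1000:.1f} kJ/mol")
```

The paper's physics-based model replaces this purely empirical fit with a diffusion simulation, then checks that the propagated parametric uncertainty covers the same scatter.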
Al-Hashimi, M.H. Wiese, U.-J.
2009-12-15
We consider wave packets of free particles with a general energy-momentum dispersion relation E(p). The spreading of the wave packet is determined by the velocity v = ∂E/∂p. The position-velocity uncertainty relation Δx Δv ≥ (1/2)|⟨∂²E/∂p²⟩| is saturated by minimal uncertainty wave packets Φ(p) = A exp(−αE(p) + βp). In addition to the standard minimal Gaussian wave packets corresponding to the non-relativistic dispersion relation E(p) = p²/2m, analytic calculations are presented for the spreading of wave packets with minimal position-velocity uncertainty product for the lattice dispersion relation E(p) = −cos(pa)/(ma²) as well as for the relativistic dispersion relation E(p) = √(p² + m²). The boost properties of moving relativistic wave packets as well as the propagation of wave packets in an expanding Universe are also discussed.
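The saturated relation can be checked numerically for the non-relativistic case E(p) = p²/2m, where ∂²E/∂p² = 1/m and a Gaussian momentum-space packet gives Δx Δv = 1/(2m) exactly (units with ħ = 1; the grid and width are arbitrary choices):

```python
import numpy as np

# Numeric check of dx*dv = (1/2)|<d2E/dp2>| for E(p) = p^2/(2m), hbar = 1.
m = 2.0
sigma_p = 0.7
p = np.linspace(-20, 20, 4001)
w = np.exp(-p**2 / (2 * sigma_p**2))  # Gaussian momentum-space probability weight

def width(q, w):
    mean = np.average(q, weights=w)
    return np.sqrt(np.average((q - mean) ** 2, weights=w))

dp = width(p, w)       # momentum spread (= sigma_p for a Gaussian)
dv = width(p / m, w)   # velocity spread, v = dE/dp = p/m
dx = 1.0 / (2.0 * dp)  # minimal position spread saturating dx*dp = 1/2

print(dx * dv)  # equals 1/(2m) = 0.25
```

For the lattice or relativistic dispersions in the paper, v is no longer proportional to p, which is exactly why the expectation ⟨∂²E/∂p²⟩ appears in the bound instead of a constant.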
Zhang, J.; Hodge, B. M.; Gomez-Lazaro, E.; Lovholm, A. L.; Berge, E.; Miettinen, J.; Holttinen, H.; Cutululis, N.; Litong-Palima, M.; Sorensen, P.; Dobschinski, J.
2013-10-01
One of the critical challenges of wind power integration is the variable and uncertain nature of the resource. This paper investigates the variability and uncertainty in wind forecasting for multiple power systems in six countries. An extensive comparison of wind forecasting is performed among the six power systems by analyzing the following scenarios: (i) wind forecast errors throughout a year; (ii) forecast errors at a specific time of day throughout a year; (iii) forecast errors at peak and off-peak hours of a day; (iv) forecast errors in different seasons; (v) extreme forecasts with large overforecast or underforecast errors; and (vi) forecast errors when wind power generation is at different percentages of the total wind capacity. The kernel density estimation method is adopted to characterize the distribution of forecast errors. The results show that the level of uncertainty and the forecast error distribution vary among different power systems and scenarios. In addition, for most power systems, (i) there is a tendency to underforecast in winter; and (ii) the forecasts in winter generally have more uncertainty than the forecasts in summer.
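The kernel density estimation step used above to characterize forecast-error distributions can be sketched as follows. The Gaussian kernel, the normal-reference rule-of-thumb bandwidth, and the synthetic error sample are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def gaussian_kde_pdf(samples, grid, bandwidth=None):
    """Kernel density estimate of a forecast-error distribution with a
    Gaussian kernel and a normal-reference rule-of-thumb bandwidth."""
    n = len(samples)
    if bandwidth is None:
        bandwidth = 1.06 * np.std(samples) * n ** (-1 / 5)
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
# hypothetical normalized wind forecast errors (fraction of capacity),
# shifted positive to mimic the winter underforecasting tendency reported
errors = rng.normal(loc=0.02, scale=0.08, size=2000)

grid = np.linspace(-0.4, 0.4, 801)
pdf = gaussian_kde_pdf(errors, grid)
print(round(pdf.sum() * (grid[1] - grid[0]), 2))  # density integrates to ~1
```

Unlike a fitted parametric distribution, the KDE preserves the skew and heavy tails that distinguish the different power systems' error distributions.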
A transform of complementary aspects with applications to entropic uncertainty relations
Mandayam, Prabha; Wehner, Stephanie; Balachandran, Niranjan
2010-08-15
Even though mutually unbiased bases and entropic uncertainty relations play an important role in quantum cryptographic protocols, they remain ill understood. Here, we construct special sets of up to 2n+1 mutually unbiased bases (MUBs) in dimension d = 2^n, which have particularly beautiful symmetry properties derived from the Clifford algebra. More precisely, we show that there exists a unitary transformation that cyclically permutes such bases. This unitary can be understood as a generalization of the Fourier transform, which exchanges two MUBs, to multiple complementary aspects. We proceed to prove a lower bound for min-entropic uncertainty relations for any set of MUBs and show that symmetry plays a central role in obtaining tight bounds. For example, we obtain for the first time a tight bound for four MUBs in dimension d = 4, which is attained by an eigenstate of our complementarity transform. Finally, we discuss the relation to other symmetries obtained by transformations in discrete phase space and note that the extrema of discrete Wigner functions are directly related to min-entropic uncertainty relations for MUBs.
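Mutual unbiasedness itself is easy to verify numerically. In dimension d = 2 the three Pauli eigenbases form a complete set of d+1 MUBs, and every cross-basis overlap satisfies |⟨a|b⟩|² = 1/d:

```python
import numpy as np

# The standard 3 = d+1 mutually unbiased bases in dimension d = 2:
# eigenbases of the Pauli Z, X, and Y operators (basis vectors as columns).
s = 1 / np.sqrt(2)
bases = [
    np.array([[1, 0], [0, 1]], dtype=complex),             # Z eigenbasis
    np.array([[s, s], [s, -s]], dtype=complex),            # X eigenbasis
    np.array([[s, s], [1j * s, -1j * s]], dtype=complex),  # Y eigenbasis
]

d = 2
for i in range(3):
    for j in range(i + 1, 3):
        overlaps = np.abs(bases[i].conj().T @ bases[j]) ** 2
        assert np.allclose(overlaps, 1 / d)  # mutual unbiasedness
print("all pairs mutually unbiased")
```

The paper's construction generalizes this picture to d = 2^n and, crucially, supplies a single unitary that cycles through the bases, which is what makes the entropic bounds tractable.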
Cogeneration: A northwest medical facility's answer to the uncertainties of deregulation
Almeda, R.; Rivers, J.
1998-10-01
Not so long ago, in the good old days, the energy supply to a health care facility was one of the most stable. The local utility provided what was needed at a reasonable cost. Now the energy industry is being deregulated. Major uncertainties exist in all parts of the energy industry. Since reasonably priced and readily available energy is mandatory for a health care facility operation, the energy industry uncertainties reverberate through the health care industry. This article reviews how the uncertainty of electric utility deregulation was converted to an opportunity to implement the ultimate energy conservation project--cogeneration. The project development was made essentially risk free by tailoring project development to deregulation. Costs and financial exposure were minimized by taking numerous small steps in sequence. Valley Medical Center, by persevering with the development of a cogeneration plant, has been able to reduce its energy costs and more importantly, stabilize its energy supply and costs for many years to come. This article reviews activities in two arenas, internal project development and external energy industry developments, by periodically updating each arena and showing how external developments affected the project.
UNCERTAINTIES OF MODELING GAMMA-RAY PULSAR LIGHT CURVES USING VACUUM DIPOLE MAGNETIC FIELD
Bai Xuening; Spitkovsky, Anatoly E-mail: anatoly@astro.princeton.ed
2010-06-01
Current models of pulsar gamma-ray emission use the magnetic field of a rotating dipole in vacuum as a first approximation to the shape of a plasma-filled pulsar magnetosphere. In this paper, we revisit the question of gamma-ray light curve formation in pulsars in order to ascertain the robustness of the 'two-pole caustic (TPC)' and 'outer gap (OG)' models based on the vacuum magnetic field. We point out an inconsistency in the literature on the use of the relativistic aberration formula, where in several works the shape of the vacuum field was treated as known in the instantaneous corotating frame, rather than in the laboratory frame. With the corrected formula, we find that the peaks in the light curves predicted from the TPC model using the vacuum field are less sharp. The sharpness of the peaks in the OG model is less affected by this change, but the range of magnetic inclination angles and viewing geometries resulting in double-peaked light curves is reduced. In a realistic magnetosphere, the modification of field structure near the light cylinder (LC) due to plasma effects may change the shape of the polar cap and the location of the emission zones. We study the sensitivity of the light curves to different shapes of the polar cap for static and retarded vacuum dipole fields. In particular, we consider polar caps traced by the last open field lines and compare them to circular polar caps. We find that the TPC model is very sensitive to the shape of the polar cap, and a circular polar cap can lead to four peaks of emission. The OG model is less affected by different polar cap shapes, but is subject to large uncertainties in applying the vacuum field near the LC. We conclude that deviations from the vacuum field can lead to large uncertainties in pulse shapes, and a more realistic force-free field should be applied to the study of pulsar high-energy emission.
Knoetig, Max L., E-mail: mknoetig@phys.ethz.ch [Institute for Particle Physics, ETH Zurich, 8093 Zurich (Switzerland)
2014-08-01
For decades researchers have studied the On/Off counting problem where a measured rate consists of two parts. One part is due to a signal process and the other is due to a background process, the magnitudes for both of which are unknown. While most frequentist methods are adequate for large number counts, they cannot be applied to sparse data. Here, I want to present a new objective Bayesian solution that only depends on three parameters: the number of events in the signal region, the number of events in the background region, and the ratio of the exposure for both regions. First, the probability of the counts only being due to background is derived analytically. Second, the marginalized posterior for the signal parameter is also derived analytically. With this two-step approach it is easy to calculate the signal's significance, strength, uncertainty, or upper limit in a unified way. This approach is valid without restrictions for any number count, including zero, and may be widely applied in particle physics, cosmic-ray physics, and high-energy astrophysics. In order to demonstrate the performance of this approach, I apply the method to gamma-ray burst data.
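The structure of the On/Off problem can be illustrated with a brute-force numerical marginalization. This sketch uses flat priors on the signal and background rates for simplicity, not the objective priors derived in the paper, and a grid integral instead of the paper's analytic posterior; the counts and exposure ratio are invented:

```python
import numpy as np
from math import lgamma

def pois_logpmf(k, mu):
    return k * np.log(mu) - mu - lgamma(k + 1)

# n_on counts in the signal region, n_off in the background region,
# alpha = (off exposure) / (on exposure)
n_on, n_off, alpha = 9, 12, 3.0

s = np.linspace(0.05, 30.0, 600)
b = np.linspace(0.05, 30.0, 600)
S, B = np.meshgrid(s, b, indexing="ij")

# likelihood: N_on ~ Poisson(s + b), N_off ~ Poisson(alpha * b)
loglike = pois_logpmf(n_on, S + B) + pois_logpmf(n_off, alpha * B)
post = np.exp(loglike).sum(axis=1)      # marginalize over background b
post /= post.sum() * (s[1] - s[0])      # normalize to a density in s

s_map = s[np.argmax(post)]
print(f"posterior mode of the signal rate ~ {s_map:.1f}")
```

With n_off/alpha = 4 expected background counts in the on region, the mode lands near n_on − 4 = 5, and the full curve gives the strength, uncertainty, or upper limit in one object, which is the unification the paper achieves analytically.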
Accounting for Global Climate Model Projection Uncertainty in Modern Statistical Downscaling
Johannesson, G
2010-03-17
Future climate change has emerged as a national and a global security threat. To carry out the needed adaptation and mitigation steps, a quantification of the expected level of climate change is needed, both at the global and the regional scale; in the end, the impact of climate change is felt at the local/regional level. An important part of such climate change assessment is uncertainty quantification. Decision and policy makers are not only interested in 'best guesses' of expected climate change, but rather probabilistic quantification (e.g., Rougier, 2007). For example, consider the following question: What is the probability that the average summer temperature will increase by at least 4 °C in region R if global CO₂ emission increases by P% from current levels by time T? It is a simple question, but one that remains very difficult to answer. Answering these kinds of questions is the focus of this effort. The uncertainty associated with future climate change can be attributed to three major factors: (1) Uncertainty about future emission of greenhouse gases (GHG). (2) Given a future GHG emission scenario, what is its impact on the global climate? (3) Given a particular evolution of the global climate, what does it mean for a particular location/region? In what follows, we assume a particular GHG emission scenario has been selected. Given the GHG emission scenario, the current batch of state-of-the-art global climate models (GCMs) is used to simulate future climate under this scenario, yielding an ensemble of future climate projections (which reflect, to some degree, our uncertainty in simulating future climate given a particular GHG scenario). Due to the coarse-resolution nature of the GCM projections, they need to be spatially downscaled for regional impact assessments. To downscale a given GCM projection, two methods have emerged: dynamical downscaling and statistical (empirical) downscaling (SDS).
Dynamical downscaling involves configuring and running a regional climate model (RCM) nested within a given GCM projection (i.e., the GCM provides boundary conditions for the RCM). On the other hand, statistical downscaling aims at establishing a statistical relationship between observed local/regional climate variables of interest and synoptic (GCM-scale) climate predictors. The resulting empirical relationship is then applied to future GCM projections. A comparison of the pros and cons of dynamical versus statistical downscaling is outside the scope of this effort, but has been extensively studied; the reader is referred to Wilby et al. (1998); Murphy (1999); Wood et al. (2004); Benestad et al. (2007); Fowler et al. (2007), and references therein. The scope of this effort is to study a methodology, a statistical framework, to propagate and account for GCM uncertainty in regional statistical downscaling assessments. In particular, we explore how to leverage an ensemble of GCM projections to quantify the impact of the GCM uncertainty in such an assessment. There are three main components to this effort: (1) gather the necessary climate-related data for a regional SDS study, including multiple GCM projections, (2) carry out SDS, and (3) assess the uncertainty. The first step is carried out using tools written in the Python programming language, while analysis tools were developed in the statistical programming language R; see Figure 1.
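The SDS step, applied across an ensemble of GCM projections, reduces to fitting a transfer function on the observational period and pushing each ensemble member through it. A deliberately simple linear sketch with wholly hypothetical temperatures, where the ensemble spread carries the GCM uncertainty to the local scale:

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical training data: observed local summer temperature (C)
# vs a synoptic (GCM-scale) predictor over an observational period
gcm_hist = rng.normal(15.0, 2.0, size=120)
local_obs = 0.8 * gcm_hist + 4.0 + rng.normal(0.0, 0.5, 120)

# fit the statistical-downscaling transfer function (simple linear SDS)
slope, intercept = np.polyfit(gcm_hist, local_obs, 1)

# apply it to an ensemble of future GCM projections; each member is
# one plausible future, so the spread reflects GCM uncertainty
future_ensemble = np.array([18.1, 19.0, 17.4, 20.2, 18.8])  # hypothetical
local_future = slope * future_ensemble + intercept

# an ensemble answer to a probabilistic question, e.g. P(local T >= 19 C)
prob_ge = np.mean(local_future >= 19.0)
print(local_future.round(1), prob_ge)
```

Real SDS uses many predictors and nonlinear or quantile-based transfer functions, but the propagation pattern (fit once, apply to every ensemble member) is the same.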
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U.S. Department of Energy laboratories.
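The Poisson-likelihood-plus-prior-plus-MCMC recipe can be shown in miniature. This random-walk Metropolis sketch infers a single detector intensity, a drastic simplification of the high-dimensional hierarchical models in the work; the counts, prior, and proposal scale are all invented:

```python
import numpy as np

rng = np.random.default_rng(5)
counts = rng.poisson(40.0, size=25)  # synthetic photon counts at the detector

def log_post(lam):
    """Poisson log-likelihood (up to a constant) + weak exponential prior."""
    if lam <= 0:
        return -np.inf
    return counts.sum() * np.log(lam) - counts.size * lam - 0.001 * lam

# random-walk Metropolis sampling of the posterior
chain, lam = [], 30.0
for _ in range(20000):
    prop = lam + rng.normal(0.0, 2.0)
    if np.log(rng.random()) < log_post(prop) - log_post(lam):
        lam = prop
    chain.append(lam)

post = np.array(chain[5000:])  # discard burn-in
print(f"lam = {post.mean():.1f} +/- {post.std():.1f}")
```

The applications in the paper swap the scalar unknown for an image or profile and the weak prior for an edge-localizing, smoothing, or Wishart-hyperprior structure, but the sampler's accept/reject core is the same idea.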
Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the food pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 87 imprecisely-known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, milk growing season dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, area dependent cost, crop disposal cost, milk disposal cost, condemnation area, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: fraction of cesium deposition on grain fields that is retained on plant surfaces and transferred directly to grain, maximum allowable ground concentrations of Cs-137 and Sr-90 for production of crops, ground concentrations of Cs-134, Cs-137 and I-131 at which the disposal of milk will be initiated due to accidents that occur during the growing season, ground concentrations of Cs-134, I-131 and Sr-90 at which the disposal of crops will be initiated due to accidents that occur during the growing season, rate of depletion of Cs-137 and Sr-90 from the root zone, transfer of Sr-90 from soil to legumes, transfer of Cs-137 from soil to pasture, transfer of cesium from animal feed to meat, and the transfer of cesium, iodine and strontium from animal feed to milk.
Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, water ingestion dose, milk growing season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of I-131 at which disposal of crops will be initiated due to accidents that occur during the growing season.
Treatment planning for prostate focal laser ablation in the face of needle placement uncertainty
Cepek, Jeremy Fenster, Aaron; Lindner, Uri; Trachtenberg, John; Davidson, Sean R. H.; Haider, Masoom A.; Ghai, Sangeet
2014-01-15
Purpose: To study the effect of needle placement uncertainty on the expected probability of achieving complete focal target destruction in focal laser ablation (FLA) of prostate cancer. Methods: Using a simplified model of prostate cancer focal target, and focal laser ablation region shapes, Monte Carlo simulations of needle placement error were performed to estimate the probability of completely ablating a region of target tissue. Results: Graphs of the probability of complete focal target ablation are presented over clinically relevant ranges of focal target sizes and shapes, ablation region sizes, and levels of needle placement uncertainty. In addition, a table is provided for estimating the maximum target size that is treatable. The results predict that targets whose length is at least 5 mm smaller than the diameter of each ablation region can be confidently ablated using, at most, four laser fibers if the standard deviation in each component of needle placement error is less than 3 mm. However, targets larger than this (i.e., near to or exceeding the diameter of each ablation region) require more careful planning. This process is facilitated by using the table provided. Conclusions: The probability of completely ablating a focal target using FLA is sensitive to the level of needle placement uncertainty, especially as the target length approaches and becomes greater than the diameter of ablated tissue that each individual laser fiber can achieve. The results of this work can be used to help determine individual patient eligibility for prostate FLA, to guide the planning of prostate FLA, and to quantify the clinical benefit of using advanced systems for accurate needle delivery for this treatment modality.
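The Monte Carlo calculation behind these probability estimates can be sketched with a deliberately simplified geometry: one spherical ablation region intended to cover one concentric spherical target, with isotropic Gaussian needle-placement error. The diameters and error level below echo the scales discussed in the abstract but are otherwise illustrative.

```python
import numpy as np

def p_complete_ablation(target_diam, ablation_diam, sigma, n=200_000, seed=0):
    """Monte Carlo probability that one spherical ablation region fully
    covers a concentric spherical target, given Gaussian placement error
    with standard deviation sigma (mm) in each axis. The target is fully
    ablated iff the placement offset is within the radius margin."""
    rng = np.random.default_rng(seed)
    err = rng.normal(0.0, sigma, size=(n, 3))     # 3-D placement error
    margin = (ablation_diam - target_diam) / 2.0  # allowable offset
    if margin <= 0:
        return 0.0
    return float(np.mean(np.linalg.norm(err, axis=1) <= margin))

# with sigma = 3 mm, a single fiber rarely covers a target only 5 mm
# smaller than its ablation diameter -- hence the paper's use of up to
# four fibers for such targets
print(round(p_complete_ablation(10.0, 15.0, 3.0), 2))
```

Multi-fiber plans change the coverage criterion (the union of offset ablation spheres must contain the target) but are simulated the same way.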
Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; Ramirez, Abelardo L.
2014-06-03
A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may expend days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
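The three-step parametric bootstrap generalizes beyond ERT. A toy version, with a one-parameter exponential "inversion" standing in for the resistivity inverse problem: estimate the noise level, invert the observed data once, then repeatedly resample synthetic data at that noise level and re-invert to get a sample of estimates whose mean and standard deviation summarize the uncertainty.

```python
import numpy as np

def invert(d, x):
    """Toy deterministic nonlinear inversion: fit d = exp(-m * x) for m
    by grid search (a stand-in for the ERT resistivity inversion)."""
    m_grid = np.linspace(0.01, 2.0, 400)
    misfit = ((d[None, :] - np.exp(-np.outer(m_grid, x))) ** 2).sum(axis=1)
    return m_grid[np.argmin(misfit)]

rng = np.random.default_rng(3)
x = np.linspace(0.0, 3.0, 20)
m_true = 0.7
noise_std = 0.02  # in the ERT case, estimated from reciprocal measurements
d_obs = np.exp(-m_true * x) + rng.normal(0.0, noise_std, x.size)

m_hat = invert(d_obs, x)

# parametric bootstrap: resample data at the estimated noise level,
# re-run the deterministic inversion, summarize the estimate sample
boot = np.array([
    invert(np.exp(-m_hat * x) + rng.normal(0.0, noise_std, x.size), x)
    for _ in range(500)
])
print(f"m = {boot.mean():.2f} +/- {boot.std():.2f}")
```

As in the paper, each bootstrap replicate is a cheap deterministic solve, which is why 500 realizations finish quickly while a full MCMC search over the same problem would cost far more.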
Assessing the near-term risk of climate uncertainty : interdependencies among the U.S. states.
Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Reinert, Rhonda K.; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.
2010-04-01
Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts of climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean or average risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.
nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties
Kusina, A.; Jezo, T.; Clark, D. B.; Keppel, Cynthia; Lyonnet, F.; Morfin, Jorge; Olness, F. I.; Owens, Jeff; Schienbein, I.
2015-09-01
We present the first official release of the nCTEQ nuclear parton distribution functions with errors. The main addition to the previous nCTEQ PDFs is the introduction of PDF uncertainties based on the Hessian method. Another important addition is the inclusion of pion production data from RHIC that give us a handle on constraining the gluon PDF. This contribution summarizes our results from arXiv:1509.00792 and concentrates on the comparison with other groups providing nuclear parton distributions.
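Hessian PDF uncertainties of the kind introduced here are typically propagated to an observable X with the symmetric master formula over paired eigenvector error sets. The observable values below are hypothetical placeholders, not nCTEQ results:

```python
import math

def hessian_uncertainty(x_plus, x_minus):
    """Symmetric Hessian 'master formula' for an observable X:
    DX = (1/2) * sqrt( sum_i (X(S_i+) - X(S_i-))^2 ),
    where S_i+/- are the paired eigenvector error PDF sets."""
    return 0.5 * math.sqrt(sum((p - m) ** 2 for p, m in zip(x_plus, x_minus)))

# hypothetical observable values evaluated on 3 pairs of error PDFs
x_up = [1.02, 0.99, 1.05]
x_dn = [0.98, 1.01, 0.95]
print(round(hessian_uncertainty(x_up, x_dn), 3))  # prints 0.055
```

Each eigenvector pair probes one independent direction in parameter space, so the quadrature sum assembles the full uncertainty band from one evaluation per error set.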
Generalized uncertainty principle in f(R) gravity for a charged black hole
Said, Jackson Levi; Adami, Kristian Zarb
2011-02-15
Using f(R) gravity in the Palatini formalism, the metric for a charged spherically symmetric black hole is derived, taking the Ricci scalar curvature to be constant. The generalized uncertainty principle is then used to calculate the temperature of the resulting black hole; through this, the entropy is found, correcting the Bekenstein-Hawking entropy in this case. Using the entropy, the tunneling probability and heat capacity are calculated up to the order of the Planck length, which produces an extra factor that becomes important as black holes become small, such as in the case of mini-black holes.
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
E.T. Coon; C.J. Wilson; S.L. Painter; V.E. Romanovsky; D.R. Harp; A.L. Atchley; J.C. Rowland
2016-02-02
This dataset contains an ensemble of thermal-hydro soil parameters including porosity, thermal conductivity, thermal conductivity shape parameters, and residual saturation of peat and mineral soil. The ensemble was generated using a Null-Space Monte Carlo analysis of parameter uncertainty based on a calibration to soil temperatures collected at the Barrow Environmental Observatory site by the NGEE team. The micro-topography of ice wedge polygons present at the site is included in the analysis using three 1D column models to represent polygon center, rim and trough features. The Arctic Terrestrial Simulator (ATS) was used in the calibration to model multiphase thermal and hydrological processes in the subsurface.
Broader source: Energy.gov [DOE]
Development and implementation of future advanced fuel cycles including those that recycle fuel materials, use advanced fuels different from current fuels, or partition and transmute actinide radionuclides, will impact the waste management system. The UFD Campaign can reasonably conclude that advanced fuel cycles, in combination with partitioning and transmutation, which remove actinides, will not materially alter the performance, the spread in dose results around the mean, the modeling effort to include significant features, events, and processes (FEPs) in the performance assessment, or the characterization of uncertainty associated with a geologic disposal system in the regulatory environment of the US.
Analysis of sampling plan options for tank 16H from the perspective of statistical uncertainty
Shine, E. P.
2013-02-28
This report develops a concentration variability model for Tank 16H in order to compare candidate sampling plans for assessing the concentrations of analytes in the residual material in the annulus and on the floor of the primary vessel. Candidate plans are compared on the expected upper 95% confidence limit (UCL95) for the mean. The result is a rank order of candidate sampling plans from lowest to highest expected UCL95, the lowest being the most desirable from an uncertainty perspective.
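The UCL95 criterion can be made concrete: for a plan yielding n samples, the upper 95% confidence limit for the mean is x̄ + t(0.95, n−1)·s/√n, and plans are ranked by that value. A minimal sketch with invented concentration data (not Tank 16H values) follows; the two hypothetical plans differ only in sample count, so the larger plan earns the lower expected UCL95:

```python
import math
import statistics

# One-sided 95% Student-t critical values for selected degrees of freedom
# (standard table values).
T_95 = {4: 2.132, 9: 1.833, 19: 1.729, 29: 1.699}

def ucl95(samples):
    """Upper 95% confidence limit for the mean: xbar + t * s / sqrt(n)."""
    n = len(samples)
    xbar = statistics.mean(samples)
    s = statistics.stdev(samples)
    return xbar + T_95[n - 1] * s / math.sqrt(n)

# Hypothetical analyte concentrations (mg/L) under two candidate plans.
plan_a = [4.8, 5.1, 5.0, 4.9, 5.2]                            # n = 5
plan_b = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.9, 5.1, 5.0, 5.1]  # n = 10

# Rank plans from lowest (most desirable) to highest expected UCL95.
ranked = sorted(["A", "B"], key=lambda p: ucl95(plan_a if p == "A" else plan_b))
```

With comparable scatter, the plan with more samples both shrinks s/√n and uses a smaller t critical value, so it ranks first.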
Parker, W
2009-09-18
Non-destructive gamma-ray analysis is a fundamental part of nuclear safeguards, including nuclear energy safeguards technology. Developing safeguards capabilities for nuclear energy will certainly benefit from the advanced use of gamma-ray spectroscopy as well as the ability to model various reactor scenarios. There is currently a wide variety of nuclear data that could be used in computer modeling and gamma-ray spectroscopy analysis. The data can be discrepant (with varying uncertainties), and it may be difficult for a modeler or software developer to determine the best nuclear data set for a particular situation. To use gamma-ray spectroscopy to determine the relative isotopic composition of nuclear materials, the gamma-ray energies and the branching ratios or intensities of the gamma rays emitted from the nuclides in the material must be well known. A variety of computer simulation codes will be used during the development of nuclear energy safeguards, and, to compare the results of various codes, it will be essential to have all the gamma-ray libraries agree. Assessing our nuclear data needs allows us to create a prioritized list of desired measurements, and provides uncertainties for energies and especially for branching intensities. Of interest are actinides, fission products, and activation products, and most particularly mixtures of all of these radioactive isotopes, including mixtures of actinides and other products. Recent work includes the development of new detectors with increased energy resolution, and studies of gamma rays and their lines as used in simulation codes. Because new detectors are being developed, there is an increased need for well-known nuclear data for radioactive isotopes of some elements. Safeguards technology should take advantage of all types of gamma-ray detectors, including new super-cooled detectors, germanium detectors, and cadmium zinc telluride detectors.
Mixed isotopes, particularly the mixed actinides found in nuclear reactor streams, can be especially challenging to identify. Super-cooled detectors offer a marked improvement in energy resolution, allowing the deconvolution of gamma-ray mixtures that was unavailable with high-purity germanium detectors. Isotopic analysis codes require libraries of gamma rays. In certain situations, isotope identification can be made in the field, sometimes with a short turnaround time, depending on the choice of detector and software analysis package. Sodium iodide and high-purity germanium detectors have been successfully used in field scenarios. The newer super-cooled detectors offer dramatically increased resolution, but they have lower efficiency and so can require longer collection times. The different peak shapes require software development for the specific detector type and field application. Libraries can be tailored to specific scenarios; by eliminating isotopes that are certainly not present, the analysis time may be shortened and the accuracy may be increased. The intent of this project was to create one accurate library of gamma rays emitted from isotopes of interest to be used as a reliable reference in safeguards work. All simulation and spectroscopy analysis codes can draw upon this best library to improve accuracy and cross-code consistency. Modeling codes may include MCNP and COG. Gamma-ray spectroscopy analysis codes may include MGA, MGAU, U235, and FRAM. The intent is to give developers and users the tools needed for nuclear energy safeguards work. In this project, the library created was limited to a selection of actinide isotopes of immediate interest to reactor technology: ^{234-238}U, ^{237}Np, ^{238-242}Pu, ^{241,243}Am, and ^{244}Cm. These isotopes were examined, and the best gamma-ray data, including line energies and relative strengths, were selected.
Validation and quantification of uncertainty in coupled climate models using network analysis
Bracco, Annalisa
2015-08-10
We developed a fast, robust, and scalable methodology to examine, quantify, and visualize climate patterns and their relationships. It is based on a set of notions, algorithms, and metrics used in the study of graphs, referred to as complex network analysis. This approach can be applied to explain known climate phenomena in terms of an underlying network structure and to uncover regional and global linkages in the climate system, while comparing general circulation model outputs with observations. The proposed method is based on a two-layer network representation and is substantially new within the available network methodologies developed for climate studies. At the first layer, gridded climate data are used to identify ‘‘areas’’, i.e., geographical regions that are highly homogeneous in terms of the given climate variable. At the second layer, the identified areas are interconnected with links of varying strength, forming a global climate network. The robustness of the method (i.e., its ability to separate topologically distinct fields while correctly identifying similarities) has been extensively tested, and the method has been shown to provide a reliable, fast framework for comparing and ranking the ability of climate models to reproduce observed climate patterns and their connectivity. We further developed the methodology to account for lags in the connectivity between climate patterns and refined our area-identification algorithm to account for autocorrelation in the data. The new methodology has been applied to state-of-the-art climate model simulations that participated in the last IPCC (Intergovernmental Panel on Climate Change) assessment to verify their performance, quantify uncertainties, and uncover changes in global linkages between past and future projections.
Network properties of modeled sea surface temperature and rainfall over 1956–2005 were constrained against observations or reanalysis data sets, and their differences quantified using two metrics. Projected changes from 2051 to 2300 under the scenario with the highest representative and extended concentration pathways (RCP8.5 and ECP8.5) were then determined. For the models that reproduce the major recent-past climate modes well, the network changes little during this century. Among those models, however, the uncertainties in the projections after 2100 remain substantial and are primarily associated with divergences in the representation of the modes of variability, particularly the El Niño Southern Oscillation (ENSO), and of their connectivity, and therefore with their intrinsic predictability, more so than with differences in the evolution of the mean state. Additionally, we evaluated the relation between the size and the ‘strength’ of the area identified by the network analysis as corresponding to ENSO, noting that only a small subset of models can realistically reproduce the observations.
Chung, Bub Dong; Lee, Young Lee; Park, Chan Eok; Lee, Sang Yong
1996-10-01
Assessment of the original RELAP5/MOD3.1 code against the FLECHT SEASET series of experiments identified several weaknesses of the reflood model, such as the lack of a quenching temperature model, shortcomings of the Chen transition boiling model, and incorrect prediction of droplet size and interfacial heat transfer. In addition, high temperature spikes during the reflood calculation resulted in large steam flow oscillations and liquid carryover. An effort was made to improve the code with respect to these weaknesses, and the relevant models in the wall heat transfer package and the numerical scheme were modified. Several important FLECHT-SEASET experiments were assessed using both the improved and standard versions. The results from the improved RELAP5/MOD3.1 show that the weaknesses of the standard MOD3.1 code were largely remedied. The predicted void profiles and cladding temperatures agreed better with test data, especially for the gravity feed test. A scatter diagram of peak cladding temperatures (PCTs) was made by comparing all calculated PCTs with the corresponding experimental values. The deviations between experimental and calculated PCTs were computed for 2793 data points; they are shown to be normally distributed and are used to statistically quantify the PCT uncertainty of the code. The upper limit of PCT uncertainty at the 95% confidence level is evaluated to be about 99 K.
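Given normally distributed deviations, a one-sided 95% limit of the kind quoted above follows from mu + 1.645·sigma. A sketch on synthetic deviations (the mean and spread are invented; only the sample count of 2793 matches the report):

```python
import random
import statistics

def pct_upper_limit_95(deviations):
    """One-sided 95% limit, assuming the calculated-minus-measured PCT
    deviations are normally distributed: mu + 1.645 * sigma."""
    mu = statistics.mean(deviations)
    sigma = statistics.stdev(deviations)
    return mu + 1.645 * sigma

# Synthetic deviations (K) with an invented bias and spread, 2793 points.
random.seed(0)
deviations = [random.gauss(10.0, 54.0) for _ in range(2793)]
limit = pct_upper_limit_95(deviations)
```

The 1.645 factor is the one-sided 95% quantile of the standard normal; with a few thousand points the sample mean and standard deviation pin the limit down tightly.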
Impacts of Variability and Uncertainty in Solar Photovoltaic Generation at Multiple Timescales
Ela, E.; Diakov, V.; Ibanez, E.; Heaney, M.
2013-05-01
The characteristics of variability and uncertainty of PV solar power have been studied extensively. These characteristics can create challenges for system operators who must ensure a balance between generation and demand while obeying power system constraints at the lowest possible cost. A number of studies have looked at the impact of wind power plants, and some recent studies have also included solar PV. The simulations that are used in these studies, however, are typically fixed to one time resolution. This makes it difficult to analyze the variability across several timescales. In this study, we use a simulation tool that has the ability to evaluate both the economic and reliability impacts of PV variability and uncertainty at multiple timescales. This information should help system operators better prepare for increases of PV on their systems and develop improved mitigation strategies to better integrate PV with enhanced reliability. Another goal of this study is to understand how different mitigation strategies and methods can improve the integration of solar power more reliably and efficiently.
Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models
Mosher, J.; Sako, M. [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N. [LPNHE, CNRS/IN2P3, Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, 4 place Jussieu, F-75252 Paris Cedex 05 (France); Kessler, R.; Frieman, J. A. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Marriner, J. [Center for Particle Astrophysics, Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Biswas, R.; Kuhlmann, S. [Argonne National Laboratory, 9700 South Cass Avenue, Lemont, IL 60439 (United States); Schneider, D. P., E-mail: kessler@kicp.chicago.edu [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)
2014-09-20
We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ∼120 low-redshift (z < 0.1) SNe Ia, ∼255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ∼290 SNLS SNe Ia (z ≲ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_{input} − w_{recovered}) ranging from 0.005 ± 0.012 to 0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is 0.014 ± 0.007.
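An average bias with a smaller quoted uncertainty than any individual variant suggests a weighted combination. As an illustration of how such estimates are pooled, here is a standard inverse-variance weighting; this is not necessarily the paper's procedure, and the two endpoint values quoted above merely stand in for the full set of training variants:

```python
import math

def combine(estimates):
    """Inverse-variance weighted mean of (value, sigma) pairs, with the
    1-sigma uncertainty of the combination."""
    weights = [1.0 / sigma ** 2 for _, sigma in estimates]
    mean = sum(w * value for (value, _), w in zip(estimates, weights)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mean, sigma

# Hypothetical stand-ins: the two endpoint (bias, sigma) values from the text.
variants = [(0.005, 0.012), (0.024, 0.010)]
mean, sigma = combine(variants)
```

Tighter variants pull the combined mean toward themselves, and the combined sigma is always smaller than the smallest individual sigma.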
Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers
Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.; Lin, Guang; Fang, Yilin; Ren, Huiying; Fang, Zhufeng
2014-08-01
In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version, eSTOMP, as the forward modeling simulator. We built response curves and response surfaces of model outputs with respect to input parameters to examine individual and combined effects, and to identify and rank the significance of the input parameters.
Hydropower generation management under uncertainty via scenario analysis and parallel computation
Escudero, L.F.; Garcia, C.; Fuente, J.L. de la; Prieto, F.J.
1996-05-01
The authors present a modeling framework for the robust solution of hydroelectric power management problems with uncertainty in the values of the water inflows and outflows. A deterministic treatment of the problem provides unsatisfactory results, except for very short time horizons. The authors describe a model based on scenario analysis that allows a satisfactory treatment of uncertainty in the model data for medium and long-term planning problems. Their approach results in a huge model with a network submodel per scenario plus coupling constraints. The size of the problem and the structure of the constraints are adequate for the use of decomposition techniques and parallel computation tools. The authors present computational results for both sequential and parallel implementation versions of the codes, running on a cluster of workstations. The codes have been tested on data obtained from the reservoir network of Iberdrola, a power utility owning 50% of the total installed hydroelectric capacity of Spain, and generating 40% of the total energy demand.
Hydropower generation management under uncertainty via scenario analysis and parallel computation
Escudero, L.F.; Garcia, C.; Fuente, J.L. de la; Prieto, F.J.
1995-12-31
The authors present a modeling framework for the robust solution of hydroelectric power management problems with uncertainty in the values of the water inflows and outflows. A deterministic treatment of the problem provides unsatisfactory results, except for very short time horizons. The authors describe a model based on scenario analysis that allows a satisfactory treatment of uncertainty in the model data for medium and long-term planning problems. This approach results in a huge model with a network submodel per scenario plus coupling constraints. The size of the problem and the structure of the constraints are adequate for the use of decomposition techniques and parallel computation tools. The authors present computational results for both sequential and parallel implementation versions of the codes, running on a cluster of workstations. The codes have been tested on data obtained from the reservoir network of Iberdrola, a power utility owning 50% of the total installed hydroelectric capacity of Spain, and generating 40% of the total energy demand.
Prather, Michael J.; Hsu, Juno; Nicolau, Alex; Veidenbaum, Alex; Smith, Philip Cameron; Bergmann, Dan
2014-11-07
Atmospheric chemistry controls the abundances and hence climate forcing of important greenhouse gases including N_{2}O, CH_{4}, HFCs, CFCs, and O_{3}. Attributing climate change to human activities requires, at a minimum, accurate models of the chemistry and circulation of the atmosphere that relate emissions to abundances. This DOE-funded research provided realistic, yet computationally optimized and affordable, photochemical modules to the Community Earth System Model (CESM) that augment the CESM capability to explore the uncertainty in future stratospheric-tropospheric ozone, stratospheric circulation, and thus the lifetimes of chemically controlled greenhouse gases from climate simulations. To this end, we successfully implemented the Fast-J (radiation algorithm determining key chemical photolysis rates) and Linoz v3.0 (linearized photochemistry for interactive O_{3}, N_{2}O, NO_{y}, and CH_{4}) packages in LLNL-CESM and for the first time demonstrated how a change in the O_{2} photolysis rate within its uncertainty range can significantly impact the stratospheric climate and ozone abundances. On the UCI side, this proposal also helped LLNL develop a CAM-Superfast Chemistry model that was implemented for the IPCC AR5 and contributed chemical-climate simulations to CMIP5.
Covariant energy–momentum and an uncertainty principle for general relativity
Cooperstock, F.I.; Dupre, M.J.
2013-12-15
We introduce a naturally defined, totally invariant spacetime energy expression for general relativity incorporating the contribution from gravity. The extension links seamlessly to the action integral for the gravitational field. The demand that the general expression for arbitrary systems reduce to the Tolman integral in the case of stationary bounded distributions leads to the matter-localized Ricci integral for energy–momentum, in support of the energy localization hypothesis. The role of the observer is addressed and, as an extension of the special relativistic case, the field of observers comoving with the matter is seen to compute the intrinsic global energy of a system. The new localized energy supports the Bonnor claim that the Szekeres collapsing dust solutions are energy-conserving. It is suggested that in the extreme of strong gravity, the Heisenberg Uncertainty Principle be generalized in terms of spacetime energy–momentum.
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Hou, Zhangshuan; Engel, David W.; Lin, Guang; Fang, Yilin; Fang, Zhufeng
2013-10-01
In this paper, we introduce an uncertainty quantification (UQ) software framework for carbon sequestration, focused on the effect of spatial heterogeneity of reservoir properties on CO2 migration. We use a sequential Gaussian method (SGSIM) to generate realizations of permeability fields with various spatial statistical attributes. To deal with the computational difficulties, we integrate the following ideas/approaches. First, we use three different sampling approaches (probabilistic collocation, quasi-Monte Carlo, and adaptive sampling) to reduce the number of forward calculations while trying to explore the parameter space and quantify the input uncertainty. Second, we use eSTOMP as the forward modeling simulator. eSTOMP is implemented with the Global Arrays toolkit that is based on one-sided inter-processor communication and supports a shared memory programming style on distributed memory platforms, providing a highly-scalable performance. Third, we built an adaptive system infrastructure to select the best possible data transfer mechanisms, to optimally allocate system resources to improve performance and to integrate software packages and data for composing carbon sequestration simulation, computation, analysis, estimation and visualization. We demonstrate the framework with a given CO2 injection scenario in heterogeneous sandstone reservoirs.
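Of the three sampling approaches, quasi-Monte Carlo is the easiest to sketch. A Halton sequence (one classic low-discrepancy construction; the study's probabilistic collocation and adaptive sampling are not shown) fills the parameter space more evenly than random draws. The parameter names and ranges below are hypothetical, not the study's inputs:

```python
def van_der_corput(n, base):
    """n-th element of the van der Corput low-discrepancy sequence in `base`."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton(n_points, bases=(2, 3)):
    """Quasi-Monte Carlo points in the unit square, one prime base per dimension."""
    return [[van_der_corput(i, b) for b in bases] for i in range(1, n_points + 1)]

def scale(points, bounds):
    """Map unit-cube points onto physical parameter ranges."""
    return [[lo + u * (hi - lo) for u, (lo, hi) in zip(p, bounds)] for p in points]

# Hypothetical reservoir-parameter ranges: log10 permeability (m^2) and porosity.
bounds = [(-14.0, -12.0), (0.05, 0.35)]
samples = scale(halton(8), bounds)
```

Each of the 8 samples would parameterize one forward eSTOMP run; the even coverage is what lets a small design explore the input space.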
Cafferty, Kara G.; Searcy, Erin M.; Nguyen, Long; Spatari, Sabrina
2014-11-01
To meet Energy Independence and Security Act (EISA) cellulosic biofuel mandates, the United States will require an annual domestic supply of about 242 million Mg of biomass by 2022. To improve the feedstock logistics of lignocellulosic biofuels and access available biomass resources from areas with varying yields, commodity systems have been proposed and designed to deliver on-spec biomass feedstocks at preprocessing “depots”, which densify and stabilize the biomass prior to long-distance transport and delivery to centralized biorefineries. The harvesting, preprocessing, and logistics (HPL) of biomass commodity supply chains could thus introduce spatially variable environmental impacts into the biofuel life cycle, owing to the need to harvest, move, and preprocess biomass from multiple distances with variable spatial density. This study examines the uncertainty in greenhouse gas (GHG) emissions of corn stover HPL within a bio-ethanol supply chain in the state of Kansas, where sustainable biomass supply varies spatially. Two scenarios were evaluated, each having a different number of depots of varying capacity and location within Kansas relative to a central commodity-receiving biorefinery, to test GHG emissions uncertainty. Monte Carlo simulation was used to estimate the spatial uncertainty in the HPL gate-to-gate sequence. The results show that the transport of densified biomass introduces the highest variability and contribution to the carbon footprint of the HPL supply chain (0.2-13 g CO_{2}e/MJ). Moreover, depending upon biomass availability, its spatial density, and the surrounding transportation infrastructure (road and rail), HPL processes can increase the variability in life cycle environmental impacts for lignocellulosic biofuels. Within Kansas, life cycle GHG emissions could range from 24 to 41 g CO_{2}e/MJ depending upon the location, size, and number of preprocessing depots constructed.
However, this range can be minimized by siting preprocessing depots where ample rail infrastructure exists to supply biomass commodity to a regional biorefinery supply system.
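The Monte Carlo treatment of the gate-to-gate chain can be sketched as stage-wise sampling and summation. The 0.2-13 g CO2e/MJ transport range comes from the abstract; the other stage ranges, and all distribution shapes, are invented for this sketch:

```python
import random
import statistics

random.seed(42)

def sample_total_ghg():
    """One Monte Carlo draw of gate-to-gate HPL emissions (g CO2e/MJ).
    Harvest and preprocessing ranges, and all distribution shapes, are
    purely illustrative."""
    harvest = random.uniform(3.0, 6.0)
    preprocessing = random.uniform(8.0, 14.0)
    transport = random.triangular(0.2, 13.0, 4.0)  # the most variable stage
    return harvest + preprocessing + transport

totals = [sample_total_ghg() for _ in range(10000)]
low, high = min(totals), max(totals)
mean_total = statistics.mean(totals)
```

Because the stage distributions sum per draw, the spread of `totals` is dominated by the widest stage, mirroring the study's finding that densified-biomass transport drives the footprint's variability.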
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Singh, Aditya; Serbin, Shawn P.; McNeil, Brenden E.; Kingdon, Clayton C.; Townsend, Philip A.
2015-12-01
A major goal of remote sensing is the development of generalizable algorithms to repeatedly and accurately map ecosystem properties across space and time. Imaging spectroscopy has great potential to map vegetation traits that cannot be retrieved from broadband spectral data, but rarely have such methods been tested across broad regions. Here we illustrate a general approach for estimating key foliar chemical and morphological traits through space and time using NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-Classic). We apply partial least squares regression (PLSR) to data from 237 field plots within 51 images acquired between 2008 and 2011. Using a series of 500 randomized 50/50 subsets of the original data, we generated spatially explicit maps of seven traits (leaf mass per area (M_{area}); percentage nitrogen, carbon, fiber, lignin, and cellulose; and isotopic nitrogen concentration, δ^{15}N) as well as pixel-wise uncertainties in their estimates based on error propagation in the analytical methods. Both the M_{area} and %N PLSR models had R^{2} > 0.85. Root mean square errors (RMSEs) for both variables were less than 9% of the range of the data. Fiber and lignin were predicted with R^{2} > 0.65, and carbon and cellulose with R^{2} > 0.45. Although the R^{2} of %C and cellulose were lower than those of M_{area} and %N, the measured variability of these constituents (especially %C) was also lower, and their RMSE values were beneath 12% of the range in overall variability. Model performance for δ^{15}N was the lowest (R^{2} = 0.48, RMSE = 0.95‰), but within 15% of the observed range. The resulting maps of chemical and morphological traits, together with their overall uncertainties, represent a first-of-its-kind approach for examining the spatiotemporal patterns of forest functioning and nutrient cycling across a broad range of temperate and sub-boreal ecosystems.
These results offer an alternative to categorical maps of functional or physiognomic types by providing non-discrete maps (i.e., on a continuum) of traits that define those functional types. A key contribution of this work is the ability to assign retrieval uncertainties by pixel, a requirement to enable assimilation of these data products into ecosystem modeling frameworks to constrain carbon and nutrient cycling projections.
Uncertainty Quantification of Calculated Temperatures for the U.S. Capsules in the AGR-2 Experiment
Lybeck, Nancy; Einerson, Jeffrey J.; Pham, Binh T.; Hawkes, Grant L.
2015-03-01
A series of Advanced Gas Reactor (AGR) irradiation experiments are being conducted within the Advanced Reactor Technology (ART) Fuel Development and Qualification Program. The main objectives of the fuel experimental campaign are to provide the necessary data on fuel performance to support fuel process development, qualify a fuel design and fabrication process for normal operation and accident conditions, and support development and validation of fuel performance and fission product transport models and codes (PLN-3636). The AGR-2 test was inserted in the B-12 position in the Advanced Test Reactor (ATR) core at Idaho National Laboratory (INL) in June 2010 and successfully completed irradiation in October 2013, resulting in irradiation of the TRISO fuel for 559.2 effective full power days (EFPDs) during approximately 3.3 calendar years. The AGR-2 data, including the irradiation data and calculated results, were qualified and stored in the Nuclear Data Management and Analysis System (NDMAS) (Pham and Einerson 2014). To support the U.S. TRISO fuel performance assessment and to provide data for validation of fuel performance and fission product transport models and codes, the daily as-run thermal analysis has been performed separately on each of four AGR-2 U.S. capsules for the entire irradiation as discussed in (Hawkes 2014). The ABAQUS code’s finite element-based thermal model predicts the daily average volume-average fuel temperature and peak fuel temperature in each capsule. This thermal model involves complex physical mechanisms (e.g., graphite holder and fuel compact shrinkage) and properties (e.g., conductivity and density). Therefore, the thermal model predictions are affected by uncertainty in input parameters and by incomplete knowledge of the underlying physics leading to modeling assumptions. 
Thus, alongside the deterministic predictions from a given set of input thermal conditions, information about prediction uncertainty is instrumental for ART program decision-making. Well-defined and reduced uncertainty in model predictions increases the quality of, and confidence in, the AGR technical findings.
Lombardo, A.J.; Orthen, R.F.; Shonka, J.J.; Scott, L.M.
2007-07-01
The regulatory release of sites and facilities (property) for restricted or unrestricted use has evolved beyond prescribed levels to model-derived dose and risk based limits. Dose models for deriving corresponding soil radionuclide concentration guidelines are necessarily simplified representations of complex processes. It is not practical to obtain data to fully or accurately characterize transport and exposure pathway processes. Similarly, it is not possible to predict future conditions with certainty absent durable land use restrictions. To compensate for the shortage of comprehensive characterization data and site specific inputs to describe the projected 'as-left' contaminated zone, conservative default values are used to derive acceptance criteria. The result is overly conservative criteria. Furthermore, implementation of a remediation plan and subsequent final surveys to show compliance with the conservative criteria often result in excessive remediation due to the large uncertainty. During a recent decommissioning project of a site contaminated with thorium, a unique approach to dose modeling and remedial action design was implemented to effectively manage end-point uncertainty. The approach used a dynamic feedback dose model and soil segregation technology to characterize impacted material with precision and accuracy not possible with static control approaches. Utilizing the remedial action goal 'over excavation' and subsequent auto-segregation of excavated material for refill, the end-state (as-left conditions of the refilled excavation) RESRAD input parameters were re-entered to assess the final dose. The segregation process produced separate below and above criteria material stockpiles whose volumes were optimized for maximum refill and minimum waste. The below criteria material was returned to the excavation without further analysis, while the above criteria material was packaged for offsite disposal. 
Using the activity concentration data recorded by the segregation system and the as-left configuration of the refilled excavation, the end state model of the site was prepared with substantially reduced uncertainty. The major projected benefits of this approach are reviewed as well as the performance of the segregation system and lessons learned, including: 1) Total, first-attempt data discovery brought about by simultaneously conducted characterization and final status surveys, 2) Lowered project costs stemming from efficient analysis and abstraction of impacted material and reduced offsite waste disposal volume, 3) Lowered project costs due to increased remediation/construction efficiency and decreased survey and radio-analytical expenses, and 4) An improved decommissioning experience through new regulatory guidance. (authors)
Uncertainties in the characterization of the thermal environment of a solid rocket propellant fire
Diaz, J.C.
1993-10-01
There has been an interest in developing models capable of predicting the response of systems to Minuteman (MM) III third-stage solid propellant fires. Input parameters for such an effort include the boundary conditions that describe the fire temperature, heat flux, emissivity, and propellant burn rate. In this study scanning spectroscopy and pyrometry were used to infer plume temperatures. Each diagnostic system possessed strengths and weaknesses. The intention was to use various supportive methods to infer plume temperature and emissivity, because no one diagnostic had proven capabilities for determining temperature under these conditions. Furthermore, these diagnostics were being used near the limit of their applicability. All these points created some uncertainty in the data collected.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Stracuzzi, David John; Brost, Randolph C.; Phillips, Cynthia A.; Robinson, David G.; Wilson, Alyson G.; Woodbridge, Diane M. -K.
2015-09-26
Geospatial semantic graphs provide a robust foundation for representing and analyzing remote sensor data. In particular, they support a variety of pattern search operations that capture the spatial and temporal relationships among the objects and events in the data. However, in the presence of large data corpora, even a carefully constructed search query may return a large number of unintended matches. This work considers the problem of calculating a quality score for each match to the query, given that the underlying data are uncertain. As a result, we present a preliminary evaluation of three methods for determining both match quality scores and associated uncertainty bounds, illustrated in the context of an example based on overhead imagery data.
Babendreier, Justin E.; Castleton, Karl J.
2005-08-01
Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a comparative approach using several techniques, coupled with sufficient computational power. The Framework for Risk Analysis in Multimedia Environmental Systems - Multimedia, Multipathway, and Multireceptor Risk Assessment (FRAMES-3MRA) is an important software model being developed by the United States Environmental Protection Agency for use in risk assessment of hazardous waste management facilities. The 3MRA modeling system includes a set of 17 science modules that collectively simulate release, fate and transport, exposure, and risk associated with hazardous contaminants disposed of in land-based waste management units (WMU).
Sensitivity of CO2 migration estimation on reservoir temperature and pressure uncertainty
Jordan, Preston; Doughty, Christine
2008-11-01
The density and viscosity of supercritical CO{sub 2} are sensitive to pressure and temperature (PT) while the viscosity of brine is sensitive primarily to temperature. Oil field PT data in the vicinity of WESTCARB's Phase III injection pilot test site in the southern San Joaquin Valley, California, show a range of PT values, indicating either PT uncertainty or variability. Numerical simulation results across the range of likely PT indicate brine viscosity variation causes virtually no difference in plume evolution and final size, but CO{sub 2} density variation causes a large difference. Relative ultimate plume size is almost directly proportional to the relative difference in brine and CO{sub 2} density (buoyancy flow). The majority of the difference in plume size occurs during and shortly after the cessation of injection.
Menéndez, Javier
2013-12-30
We explore the theoretical uncertainties related to the transition operator of neutrinoless double-beta (0νββ) decay. The transition operator used in standard calculations is a product of one-body currents, which can be obtained phenomenologically as in Tomoda [1] or Šimkovic et al. [2]. However, corrections to the operator are hard to obtain in the phenomenological approach. Instead, we calculate the 0νββ decay operator in the framework of chiral effective field theory (EFT), which gives a systematic order-by-order expansion of the transition currents. At leading orders in chiral EFT we reproduce the standard one-body currents of Refs. [1] and [2]. Corrections appear as two-body (2b) currents predicted by chiral EFT. We compute the effects of the leading 2b currents on the nuclear matrix elements of 0νββ decay for several transition candidates. The 2b current contributions are related to the quenching of Gamow-Teller transitions found in nuclear structure calculations.
Entropic uncertainty relations and locking: Tight bounds for mutually unbiased bases
Ballester, Manuel A.; Wehner, Stephanie
2007-02-15
We prove tight entropic uncertainty relations for a large number of mutually unbiased measurements. In particular, we show that a bound derived from the result by Maassen and Uffink [Phys. Rev. Lett. 60, 1103 (1988)] for two such measurements can in fact be tight for up to √d measurements in mutually unbiased bases. We then show that using more mutually unbiased bases does not always lead to a better locking effect. We prove that the optimal bound for the accessible information using up to √d specific mutually unbiased bases is (log d)/2, which is the same as can be achieved by using only two bases. Our result indicates that merely using mutually unbiased bases is not sufficient to achieve a strong locking effect, and we need to look for additional properties.
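For orientation, the Maassen-Uffink bound the abstract builds on can be written out; this is the standard two-measurement statement, not the paper's tight multi-basis result:

```latex
% Maassen-Uffink entropic uncertainty relation for two measurements
% with eigenbases {|x>}, {|z>} on a d-dimensional Hilbert space:
H(X) + H(Z) \;\ge\; \log\frac{1}{c^{2}},
\qquad c = \max_{x,z}\,\lvert\langle x \mid z \rangle\rvert .

% For two mutually unbiased bases, |<x|z>|^2 = 1/d for every pair,
% so c^2 = 1/d and the bound becomes
H(X) + H(Z) \;\ge\; \log d .
```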
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
J.C. Rowland; D.R. Harp; C.J. Wilson; A.L. Atchley; V.E. Romanovsky; E.T. Coon; S.L. Painter
2016-02-02
This Modeling Archive is in support of an NGEE Arctic publication available at doi:10.5194/tc-10-341-2016. This dataset contains an ensemble of thermal-hydro soil parameters including porosity, thermal conductivity, thermal conductivity shape parameters, and residual saturation of peat and mineral soil. The ensemble was generated using a Null-Space Monte Carlo analysis of parameter uncertainty based on a calibration to soil temperatures collected at the Barrow Environmental Observatory site by the NGEE team. The micro-topography of ice wedge polygons present at the site is included in the analysis using three 1D column models to represent polygon center, rim and trough features. The Arctic Terrestrial Simulator (ATS) was used in the calibration to model multiphase thermal and hydrological processes in the subsurface.
Challenges, uncertainties and issues facing gas production from gas hydrate deposits
Moridis, G.J.; Collett, T.S.; Pooladi-Darvish, M.; Hancock, S.; Santamarina, C.; Boswell, R.; Kneafsey, T.; Rutqvist, J.; Kowalsky, M.; Reagan, M.T.; Sloan, E.D.; Sum, A.K.; Koh, C.
2010-11-01
The current paper complements the Moridis et al. (2009) review of the status of the effort toward commercial gas production from hydrates. We aim to describe the concept of the gas hydrate petroleum system, to discuss advances, requirements and suggested practices in gas hydrate (GH) prospecting and GH deposit characterization, and to review the associated technical, economic and environmental challenges and uncertainties, including: the accurate assessment of producible fractions of the GH resource, the development of methodologies for identifying suitable production targets, the sampling of hydrate-bearing sediments and sample analysis, the analysis and interpretation of geophysical surveys of GH reservoirs, well testing methods and interpretation of the results, geomechanical and reservoir/well stability concerns, well design, operation and installation, field operations and extending production beyond sand-dominated GH reservoirs, monitoring production and geomechanical stability, laboratory investigations, fundamental knowledge of hydrate behavior, the economics of commercial gas production from hydrates, and the associated environmental concerns.
Electromagnetic form factors of the nucleon: New fit and analysis of uncertainties
Alberico, W. M.; Giunti, C.; Bilenky, S. M.; Graczyk, K. M.
2009-06-15
Electromagnetic form factors of the proton and neutron, obtained from a new fit of data, are presented. The proton form factors are obtained from a simultaneous fit to the ratio μ_p G_Ep/G_Mp determined from polarization transfer measurements and to ep elastic cross section data. Phenomenological two-photon exchange corrections are taken into account. The present fit for protons was performed in the kinematical region Q² ∈ (0,6) GeV². For both protons and neutrons we use the latest available data. For all form factors, the uncertainties and correlations of form factor parameters are investigated with the χ² method.
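As a toy illustration of the χ² method on synthetic data, assuming a simple one-parameter dipole ansatz (an illustrative stand-in, not the parameterization or data used in the paper):

```python
import numpy as np

def chi2(params, q2, g, dg):
    """Chi-squared of a dipole ansatz G(Q2) = g0 / (1 + Q2/m2)^2."""
    g0, m2 = params
    model = g0 / (1.0 + q2 / m2) ** 2
    return np.sum(((g - model) / dg) ** 2)

# Synthetic 'data' generated from the standard dipole (m2 = 0.71 GeV^2)
rng = np.random.default_rng(3)
q2 = np.linspace(0.1, 6.0, 30)            # Q^2 points in (0, 6) GeV^2
g_true = 1.0 / (1.0 + q2 / 0.71) ** 2
dg = 0.02 * g_true                        # assumed 2% point errors
g_obs = g_true + rng.normal(0.0, dg)

# A one-parameter grid scan over m2 (g0 held at 1) recovers the input
m2_grid = np.linspace(0.5, 0.9, 401)
chis = np.array([chi2((1.0, m2), q2, g_obs, dg) for m2 in m2_grid])
m2_best = float(m2_grid[np.argmin(chis)])
```

In a full fit one would minimize over all form-factor parameters simultaneously and read parameter uncertainties and correlations off the curvature of χ² around its minimum.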
Uncertainties in compliance with harmonic current distortion limits in electric power systems
Gruzs, T.M.
1991-07-01
The harmonic distortion of any repetitive voltage or current waveform is typically described by the quantity total harmonic distortion (THD). With the proliferation of nonlinear loads, such as static power converters, there has been increasing concern over the generation of harmonic currents and the effects of these currents on the power system. Proposals have been made to limit harmonic currents in power systems using the total harmonic distortion of the current as the criterion. This criterion, although it may be necessary, can be ambiguous and lead to compliance uncertainties. This paper discusses several practical problems in applying total harmonic current distortion limits to industrial and commercial power systems.
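For reference, THD is computed from the harmonic RMS amplitudes of the waveform; a minimal sketch (the function name and example amplitudes are illustrative, not from the paper):

```python
import numpy as np

def thd(harmonics):
    """Total harmonic distortion of a periodic current waveform.

    `harmonics` lists RMS amplitudes [I1, I2, I3, ...], where I1 is
    the fundamental: THD = sqrt(sum of Ih^2 for h >= 2) / I1.
    """
    h = np.asarray(harmonics, dtype=float)
    return np.sqrt(np.sum(h[1:] ** 2)) / h[0]

# Illustrative spectrum: 100 A fundamental plus 5th and 7th harmonics,
# roughly typical of a six-pulse static power converter.
print(thd([100, 0, 0, 0, 20, 0, 14]))  # about 0.244, i.e. 24.4% THD
```

One source of the compliance ambiguity the paper discusses is the choice of denominator: THD normalizes to the fundamental current, whereas a limit normalized to total RMS current, or to maximum demand current (the total demand distortion of IEEE 519), gives a different number for the same waveform.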
Solar Irradiances Measured using SPN1 Radiometers: Uncertainties and Clues for Development
Badosa, Jordi; Wood, John; Blanc, Philippe; Long, Charles N.; Vuilleumier, Laurent; Demengel, Dominique; Haeffelin, Martial
2014-12-08
The fast development of solar radiation and energy applications, such as photovoltaic and solar thermodynamic systems, has increased the need for solar radiation measurement and monitoring, not only for the global component but also the diffuse and direct components. End users look for the best compromise between getting close to state-of-the-art measurements and keeping capital, maintenance and operating costs to a minimum. Among the existing commercial options, the SPN1 is a relatively low-cost solar radiometer that estimates global and diffuse solar irradiances from seven thermopile sensors under a shading mask and without moving parts. This work presents a comprehensive study of SPN1 accuracy and sources of uncertainty, drawing on laboratory experiments, numerical modeling and comparison studies between measurements from this sensor and state-of-the-art instruments at six diverse sites. Several clues are provided for improving SPN1 accuracy and agreement with state-of-the-art measurements.
Langton, C.; Kosson, D.
2009-11-30
Cementitious barriers for nuclear applications are one of the primary controls for preventing or limiting radionuclide release into the environment. At the present time, performance and risk assessments do not fully incorporate the effectiveness of engineered barriers because the processes that influence performance are coupled and complicated. A better understanding of the behavior of cementitious barriers is necessary to evaluate and improve the design of materials and structures used for radioactive waste containment, life extension of current nuclear facilities, and design of future nuclear facilities, including those needed for nuclear fuel storage and processing, nuclear power production and waste management. The focus of the Cementitious Barriers Partnership (CBP) literature review is to document the current level of knowledge with respect to: (1) mechanisms and processes that directly influence the performance of cementitious materials, (2) methodologies for modeling the performance of these mechanisms and processes, and (3) approaches to addressing and quantifying uncertainties associated with performance predictions. This will serve as an important reference document for the professional community responsible for the design and performance assessment of cementitious materials in nuclear applications. This review also provides a multi-disciplinary foundation for identification, research, development and demonstration of improvements in conceptual understanding, measurements and performance modeling that would lead to significant reductions in the uncertainties and improved confidence in estimating the long-term performance of cementitious materials in nuclear applications. This report identifies: (1) technology gaps that may be filled by the CBP project and also (2) information and computational methods that are currently being applied in related fields but have not yet been incorporated into performance assessments of cementitious barriers.
The various chapters contain both a description of each mechanism and a discussion of the current approaches to modeling the phenomena.
Frey, H. Christopher; Rhodes, David S.
1999-04-30
This is Volume 1 of a two-volume set of reports describing work conducted at North Carolina State University sponsored by Grant Number DE-FG05-95ER30250 by the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.
FRAP-T6 uncertainty study of LOCA tests LOFT L2-3 and PBF LLR-3. [PWR
Chambers, R.; Driskell, W.E.; Resch, S.C.
1983-01-01
This paper presents the accuracy and uncertainty of fuel rod behavior calculations performed by the transient Fuel Rod Analysis Program (FRAP-T6) during large break loss-of-coolant accidents. The accuracy of the code was determined primarily through comparisons of code calculations with cladding surface temperature measurements from two loss-of-coolant experiments (LOCEs). These LOCEs were the L2-3 experiment conducted in the Loss-of-Fluid Test (LOFT) Facility and the LOFT Lead Rod 3 (LLR-3) experiment conducted in the Power Burst Facility (PBF). Uncertainties in code calculations resulting from uncertainties in fuel and cladding design variables, material property and heat transfer correlations, and thermal-hydraulic boundary conditions were analyzed.
Kim, Young-Min; Zhou, Ying; Gao, Yang; Fu, Joshua S.; Johnson, Brent; Huang, Cheng; Liu, Yang
2015-01-01
BACKGROUND: The spatial pattern of the uncertainty in climate-related air pollution health impacts has rarely been studied due to the lack of high-resolution model simulations, especially under the latest Representative Concentration Pathways (RCPs). OBJECTIVES: We estimated county-level ozone (O3)- and PM2.5-related excess mortality (EM) and evaluated the associated uncertainties in the continental United States in the 2050s under RCP4.5 and RCP8.5. METHODS: Using dynamically downscaled climate model simulations, we calculated changes in O3 and PM2.5 levels at 12 km resolution between the future (2057-2059) and present (2001-2004) under two RCP scenarios. Using concentration-response relationships from the literature and projected future populations, we estimated EM attributable to the changes in O3 and PM2.5. We finally analyzed the contribution of input variables to the uncertainty in the county-level EM estimation using Monte Carlo simulation. RESULTS: O3-related premature deaths in the continental U.S. were estimated to be 1,082 deaths/year under RCP8.5 (95% confidence interval (CI): -288 to 2,453), and -5,229 deaths/year under RCP4.5 (-7,212 to -3,246). Simulated PM2.5 changes resulted in a significant decrease in EM under the two RCPs. The uncertainty of the O3-related EM estimates was mainly caused by the RCP scenarios, whereas that of the PM2.5-related EMs came mainly from the concentration-response functions. CONCLUSION: EM estimates attributable to climate change-induced air pollution change, as well as the associated uncertainties, vary substantially in space, as do the most influential input variables. Spatially resolved data are crucial to developing effective mitigation and adaptation policy.
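The Monte Carlo step described in METHODS can be sketched with the standard log-linear health impact function; every numerical input below is a hypothetical placeholder, not a value from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs (hypothetical values, not from the study)
y0 = 0.008                 # baseline mortality rate (deaths/person-year)
pop = 1.0e6                # exposed population
dc = 2.0                   # projected pollutant concentration change
beta_mean, beta_sd = 0.004, 0.0015  # concentration-response coefficient

# Monte Carlo: propagate C-R coefficient uncertainty into excess
# mortality via the log-linear impact function EM = y0*(1-exp(-b*dc))*pop
beta = rng.normal(beta_mean, beta_sd, size=100_000)
em = y0 * (1.0 - np.exp(-beta * dc)) * pop

lo, mid, hi = np.percentile(em, [2.5, 50.0, 97.5])
print(f"EM median {mid:.0f} deaths/yr, 95% CI ({lo:.0f}, {hi:.0f})")
```

Sampling the other inputs (baseline rates, concentration changes under each RCP) in the same loop, and comparing the spread each input contributes, is one way to attribute the total uncertainty among inputs as the study does.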
Heath, Garvin; Warner, Ethan; Steinberg, Daniel; Brandt, Adam
2015-08-01
A growing number of studies have raised questions regarding uncertainties in our understanding of methane (CH_{4}) emissions from fugitives and venting along the natural gas (NG) supply chain. In particular, a number of measurement studies have suggested that actual levels of CH_{4} emissions may be higher than estimated by EPA's U.S. GHG Emission Inventory. We reviewed the literature to identify and assess this growing body of measurement studies.
Vierow, Karen; Aldemir, Tunc
2009-09-10
The project entitled 'Uncertainty Quantification in the Reliability and Risk Assessment of Generation IV Reactors' was conducted as a DOE NERI project collaboration between Texas A&M University and The Ohio State University between March 2006 and June 2009. The overall goal of the proposed project was to develop practical approaches and tools by which dynamic reliability and risk assessment techniques can be used to augment the uncertainty quantification process in probabilistic risk assessment (PRA) methods and PRA applications for Generation IV reactors. This report is the Final Scientific/Technical Report summarizing the project.
Bean, J.E.; Berglund, J.W.; Davis, F.J.; Economy, K.; Garner, J.W.; Helton, J.C.; Johnson, J.D.; MacKinnon, R.J.; Miller, J.; O'Brien, D.G.; Ramsey, J.L.; Schreiber, J.D.; Shinta, A.; Smith, L.N.; Stockman, C.; Stoelzel, D.M.; Vaughn, P.
1998-09-01
The Waste Isolation Pilot Plant (WIPP) is located in southeastern New Mexico and is being developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. A detailed performance assessment (PA) for the WIPP was carried out in 1996 and supports an application by the DOE to the U.S. Environmental Protection Agency (EPA) for the certification of the WIPP for the disposal of TRU waste. The 1996 WIPP PA uses a computational structure that maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000 yr regulatory period that applies to the WIPP and subjective uncertainty arising from the imprecision with which many of the quantities required in the PA are known. Important parts of this structure are (1) the use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (2) the use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (3) the efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The use of Latin hypercube sampling generates a mapping from imprecisely known analysis inputs to analysis outcomes of interest that provides both a display of the uncertainty in analysis outcomes (i.e., uncertainty analysis) and a basis for investigating the effects of individual inputs on these outcomes (i.e., sensitivity analysis). The sensitivity analysis procedures used in the PA include examination of scatterplots, stepwise regression analysis, and partial correlation analysis. Uncertainty and sensitivity analysis results obtained as part of the 1996 WIPP PA are presented and discussed.
Specific topics considered include two phase flow in the vicinity of the repository, radionuclide release from the repository, fluid flow and radionuclide transport in formations overlying the repository, and complementary cumulative distribution functions used in comparisons with regulatory standards (i.e., 40 CFR 191, Subpart B).
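The Latin hypercube idea used to map uncertain inputs to outcomes can be sketched in a few lines; this is a generic implementation on the unit hypercube, not the 1996 WIPP PA code:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Latin hypercube sample on the unit hypercube [0,1)^n_vars.

    Each variable's range is split into n_samples equal strata; one
    point is drawn per stratum, and the strata are shuffled
    independently for each variable so the pairings are random.
    """
    rng = np.random.default_rng(rng)
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):
        rng.shuffle(u[:, j])
    return u

# 100 samples of 3 uncertain inputs: every column has exactly one
# point in each of the 100 strata, unlike plain random sampling.
lhs = latin_hypercube(100, 3, rng=42)
```

In a PA setting each column would then be pushed through the inverse CDF of the corresponding input's subjective distribution before running the mechanistic models.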
Narayan, Amrendra
2015-05-01
The Q-weak experiment aims to measure the weak charge of the proton with a precision of 4.2%. The proposed precision on the weak charge required a 2.5% measurement of the parity-violating asymmetry in elastic electron-proton scattering. Polarimetry was the largest experimental contribution to this uncertainty, and a new Compton polarimeter was installed in Hall C at Jefferson Lab to make the goal achievable. In this polarimeter the electron beam collides with green laser light in a low-gain Fabry-Perot cavity; the scattered electrons are detected in 4 planes of a novel diamond micro-strip detector while the backscattered photons are detected in lead tungstate crystals. This diamond micro-strip detector is the first such device to be used as a tracking detector in a nuclear and particle physics experiment. The diamond detectors are read out using custom-built electronic modules that include a preamplifier, a pulse-shaping amplifier and a discriminator for each detector micro-strip. We use field-programmable gate array (FPGA) based general-purpose logic modules for event selection and histogramming. Extensive Monte Carlo simulations and data acquisition simulations were performed to estimate the systematic uncertainties. Additionally, the Møller and Compton polarimeters were cross-calibrated at low electron beam currents using a series of interleaved measurements. In this dissertation, we describe all the subsystems of the Compton polarimeter with emphasis on the electron detector. We focus on the FPGA-based data acquisition system built by the author and the data analysis methods implemented by the author. The simulations of the data acquisition and the polarimeter that helped rigorously establish the systematic uncertainties of the polarimeter are also elaborated, resulting in the first sub-1% measurement of low-energy (~1 GeV) electron beam polarization with a Compton electron detector.
We have demonstrated that diamond based micro-strip detectors can be used for tracking in a high radiation environment and it has enabled us to achieve the desired precision in the measurement of the electron beam polarization which in turn has allowed the most precise determination of the weak charge of the proton.
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-01-01
The power system balancing process, which includes the scheduling, real time dispatch (load following) and regulation processes, is traditionally based on deterministic models. Since the conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind and solar power production forecasts to achieve future balance between the conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation), and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the load and wind/solar forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially with the increasing penetration of renewable resources) whether the system would actually be able to meet the conventional generation requirements within the look-ahead horizon, what additional balancing efforts would be needed as we get closer to real time, and what additional costs would be incurred by those needs. To improve the system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following and, to some extent, regulation processes. It is also important to address the uncertainty problem comprehensively by taking all sources of uncertainty (load, intermittent generation, generator forced outages, etc.) into consideration. All aspects of uncertainty, such as the imbalance size (which is the same as the capacity needed to mitigate the imbalance) and the generation ramping requirement, must be taken into account.
The latter unique features make this work a significant step forward toward the objective of incorporating wind, solar, load, and other uncertainties into power system operations. Currently, uncertainties associated with wind and load forecasts, as well as uncertainties associated with random generator outages and unexpected disconnection of supply lines, are not taken into account in power grid operation. Thus, operators have little means to weigh the likelihood and magnitude of upcoming power imbalance events. In this project, funded by the U.S. Department of Energy (DOE), a framework has been developed for incorporating uncertainties associated with wind and load forecast errors, unpredicted ramps, and forced generation disconnections into the energy management system (EMS) as well as generation dispatch and commitment applications. A new approach to evaluate the uncertainty ranges for the required generation performance envelope, including balancing capacity, ramping capability, and ramp duration, has been proposed. The approach includes three stages: forecast and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence levels. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis, incorporating all sources of uncertainty of both continuous (wind and load forecast errors) and discrete (forced generator outages and start-up failures) nature. A new method called the flying brick technique has been developed to evaluate the look-ahead required generation performance envelope for the worst case scenario within a user-specified confidence level. A self-validation algorithm has been developed to validate the accuracy of the confidence intervals.
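The histogram/percentile idea behind such capacity and ramping envelopes can be sketched as follows; this is a generic illustration, not the project's flying brick algorithm, and the imbalance series and confidence level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical retrospective imbalance series (MW): actual minus
# forecast net load, combining load and wind forecast errors.
imbalance = rng.normal(0.0, 120.0, size=10_000)

# Balancing capacity requirement at a user-specified confidence level:
# the envelope covering, e.g., 95% of historical imbalances.
conf = 0.95
up_req = np.quantile(imbalance, 0.5 + conf / 2)    # upward capacity (MW)
down_req = np.quantile(imbalance, 0.5 - conf / 2)  # downward capacity (MW)

# Ramping requirement: the interval-to-interval change that must be
# covered at the same confidence level.
ramp = np.diff(imbalance)
ramp_req = np.quantile(np.abs(ramp), conf)
print(up_req, down_req, ramp_req)
```

Discrete events such as forced generator outages would enter by adding their (rare, large) contributions to the same empirical distribution before taking the quantiles.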
Uncertainty in Modeling Dust Mass Balance and Radiative Forcing from Size Parameterization
Zhao, Chun; Chen, Siyu; Leung, Lai-Yung R.; Qian, Yun; Kok, Jasper; Zaveri, Rahul A.; Huang, J.
2013-11-05
This study examines the uncertainties in simulating the mass balance and radiative forcing of mineral dust due to biases in the aerosol size parameterization. Simulations are conducted quasi-globally (180°W-180°E and 60°S-70°N) using the WRF-Chem model with three different approaches to represent the aerosol size distribution (8-bin, 4-bin, and 3-mode). The biases of the 3-mode and 4-bin approaches against a relatively more accurate 8-bin approach in simulating dust mass balance and radiative forcing are identified. Compared to the 8-bin approach, the 4-bin approach simulates similar but coarser size distributions of dust particles in the atmosphere, while the 3-mode approach retains more fine dust particles but fewer coarse dust particles due to the prescribed σg of each mode. Although the 3-mode approach yields up to 10 days longer dust mass lifetime over the remote oceanic regions than the 8-bin approach, the three size approaches produce similar dust mass lifetimes (3.2 days to 3.5 days) on quasi-global average, reflecting that the global dust mass lifetime is mainly determined by the dust mass lifetime near the dust source regions. With the same global dust emission (~6000 Tg yr-1), the 8-bin approach produces a dust mass loading of 39 Tg, while the 4-bin and 3-mode approaches produce 3% (40.2 Tg) and 25% (49.1 Tg) higher dust mass loadings, respectively. The difference in dust mass loading between the 8-bin approach and the 4-bin or 3-mode approaches has large spatial variations, with generally smaller relative differences (<10%) near the surface over the dust source regions. The three size approaches also result in significantly different dry and wet deposition fluxes and number concentrations of dust. The difference in dust aerosol optical depth (AOD) (a factor of 3) among the three size approaches is much larger than their difference (25%) in dust mass loading.
Compared to the 8-bin approach, the 4-bin approach yields stronger dust absorptivity, while the 3-mode approach yields weaker dust absorptivity. Overall, on quasi-global average, the three size parameterizations result in a significant difference of a factor of 2~3 in dust surface cooling (-1.02~-2.87 W m-2) and atmospheric warming (0.39~0.96 W m-2) and in a tremendous difference of a factor of ~10 in dust TOA cooling (-0.24~-2.20 W m-2). An uncertainty of a factor of 2 is quantified in dust emission estimation due to the different size parameterizations. This study also highlights the uncertainties in modeling dust mass and number loading, deposition fluxes, and radiative forcing resulting from different size parameterizations, and motivates further investigation of the impact of size parameterizations on modeling dust impacts on air quality, climate, and ecosystem.
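The bin-versus-mode contrast above can be made concrete by discretizing a lognormal dust mode into sectional bins; the mode parameters and bin edges below are illustrative assumptions, not values from the study:

```python
import numpy as np
from math import erf, log, sqrt

def mass_fraction_per_bin(edges, d_med, sigma_g):
    """Mass fraction of a lognormal aerosol mode in each size bin.

    edges:   bin edges in diameter (um), ascending
    d_med:   mass median diameter of the mode (um)
    sigma_g: geometric standard deviation of the mode
    """
    # Lognormal CDF in diameter space
    cdf = lambda d: 0.5 * (1.0 + erf(log(d / d_med)
                                     / (sqrt(2.0) * log(sigma_g))))
    c = np.array([cdf(e) for e in edges])
    return np.diff(c)

# Hypothetical coarse dust mode (median 3.5 um, sigma_g = 2.0) mapped
# onto four sectional bins spanning 0.1-10 um; mass outside the outer
# edges is dropped, one way a modal and a sectional scheme can differ.
edges = [0.1, 1.0, 2.5, 5.0, 10.0]
frac = mass_fraction_per_bin(edges, 3.5, 2.0)
```

A modal scheme carries only (d_med, σg) forward in time with σg fixed, while a sectional scheme lets each bin's mass evolve independently, which is the root of the lifetime and AOD differences the abstract reports.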
Broader source: Energy.gov [DOE]
Addressing Uncertainties in Design Inputs: A Case Study of Probabilistic Settlement Evaluations for Soft Zone Collapse at SWPF Tom Houston, Greg Mertz, Carl Costantino, Michael Costantino, Andrew Maham Carl J. Costantino & Associates DOE NPH Conference Germantown, Maryland October 25-26 2011
Zhu, Zhixi; Bai, Hongtao; Xu, He; Zhu, Tan
2011-11-15
Strategic environmental assessment (SEA) inherently needs to address greater levels of uncertainty in the formulation and implementation processes of strategic decisions, compared with project environmental impact assessment. The range of uncertainties includes internal and external factors of the complex system addressed by the strategy. Scenario analysis is increasingly being used to cope with uncertainty in SEA. Following a brief introduction of scenarios and scenario analysis, this paper examines the rationale for scenario analysis in SEA in the context of China. The state of the art of scenario analysis applied to SEA in China was reviewed through four SEA case analyses. Lessons learned from these cases indicate that the word 'scenario' is often abused and scenario-based methods misused, owing to a limited understanding of the uncertain future and of scenario analysis. However, good practices were also identified, regarding how to integrate scenario analysis into the SEA process in China, how to cope with driving forces including uncertainties, how to combine qualitative scenario storylines with quantitative impact predictions, and how to conduct assessments and propose recommendations based on scenarios. Additionally, ways to improve the application of this tool in SEA were suggested. We conclude by calling for further methodological research on this issue and more practical applications.
Webster, Mort David
2015-03-10
This report presents the final outcomes and products of the project as performed at the Massachusetts Institute of Technology. The research project consists of three main components: methodology development for decision-making under uncertainty, improving the resolution of the electricity sector to improve integrated assessment, and application of these methods to integrated assessment. Results in each area are described in the report.
Hou, Zhangshuan; Huang, Maoyi; Leung, Lai-Yung R.; Lin, Guang; Ricciuto, Daniel M.
2012-08-10
Uncertainties in hydrologic parameters could have significant impacts on the simulated water and energy fluxes and land surface states, which will in turn affect atmospheric processes and the carbon cycle. Quantifying such uncertainties is an important step toward better understanding and quantification of uncertainty of integrated earth system models. In this paper, we introduce an uncertainty quantification (UQ) framework to analyze sensitivity of simulated surface fluxes to selected hydrologic parameters in the Community Land Model (CLM4) through forward modeling. Thirteen flux tower footprints spanning a wide range of climate and site conditions were selected to perform sensitivity analyses by perturbing the parameters identified. In the UQ framework, prior information about the parameters was used to quantify the input uncertainty using the Minimum-Relative-Entropy approach. The quasi-Monte Carlo approach was applied to generate samples of parameters on the basis of the prior pdfs. Simulations corresponding to sampled parameter sets were used to generate response curves and response surfaces and statistical tests were used to rank the significance of the parameters for output responses including latent (LH) and sensible heat (SH) fluxes. Overall, the CLM4 simulated LH and SH show the largest sensitivity to subsurface runoff generation parameters. However, study sites with deep root vegetation are also affected by surface runoff parameters, while sites with shallow root zones are also sensitive to the vadose zone soil water parameters. Generally, sites with finer soil texture and shallower rooting systems tend to have larger sensitivity of outputs to the parameters. Our results suggest the necessity of and possible ways for parameter inversion/calibration using available measurements of latent/sensible heat fluxes to obtain the optimal parameter set for CLM4. 
This study also provided guidance on reduction of parameter set dimensionality and parameter calibration framework design for CLM4 and other land surface models under different hydrologic and climatic regimes.
Zhang, Yan; Sahinidis, Nikolaos V.
2013-04-06
In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
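The surrogate-then-Monte-Carlo idea can be sketched with a plain least-squares PCE in one uncertain variable (omitting the paper's MIP basis selection); the "simulator" below is a cheap stand-in, and the degree and sample sizes are assumptions:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(1)

# Expensive "simulator" stand-in (assumed): response to a standard-normal input.
def simulator(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Fit a degree-3 PCE in probabilists' Hermite polynomials by least squares.
degree = 3
xi_train = rng.standard_normal(200)
# Design matrix: columns He_0..He_3 evaluated at the training points.
A = np.column_stack([hermeval(xi_train, np.eye(degree + 1)[k])
                     for k in range(degree + 1)])
coeffs, *_ = np.linalg.lstsq(A, simulator(xi_train), rcond=None)

# Monte Carlo on the cheap closed-form surrogate instead of the simulator.
xi_mc = rng.standard_normal(100_000)
surrogate = hermeval(xi_mc, coeffs)
print("surrogate MC mean:", surrogate.mean())
print("PCE mean coefficient (should be close):", coeffs[0])
```

Because Hermite polynomials are orthogonal under the Gaussian measure, the zeroth coefficient is itself the surrogate's mean, which the MC estimate recovers.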
Davis, Jonathan H.
2015-03-09
Future multi-tonne Direct Detection experiments will be sensitive to solar neutrino induced nuclear recoils, which form an irreducible background to light Dark Matter searches. Indeed, for masses around 6 GeV the spectra of neutrinos and Dark Matter are so similar that experiments are said to run into a neutrino floor, for which sensitivity increases only marginally with exposure past a certain cross section. In this work we show that this floor can be overcome using the different annual modulation expected from solar neutrinos and Dark Matter. Specifically, for cross sections below the neutrino floor the DM signal is observable through a phase shift and a smaller amplitude for the time-dependent event rate. This allows the exclusion power to be improved by up to an order of magnitude for large exposures. In addition we demonstrate that, using only spectral information, the neutrino floor exists over a wider mass range than has been previously shown, since the large uncertainties in the Dark Matter velocity distribution make the signal spectrum harder to distinguish from the neutrino background. However, for most velocity distributions it can still be surpassed using timing information, and so the neutrino floor is not an absolute limit on the sensitivity of Direct Detection experiments.
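The phase-shift argument can be illustrated with a toy annually modulated rate; the amplitudes and peak days below are illustrative assumptions (DM modulation conventionally peaks in early June, while the solar-neutrino rate peaks near perihelion in early January), not the paper's fitted values:

```python
import numpy as np

DAYS = 365.25

def rate(t_days, r0, amp, t0_days):
    """Time-dependent event rate with annual modulation (toy model)."""
    return r0 * (1.0 + amp * np.cos(2.0 * np.pi * (t_days - t0_days) / DAYS))

t = np.linspace(0.0, DAYS, 1000)
# Assumed illustrative numbers: DM peak near day 152 (early June),
# solar-neutrino peak near day 3 (perihelion), ~3.3% 1/r^2 amplitude.
dm = rate(t, r0=1.0, amp=0.05, t0_days=152.0)
nu = rate(t, r0=1.0, amp=0.033, t0_days=3.0)

# Even at equal mean rates, the half-year phase shift separates the two.
print("day of DM peak:", t[np.argmax(dm)])
print("day of nu peak:", t[np.argmax(nu)])
```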
Modeling of Uncertainties in Major Drivers in U.S. Electricity Markets: Preprint
Short, W.; Ferguson, T.; Leifman, M.
2006-09-01
This paper presents information on the Stochastic Energy Deployment System (SEDS) model. DOE and NREL are developing this new model, intended to address many of the shortcomings of the current suite of energy models. Once fully built, the salient qualities of SEDS will include full probabilistic treatment of the major uncertainties in national energy forecasts; code compactness for desktop application; user-friendly interface for a reasonably trained analyst; run-time within limits acceptable for quick-response analysis; choice of detailed or aggregate representations; and transparency of design, code, and assumptions. Moreover, SEDS development will be increasingly collaborative, as DOE and NREL will be coordinating with multiple national laboratories and other institutions, making SEDS nearly an 'open source' project. The collaboration will utilize the best expertise on specific sectors and problems, and also allow constant examination and review of the model. This paper outlines the rationale for this project and a description of its alpha version, as well as some example results. It also describes some of the expected development efforts in SEDS.
Optimal Control of Distributed Energy Resources and Demand Response under Uncertainty
Siddiqui, Afzal; Stadler, Michael; Marnay, Chris; Lai, Judy
2010-06-01
We take the perspective of a microgrid that has installed distributed energy resources (DER) in the form of distributed generation with combined heat and power applications. Given uncertain electricity and fuel prices, the microgrid minimizes its expected annual energy bill for various capacity sizes. In almost all cases, there is an economic and environmental advantage to using DER in conjunction with demand response (DR): the expected annualized energy bill is reduced by 9 percent while CO2 emissions decline by 25 percent. Furthermore, the microgrid's risk is diminished as DER may be deployed depending on prevailing market conditions and local demand. In order to test a policy measure that would place a weight on CO2 emissions, we use a multi-criteria objective function that minimizes a weighted average of expected costs and emissions. We find that greater emphasis on CO2 emissions has a beneficial environmental impact only if DR is available and enough reserve generation capacity exists. Finally, greater uncertainty results in higher expected costs and risk exposure, the effects of which may be mitigated by selecting a larger capacity.
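The weighted multi-criteria objective can be sketched as a small enumeration over capacity options and price scenarios; all numbers below are toy assumptions, not results from the study:

```python
import numpy as np

# Assumed illustrative data: three DER capacity options evaluated over
# three equally likely price scenarios.
cost = np.array([            # annual energy bill (k$), options x scenarios
    [500.0, 620.0, 700.0],   # option 0: no DER
    [480.0, 560.0, 610.0],   # option 1: small CHP unit
    [470.0, 545.0, 650.0],   # option 2: large CHP unit
])
emissions = np.array([900.0, 750.0, 700.0])  # t CO2/yr per option

def best_option(weight_co2):
    """Minimize a weighted average of expected cost and CO2 emissions."""
    expected_cost = cost.mean(axis=1)        # expectation over scenarios
    objective = (1.0 - weight_co2) * expected_cost + weight_co2 * emissions
    return int(np.argmin(objective))

for w in (0.0, 0.5, 1.0):
    print(f"CO2 weight {w}: choose option {best_option(w)}")
```

Sweeping the weight reproduces the qualitative behavior in the abstract: as the emphasis on CO2 grows, the chosen capacity shifts toward the lower-emission option.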
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Giunta, Anthony A.; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
McVeigh, J.; Lausten, M.; Eugeni, E.; Soni, A.
2010-11-01
The U.S. Department of Energy (DOE) Solar Energy Technologies Program (SETP) conducted a 2009 Technical Risk and Uncertainty Analysis to better assess its cost goals for concentrating solar power (CSP) and photovoltaic (PV) systems, and to potentially rebalance its R&D portfolio. This report details the methodology, schedule, and results of this technical risk and uncertainty analysis.
Reducing uncertainty in high-resolution sea ice models.
Peterson, Kara J.; Bochev, Pavel Blagoveston
2013-07-01
Arctic sea ice is an important component of the global climate system, reflecting a significant amount of solar radiation, insulating the ocean from the atmosphere and influencing ocean circulation by modifying the salinity of the upper ocean. The thickness and extent of Arctic sea ice have shown a significant decline in recent decades with implications for global climate as well as regional geopolitics. Increasing interest in exploration as well as climate feedback effects make predictive mathematical modeling of sea ice a task of tremendous practical import. Satellite data obtained over the last few decades have provided a wealth of information on sea ice motion and deformation. The data clearly show that ice deformation is focused along narrow linear features and this type of deformation is not well-represented in existing models. To improve sea ice dynamics we have incorporated an anisotropic rheology into the Los Alamos National Laboratory global sea ice model, CICE. Sensitivity analyses were performed using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) to determine the impact of material parameters on sea ice response functions. Two material strength parameters that exhibited the most significant impact on responses were further analyzed to evaluate their influence on quantitative comparisons between model output and data. The sensitivity analysis along with ten year model runs indicate that while the anisotropic rheology provides some benefit in velocity predictions, additional improvements are required to make this material model a viable alternative for global sea ice simulations.
Not Available
1993-08-01
Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.
Shaffer, C.J. [Science and Engineering Associates, Albuquerque, NM (United States); Miller, L.A.; Payne, A.C. Jr.
1992-10-01
A Level III Probabilistic Risk Assessment (PRA) has been performed for LaSalle Unit 2 under the Risk Methods Integration and Evaluation Program (RMIEP) and the Phenomenology and Risk Uncertainty Evaluation Program (PRUEP). This report documents the phenomenological calculations and sources of uncertainty in the calculations performed with MELCOR in support of the Level II portion of the PRA. These calculations are an integral part of the Level II analysis since they provide quantitative input to the Accident Progression Event Tree (APET) and Source Term Model (LASSOR). However, the uncertainty associated with the code results must be considered in the use of the results. The MELCOR calculations performed include four integrated calculations: (1) a high-pressure short-term station blackout, (2) a low-pressure short-term station blackout, (3) an intermediate-term station blackout, and (4) a long-term station blackout. Several sensitivity studies investigating the effect of variations in containment failure size and location, as well as hydrogen ignition concentration, are also documented.
M dwarf metallicities and giant planet occurrence: Ironing out uncertainties and systematics
Gaidos, Eric; Mann, Andrew W.
2014-08-10
Comparisons between the planet populations around solar-type stars and those orbiting M dwarfs shed light on the possible dependence of planet formation and evolution on stellar mass. However, such analyses must control for other factors, notably metallicity, a stellar parameter that strongly influences the occurrence of gas giant planets. We obtained infrared spectra of 121 M dwarf stars monitored by the California Planet Search and determined metallicities with an accuracy of 0.08 dex. The mean and standard deviation of the sample are 0.05 and 0.20 dex, respectively. We parameterized the metallicity dependence of the occurrence of giant planets on orbits with a period less than two years around solar-type stars and applied this to our M dwarf sample to estimate the expected number of giant planets. The number of detected planets (3) is lower than the predicted number (6.4), but the difference is not very significant (12% probability of finding as many or fewer planets). The three M dwarf planet hosts are not especially metal rich and the most likely value of the power-law index relating planet occurrence to metallicity is 1.06 dex per dex for M dwarfs compared to 1.80 for solar-type stars; this difference, however, is comparable to uncertainties. Giant planet occurrence around both types of stars allows, but does not necessarily require, a mass dependence of ~1 dex per dex. The actual planet-mass-metallicity relation may be complex, and elucidating it will require larger surveys like those to be conducted by ground-based infrared spectrographs and the Gaia space astrometry mission.
PUFF-III: A Code for Processing ENDF Uncertainty Data Into Multigroup Covariance Matrices
Dunn, M.E.
2000-06-01
PUFF-III is an extension of the previous PUFF-II code that was developed in the 1970s and early 1980s. The PUFF codes process the Evaluated Nuclear Data File (ENDF) covariance data and generate multigroup covariance matrices on a user-specified energy grid structure. Unlike its predecessor, PUFF-III can process the new ENDF/B-VI data formats. In particular, PUFF-III has the capability to process the spontaneous fission covariances for fission neutron multiplicity. With regard to the covariance data in File 33 of the ENDF system, PUFF-III has the capability to process short-range variance formats, as well as the lumped reaction covariance data formats that were introduced in ENDF/B-V. In addition to the new ENDF formats, a new directory feature is now available that allows the user to obtain a detailed directory of the uncertainty information in the data files without visually inspecting the ENDF data. Following the correlation matrix calculation, PUFF-III also evaluates the eigenvalues of each correlation matrix and tests each matrix for positive definiteness. Additional new features are discussed in the manual. PUFF-III has been developed for implementation in the AMPX code system, and several modifications were incorporated to improve memory allocation tasks and input/output operations. Consequently, the resulting code has a structure that is similar to other modules in the AMPX code system. With the release of PUFF-III, a new and improved covariance processing code is available to process ENDF covariance formats through Version VI.
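The eigenvalue screening step described above (evaluating the eigenvalues of each correlation matrix and testing for positive definiteness) can be sketched generically; this is an illustrative check in Python, not PUFF-III's implementation:

```python
import numpy as np

def check_correlation_matrix(c, tol=1e-10):
    """Eigenvalue screen in the spirit of the PUFF-III post-processing step:
    a valid correlation matrix must be symmetric with unit diagonal and have
    no eigenvalue below zero (within a small tolerance)."""
    c = np.asarray(c, dtype=float)
    assert np.allclose(c, c.T), "not symmetric"
    assert np.allclose(np.diag(c), 1.0), "diagonal must be 1"
    eigvals = np.linalg.eigvalsh(c)  # eigenvalues of a symmetric matrix
    return eigvals, bool(eigvals.min() >= -tol)

good = [[1.0, 0.5], [0.5, 1.0]]
# Internally inconsistent correlations: not positive semidefinite.
bad = [[1.0, 0.9, -0.9], [0.9, 1.0, 0.9], [-0.9, 0.9, 1.0]]
print(check_correlation_matrix(good)[1])
print(check_correlation_matrix(bad)[1])
```

A negative eigenvalue flags covariance data that no real random vector could produce, which is exactly why processing codes run this test before releasing multigroup matrices.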
1997-04-01
Probabilistic Seismic Hazard Analysis (PSHA) is a methodology that estimates the likelihood that various levels of earthquake-caused ground motion will be exceeded at a given location in a given future time period. Due to large uncertainties in all the geosciences data and in their modeling, multiple model interpretations are often possible. This leads to disagreement among experts, which in the past has led to disagreement on the selection of ground motion for design at a given site. In order to review the present state-of-the-art and improve on the overall stability of the PSHA process, the U.S. Nuclear Regulatory Commission (NRC), the U.S. Department of Energy (DOE), and the Electric Power Research Institute (EPRI) co-sponsored a project to provide methodological guidance on how to perform a PSHA. The project has been carried out by a seven-member Senior Seismic Hazard Analysis Committee (SSHAC) supported by a large number of other experts. The SSHAC reviewed past studies, including the Lawrence Livermore National Laboratory and the EPRI landmark PSHA studies of the 1980s, and examined ways to improve on the present state-of-the-art. The Committee's most important conclusion is that differences in PSHA results are due to procedural rather than technical differences. Thus, in addition to providing detailed documentation on state-of-the-art elements of a PSHA, this report provides a series of procedural recommendations. The role of experts is analyzed in detail. Two entities are formally defined, the Technical Integrator (TI) and the Technical Facilitator Integrator (TFI), to account for the various levels of complexity in the technical issues and different levels of effort needed in a given study.
Ensslin, Torsten A.; Frommert, Mona [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loeve and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
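The spirit of approach (iv), guessing the signal spectrum from the data and then Wiener-filtering, can be sketched in one dimension; the red signal spectrum, the noise level, and the FFT conventions below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512

# Synthetic Gaussian signal: white noise shaped to a red (1/k) amplitude
# spectrum (assumed), normalized to unit variance.
k = np.fft.rfftfreq(n)
shape = np.zeros_like(k)
shape[1:] = 1.0 / k[1:]
s = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * shape, n)
s /= s.std()

noise_var = 0.5
d = s + rng.normal(0.0, np.sqrt(noise_var), n)   # data = signal + white noise

# Guess the signal spectrum from the data itself, P_s ~ max(|d_k|^2 - P_n, 0),
# then apply the Wiener filter P_s / (P_s + P_n) mode by mode.
dk = np.fft.rfft(d)
p_noise = noise_var * n                          # flat noise power, numpy FFT convention
p_guess = np.maximum(np.abs(dk) ** 2 - p_noise, 0.0)
m = np.fft.irfft(p_guess / (p_guess + p_noise) * dk, n)

print("residual variance: filtered", np.var(m - s), "vs raw", np.var(d - s))
```

Noise-dominated modes are shrunk toward zero, which is also where the perception-threshold behavior discussed in the abstract arises: modes whose data variance falls below the noise estimate are discarded entirely.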
Scott, Michael J.; Daly, Don S.; Hathaway, John E.; Lansing, Carina S.; Liu, Ying; McJeon, Haewon C.; Moss, Richard H.; Patel, Pralit L.; Peterson, Marty J.; Rice, Jennie S.; Zhou, Yuyu
2015-10-01
In this paper, an integrated assessment model (IAM) uses a newly-developed Monte Carlo analysis capability to analyze the impacts of more aggressive U.S. residential and commercial building-energy codes and equipment standards on energy consumption and energy service costs at the state level, explicitly recognizing uncertainty in technology effectiveness and cost, socioeconomics, presence or absence of carbon prices, and climate impacts on energy demand. The paper finds that aggressive building-energy codes and equipment standards are an effective, cost-saving way to reduce energy consumption in buildings and greenhouse gas emissions in U.S. states. This conclusion is robust to significant uncertainties in population, economic activity, climate, carbon prices, and technology performance and costs.
Pace, D. C. Fisher, R. K.; Van Zeeland, M. A.; Pipes, R.
2014-11-15
New phase space mapping and uncertainty analysis of energetic ion loss data in the DIII-D tokamak provides experimental results that serve as valuable constraints in first-principles simulations of energetic ion transport. Beam ion losses are measured by the fast ion loss detector (FILD) diagnostic system consisting of two magnetic spectrometers placed independently along the outer wall. Monte Carlo simulations of mono-energetic and single-pitch ions reaching the FILDs are used to determine the expected uncertainty in the measurements. Modeling shows that the variation in gyrophase of 80 keV beam ions at the FILD aperture can produce an apparent measured energy signature spanning across 50-140 keV. These calculations compare favorably with experiments in which neutral beam prompt loss provides a well known energy and pitch distribution.
Domino, Stefan Paul; Figueroa, Victor G.; Romero, Vicente Jose; Glaze, David Jason; Sherman, Martin P.; Luketa-Hanlin, Anay Josephine
2009-12-01
The objective of this work is to perform an uncertainty quantification (UQ) and model validation analysis of simulations of tests in the cross-wind test facility (XTF) at Sandia National Laboratories. In these tests, a calorimeter was subjected to a fire and the thermal response was measured via thermocouples. The UQ and validation analysis pertains to the experimental and predicted thermal response of the calorimeter. The calculations were performed using Sierra/Fuego/Syrinx/Calore, an Advanced Simulation and Computing (ASC) code capable of predicting object thermal response to a fire environment. Based on the validation results at eight diversely representative TC locations on the calorimeter the predicted calorimeter temperatures effectively bound the experimental temperatures. This post-validates Sandia's first integrated use of fire modeling with thermal response modeling and associated uncertainty estimates in an abnormal-thermal QMU analysis.
Denman, Matthew R.; Brooks, Dusty Marie
2015-08-01
Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi unit 1 (1F1) accident progression with the MELCOR code. Volume I of the 1F1 UA discusses the physical modeling details and time history results of the UA. Volume II of the 1F1 UA discusses the statistical viewpoint. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). The goal of this work was to perform a focused evaluation of uncertainty in core damage progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, fraction of intact fuel, vessel lower head failure), and in doing so assess the applicability of traditional sensitivity analysis techniques.
Heath, Garvin; Warner, Ethan; Steinberg, Daniel; Brandt, Adam
2015-11-19
Presentation summarizing key findings of a Joint Institute for Strategic Energy Analysis Report at an Environmental Protection Agency workshop: 'Stakeholder Workshop on EPA GHG Data on Petroleum and Natural Gas Systems' on November 19, 2015. For additional information see the JISEA report, 'Estimating U.S. Methane Emissions from the Natural Gas Supply Chain: Approaches, Uncertainties, Current Estimates, and Future Studies' NREL/TP-6A50-62820.
Chen, J.C.; Chun, R.C.; Goudreau, G.L.; Maslenikov, O.R.; Johnson, J.J.
1984-01-01
This paper summarizes the results of the dynamic response analysis of the Zion reactor containment building using three different soil-structure interaction (SSI) analytical procedures which are: the substructure method, CLASSI; the equivalent linear finite element approach, ALUSH; and the nonlinear finite element procedure, DYNA3D. Uncertainties in analyzing a soil-structure system due to SSI analysis procedures were investigated. Responses at selected locations in the structure were compared through peak accelerations and response spectra.
Slayzak, S.J.; Ryan, J.P.
1998-04-01
As part of the US Department of Energy's Advanced Desiccant Technology Program, the National Renewable Energy Laboratory (NREL) is characterizing the state-of-the-art in desiccant dehumidifiers, the key component of desiccant cooling systems. The experimental data will provide industry and end users with independent performance evaluation and help researchers assess the energy savings potential of the technology. Accurate determination of humidity ratio is critical to this work and an understanding of the capabilities of the available instrumentation is central to its proper application. This paper compares the minimum theoretical random error in humidity ratio calculation for three common measurement methods to give a sense of the relative maximum accuracy possible for each method, assuming systematic errors can be made negligible. A series of experiments conducted also illustrates the capabilities of relative humidity sensors as compared to dewpoint sensors in measuring the grain depression of desiccant dehumidifiers. These tests support the results of the uncertainty analysis. At generally available instrument accuracies, uncertainty in calculated humidity ratio for dewpoint sensors is determined to be constant at approximately 2%. Wet-bulb sensors range between 2% and 6% above 10 g/kg (4%--15% below), and relative humidity sensors vary between 4% above 90% rh and 15% at 20% rh. Below 20% rh, uncertainty for rh sensors increases dramatically. Highest currently attainable accuracies bring dewpoint instruments down to 1% uncertainty, wet bulb to a range of 1%--3% above 10 g/kg (1.5%--8% below), and rh sensors between 1% and 5%.
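The dewpoint figure can be sketched by propagating sensor uncertainty through the humidity-ratio calculation; the Magnus saturation-pressure approximation and the ±0.3 °C sensor accuracy below are assumptions, and the result lands near the ~2% uncertainty the abstract quotes for dewpoint sensors:

```python
import math

def humidity_ratio(dewpoint_c, pressure_kpa=101.325):
    """Humidity ratio (kg water / kg dry air) from dewpoint temperature,
    using the Magnus saturation-pressure approximation (assumed constants)."""
    e_kpa = 0.6112 * math.exp(17.62 * dewpoint_c / (243.12 + dewpoint_c))
    return 0.622 * e_kpa / (pressure_kpa - e_kpa)

def humidity_uncertainty(dewpoint_c, sensor_unc_c):
    """Propagate dewpoint sensor uncertainty via a finite-difference sensitivity."""
    dw_dt = (humidity_ratio(dewpoint_c + 0.05)
             - humidity_ratio(dewpoint_c - 0.05)) / 0.1
    return dw_dt * sensor_unc_c

w = humidity_ratio(10.0)                 # roughly 7-8 g/kg at a 10 C dewpoint
u = humidity_uncertainty(10.0, sensor_unc_c=0.3)
print(f"w = {1000 * w:.2f} g/kg, uncertainty = {100 * u / w:.1f}% "
      f"for an assumed 0.3 C dewpoint sensor")
```

Because the relative sensitivity of saturation pressure to dewpoint is nearly constant over typical conditions, the relative uncertainty stays roughly flat, consistent with the "constant at approximately 2%" finding.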
Office of Environmental Management (EM)
15 November 2007 Generic Technical Issue Discussion on Sensitivity and Uncertainty Analysis and Model Support Attendees: Representatives from Department of Energy-Headquarters (DOE-HQ) and the U.S. Nuclear Regulatory Commission (NRC) met at the DOE offices in Germantown, Maryland on 15 November 2007. Representatives from Department of Energy-Savannah River (DOE-SR) and the South Carolina Department of Health and Environmental Control (SCDHEC) participated in the meeting via a teleconference link.
Gauntt, Randall O.; Mattie, Patrick D.
2016-01-01
Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi Unit 1 (1F1) accident progression with the MELCOR code. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). That study focused on reconstructing the accident progressions, as postulated by the limited plant data. This work focused on evaluating uncertainty in core damage progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, reactor damage state, fraction of intact fuel, vessel lower head failure). The primary intent of this study was to characterize the range of predicted damage states in the 1F1 reactor considering state-of-knowledge uncertainties associated with MELCOR modeling of core damage progression and to generate information that may be useful in informing the decommissioning activities that will be employed to defuel the damaged reactors at the Fukushima Daiichi Nuclear Power Plant. Additionally, core damage progression variability inherent in MELCOR modeling numerics is investigated.
Eça, L.; Hoekstra, M.
2014-04-01
This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well-known Grid Convergence Index. Examples of application of the procedure are included. Highlights: estimation of the numerical uncertainty of any integral or local flow quantity; least-squares fits to power series expansions to handle noisy data; excellent results obtained for manufactured solutions; consistent results obtained for practical CFD calculations; reduces to the well-known Grid Convergence Index for well-behaved data sets.
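A minimal sketch of the core idea (fitting phi_i = phi_0 + alpha * h_i^p in the least-squares sense and reading off the observed order and an error estimate) might look like the following; it uses only one expansion type and omits the paper's fit-selection and safety-factor logic:

```python
import math

def fit_power_series(h, phi, p_range=(0.5, 4.0), n_scan=2000):
    """Fit phi_i ~ phi0 + alpha * h_i**p in the least-squares sense by
    scanning the observed order p and solving the linear subproblem exactly."""
    best = None
    n = len(h)
    for k in range(n_scan + 1):
        p = p_range[0] + (p_range[1] - p_range[0]) * k / n_scan
        x = [hi ** p for hi in h]
        sx, sxx = sum(x), sum(xi * xi for xi in x)
        sy, sxy = sum(phi), sum(xi * yi for xi, yi in zip(x, phi))
        det = n * sxx - sx * sx
        if abs(det) < 1e-30:
            continue
        alpha = (n * sxy - sx * sy) / det
        phi0 = (sy - alpha * sx) / n
        res = sum((phi0 + alpha * xi - yi) ** 2 for xi, yi in zip(x, phi))
        if best is None or res < best[0]:
            best = (res, p, phi0, alpha)
    res, p, phi0, alpha = best
    std = math.sqrt(res / max(n - 2, 1))     # standard deviation of the fit
    return {"p": p, "phi0": phi0, "error_finest": alpha * min(h) ** p, "std": std}

# Manufactured data: phi = 1.0 + 0.5*h**2 (second-order convergence, no scatter)
h = [0.1, 0.05, 0.025, 0.0125]
phi = [1.0 + 0.5 * hi ** 2 for hi in h]
fit = fit_power_series(h, phi)
print(fit)
```

For this noise-free manufactured case the fit recovers the observed order p near 2 and the extrapolated value phi0 near 1; on noisy data the residual-based standard deviation would drive the safety factor described in the abstract.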
Younkin, J.M.; Rushton, J.E.
1980-02-05
A program is under way to design an effective International Atomic Energy Agency (IAEA) safeguards system that could be applied to the Portsmouth Gas Centrifuge Enrichment Plant (GCEP). This system would integrate nuclear material accountability with containment and surveillance. Uncertainties in material balances due to errors in the measurements of the declared uranium streams have been projected on a yearly basis for GCEP under such a system in a previous study. Because of the large uranium flows, the projected balance uncertainties were, in some cases, greater than the IAEA goal quantity of 75 kg of U-235 contained in low-enriched uranium. Therefore, it was decided to investigate the benefits of material balance periods of less than a year in order to improve the sensitivity and timeliness of the nuclear material accountability system. An analysis has been made of projected uranium measurement uncertainties for various short-term material balance periods. To simplify this analysis, only a material balance around the process area is considered and only the major UF{sub 6} stream measurements are included. That is, storage areas are not considered and uranium waste streams are ignored. It is also assumed that variations in the cascade inventory are negligible compared to other terms in the balance so that the results obtained in this study are independent of the absolute cascade inventory. This study is intended to provide information that will serve as the basis for the future design of a dynamic materials accounting component of the IAEA safeguards system for GCEP.
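The motivation for shorter balance periods can be illustrated with a toy propagation sketch. All numbers below (batch counts, batch size, error fractions) are hypothetical, and the model keeps only the study's simplifications: a single stream, random errors adding in quadrature, and a common calibration (systematic) error scaling with total throughput:

```python
import math

def sigma_muf(n_batches, batch_kg, rel_random=0.001, rel_systematic=0.0005):
    """1-sigma uncertainty (kg) of a material balance over n batches of a
    single stream: per-batch random errors add in quadrature, while a shared
    calibration error scales linearly with total throughput."""
    random_var = n_batches * (rel_random * batch_kg) ** 2
    systematic = n_batches * batch_kg * rel_systematic
    return math.sqrt(random_var + systematic ** 2)

annual_batches = 1200        # hypothetical number of cylinders per year
batch = 1000.0               # hypothetical kg U per cylinder
for months in (1, 3, 12):
    n = annual_batches * months // 12
    print(f"{months:2d}-month balance: sigma = {sigma_muf(n, batch):7.1f} kg U")
```

Because the systematic term grows linearly with throughput while the random term grows only with its square root, shortening the balance period shrinks the balance uncertainty roughly in proportion to the period, which is the effect the study exploits to improve sensitivity and timeliness.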
A joint analysis of Planck and BICEP2 B modes including dust polarization uncertainty
Mortonson, Michael J.; Seljak, Uroš E-mail: useljak@berkeley.edu
2014-10-01
We analyze BICEP2 and Planck data using a model that includes CMB lensing, gravity waves, and polarized dust. Recently published Planck dust polarization maps have highlighted the difficulty of estimating the amount of dust polarization in low intensity regions, suggesting that the polarization fractions have considerable uncertainties and may be significantly higher than previous predictions. In this paper, we start by assuming nothing about the dust polarization except for the power spectrum shape, which we take to be C{sub l}{sup BB,dust} ∝ l{sup -2.42}. The resulting joint BICEP2+Planck analysis favors solutions without gravity waves, and the upper limit on the tensor-to-scalar ratio is r<0.11, a slight improvement relative to the Planck analysis alone, which gives r<0.13 (95% c.l.). The estimated amplitude of the dust polarization power spectrum agrees with expectations for this field based on both HI column density and Planck polarization measurements at 353 GHz in the BICEP2 field. Including the latter constraint on the dust spectrum amplitude in our analysis improves the limit further to r<0.09, placing strong constraints on theories of inflation (e.g., models with r>0.14 are excluded with 99.5% confidence). We address the cross-correlation analysis of BICEP2 at 150 GHz with BICEP1 at 100 GHz as a test of foreground contamination. We find that the null hypothesis of dust and lensing with r=0 gives Δχ{sup 2}<2 relative to the hypothesis of no dust, so the frequency analysis does not strongly favor either model over the other. We also discuss how more accurate dust polarization maps may improve our constraints.
If the dust polarization is measured perfectly, the limit can reach r<0.05 (or the corresponding detection significance if the observed dust signal plus the expected lensing signal is below the BICEP2 observations), but this degrades quickly to almost no improvement if the dust calibration error is 20% or larger or if the dust maps are not processed through the BICEP2 pipeline, inducing sampling variance noise.
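The logic of a joint fit over tensor and dust amplitudes can be sketched with an entirely synthetic chi-square grid. Every number below (multipoles, templates, bandpower errors, amplitudes) is invented for illustration and has no connection to the actual BICEP2 or Planck likelihoods:

```python
import math

# Toy BB bandpower model: D_ell = r * T_tensor + A_dust * (ell/80)**-0.42 + D_lens
ells = [45, 75, 105, 135, 165, 195, 225, 255, 285]
T_tensor = [0.012 * math.exp(-((l - 80) / 60.0) ** 2) for l in ells]  # toy tensor template
D_lens = [1.5e-3 * (l / 100.0) ** 2 for l in ells]                    # toy lensing term
sigma = [5e-4] * len(ells)                                            # toy bandpower errors

def model(r, A):
    return [r * t + A * (l / 80.0) ** -0.42 + dl
            for l, t, dl in zip(ells, T_tensor, D_lens)]

# Synthetic "observed" data generated from dust only (A = 0.005, r = 0)
data = model(0.0, 0.005)

def chi2(r, A):
    return sum(((d - m) / s) ** 2 for d, m, s in zip(data, model(r, A), sigma))

best = min(((chi2(r / 100.0, A / 1000.0), r / 100.0, A / 1000.0)
            for r in range(0, 21) for A in range(0, 11)), key=lambda t: t[0])
print(f"best fit: chi2={best[0]:.3f}, r={best[1]:.2f}, A_dust={best[2]:.4f}")
```

On this noise-free toy data the grid search correctly attributes all the B-mode power to dust (r = 0), mirroring the paper's finding that allowing a free dust amplitude removes the preference for gravity waves.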
SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy
Liu, C; Kumarasiri, A; Chetvertkov, M; Gordon, J; Chetty, I; Siddiqui, F; Kim, J
2014-06-01
Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for the dose-of-the-day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2), were selected. Furthermore, a small-FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42mm more from the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the average numbers from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7mm per 1cm) away from the CBCT borders in the S/I direction. The errors (mean ± SD) at the superior and inferior borders were 0.8 ± 0.5mm and 3.0 ± 1.5mm respectively, and increased to 2.7 ± 2.2mm and 5.9 ± 1.9mm at 4.2cm away. The mean error within the CBCT borders was 1.16 ± 0.54mm. The overall errors within the 4.2cm expansion were 2.0 ± 1.2mm (sup) and 4.5 ± 1.6mm (inf). Conclusion: The overall error in the inf direction is larger due to larger, less predictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.
Alonso, Juan J.; Iaccarino, Gianluca
2013-08-25
The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. 
A solution to the long-time integration problem of spectral chaos approaches; 4. A rigorous methodology to account for aleatory and epistemic uncertainties, to emphasize the most important variables via dimension reduction and dimension-adaptive refinement, and to support fusion with experimental data using Bayesian inference; 5. The application of novel methodologies to time-dependent reliability studies in wind turbine applications including a number of efforts relating to the uncertainty quantification in vertical-axis wind turbine applications. In this report, we summarize all accomplishments in the project (during the time period specified) focusing on advances in UQ algorithms and deployment efforts to the wind turbine application area. Detailed publications in each of these areas have also been completed and are available from the respective conference proceedings and journals as detailed in a later section.
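A minimal example of the non-intrusive polynomial chaos idea mentioned above: project a function of a standard normal variable onto probabilists' Hermite polynomials by Monte Carlo, then read the mean and variance off the coefficients. The test function x^2 is chosen because its exact expansion is known (mean 1, variance 2); this is a one-variable sketch, not the project's multi-dimensional machinery:

```python
import math
import random

random.seed(42)

def hermite_e(k, x):
    """Probabilists' Hermite polynomial He_k(x) by the standard recurrence."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

def pce_coefficients(f, order=4, n_samples=100_000):
    """Non-intrusive PCE of f(xi), xi ~ N(0,1): c_k = E[f(xi) He_k(xi)] / k!,
    with the expectation estimated by plain Monte Carlo."""
    sums = [0.0] * (order + 1)
    for _ in range(n_samples):
        x = random.gauss(0.0, 1.0)
        fx = f(x)
        for k in range(order + 1):
            sums[k] += fx * hermite_e(k, x)
    return [sums[k] / n_samples / math.factorial(k) for k in range(order + 1)]

c = pce_coefficients(lambda x: x * x)      # exact PCE: x**2 = He_0 + He_2
mean = c[0]
var = sum(math.factorial(k) * c[k] ** 2 for k in range(1, len(c)))
print(f"mean={mean:.3f}  var={var:.3f}")   # true values: mean 1, variance 2
```

In practice the projection integrals are computed with quadrature or sparse grids rather than Monte Carlo; the orthogonality relation E[He_j He_k] = k! delta_jk is what lets moments be read directly from the coefficients.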
Meyer, Philip D.; Ye, Ming; Rockhold, Mark L.; Neuman, Shlomo P.; Cantrell, Kirk J.
2007-07-30
This report to the Nuclear Regulatory Commission (NRC) describes the development and application of a methodology to systematically and quantitatively assess predictive uncertainty in groundwater flow and transport modeling that considers the combined impact of hydrogeologic uncertainties associated with the conceptual-mathematical basis of a model, model parameters, and the scenario to which the model is applied. The methodology is based on an extension of a Maximum Likelihood implementation of Bayesian Model Averaging. Model uncertainty is represented by postulating a discrete set of alternative conceptual models for a site with associated prior model probabilities that reflect a belief about the relative plausibility of each model based on its apparent consistency with available knowledge and data. Posterior model probabilities are computed and parameter uncertainty is estimated by calibrating each model to observed system behavior; prior parameter estimates are optionally included. Scenario uncertainty is represented as a discrete set of alternative future conditions affecting boundary conditions, source/sink terms, or other aspects of the models, with associated prior scenario probabilities. A joint assessment of uncertainty results from combining model predictions computed under each scenario using as weights the posterior model and prior scenario probabilities. The uncertainty methodology was applied to modeling of groundwater flow and uranium transport at the Hanford Site 300 Area. Eight alternative models representing uncertainty in the hydrogeologic and geochemical properties as well as the temporal variability were considered. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. The scenario alternatives were implemented in the models through the boundary conditions.
Results demonstrate the feasibility of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow and transport modeling and illustrate the benefits of the methodology in providing better estimates of predictive uncertainty, quantitative results for use in assessing risk, and an improved understanding of the system behavior and the limitations of the models.
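The model-averaging step can be sketched as follows. The combination rule (posterior weights from calibration likelihoods, total variance from the law of total variance) follows standard Bayesian Model Averaging; the three models and all their numbers are hypothetical:

```python
import math

def bma_combine(models, prior=None):
    """Combine model predictions by likelihood-weighted Bayesian Model Averaging.
    Each model is (neg_log_likelihood_at_calibration, pred_mean, pred_var).
    BMA variance = within-model variance + between-model spread."""
    n = len(models)
    prior = prior or [1.0 / n] * n
    nlls = [m[0] for m in models]
    m0 = min(nlls)                      # subtract min NLL for numerical stability
    w = [p * math.exp(-(nll - m0)) for p, nll in zip(prior, nlls)]
    tot = sum(w)
    w = [wi / tot for wi in w]
    mean = sum(wi * m[1] for wi, m in zip(w, models))
    var = sum(wi * (m[2] + (m[1] - mean) ** 2) for wi, m in zip(w, models))
    return w, mean, var

# Three hypothetical conceptual models: (NLL, predicted concentration, variance)
models = [(10.0, 4.0, 0.5), (10.5, 6.0, 0.8), (12.0, 3.5, 0.4)]
weights, mean, var = bma_combine(models)
print([round(x, 3) for x in weights], round(mean, 3), round(var, 3))
```

The between-model term in the variance is what keeps the combined uncertainty honest: even a well-calibrated single model understates uncertainty when plausible alternatives predict different values.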
Gray, Genetha Anne; Watson, Jean-Paul; Silva Monroy, Cesar Augusto; Gramacy, Robert B.
2013-09-01
This report summarizes findings and results of the Quantifiably Secure Power Grid Operation, Management, and Evolution LDRD. The focus of the LDRD was to develop decision-support technologies to enable rational and quantifiable risk management for two key grid operational timescales: scheduling (day-ahead) and planning (month-to-year-ahead). Risk or resiliency metrics are foundational in this effort. The 2003 Northeast Blackout investigative report stressed the criticality of enforceable metrics for system resiliency - the grid's ability to satisfy demands subject to perturbation. However, we neither have well-defined risk metrics for addressing the pervasive uncertainties in a renewable energy era, nor decision-support tools for their enforcement, which severely impacts efforts to rationally improve grid security. For day-ahead unit commitment, decision-support tools must account for topological security constraints, loss-of-load (economic) costs, and supply and demand variability - especially given high renewables penetration. For long-term planning, transmission and generation expansion must ensure realized demand is satisfied for various projected technological, climate, and growth scenarios. The decision-support tools investigated in this project paid particular attention to tail-oriented risk metrics for explicitly addressing high-consequence events. Historically, decision-support tools for the grid consider expected cost minimization, largely ignoring risk and instead penalizing loss-of-load through artificial parameters. The technical focus of this work was the development of scalable solvers for enforcing risk metrics. Advanced stochastic programming solvers were developed to address generation and transmission expansion and unit commitment, minimizing cost subject to pre-specified risk thresholds. Particular attention was paid to renewables, where security critically depends on production and demand prediction accuracy.
To address this concern, powerful filtering techniques for spatio-temporal measurement assimilation were used to develop short-term predictive stochastic models. To achieve uncertainty-tolerant solutions, very large numbers of scenarios must be simultaneously considered. One focus of this work was investigating ways of reasonably reducing this number.
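A tail-oriented risk metric of the kind discussed above can be computed in a few lines; conditional value-at-risk (CVaR) over equally likely scenarios is shown here as one common choice, with hypothetical scenario costs:

```python
def cvar(costs, alpha=0.95):
    """Conditional value-at-risk: the expected cost in the worst (1 - alpha)
    fraction of equally probable scenarios."""
    ordered = sorted(costs, reverse=True)
    k = max(1, round(len(ordered) * (1.0 - alpha)))
    return sum(ordered[:k]) / k

# Hypothetical per-scenario operating costs ($M) for one day-ahead schedule;
# the two spikes represent high-consequence events (e.g. renewable shortfalls)
costs = [10, 11, 10, 12, 11, 10, 55, 11, 10, 12,
         11, 10, 80, 11, 10, 12, 11, 10, 11, 10]
print(f"expected cost = {sum(costs) / len(costs):.1f}  CVaR95 = {cvar(costs):.1f}")
```

The gap between the expected cost and CVaR is exactly what expected-cost minimization ignores; a CVaR-constrained stochastic program bounds the tail directly instead of penalizing loss-of-load through artificial parameters.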
J. Zhu; K. Pohlmann; J. Chapman; C. Russell; R.W.H. Carroll; D. Shafer
2009-09-10
Yucca Mountain (YM), Nevada, has been proposed by the U.S. Department of Energy as the nation's first permanent geologic repository for spent nuclear fuel and high-level radioactive waste. In this study, the potential for groundwater advective pathways from underground nuclear testing areas on the Nevada Test Site (NTS) to intercept the subsurface of the proposed land withdrawal area for the repository is investigated. The timeframe for advective travel and its uncertainty for possible radionuclide movement along these flow pathways is estimated as a result of effective-porosity value uncertainty for the hydrogeologic units (HGUs) along the flow paths. Furthermore, sensitivity analysis is conducted to determine the most influential HGUs on the advective radionuclide travel times from the NTS to the YM area. Groundwater pathways are obtained using the particle tracking package MODPATH and flow results from the Death Valley regional groundwater flow system (DVRFS) model developed by the U.S. Geological Survey (USGS). Effective-porosity values for HGUs along these pathways are one of several parameters that determine possible radionuclide travel times between the NTS and proposed YM withdrawal areas. Values and uncertainties of HGU porosities are quantified through evaluation of existing site effective-porosity data and expert professional judgment and are incorporated in the model through Monte Carlo simulations to estimate mean travel times and uncertainties. The simulations are based on two steady-state flow scenarios, the pre-pumping (the initial stress period of the DVRFS model), and the 1998 pumping (assuming steady-state conditions resulting from pumping in the last stress period of the DVRFS model) scenarios for the purpose of long-term prediction and monitoring. The pumping scenario accounts for groundwater withdrawal activities in the Amargosa Desert and other areas downgradient of YM.
Considering each detonation in a clustered region around Pahute Mesa (in the NTS operational areas 18, 19, 20, and 30) under the water table as a particle, the particles from the saturated-zone detonations were tracked forward using MODPATH to identify hydraulically downgradient groundwater discharge zones and to determine which detonations' particles will intercept the proposed YM withdrawal area. Of the 71 detonations in the saturated zone, flowpaths from 23 intercept the proposed YM withdrawal area under the pre-pumping scenario; under the 1998 pumping scenario, flowpaths from 55 of the 71 detonations intercept it. Three different effective-porosity data sets compiled in support of regional models of groundwater flow and contaminant transport developed for the NTS and the proposed YM repository are used. The results illustrate that the mean minimum travel time from underground nuclear testing areas on the NTS to the proposed YM repository area can vary from just over 700 to nearly 700,000 years, depending on the locations of the underground detonations, the pumping scenarios considered, and the effective-porosity value distributions used. Groundwater pumping scenarios are found to significantly affect minimum particle travel time from the NTS to the YM area by altering flowpath geometry. Pumping also attracts many additional groundwater flowpaths from the NTS to the YM area. The sensitivity analysis further illustrates that, for both the pre-pumping and 1998 pumping scenarios, the uncertainties in effective-porosity values for five of the 27 HGUs considered account for well over 90 percent of the effective-porosity-related travel time uncertainty for the flowpaths having the shortest mean travel times to YM.
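The Monte Carlo travel-time logic can be sketched as below. The segment lengths, Darcy fluxes, and porosity ranges are invented placeholders, not DVRFS values; the point is only how effective-porosity uncertainty propagates into travel-time uncertainty along a fixed flowpath:

```python
import random
import statistics

random.seed(1)

# Hypothetical flowpath segments: (length [m], Darcy flux q [m/yr], porosity range)
segments = [(15000.0, 0.05, (0.05, 0.25)),
            (25000.0, 0.02, (0.10, 0.35)),
            (10000.0, 0.03, (0.01, 0.15))]

def sample_travel_time():
    """One Monte Carlo realization: advective time t = sum(L * n_e / q),
    with each segment's effective porosity n_e drawn from its range."""
    return sum(L * random.uniform(*n_rng) / q for L, q, n_rng in segments)

times = sorted(sample_travel_time() for _ in range(20000))
mean_t = statistics.mean(times)
print(f"mean = {mean_t:,.0f} yr, "
      f"5th-95th percentile: {times[1000]:,.0f} - {times[19000]:,.0f} yr")
```

Because travel time is linear in each segment's porosity, the segments with the largest L/q ratio and widest porosity range dominate the output uncertainty, which is the kind of ranking the study's sensitivity analysis formalizes across its 27 HGUs.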
Forest, Chris E.; Barsugli, Joseph J.; Li, Wei
2015-02-20
Final report for DOE Project: DE-SC-0005399 -- Linking the uncertainty of low frequency variability in tropical forcing in regional climate change. The project utilizes multiple atmospheric general circulation models (AGCMs) to examine the regional climate sensitivity to tropical sea surface temperature forcing through a series of ensemble experiments. The overall goal for this work is to use the global teleconnection operator (GTO) as a metric to assess the impact of model structural differences on the uncertainties in regional climate variability.
Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Reinert, Rhonda K.; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.
2010-04-01
Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts of climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean or average risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.
Hagos, Samson M.; Leung, Lai-Yung Ruby; Xue, Yongkang; Boone, Aaron; de Sales, Fernando; Neupane, Naresh; Huang, Maoyi; Yoon, Jin-Ho
2014-02-22
Land use and land cover over Africa have changed substantially over the last sixty years and this change has been proposed to affect monsoon circulation and precipitation. This study examines the uncertainties on the effect of these changes on the African Monsoon system and Sahel precipitation using an ensemble of regional model simulations with different combinations of land surface and cumulus parameterization schemes. Although the magnitude of the response covers a broad range of values, most of the simulations show a decline in Sahel precipitation due to the expansion of pasture and croplands at the expense of trees and shrubs and an increase in surface air temperature.
Smith, P.J.; Eddings, E.G.; Ring, T.; Thornock, J.; Draper, T.; Isaac, B.; Rezeai, D.; Toth, P.; Wu, Y.; Kelly, K.
2014-08-01
The objective of this task is to produce predictive capability with quantified uncertainty bounds for the heat flux in commercial-scale, tangentially fired, oxy-coal boilers. Validation data came from the Alstom Boiler Simulation Facility (BSF) for tangentially fired, oxy-coal operation. This task brings together experimental data collected under Alstom's DOE project for measuring oxy-firing performance parameters in the BSF with this University of Utah project for large eddy simulation (LES) and validation/uncertainty quantification (V/UQ). The Utah work includes V/UQ with measurements in the single-burner facility where advanced strategies for O2 injection can be more easily controlled and data more easily obtained. Highlights of the work include: • Simulations of Alstom's 15 megawatt (MW) BSF, exploring the uncertainty in thermal boundary conditions. A V/UQ analysis showed consistency between experimental results and simulation results, identifying uncertainty bounds on the quantities of interest for this system (Subtask 9.1) • A simulation study of the University of Utah's oxy-fuel combustor (OFC) focused on heat flux (Subtask 9.2). A V/UQ analysis was used to show consistency between experimental and simulation results. • Measurement of heat flux and temperature with new optical diagnostic techniques and comparison with conventional measurements (Subtask 9.3). Various optical diagnostics systems were created to provide experimental data to the simulation team. The final configuration utilized a mid-wave infrared (MWIR) camera to measure heat flux and temperature, which was synchronized with a high-speed, visible camera to utilize two-color pyrometry to measure temperature and soot concentration. • Collection of heat flux and temperature measurements in the University of Utah's OFC for use in subtasks 9.2 and 9.3 (Subtask 9.4). Several replicates were carried out to better assess the experimental error.
Experiments were specifically designed for the generation of high-fidelity data from a turbulent oxy-coal flame for the validation of oxy-coal simulation models. Experiments were also conducted on the OFC to determine heat flux profiles using advanced strategies for O2 injection, which is important when considering advanced O2 injection in retrofit configurations.
Anderson, D.R.; Trauth, K.M. ); Hora, S.C. )
1991-01-01
Iterative, annual performance-assessment calculations are being performed for the Waste Isolation Pilot Plant (WIPP), a planned underground repository in southeastern New Mexico, USA, for the disposal of transuranic waste. The performance-assessment calculations estimate the long-term radionuclide releases from the disposal system to the accessible environment. Because direct experimental data in some areas are presently of insufficient quantity to form the basis for the required distributions, expert judgment was used to estimate the concentrations of specific radionuclides in a brine exiting a repository room or drift as it migrates up an intruding borehole, and also the distribution coefficients that describe the retardation of radionuclides in the overlying Culebra Dolomite. The variables representing these concentrations and coefficients have been shown by 1990 sensitivity analyses to be among the set of parameters making the greatest contribution to the uncertainty in WIPP performance-assessment predictions. Utilizing available information, the experts (one expert panel addressed concentrations and a second panel addressed retardation) developed an understanding of the problem and were formally elicited to obtain probability distributions that characterize the uncertainty in fixed, but unknown, quantities. The probability distributions developed by the experts are being incorporated into the 1991 performance-assessment calculations. 16 refs., 4 tabs.
Griffin, Joshua D.; Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
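As an illustration of the sampling-based uncertainty quantification listed above, here is a from-scratch Latin hypercube sketch. This is not DAKOTA code and does not use DAKOTA's input syntax; the toy response f(x, y) = x^2 + y is an assumption for demonstration:

```python
import random

random.seed(0)

def latin_hypercube(n_samples, bounds):
    """Latin hypercube sample over a box: each variable's range is split into
    n_samples equal strata, one point is drawn per stratum, and the strata
    are shuffled independently per dimension."""
    cols = []
    for lo, hi in bounds:
        strata = [(k + random.random()) / n_samples for k in range(n_samples)]
        random.shuffle(strata)
        cols.append([lo + s * (hi - lo) for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

# Propagate uncertainty through a toy response: f(x, y) = x**2 + y
samples = latin_hypercube(1000, [(-1.0, 1.0), (0.0, 2.0)])
f = [x * x + y for x, y in samples]
print(f"mean = {sum(f) / len(f):.3f}")   # analytic mean: E[x^2] + E[y] = 4/3
```

The per-dimension stratification is what makes Latin hypercube sampling converge faster than plain Monte Carlo for smooth responses; production tools like DAKOTA add correlation control and many distribution types on top of this basic scheme.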
Bao, Jie; Hou, Zhangshuan; Fang, Yilin; Ren, Huiying; Lin, Guang
2015-06-01
A series of numerical test cases reflecting broad and realistic ranges of geological formation and preexisting fault properties was developed to systematically evaluate the impacts of preexisting faults on pressure buildup and ground surface uplift during CO2 injection. Numerical test cases were conducted using a coupled hydro-geomechanical simulator, eSTOMP (extreme-scale Subsurface Transport over Multiple Phases). For efficient sensitivity analysis and reliable construction of a reduced-order model, a quasi-Monte Carlo sampling method was applied to effectively sample a high-dimensional input parameter space to explore uncertainties associated with hydrologic, geologic, and geomechanical properties. The uncertainty quantification results show that the impacts on geomechanical response from the pre-existing faults mainly depend on reservoir and fault permeability. When the fault permeability is two to three orders of magnitude smaller than the reservoir permeability, the fault can be considered as an impermeable block that resists fluid transport in the reservoir, which causes pressure increase near the fault. When the fault permeability is close to the reservoir permeability, or higher than 10?? m in this study, the fault can be considered as a conduit that penetrates the caprock, connecting the fluid flow between the reservoir and the upper rock.
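A quasi-Monte Carlo sample of the kind described can be generated with a Halton sequence; the sketch below is a generic low-discrepancy generator, not the sampling code used in the study:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    f, r = 1.0, 0.0
    i = index
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_sequence(n, primes=(2, 3, 5, 7, 11)):
    """First n points of a d-dimensional Halton sequence (d = len(primes))."""
    return [tuple(halton(i, b) for b in primes) for i in range(1, n + 1)]

pts = halton_sequence(1000)
# Low-discrepancy points cover each axis evenly: per-dimension means -> 0.5
means = [sum(p[d] for p in pts) / len(pts) for d in range(5)]
print([round(m, 3) for m in means])
```

In genuinely high-dimensional parameter spaces, plain Halton points in consecutive prime bases develop correlations, so scrambled or leaped variants (or Sobol sequences) are typically preferred in practice.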
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.
2011-12-01
The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.
Demartin, Federico; Mariani, Elisa [Dipartimento di Fisica, Universita di Milano, Via Celoria 16, I-20133 Milano (Italy); Forte, Stefano; Vicini, Alessandro [Dipartimento di Fisica, Universita di Milano, Via Celoria 16, I-20133 Milano (Italy); INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Rojo, Juan [INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy)
2010-07-01
We present a systematic study of uncertainties due to parton distributions (PDFs) and the strong coupling on the gluon-fusion production cross section of the standard model Higgs at the Tevatron and LHC colliders. We compare procedures and results when three recent sets of PDFs are used, CTEQ6.6, MSTW08, and NNPDF1.2, and we discuss specifically the way PDF and strong coupling uncertainties are combined. We find that results obtained from different PDF sets are in reasonable agreement if a common value of the strong coupling is adopted. We show that the addition in quadrature of PDF and {alpha}{sub s} uncertainties provides an adequate approximation to the full result with exact error propagation. We discuss a simple recipe to determine a conservative PDF+{alpha}{sub s} uncertainty from available global parton sets, and we use it to estimate this uncertainty on the given process to be about 10% at the Tevatron and 5% at the LHC for a light Higgs.
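The addition in quadrature described above can be written as a one-line combination of independent uncertainty components. The 4% and 3% inputs below are illustrative values, not the paper's results.

```python
# Sketch of combining independent PDF and alpha_s uncertainties in quadrature.
import math

def combine_in_quadrature(delta_pdf, delta_alphas):
    """Total relative uncertainty from independent components."""
    return math.sqrt(delta_pdf ** 2 + delta_alphas ** 2)

# e.g. a 4% PDF uncertainty and a 3% alpha_s uncertainty combine to about 5%
print(combine_in_quadrature(0.04, 0.03))  # ~0.05
```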
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hagos, Samson M.; Leung, Lai-Yung Ruby; Xue, Yongkang; Boone, Aaron; de Sales, Fernando; Neupane, Naresh; Huang, Maoyi; Yoon, Jin-Ho
2014-02-22
Land use and land cover over Africa have changed substantially over the last sixty years, and these changes have been proposed to affect monsoon circulation and precipitation. This study examines the uncertainties in the effect of these changes on the African monsoon system and Sahel precipitation using an ensemble of regional model simulations with different combinations of land surface and cumulus parameterization schemes. Although the magnitude of the response covers a broad range of values, most of the simulations show a decline in Sahel precipitation due to the expansion of pasture and croplands at the expense of trees and shrubs, and an increase in surface air temperature.
Dougherty, T.; Maciuca, C.; McAssey, E.V. Jr.; Reddy, D.G.; Yang, B.W.
1990-05-01
In June 1988, Savannah River Laboratory requested that the Heat Transfer Research Facility modify the flow excursion program, which had been in progress since November 1987, to include testing of single tubes in vertical down-flow over a range of length-to-diameter (L/D) ratios of 100 to 500. The impetus for the request was the desire to obtain experimental data as quickly as possible for code development work. In July 1988, HTRF submitted a proposal to SRL indicating that by modifying a facility already under construction the data could be obtained within three to four months. In January 1990, HTRF issued report CU-HTRF-T4, part 1. This report contained the technical discussion of the results from the single tube uniformly heated tests. The present report is part 2 of CU-HTRF-T4, which contains further discussion of the uncertainty analysis and the complete set of data.
Redus, K. S.; Hampshire, G. J.; Patterson, J. E.; Perkins, A. B.
2002-02-25
The Waste Acceptance Criteria Forecasting and Analysis Capability System (WACFACS) is used to plan for, evaluate, and control the supply of approximately 1.8 million yd{sup 3} of low-level radioactive, TSCA, and RCRA hazardous wastes from over 60 environmental restoration projects from FY02 through FY10 to the Oak Ridge Environmental Management Waste Management Facility (EMWMF). WACFACS is a validated decision support tool that propagates uncertainties inherent in site-related contaminant characterization data, disposition volumes during EMWMF operations, and project schedules to quantitatively determine the confidence that risk-based performance standards are met. Trade-offs in schedule, volumes of waste lots, and allowable concentrations of contaminants are performed to optimize project waste disposition, regulatory compliance, and disposal cell management.
R. W. Youngblood
2010-10-01
The concept of “margin” has a long history in nuclear licensing and in the codification of good engineering practices. However, some traditional applications of “margin” have been carried out for surrogate scenarios (such as design basis scenarios), without regard to the actual frequencies of those scenarios, and have been carried out in a systematically conservative fashion. This means that the effectiveness of the application of the margin concept is determined in part by the original choice of surrogates, and is limited in any case by the degree of conservatism imposed on the evaluation. In the RISMC project, which is part of the Department of Energy’s “Light Water Reactor Sustainability Program” (LWRSP), we are developing a risk-informed characterization of safety margin. Beginning with the traditional discussion of “margin” in terms of a “load” (a physical challenge to system or component function) and a “capacity” (the capability of that system or component to accommodate the challenge), we are developing the capability to characterize probabilistic load and capacity spectra, reflecting both aleatory and epistemic uncertainty in system response. For example, the probabilistic load spectrum will reflect the frequency of challenges of a particular severity. Such a characterization is required if decision-making is to be informed optimally. However, in order to enable the quantification of probabilistic load spectra, existing analysis capability needs to be extended. Accordingly, the INL is working on a next-generation safety analysis capability whose design will allow for much more efficient parameter uncertainty analysis, and will enable a much better integration of reliability-related and phenomenology-related aspects of margin.
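The probabilistic load/capacity framing above can be sketched simply: given samples of load and capacity, one risk metric is the probability that load exceeds capacity. The normal distributions and parameters below are illustrative assumptions, not RISMC results.

```python
# Hedged sketch of a probabilistic margin calculation: P(load > capacity)
# estimated by sampling (illustrative distributions, not RISMC's).
import random

random.seed(0)
n = 20000
loads = [random.gauss(100.0, 15.0) for _ in range(n)]  # challenge severity
caps = [random.gauss(150.0, 10.0) for _ in range(n)]   # component capability

# Empirical failure probability: fraction of paired samples where load wins
p_fail = sum(l > c for l, c in zip(loads, caps)) / n
print(p_fail)  # small, since mean capacity well exceeds mean load
```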
Deng, Hailin; Dai, Zhenxue; Jiao, Zunsheng; Stauffer, Philip H.; Surdam, Ronald C.
2011-01-01
Many geological, geochemical, geomechanical and hydrogeological factors control CO{sub 2} storage in the subsurface. Among them, heterogeneity in the saline aquifer can seriously influence the design of injection wells, the CO{sub 2} injection rate, CO{sub 2} plume migration, storage capacity, and potential leakage and risk assessment. This study applies indicator geostatistics, transition probability and a Markov chain model at the Rock Springs Uplift, Wyoming, generating facies-based heterogeneous fields for porosity and permeability in the target saline aquifer (Pennsylvanian Weber sandstone) and surrounding rocks (Phosphoria, Madison and cap-rock Chugwater). The multiphase flow simulator FEHM is then used to model injection of CO{sub 2} into the target saline aquifer accounting for field-scale heterogeneity. The results reveal that (1) CO{sub 2} injection rates in different injection wells change significantly with local permeability distributions; (2) brine production rates in different pumping wells are also significantly impacted by the spatial heterogeneity in permeability; (3) liquid pressure evolution during and after CO{sub 2} injection in the saline aquifer varies greatly across different realizations of the random permeability fields, and this has potentially important effects on hydraulic fracturing of the reservoir rock, reactivation of pre-existing faults and the integrity of the cap-rock; (4) the CO{sub 2} storage capacity estimate for the Rock Springs Uplift is 6614 {+-} 256 Mt at a 95% confidence interval, which is about 36% of the previous estimate based on a homogeneous and isotropic storage formation; (5) density profiles show that the density of injected CO{sub 2} below 3 km is close to that of the ambient brine for the given geothermal gradient and brine concentration, which indicates the CO{sub 2} plume can sink to depth before reaching thermal equilibrium with the brine.
Finally, we present uncertainty analysis of CO{sub 2} leakage into overlying formations due to heterogeneity in both the target saline aquifer and surrounding formations. This uncertainty in leakage will be used to feed into risk assessment modeling.
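A confidence interval of the kind quoted above (6614 {+-} 256 Mt at 95% confidence) can be formed from an ensemble of realizations. The sketch below uses synthetic capacity numbers, not the actual Rock Springs Uplift ensemble.

```python
# Hedged sketch: a 95% confidence interval on mean storage capacity from an
# ensemble of realizations (synthetic data; illustrative only).
import random, statistics

random.seed(1)
capacities = [random.gauss(6614.0, 600.0) for _ in range(200)]  # Mt, synthetic

mean = statistics.mean(capacities)
# Normal-theory half-width of the 95% CI on the mean
half_width = 1.96 * statistics.stdev(capacities) / len(capacities) ** 0.5
print(f"{mean:.0f} +/- {half_width:.0f} Mt at 95% confidence")
```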
Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles
2014-03-01
A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities, such as hydrogen generation or release of fission products, under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter was observed when results were plotted against any of the uncertain parameters, with no parameter manifesting dominant effects on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, further reducing the predicted hydrogen uncertainty would require reducing all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled, and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point-value estimates. Probabilistic analyses such as the one demonstrated in this work are important for properly characterizing the response of complex systems such as severe accident progression in nuclear power plants.
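The sampling setup described (40 Latin hypercube realizations over 10 uncertain parameters) can be sketched with a minimal stdlib implementation. The MELCOR coupling and the actual parameter distributions are not reproduced; samples are drawn on the unit hypercube and would be mapped through each parameter's distribution in practice.

```python
# Hedged sketch of Latin Hypercube Sampling: one point per equal-probability
# stratum per dimension, with strata randomly paired across dimensions.
import random

def latin_hypercube(n, d, seed=0):
    """Return n stratified samples in [0, 1)^d."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        # One draw in each of the n strata, then shuffle the stratum order
        pts = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(pts)
        cols.append(pts)
    return [[cols[j][i] for j in range(d)] for i in range(n)]

sample = latin_hypercube(40, 10)  # 40 realizations of 10 parameters
print(len(sample), len(sample[0]))
```

Each column is guaranteed to place exactly one sample in each of the 40 equal-probability bins, which is what distinguishes LHS from plain Monte Carlo at small sample sizes.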
Swiler, Laura Painton; Eldred, Michael Scott
2009-09-01
This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro
2015-05-01
This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).
Aad, G.
2015-01-15
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton–proton collision data with a centre-of-mass energy of \(\sqrt{s}=7\) TeV corresponding to an integrated luminosity of \(4.7\,\text{fb}^{-1}\). Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-\(k_{t}\) algorithm with distance parameters \(R=0.4\) or \(R=0.6\), and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transverse momentum balance between a jet and a reference object such as a photon or a \(Z\) boson, for \(20 \le p_{\mathrm{T}}^{\mathrm{jet}} < 1000~\mathrm{GeV}\) and pseudorapidities \(|\eta| < 4.5\). The effect of multiple proton–proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty of less than 1% is found in the central calorimeter region (\(|\eta| < 1.2\)) for jets with \(55 \le p_{\mathrm{T}}^{\mathrm{jet}} < 500~\mathrm{GeV}\). For central jets at lower \(p_{\mathrm{T}}\), the uncertainty is about 3%. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton–proton collisions and test-beam data, which also provide the estimate for \(p_{\mathrm{T}}^{\mathrm{jet}} > 1\) TeV. The calibration of forward jets is derived from dijet \(p_{\mathrm{T}}\) balance measurements. The resulting uncertainty reaches its largest value of 6% for low-\(p_{\mathrm{T}}\) jets at \(|\eta| = 4.5\).
In addition, JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light quarks or gluons, are also discussed. The magnitude of these uncertainties depends on the event sample used in a given physics analysis, but typically amounts to 0.5–3%.
Mathews, Grant J. [Center for Astrophysics, Department of Physics, University of Notre Dame, Notre Dame, IN 46556 (United States); Hidaka, Jun; Kajino, Toshitaka; Suzuki, Jyutaro [National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan)
2014-08-01
Direct measurements of the core collapse supernova rate (R{sub SN}) in the redshift range 0 ≤ z ≤ 1 appear to be about a factor of two smaller than the rate inferred from the measured cosmic massive star formation rate (SFR). This discrepancy would imply that about one-half of the massive stars that have been born in the local observed comoving volume did not explode as luminous supernovae. In this work, we explore the possibility that one could clarify the source of this 'supernova rate problem' by detecting the energy spectrum of supernova relic neutrinos with a next generation 10{sup 6} ton water Čerenkov detector like Hyper-Kamiokande. First, we re-examine the supernova rate problem. We make a conservative alternative compilation of the measured SFR data over the redshift range 0 ≤ z ≤ 7. We show that by only including published SFR data for which the dust obscuration has been directly determined, the ratio of the observed massive SFR to the observed supernova rate R{sub SN} has large uncertainties, ∼1.8{sub -0.6}{sup +1.6}, and is statistically consistent with no supernova rate problem. If we further consider that a significant fraction of massive stars will end their lives as faint ONeMg SNe or as failed SNe leading to a black hole remnant, then the ratio reduces to ∼1.1{sub -0.4}{sup +1.0} and the rate problem is essentially solved. We next examine the prospects for detecting this solution to the supernova rate problem. We first study the sources of uncertainty involved in the theoretical estimates of the neutrino detection rate and analyze whether the spectrum of relic neutrinos can be used to independently identify the existence of a supernova rate problem and its source. We consider an ensemble of published and unpublished core collapse supernova simulation models to estimate the uncertainties in the anticipated neutrino luminosities and temperatures.
We illustrate how the spectrum of detector events might be used to establish the average neutrino temperature and constrain SN models. We also consider supernova ν-process nucleosynthesis to deduce constraints on the temperature of the various neutrino flavors. We study the effects of neutrino oscillations on the detected neutrino energy spectrum and also show that one might distinguish the equation of state (EoS) as well as the cause of the possible missing luminous supernovae from the detection of supernova relic neutrinos. We also analyze a possible enhanced contribution from failed supernovae leading to a black hole remnant as a solution to the supernova rate problem. We conclude that indeed it might be possible (though difficult) to measure the neutrino temperature, neutrino oscillations, and the EoS and confirm this source of missing luminous supernovae by the detection of the spectrum of relic neutrinos.
Bao, Jie; Hou, Zhangshuan; Fang, Yilin; Ren, Huiying; Lin, Guang
2013-08-12
A series of numerical test cases reflecting broad and realistic ranges of geological formation properties was developed to systematically evaluate and compare the impacts of those properties on geomechanical responses to CO2 injection. A coupled hydro-geomechanical subsurface transport simulator, STOMP (Subsurface Transport over Multiple Phases), was adopted to simulate the CO2 migration process and the geomechanical behaviors of the surrounding geological formations. A quasi-Monte Carlo sampling method was applied to efficiently sample a high-dimensional parameter space consisting of injection rate and 14 subsurface formation properties, including porosity, permeability, entry pressure, irreducible gas and aqueous saturation, Young's modulus, and Poisson's ratio for both reservoir and caprock. Generalized cross-validation and analysis-of-variance methods were used to quantitatively measure the significance of the 15 input parameters. Reservoir porosity, permeability, and injection rate were found to be among the most significant factors affecting the geomechanical responses to CO2 injection. We used a quadrature generalized linear model to build a reduced-order model that can estimate the geomechanical response instantly instead of running computationally expensive numerical simulations. The injection pressure and ground surface displacement are often monitored for injection well safety and are believed to partially reflect the risk of fault reactivation and seismicity. Based on the reduced-order model and response surface, the input parameters can be screened to control the risk of induced seismicity. Because of the uncertainty in the subsurface structural properties, a numerical simulation based on a single sample, or a few samples, does not accurately estimate the geomechanical response at an actual injection site.
Risk probabilities can instead be used to evaluate and predict injection risk when there are large uncertainties in the subsurface properties and operating conditions.
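A reduced-order response surface of the kind described can be sketched as a quadratic polynomial fit to sampled outputs. Ordinary least squares below stands in for the paper's quadrature generalized linear model, and the "simulator" is a synthetic function invented for the example.

```python
# Hedged sketch of a quadratic reduced-order model fit to sampled simulator
# outputs (least-squares stand-in; synthetic response, not STOMP).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(200, 2))      # e.g. porosity, scaled log-perm
y = 1.0 + 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2  # stand-in "simulator" output

# Design matrix with constant, linear, and pure quadratic terms
A = np.column_stack([np.ones(len(x)), x, x ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))  # recovers [1, 2, 0, 0, 0.5] for the synthetic data
```

Once fitted, evaluating `A @ coef` replaces a full simulation run, which is what makes screening and risk mapping over many parameter combinations affordable.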
SU-E-J-125: Classification of CBCT Noises in Terms of Their Contribution to Proton Range Uncertainty
Brousmiche, S; Orban de Xivry, J; Macq, B; Seco, J
2014-06-01
Purpose: This study assesses the potential use of CBCT images in adaptive protontherapy by estimating the contribution of the main sources of noise and calibration errors to the proton range uncertainty. Methods: Measurements intended to highlight each particular source have been achieved by adapting either the testbench configuration, e.g. use of filtration, fan-beam collimation, beam stop arrays, phantoms and detector reset light, or the sequence of correction algorithms including water precorrection. Additional Monte-Carlo simulations have been performed to complement these measurements, especially for the beam hardening and the scatter cases. Simulations of proton beam penetration through the resulting images have then been carried out to quantify the range change due to these effects. The particular case of a brain irradiation is considered mainly because of the multiple effects that the skull bones have on the internal soft tissues. Results: At the top of the range error sources is the undercorrection of scatter. Its influence has been analyzed from a comparison of fan-beam and full axial FOV acquisitions. In this case, large range errors of about 12 mm can be reached if the assumption is made that the scatter has only a constant contribution over the projection images. Even the detector lag, which a priori induces a much smaller effect, has been shown to contribute up to 2 mm to the overall error if its correction only aims at reducing the skin artefact. This last result can partially be understood by the larger interface between tissues and bones inside the skull. Conclusion: This study has laid the basis for a more systematic analysis of the effect of CBCT noise on range uncertainties based on a combination of measurements, simulations and theoretical results. With our method, even more subtle effects such as the cone-beam artifact or the detector lag can be assessed.
SBR and JOR are financed by iMagX, a public-private partnership between the Walloon Region of Belgium and IBA, under convention #1217662.
Uncertainty Quantification and Management for Multi-scale Nuclear Materials Modeling
McDowell, David; Deo, Chaitanya; Zhu, Ting; Wang, Yan
2015-10-21
Understanding and improving microstructural mechanical stability in metals and alloys is central to the development of high-strength and high-ductility materials for cladding and core structures in advanced fast reactors. Design and enhancement of radiation-induced damage tolerant alloys are facilitated by a better understanding of the connection of various unit processes to collective responses in a multiscale model chain, including: dislocation nucleation, absorption and desorption at interfaces; vacancy production; radiation-induced segregation of Cr and Ni at defect clusters (point defect sinks) in BCC Fe-Cr ferritic/martensitic steels; investigation of the interaction of interstitials and vacancies with impurities (V, Nb, Ta, Mo, W, Al, Si, P, S); time evolution of swelling (cluster growth) phenomena of irradiated materials; and energetics and kinetics of dislocation bypass of defects formed by interstitial clustering and formation of prismatic loops, informing statistical models of continuum character with regard to processes of dislocation glide, vacancy agglomeration and swelling, climb and cross slip.
Lawson, Matthew; Debusschere, Bert J.; Najm, Habib N.; Sargsyan, Khachik; Frank, Jonathan H.
2010-09-01
Recent advances in high frame rate complementary metal-oxide-semiconductor (CMOS) cameras coupled with high repetition rate lasers have enabled laser-based imaging measurements of the temporal evolution of turbulent reacting flows. This measurement capability provides new opportunities for understanding the dynamics of turbulence-chemistry interactions, which is necessary for developing predictive simulations of turbulent combustion. However, quantitative imaging measurements using high frame rate CMOS cameras require careful characterization of their noise, non-linear response, and variations in this response from pixel to pixel. We develop a noise model and calibration tools to mitigate these problems and to enable quantitative use of CMOS cameras. We have demonstrated proof of principle for image de-noising using both wavelet methods and Bayesian inference. The results offer new approaches for quantitative interpretation of imaging measurements from noisy data acquired with non-linear detectors. These approaches are potentially useful in many areas of scientific research that rely on quantitative imaging measurements.
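Per-pixel calibration of a detector's response, one ingredient of the characterization described above, can be sketched as a per-pixel linear fit against known exposures. The 2×2 synthetic "camera" and its gains and offsets are purely illustrative; the authors' actual noise model and Bayesian de-noising are not reproduced here.

```python
# Hedged sketch: fit each pixel's response to known exposures, then invert the
# fit to correct raw counts (synthetic linear detector; illustrative only).
import numpy as np

exposures = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
# Synthetic 2x2-pixel detector with per-pixel gain and offset
gain = np.array([[1.0, 1.2], [0.9, 1.1]])
offset = np.array([[5.0, 3.0], [4.0, 6.0]])
counts = gain[None] * exposures[:, None, None] + offset[None]  # (5, 2, 2)

# Per-pixel linear fit: one (slope, intercept) pair per flattened pixel
g_fit = np.polyfit(exposures, counts.reshape(5, -1), 1)  # shape (2, 4)
corrected = (counts.reshape(5, -1) - g_fit[1]) / g_fit[0]
print(np.allclose(corrected, exposures[:, None]))  # True
```

A real sensor would need a nonlinear (e.g. polynomial) response model and a noise term per pixel, but the fit-and-invert structure is the same.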
Edwards, T. B.
2013-03-14
The Savannah River National Laboratory (SRNL) has been working with the Savannah River Remediation (SRR) Defense Waste Processing Facility (DWPF) in the development and implementation of a flammability control strategy for DWPF's melter operation during the processing of Sludge Batch 8 (SB8). SRNL's support has been in response to technical task requests that have been made by SRR's Waste Solidification Engineering (WSE) organization. The flammability control strategy relies on measurements that are performed on Slurry Mix Evaporator (SME) samples by the DWPF Laboratory. Measurements of nitrate, oxalate, formate, and total organic carbon (TOC) standards generated by the DWPF Laboratory are presented in this report, and an evaluation of the uncertainties of these measurements is provided. The impact of the uncertainties of these measurements on DWPF's strategy for controlling melter flammability is also evaluated. The strategy includes monitoring each SME batch for its nitrate content and its TOC content relative to the nitrate content and relative to the antifoam additions made during the preparation of the SME batch. A linearized approach for monitoring the relationship between TOC and nitrate is developed, equations are provided that integrate the measurement uncertainties into the flammability control strategy, and sample calculations for these equations are shown to illustrate the impact of the uncertainties on the flammability control strategy.
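The idea of folding measurement uncertainty into a linear TOC-versus-nitrate limit can be sketched as a one-sided k-sigma check: inflate the measured TOC and deflate the nitrate credit before comparing. The slope, intercept, uncertainties, and coverage factor below are hypothetical placeholders, not DWPF's actual control equations.

```python
# Illustrative sketch only: check measured TOC against a linear nitrate-based
# limit while crediting measurement uncertainty at a one-sided k-sigma bound.
# All constants are hypothetical, not the DWPF flammability control values.

def toc_within_limit(toc_meas, nitrate_meas, u_toc, u_nitrate,
                     slope=0.45, intercept=1000.0, k=1.645):
    """True if TOC, inflated by k-sigma, stays under the nitrate-based limit."""
    limit = intercept + slope * (nitrate_meas - k * u_nitrate)
    return toc_meas + k * u_toc <= limit

print(toc_within_limit(9000.0, 20000.0, 300.0, 500.0))
```

The one-sided construction means both measurement errors push in the conservative direction, so a batch passes only if it would pass even under unfavorable measurement error.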
Impacts of WRF Physics and Measurement Uncertainty on California Wintertime Model Wet Bias
Chin, H S; Caldwell, P M; Bader, D C
2009-07-22
The Weather and Research Forecast (WRF) model version 3.0.1 is used to explore California wintertime model wet bias. In this study, two wintertime storms are selected from each of four major types of large-scale conditions: Pineapple Express, El Nino, La Nina, and synoptic cyclones. We test the impacts of several model configurations on precipitation bias through comparison with three sets of gridded surface observations: one from the National Oceanographic and Atmospheric Administration, and two variations from the University of Washington (without and with long-term trend adjustment; UW1 and UW2, respectively). To simplify validation, California is divided into 4 regions (Coast, Central Valley, Mountains, and Southern California). Simulations are driven by North American Regional Reanalysis data to minimize large-scale forcing error. Control simulations are conducted with 12-km grid spacing (low resolution) but additional experiments are performed at 2-km (high) resolution to evaluate the robustness of microphysics and cumulus parameterizations to resolution changes. We find that the choice of validation dataset has a significant impact on the model wet bias, and the forecast skill of model precipitation depends strongly on geographic location and storm type. Simulations with the right physics options agree better with UW1 observations. In 12-km resolution simulations, the Lin microphysics and the Kain-Fritsch cumulus scheme have better forecast skill in the coastal region while Goddard, Thompson, and Morrison microphysics, and the Grell-Devenyi cumulus scheme perform better in the rest of California. The effect of planetary boundary layer, soil-layer, and radiation physics on model precipitation is weaker than that of microphysics and cumulus processes for short- to medium-range low-resolution simulations.
Comparison of 2-km and 12-km resolution runs suggests a need for improvement of cumulus schemes, and supports the use of microphysics schemes in coarser-grid applications.
Li, Boyan; Ou, Longwen; Dang, Qi; Meyer, Pimphan A.; Jones, Susanne B.; Brown, Robert C.; Wright, Mark
2015-11-01
This study evaluates the techno-economic uncertainty in cost estimates for two emerging biorefinery technologies for biofuel production: in situ and ex situ catalytic pyrolysis. Stochastic simulations based on process and economic parameter distributions are applied to calculate biorefinery performance and production costs. The probability distributions for the minimum fuel-selling price (MFSP) indicate that in situ catalytic pyrolysis has an expected MFSP of $4.20 per gallon with a standard deviation of $1.15, while ex situ catalytic pyrolysis has a similar MFSP with a smaller deviation ($4.27 per gallon and $0.79, respectively). These results suggest that a biorefinery based on ex situ catalytic pyrolysis could have a lower techno-economic risk than in situ pyrolysis despite a slightly higher MFSP estimate. Analysis of how each parameter affects the NPV indicates that internal rate of return, feedstock price, total project investment, electricity price, biochar yield, and bio-oil yield are significant parameters with a substantial impact on the MFSP for both in situ and ex situ catalytic pyrolysis.
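A stochastic cost analysis of this kind can be sketched as a small Monte Carlo simulation. The distributions, coefficients, and parameter names below are invented for illustration and are not the study's actual process model; the structure (sample uncertain inputs, compute a price, summarize the resulting distribution) is the point.

```python
import random
import statistics

random.seed(42)  # fixed seed for reproducibility

def sample_mfsp(n=10_000):
    """Toy stochastic simulation: draw uncertain economic inputs and
    compute a notional fuel-selling price for each draw."""
    prices = []
    for _ in range(n):
        feedstock = random.gauss(80, 10)   # $/tonne feedstock, assumed
        capital = random.gauss(400, 60)    # $MM total project investment, assumed
        yield_gal = random.gauss(55, 8)    # gal fuel per tonne, assumed
        # Notional price: operating + annualized capital cost per gallon.
        prices.append((feedstock + 0.15 * capital) / max(yield_gal, 1.0))
    return statistics.mean(prices), statistics.stdev(prices)

mean_p, sd_p = sample_mfsp()
```

The expected price and its standard deviation summarize the techno-economic risk in the same way the study's MFSP distributions do: two technologies can have similar means but very different spreads.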
Huang, Maoyi; Hou, Zhangshuan; Leung, Lai-Yung R.; Ke, Yinghai; Liu, Ying; Fang, Zhufeng; Sun, Yu
2013-12-01
With the emergence of earth system models as important tools for understanding and predicting climate change and implications to mitigation and adaptation, it has become increasingly important to assess the fidelity of the land component within earth system models to capture realistic hydrological processes and their response to the changing climate and quantify the associated uncertainties. This study investigates the sensitivity of runoff simulations to major hydrologic parameters in version 4 of the Community Land Model (CLM4) by integrating CLM4 with a stochastic exploratory sensitivity analysis framework at 20 selected watersheds from the Model Parameter Estimation Experiment (MOPEX) spanning a wide range of climate and site conditions. We found that for runoff simulations, the most significant parameters are those related to the subsurface runoff parameterizations. Soil texture related parameters and surface runoff parameters are of secondary significance. Moreover, climate and soil conditions play important roles in the parameter sensitivity. In general, site conditions within water-limited hydrologic regimes and with finer soil texture result in stronger sensitivity of output variables, such as runoff and its surface and subsurface components, to the input parameters in CLM4. This study demonstrated the feasibility of parameter inversion for CLM4 using streamflow observations to improve runoff simulations. By ranking the significance of the input parameters, we showed that the parameter set dimensionality could be reduced for CLM4 parameter calibration under different hydrologic and climatic regimes so that the inverse problem is less ill-posed.
Herschtal, Alan; Te Marvelde, Luc; Mengersen, Kerrie; Foroudi, Farshad; Eade, Thomas; Pham, Daniel; Caine, Hannah; Kron, Tomas
2015-06-01
Objective: To develop a mathematical tool that can update a patient's planning target volume (PTV) partway through a course of radiation therapy to more precisely target the tumor for the remainder of treatment and reduce dose to surrounding healthy tissue. Methods and Materials: Daily on-board imaging was used to collect large datasets of displacements for patients undergoing external beam radiation therapy for solid tumors. Bayesian statistical modeling of these geometric uncertainties was used to optimally trade off between displacement data collected from previously treated patients and the progressively accumulating data from a patient currently partway through treatment, to optimally predict future displacements for that patient. These predictions were used to update the PTV position and margin width for the remainder of treatment, such that the clinical target volume (CTV) was more precisely targeted. Results: Software simulation of dose to CTV and normal tissue for 2 real prostate displacement datasets consisting of 146 and 290 patients treated with a minimum of 30 fractions each showed that re-evaluating the PTV position and margin width after 8 treatment fractions reduced healthy tissue dose by 19% and 17%, respectively, while maintaining CTV dose. Conclusion: Incorporating patient-specific displacement patterns from early in a course of treatment allows PTV adaptation for the remainder of treatment. This substantially reduces the dose to healthy tissues and thus can reduce radiation therapy–induced toxicities, improving patient outcomes.
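The trade-off described above, between displacement data from previously treated patients and a patient's own accumulating measurements, can be sketched as a conjugate-normal Bayesian update. The millimetre values and variances below are hypothetical, and the paper's actual model is richer than this one-parameter sketch.

```python
def posterior_mean(prior_mean, prior_var, obs, obs_var):
    """Conjugate-normal update: precision-weighted blend of the population
    prior and the patient's accumulated displacement observations."""
    n = len(obs)
    obs_mean = sum(obs) / n
    w_prior = 1.0 / prior_var        # precision of the population prior
    w_data = n / obs_var             # precision contributed by n fractions
    return (w_prior * prior_mean + w_data * obs_mean) / (w_prior + w_data)

# After 8 fractions (hypothetical displacements in mm), the estimate has
# shrunk from the population mean (0 mm) toward this patient's own
# systematic displacement, which is what drives the PTV re-centering.
m = posterior_mean(0.0, 4.0, [2.1, 1.8, 2.4, 1.9, 2.2, 2.0, 1.7, 2.3], 1.0)
```

Early in treatment the estimate stays near the population prior; as fractions accumulate, the data term dominates and the predicted displacement becomes patient-specific, which is what allows the margin to be tightened partway through the course.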
Kobayashi, Masakazu A. R.; Inoue, Yoshiyuki; Inoue, Akio K.
2013-01-20
The cosmic star formation rate density (CSFRD) has been observationally investigated out to redshift z ≈ 10. However, most of the theoretical models for galaxy formation underpredict the CSFRD at z ≳ 1. Since the theoretical models reproduce the observed luminosity functions (LFs), luminosity densities (LDs), and stellar mass density at each redshift, this inconsistency does not simply imply that theoretical models should incorporate some missing unknown physical processes in galaxy formation. Here, we examine the cause of this inconsistency at UV wavelengths by using a mock catalog of galaxies generated by a semi-analytic model of galaxy formation. We find that this inconsistency is due to two observational uncertainties: the dust obscuration correction and the conversion from UV luminosity to star formation rate (SFR). The methods for correction of obscuration and SFR conversion used in observational studies result in the overestimation of the CSFRD by ≈0.1-0.3 dex and ≈0.1-0.2 dex, respectively, compared to the results obtained directly from our mock catalog. We present new empirical calibrations for dust attenuation and conversion from observed UV LFs and LDs into the CSFRD.
Elkind, M.M.; Bedford, J.; Benjamin, S.A.; Waldren, C.A.; Gotchy, R.L.
1990-10-01
A study was undertaken by five radiation scientists to examine the feasibility of reducing the uncertainties in the estimation of risk due to protracted low doses of ionizing radiation. In addressing the question of feasibility, a review was made by the study group: of the cellular, molecular, and mammalian radiation data that are available; of the way in which altered oncogene properties could be involved in the loss of growth control that culminates in tumorigenesis; and of the progress that had been made in the genetic characterizations of several human and animal neoplasms. On the basis of this analysis, the study group concluded that, at the present time, it is feasible to mount a program of radiation research directed at the mechanism(s) of radiation-induced cancer with special reference to risk of neoplasia due to protracted, low doses of sparsely ionizing radiation. To implement a program of research, a review was made of the methods, techniques, and instruments that would be needed. This review was followed by a survey of the laboratories and institutions where scientific personnel and facilities are known to be available. A research agenda of the principal and broad objectives of the program is also discussed. 489 refs., 21 figs., 14 tabs.
Makarov, Yuri V.; Du, Pengwei; Pai, M. A.; McManus, Bart
2014-01-14
The variability and uncertainty of wind power production require increased flexibility in power systems, or more operational reserves, to maintain a satisfactory level of reliability. The incremental increase in reserve requirement caused by wind power is often studied separately from the effects of loads. Accordingly, the cost of procuring reserves is allocated based on this simplification rather than on a fair and transparent calculation of each resource's contribution to the reserve requirement. This work proposes a new allocation mechanism for intermittency and variability of resources regardless of their type. It is based on a new formula, called the grid balancing metric (GBM). The proposed GBM has several distinct features: 1) it is directly linked to control performance standard (CPS) scores and interconnection frequency performance, 2) it provides scientifically defined allocation factors for individual resources, 3) the sum of allocation factors within any group of resources is equal to the group's collective allocation factor (linearity), and 4) it distinguishes helpers from harmers. The paper illustrates and provides results of the new approach based on actual transmission system operator (TSO) data.
Austri, R. Ruiz de
2013-11-01
We study in detail the impact of the current uncertainty in nucleon matrix elements on the sensitivity of direct and indirect experimental techniques for dark matter detection. We perform two scans in the framework of the cMSSM: one using recent values of the pion-sigma term obtained from Lattice QCD, and the other using values derived from experimental measurements. The two choices correspond to extreme values quoted in the literature and reflect the current tension between different ways of obtaining information about the structure of the nucleon. All other inputs in the scans, astrophysical and from particle physics, are kept unchanged. We use two experiments, XENON100 and IceCube, as benchmark cases to illustrate our analysis. We find that the interpretation of dark matter search results from direct detection experiments is more sensitive to the choice of the central values of the hadronic inputs than the results of indirect search experiments. The allowed regions of cMSSM parameter space after including XENON100 constraints strongly differ depending on the assumptions on the hadronic matrix elements used. On the other hand, the constraining potential of IceCube is almost independent of the choice of these values.
Uncertainty Quantification of Calculated Temperatures for the AGR 3/4 Experiment
Pham, Binh Thi-Cam
2015-09-01
A series of Advanced Gas Reactor (AGR) irradiation experiments are being conducted within the Advanced Reactor Technology (ART) Fuel Development and Qualification Program. The main objectives of the fuel experimental campaign are to provide the necessary data on fuel performance to support fuel process development, qualify a fuel design and fabrication process for normal operation and accident conditions, and support development and validation of fuel performance and fission product transport models and codes (PLN 3636, “Technical Program Plan for INL Advanced Reactor Technologies Technology Development Office/Advanced Gas Reactor Fuel Development and Qualification Program”). The AGR 3/4 test was inserted in the Northeast Flux Trap position in the Advanced Test Reactor (ATR) core at Idaho National Laboratory (INL) in December 2011 and successfully completed irradiation in mid-April 2014, resulting in irradiation of the tristructural isotropic (TRISO) fuel for 369.1 effective full-power days (EFPDs) during approximately 2.4 calendar years. The AGR 3/4 data, including the irradiation data and calculated results, were qualified and stored in the Nuclear Data Management and Analysis System (NDMAS) (Pham 2015). To support the U.S. TRISO fuel performance assessment and to provide data for validation of fuel performance and fission product transport models and codes, the daily as-run thermal analysis has been performed separately on each of twelve AGR 3/4 capsules for the entire irradiation, as discussed in ECAR-2807, “AGR 3/4 Daily As Run Thermal Analyses.” The ABAQUS code's finite-element-based thermal model predicts the daily volume-average (VA) fuel temperature (FT), peak FT, and graphite matrix, sleeve, and sink temperatures in each capsule.
The JMOCUP simulation codes were also created to perform depletion calculations for the AGR 3/4 experiment (ECAR-2753, “JMOCUP As-Run Daily Physics Depletion Calculation for the AGR 3/4 TRISO Particle Experiment in ATR Northeast Flux Trap”). This depletion analysis provides fast fluence and fission heat rate data for all components (fuel compacts, graphite rings, stainless steel retainer, etc.), which are used as inputs for the thermal analysis codes. The graphite temperatures from thermocouples (TCs) in each capsule were used to calibrate these thermal analysis codes. However, given the high rate of TC failure under the harsh irradiation and thermal conditions in the AGR capsules, the thermal analysis results are very useful in aiding TC data qualification, increasing the confidence in delineating failures of the measuring instruments (TCs) from physical mechanisms that may have shifted the system thermal response (Pham and Einerson 2011).
Uncertainty with New Technology
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
with New Technology As the U.S. electricity grid experiences the effects of aging infrastructure, a push toward renewable technologies and increasing demands for energy, new technologies may be necessary to economically meet future grid demands. However, adopting new technology is difficult when decision makers do not understand the new technology and do not know how it compares to alternatives. Energy storage technologies show great promise for improving the grid's operations. However, as a
Dispelling Clouds of Uncertainty
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Lewis, Ernie; Teixeira, João
2015-06-15
How do you build a climate model that accounts for cloud physics and the transitions between cloud regimes? Use MAGIC.
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the number of samples required to achieve a specified confidence in characterization and clearance decisions, or the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. 
qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0 2. qualitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0 3. quantitative data (e.g., contaminant concentrations expressed as CFU/cm2) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0 4. quantitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0. For Situation 2, the hotspot sampling approach provides for stating with Z% confidence that a hotspot of specified shape and size with detectable contamination will be found. Also for Situation 2, the CJR approach provides for stating with X% confidence that at least Y% of the decision area does not contain detectable contamination. Forms of these statements for the other three situations are discussed in Section 2.2. Statistical methods that account for FNR > 0 currently only exist for the hotspot sampling approach with qualitative data (or quantitative data converted to qualitative data). This report documents the current status of methods and formulas for the hotspot and CJR sampling approaches. Limitations of these methods are identified. Extensions of the methods that are applicable when FNR = 0 to account for FNR > 0, or to address other limitations, will be documented in future revisions of this report if future funding supports the development of such extensions. For quantitative data, this report also presents statistical methods and formulas for 1. quantifying the uncertainty in measured sample results 2. estimating the true surface concentration corresponding to a surface sample 3. quantifying the uncertainty of the estimate of the true surface concentration. All of the methods and formulas discussed in the report were applied to example situations to illustrate application of the methods and interpretation of the results.
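The "X% confidence that at least Y% of the decision area does not contain detectable contamination" statement has a classical acceptance-sampling form when the FNR = 0 and samples are simple random. The sketch below shows that textbook sample-size formula; it is illustrative and is not the report's CJR or hotspot formulas, which also account for judgment samples and hotspot geometry.

```python
import math

def n_samples(confidence, clean_fraction):
    """Number of all-negative random samples needed to state with the
    given confidence that at least `clean_fraction` of the area is
    uncontaminated, assuming FNR = 0. Derived from requiring the chance
    of all n samples missing contamination, clean_fraction**n, to be at
    most 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(clean_fraction))

# X = 95% confidence that at least Y = 99% of the area is clean.
n = n_samples(0.95, 0.99)
```

The formula makes the cost of tighter statements explicit: raising either X or Y drives the required number of negative samples up sharply, which is one reason combining judgment and random samples (as in CJR) is attractive.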
Venkataraman, R.; Nakazawa, D.
2012-07-01
Mathematical methods are being increasingly employed in the efficiency calibration of gamma based systems for non-destructive assay (NDA) of radioactive waste and for the estimation of the Total Measurement Uncertainty (TMU). Recently, ASTM (American Society for Testing and Materials) released a standard guide for use of modeling passive gamma measurements. This is a testimony to the common use and increasing acceptance of mathematical techniques in the calibration and characterization of NDA systems. Mathematical methods offer flexibility and cost savings in terms of rapidly incorporating calibrations for multiple container types, geometries, and matrix types in a new waste assay system or a system that may already be operational. Mathematical methods are also useful in modeling heterogeneous matrices and non-uniform activity distributions. In compliance with good practice, if a computational method is used in waste assay (or in any other radiological application), it must be validated or benchmarked using representative measurements. In this paper, applications involving mathematical methods in gamma based NDA systems are discussed with several examples. The application examples are from NDA systems that were recently calibrated and performance tested. Measurement based verification results are presented. Mathematical methods play an important role in the efficiency calibration of gamma based NDA systems. This is especially true when the measurement program involves a wide variety of complex item geometries and matrix combinations for which the development of physical standards may be impractical. Mathematical methods offer a cost effective means to perform TMU campaigns. Good practice demands that all mathematical estimates be benchmarked and validated using representative sets of measurements. (authors)
Ilgen, A. G.; Cygan, R. T.
2015-12-07
During the Frio-I Brine Pilot CO2 injection experiment in 2004, distinct geochemical changes in response to the injection of 1600 tons of CO2 were recorded in samples collected from the monitoring well. Previous geochemical modeling studies have considered dissolution of calcite and iron oxyhydroxides, or release of adsorbed iron, as the most likely sources of the increased ion concentrations. We explore in this modeling study possible alternative sources of the increasing calcium and iron, based on the data from the detailed petrographic characterization of the Upper Frio Formation “C”. Particularly, we evaluate whether dissolution of pyrite and oligoclase (anorthite component) can account for the observed geochemical changes. Due to kinetic limitations, dissolution of pyrite and anorthite cannot account for the increased iron and calcium concentrations on the time scale of the field test (10 days). However, dissolution of these minerals contributes to carbonate and clay mineral precipitation on longer time scales (1000 years). The one-dimensional reactive transport model predicts that the carbonate minerals dolomite and ankerite, as well as the clay minerals kaolinite, nontronite, and montmorillonite, will precipitate in the Frio Formation “C” sandstone as the system progresses towards chemical equilibrium during a 1000-year period. Cumulative uncertainties associated with using different thermodynamic databases, activity correction models (Pitzer vs. B-dot), and extrapolating to reservoir temperature are manifested in the difference in the predicted mineral phases. Furthermore, these models are consistent with regard to the total volume of mineral precipitation and porosity values, which are predicted to within 0.002%.
Office of Scientific and Technical Information (OSTI)
Analysis of the Uncertainty in Wind Measurements from the Atmospheric Radiation Measurement Doppler Lidar during XPIA: Field Campaign Report. R. Newsom, March 2016.
Reducing Transaction Costs for Energy Efficiency Investments and Analysis of Economic Risk Associated With Building Performance Uncertainties: Small Buildings and Small Portfolios Program. Rois Langner, Bob Hendron, and Eric Bonnema. Technical Report NREL/TP-5500-60976, August 2014.
Kiedrowski, Brian C.
2012-06-19
Within the last decade, there has been increasing interest in the calculation of cross section sensitivity coefficients of keff for integral experiment design and uncertainty analysis. The OECD/NEA has an Expert Group devoted to Sensitivity and Uncertainty Analysis within the Working Party for Nuclear Criticality Safety. This expert group has developed benchmarks to assess code capabilities and performance for doing sensitivity and uncertainty analysis. Phase III of a set of sensitivity benchmarks evaluates capabilities for computing sensitivity coefficients. MCNP6 has the capability to compute cross section sensitivities for keff using continuous-energy physics. To help verify this capability, results for the Phase III benchmark cases are generated and submitted to the Expert Group for comparison. The Phase III benchmark has three cases: III.1, an array of MOX fuel pins; III.2, a series of infinite lattices of MOX fuel pins with varying pitches; and III.3, two spheres with homogeneous mixtures of UF4 and polyethylene at different enrichments.
Scott, Michael J.; Daly, Don S.; Zhou, Yuyu; Rice, Jennie S.; Patel, Pralit L.; McJeon, Haewon C.; Kyle, G. Page; Kim, Son H.; Eom, Jiyong; Clarke, Leon E.
2014-05-01
Improving the energy efficiency of the building stock, commercial equipment and household appliances can have a major impact on energy use, carbon emissions, and building services. Subnational regions such as U.S. states wish to increase their energy efficiency, reduce carbon emissions or adapt to climate change. Evaluating subnational policies to reduce energy use and emissions is difficult because of the uncertainties in socioeconomic factors, technology performance and cost, and energy and climate policies. Climate change may undercut such policies. Assessing these uncertainties can be a significant modeling and computation burden. As part of this uncertainty assessment, this paper demonstrates how a decision-focused sensitivity analysis strategy using fractional factorial methods can be applied to reveal the important drivers for detailed uncertainty analysis.
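A two-level factorial screening of the kind used to rank uncertainty drivers can be sketched as follows. For brevity this evaluates the full factorial; a fractional design would run only a balanced subset of the rows. The factor names and the toy response function are invented for illustration, not the paper's model.

```python
from itertools import product

def main_effects(factors, response):
    """Two-level factorial screening: estimate each factor's main effect
    as mean(response at high level) - mean(response at low level)."""
    runs = list(product([-1, 1], repeat=len(factors)))  # coded levels
    y = [response(r) for r in runs]
    effects = {}
    for j, name in enumerate(factors):
        hi = [yi for r, yi in zip(runs, y) if r[j] == 1]
        lo = [yi for r, yi in zip(runs, y) if r[j] == -1]
        effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# Toy response: energy use driven mainly by the first two factors.
e = main_effects(["gdp_growth", "tech_cost", "policy_stringency"],
                 lambda r: 3 * r[0] + 2 * r[1] + 0.1 * r[2])
```

Ranking the estimated effects identifies the few drivers worth carrying into a detailed uncertainty analysis, which is exactly the dimensionality reduction the decision-focused strategy aims for.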
Unwin, Stephen D.; Eslinger, Paul W.; Johnson, Kenneth I.
2012-09-20
The Risk-Informed Safety Margin Characterization (RISMC) pathway is a set of activities defined under the U.S. Department of Energy (DOE) Light Water Reactor Sustainability Program. The overarching objective of RISMC is to support plant life-extension decision-making by providing a state-of-knowledge characterization of safety margins in key systems, structures, and components (SSCs). A technical challenge at the core of this effort is to establish the conceptual and technical feasibility of analyzing safety margin in a risk-informed way, which, unlike conventionally defined deterministic margin analysis, would be founded on probabilistic characterizations of uncertainty in SSC performance. In the context of probabilistic risk assessment (PRA) technology, there has arisen a general consensus about the distinctive roles of two types of uncertainty: aleatory and epistemic, where the former represents irreducible, random variability inherent in a system, whereas the latter represents a state of knowledge uncertainty on the part of the analyst about the system which is, in principle, reducible through further research. While there is often some ambiguity about how any one contributing uncertainty in an analysis should be classified, there has nevertheless emerged a broad consensus on the meanings of these uncertainty types in the PRA setting. However, while RISMC methodology shares some features with conventional PRA, it will nevertheless be a distinctive methodology set. Therefore, the paradigms for classification of uncertainty in the PRA setting may not fully port to the RISMC environment. Yet the notion of risk-informed margin is based on the characterization of uncertainty, and it is therefore critical to establish a common understanding of uncertainty in the RISMC setting.
Sutton, M; Blink, J A; Greenberg, H R; Sharma, M
2012-04-25
The Used Fuel Disposition (UFD) Campaign within the Department of Energy's Office of Nuclear Energy (DOE-NE) Fuel Cycle Technology (FCT) program has been tasked with investigating the disposal of the nation's spent nuclear fuel (SNF) and high-level nuclear waste (HLW) for a range of potential waste forms and geologic environments. The planning, construction, and operation of a nuclear disposal facility is a long-term process that involves engineered barriers tailored to both the geologic environment and the waste forms being emplaced. The UFD Campaign is considering a range of fuel cycles that in turn produce a range of waste forms. The UFD Campaign is also considering a range of geologic media. These ranges could be thought of as adding uncertainty to what the disposal facility design will ultimately be; however, it may be preferable to think of the ranges as adding flexibility to the design of a disposal facility. For example, as the overall DOE-NE program and industry actions settle on the fuel cycles that will produce the waste to be disposed of, and as the characteristics of those wastes become clear, the disposal program retains flexibility in both the choice of geologic environment and the specific repository design. Of course, other factors also play a major role, including local and State-level acceptance of the specific site that provides the geologic environment. In contrast, the Yucca Mountain Project (YMP) repository license application (LA) is based on waste forms from an open fuel cycle (PWR and BWR assemblies). These waste forms were about 90% of the total waste, and they were the determining waste form in developing the engineered barrier system (EBS) design for the Yucca Mountain Repository. 
About 10% of the repository capacity was reserved for waste from a full recycle fuel cycle in which some actinides were extracted for weapons use, and the remaining fission products and some minor actinides were encapsulated in borosilicate glass. Because the heat load of the glass was much less than the PWR and BWR assemblies, the glass waste form was able to be co-disposed with the open cycle waste, by interspersing glass waste packages among the spent fuel assembly waste packages. In addition, the Yucca Mountain repository was designed to include some research reactor spent fuel and naval reactor spent fuel, within the envelope that was set using the commercial reactor assemblies as the design basis waste form. This milestone report supports Sandia National Laboratory milestone M2FT-12SN0814052, and is intended to be a chapter in that milestone report. The independent technical review of this LLNL milestone was performed at LLNL and is documented in the electronic Information Management (IM) system at LLNL. The objective of this work is to investigate what aspects of quantifying, characterizing, and representing the uncertainty associated with the engineered barrier are affected by implementing different advanced nuclear fuel cycles (e.g., partitioning and transmutation scenarios) together with corresponding designs and thermal constraints.
Zhu, Chen
2015-03-31
An important question for the Carbon Capture, Storage, and Utility program is “can we adequately predict the CO2 plume migration?” For tracking CO2 plume development, the Sleipner project in the Norwegian North Sea provides more time-lapse seismic monitoring data than any other site, but significant uncertainties still exist for some of the reservoir parameters. In Part I, we assessed model uncertainties by applying two multi-phase compositional simulators to the Sleipner Benchmark model for the uppermost layer (Layer 9) of the Utsira Sand and calibrated our model against the time-lapse seismic monitoring data for the site from 1999 to 2010. Approximate match with the observed plume was achieved by introducing lateral permeability anisotropy, adding CH4 into the CO2 stream, and adjusting the reservoir temperatures. Model-predicted gas saturation, CO2 accumulation thickness, and CO2 solubility in brine, none of which were used as calibration metrics, were all comparable with the interpretations of the seismic data in the literature. In Parts II and III, we evaluated the uncertainties of the predicted long-term CO2 fate up to 10,000 years due to uncertain reaction kinetics. Under four scenarios of the kinetic rate laws, the temporal and spatial evolution of CO2 partitioning into the four trapping mechanisms (hydrodynamic/structural, solubility, residual/capillary, and mineral) was simulated with ToughReact, taking into account the CO2-brine-rock reactions and the multi-phase reactive flow and mass transport. Modeling results show that different rate laws for mineral dissolution and precipitation reactions resulted in different predicted amounts of trapped CO2 by carbonate minerals, with scenarios of the conventional linear rate law for feldspar dissolution having twice as much mineral trapping (21% of the injected CO2) as scenarios with a Burch-type or Alekseyev et al.-type rate law for feldspar dissolution (11%). 
So far, most reactive transport modeling (RTM) studies for CCUS have used the conventional rate law and therefore simulated the upper bound of mineral trapping. However, neglecting the regional flow after injection, as most previous RTM studies have done, artificially limits the extent of geochemical reactions as if it were in a batch system. By replenishing undersaturated groundwater from upstream, the Utsira Sand is reactive over a time scale of 10,000 years. The results from this project have been communicated via five peer-reviewed journal articles, four conference proceeding papers, and 19 invited and contributed presentations at conferences and seminars.
Nie, K; Yue, N; Chen, T; Millevoi, R; Qin, S; Guo, J
2014-06-15
Purpose: In lung radiation treatment, the PTV is formed with a margin around the GTV (or CTV/ITV). Although the GTV is most likely of water-equivalent density, the PTV margin may be formed within the surrounding low-density tissues, which may lead to an unrealistic dosimetric plan. This study evaluates whether the concern about dose calculation inside a PTV with only a low-density margin is justified in lung treatment. Methods: Three SBRT cases were analyzed. The PTV from the original plan (Plan-O) was created with a 5-10 mm margin outside the ITV to incorporate setup errors and all mobility from 10 respiratory phases. Test plans were generated with the GTV shifted to the PTV edge to simulate the extreme situations with maximum setup uncertainties. Two representative positions, at the very posterior-superior (Plan-PS) and anterior-inferior (Plan-AI) edges, were considered. The virtual GTV was assigned a density of 1.0 g/cm{sup 3} and the surrounding lung, including the PTV margin, was defined as 0.25 g/cm{sup 3}. Also, an additional plan with a 1 mm tissue margin instead of a full lung margin was created to evaluate whether a composite margin (Plan-Comp) gives a better approximation for dose calculation. All plans were generated on the average CT using the Analytical Anisotropic Algorithm with heterogeneity correction on, and all planning parameters/monitor units remained unchanged. DVH analyses were performed for comparisons. Results: Despite the non-static dose distribution, the high-dose region synchronized with tumor positions. This might be due to scatter conditions, as greater doses were absorbed in the solid tumor than in the surrounding low-density lung tissue. However, the plans still showed missing target coverage in general. A certain level of composite margin might give a better approximation for the dose calculation. Conclusion: Our exploratory results suggest that with the lung margin only, the planning dose of the PTV might overestimate the coverage of the target during treatment.
The significance of this overestimation might warrant further investigation.
Sader, John E.; Yousefi, Morteza; Friend, James R.; Melbourne Centre for Nanofabrication, Clayton, Victoria 3800
2014-02-15
Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.
Lee, Si-Yong; Zaluski, Wade; Will, Robert; Eisinger, Chris; Matthews, Vince; McPherson, Brian
2013-09-01
This report presents the results of reservoir simulation analyses forecasting subsurface CO2 storage capacity for the most promising formations in the Rocky Mountain region of the USA. A particular emphasis of this project was to assess the uncertainty of the simulation-based forecasts. Results illustrate how local-scale data, including well information, number of wells, and location of wells, affect storage capacity estimates, and what degree of well density (number of wells over a fixed area) may be required to estimate capacity within a specified degree of confidence. A major outcome of this work was the development of a new simulation-analysis workflow accommodating the addition of "random pseudo wells" to represent virtual characterization wells.
Doolan, P; Sharp, G; Testa, M; Lu, H-M; Bentefour, E; Royle, G
2014-06-15
Purpose: Beam range uncertainty in proton treatment comes primarily from converting the patient's X-ray CT (xCT) dataset to relative stopping power (RSP). Current practice uses a single curve for this conversion, produced by a stoichiometric calibration based on tissue composition data for average, healthy, adult humans, but not for the individual in question. Proton radiographs produce water-equivalent path length (WEPL) maps, dependent on the RSP of tissues within the specific patient. This work investigates the use of such WEPL maps to optimize patient-specific calibration curves for reducing beam range uncertainty. Methods: The optimization procedure works on the principle of minimizing the difference between the known WEPL map, obtained from a proton radiograph, and a digitally reconstructed WEPL map (DRWM) through an RSP dataset, by altering the calibration curve that is used to convert the xCT into the RSP dataset. DRWMs were produced with Plastimatch, an in-house-developed software package, and the optimization procedure was implemented in Matlab. Tests were made on a range of systems, including simulated datasets with computed WEPL maps and phantoms (anthropomorphic and real biological tissue) with WEPL maps measured by single-detector proton radiography. Results: For the simulated datasets, the optimizer showed excellent results. It was able to either completely eradicate or significantly reduce the root-mean-square error (RMSE) in the WEPL for the homogeneous phantoms (to zero for individual materials, or from 1.5% to 0.2% for the simultaneous optimization of multiple materials). For the heterogeneous phantom the RMSE was reduced from 1.9% to 0.3%. Conclusion: An optimization procedure has been designed to produce patient-specific calibration curves. Test results on a range of systems with different complexities and sizes have been promising for accurate beam range control in patients.
This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium)
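The optimization principle above, adjusting the calibration curve until a digitally reconstructed WEPL map matches the measured one, can be sketched in a toy 1-D setting. Everything below (the piecewise-linear knot positions, the slab geometry, and the use of scipy.optimize.minimize) is an illustrative assumption, not the authors' Plastimatch/Matlab implementation.

```python
import numpy as np
from scipy.optimize import minimize

hu_knots = np.array([-1000.0, 0.0, 1000.0])  # fixed HU positions of curve knots

def drwm(ct_hu, rsp_knots, dx=1.0):
    """Digitally reconstructed WEPL map: line integral of RSP along each column,
    where RSP comes from a piecewise-linear HU-to-RSP calibration curve."""
    rsp = np.interp(ct_hu, hu_knots, rsp_knots)
    return rsp.sum(axis=0) * dx

rng = np.random.default_rng(1)
ct = rng.uniform(-800.0, 800.0, size=(50, 20))  # hypothetical CT slab (HU)
true_knots = np.array([0.0, 1.0, 1.8])          # "patient-specific" ground truth
measured = drwm(ct, true_knots)                 # stands in for a proton radiograph

def cost(knots):
    """RMSE between the reconstructed and 'measured' WEPL maps."""
    return np.sqrt(np.mean((drwm(ct, knots) - measured) ** 2))

res = minimize(cost, x0=[0.1, 0.9, 1.5], method="Nelder-Mead")
```

Because the WEPL is linear in the knot values here, the cost surface is convex and a simplex search recovers the true curve; a clinical implementation would of course involve many more knots and measured radiographs.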
Degteva, M. O.; Anspaugh, L. R.; Napier, Bruce A.
2009-10-23
This is the concluding progress report for Project 1.1 of the U.S./Russia Joint Coordinating Committee on Radiation Effects Research (JCCRER). The overwhelming majority of our work in this period went toward completing our primary obligation: providing a new version of the Techa River Dosimetry System (TRDS), which we call TRDS-2009D; the D denotes deterministic. This system provides estimates of individual doses to members of the Extended Techa River Cohort (ETRC) and post-natal doses to members of the Techa River Offspring Cohort (TROC). The latter doses were calculated with the use of TRDS-2009D. The doses for the members of the ETRC were made available to the American and Russian epidemiologists in September for their studies deriving radiogenic risk factors. Doses for members of the TROC are being provided to European and Russian epidemiologists as partial input for studies of risk in this population. Two of our original goals for this nine-year phase of Project 1.1 were not completed. The first was TRDS-2009MC, a Monte Carlo version of TRDS-2009 that could be used for more explicit analysis of the impact of uncertainty in doses on uncertainty in radiogenic risk factors. The second was the provision of household-specific external doses (rather than village averages); this task was far along, but had to be delayed due to the lead investigator's work on a revised source term.
Frey, H. Christopher; Tran, Loan K.
1999-04-30
This is Volume 2 of a two-volume set of reports describing work conducted at North Carolina State University sponsored by Grant Number DE-FG05-95ER30250 by the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NO_{x} control technologies. These two main topics are reported separately in Volumes 1 and 2.
Peterson, Kara J.; Bochev, Pavel Blagoveston; Paskaleva, Biliana S.
2010-09-01
Arctic sea ice is an important component of the global climate system, and due to feedback effects the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice to model physical parameters. A new sea ice model that has the potential to improve sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of the Los Alamos National Laboratory CICE code and the MPM sea ice code for a single-year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated, both individually and in combination, using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes, and some interesting seasonal variability in the influence of the parameters on the solution.
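In the simplest one-at-a-time setting, the per-parameter sensitivities that such a study evaluates can be approximated by central finite differences about a nominal parameter point. A generic sketch follows; the model function is a hypothetical stand-in, not the CICE or MPM codes, and this is not DAKOTA's own machinery.

```python
import numpy as np

def model(p):
    """Hypothetical scalar model output (a stand-in for ice volume, extent, ...)."""
    return p[0] ** 2 + 3.0 * p[1] + 0.5 * p[0] * p[1]

def sensitivities(f, p0, rel_step=0.01):
    """Central-difference sensitivities df/dp_i at the nominal point p0."""
    p0 = np.asarray(p0, dtype=float)
    out = np.empty(p0.size)
    for i in range(p0.size):
        h = rel_step * max(abs(p0[i]), 1.0)
        up, dn = p0.copy(), p0.copy()
        up[i] += h
        dn[i] -= h
        out[i] = (f(up) - f(dn)) / (2.0 * h)
    return out

s = sensitivities(model, [2.0, 1.0])
```

Tools such as DAKOTA automate exactly this kind of perturbation bookkeeping (plus combined designs) around expensive model runs.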
Travaglio, C.; Gallino, R.; Rauscher, T.; Dauphas, N.; Röpke, F. K.; Hillebrandt, W. E-mail: claudia.travaglio@b2fh.org
2014-11-10
The nucleosynthesis of proton-rich isotopes is calculated for multi-dimensional Chandrasekhar-mass models of Type Ia supernovae (SNe Ia) with different metallicities. The predicted abundances of the short-lived radioactive isotopes {sup 92}Nb, {sup 97,98}Tc, and {sup 146}Sm are given in this framework. The abundance seeds are obtained by calculating s-process nucleosynthesis in the material accreted onto a carbon-oxygen white dwarf from a binary companion. A fine grid of s-seeds at different metallicities and {sup 13}C-pocket efficiencies is considered. A galactic chemical evolution model is used to predict the contribution of SNe Ia to the solar system p-nuclei composition measured in meteorites. Nuclear physics uncertainties are critical to determining the role of SNe Ia in the production of {sup 92}Nb and {sup 146}Sm. We find that, if standard Chandrasekhar-mass SNe Ia constitute at least 50% of all SNe Ia, they are strong candidates for reproducing the radiogenic p-process signature observed in meteorites.
Scott, Michael J.; Daly, Don S.; Hathaway, John E.; Lansing, Carina S.; Liu, Ying; McJeon, Haewon C.; Moss, Richard H.; Patel, Pralit L.; Peterson, Marty J.; Rice, Jennie S.; Zhou, Yuyu
2014-12-06
This report presents data and assumptions employed in an application of PNNL's Global Change Assessment Model with a newly developed Monte Carlo analysis capability. The model is used to analyze the impacts of more aggressive U.S. residential and commercial building-energy codes and equipment standards on energy consumption and energy service costs at the state level, explicitly recognizing uncertainty in technology effectiveness and cost, socioeconomics, presence or absence of carbon prices, and climate impacts on energy demand. The report provides a summary of how residential and commercial buildings are modeled, together with assumptions made for the distributions of state-level population, Gross Domestic Product (GDP) per worker, efficiency and cost of residential and commercial energy equipment by end use, and efficiency and cost of residential and commercial building shells. The cost and performance of equipment and of building shells are reported separately for current building and equipment efficiency standards and for more aggressive standards. The report also details assumptions concerning future improvements brought about by projected trends in technology.
Image registration with uncertainty analysis
Simonson, Katherine M.
2011-03-22
In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
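The matching criterion described in this abstract can be sketched as follows. The gradient-threshold edge detector, exhaustive integer-shift search, and search radius below are simplifying assumptions for illustration, not the patented method's details (which also include a statistical criterion for near-equivalent registration points).

```python
import numpy as np

def edge_map(img, thresh=0.2):
    """Crude edge detector: gradient-magnitude threshold (a stand-in for the
    patent's unspecified edge detector)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def best_registration(img1, img2, max_shift=3):
    """Exhaustive search over integer translations: return the shift that
    maximizes the percentage of img2 edge pixels that are also edges in the
    shifted img1, together with that percentage."""
    e1, e2 = edge_map(img1), edge_map(img2)
    n = e2.sum()
    best, best_pct = (0, 0), -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(e1, (dy, dx), axis=(0, 1))
            pct = (shifted & e2).sum() / n
            if pct > best_pct:
                best, best_pct = (dy, dx), pct
    return best, best_pct

# Demo: a square translated by (2, 1) should be recovered exactly
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0
shift, pct = best_registration(img, np.roll(img, (2, 1), axis=(0, 1)))
```

On real imagery, several shifts may score nearly as well as the best one, which is where the patent's statistical equivalence test comes in.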
Dealing with natural gas uncertainties
Clements, J.; Graeber, D.
1991-04-01
The fuel of choice for generating new power is, and over the next two decades will continue to be, natural gas. It is the fuel of choice because it is plentiful, environmentally acceptable, and relatively inexpensive. This paper reports that gas reserves on the North American continent continue to be discovered in amounts that may keep the gas bubble inflated far longer than currently estimated. New gas transportation capacity is actively being developed to overcome capacity bottlenecks and deliverability shortfalls. Natural gas prices will probably remain stable (with expected CPI-related increases) in the short run (2-4 years), and will probably rise faster than the CPI thereafter.
Market Prices and Uncertainty Report
Reports and Publications (EIA)
2016-01-01
Monthly analysis of crude oil, petroleum products, natural gas, and propane prices is released as a regular supplement to the Short-Term Energy Outlook.
Ilgen, A. G.; Cygan, R. T.
2015-12-07
During the Frio-I Brine Pilot CO_{2} injection experiment in 2004, distinct geochemical changes in response to the injection of 1600 tons of CO_{2} were recorded in samples collected from the monitoring well. Previous geochemical modeling studies have considered dissolution of calcite and iron oxyhydroxides, or release of adsorbed iron, as the most likely sources of the increased ion concentrations. In this modeling study we explore possible alternative sources of the increasing calcium and iron, based on data from the detailed petrographic characterization of the Upper Frio Formation "C". In particular, we evaluate whether dissolution of pyrite and oligoclase (anorthite component) can account for the observed geochemical changes. Due to kinetic limitations, dissolution of pyrite and anorthite cannot account for the increased iron and calcium concentrations on the time scale of the field test (10 days). However, dissolution of these minerals contributes to carbonate and clay mineral precipitation on longer time scales (1000 years). The one-dimensional reactive transport model predicts that carbonate minerals (dolomite and ankerite) as well as clay minerals (kaolinite, nontronite, and montmorillonite) will precipitate in the Frio Formation "C" sandstone as the system progresses towards chemical equilibrium over a 1000-year period. Cumulative uncertainties associated with using different thermodynamic databases, activity correction models (Pitzer vs. B-dot), and extrapolating to reservoir temperature are manifested in differences in the predicted mineral phases. Nevertheless, the models are consistent with regard to the total volume of mineral precipitation and the porosity values, which are predicted to within 0.002%.
Yan, Huiping; Qian, Yun; Zhao, Chun; Wang, Hailong; Wang, Minghuai; Yang, Ben; Liu, Xiaohong; Fu, Qiang
2015-09-16
In this study, we adopt a parametric sensitivity analysis framework that integrates the quasi-Monte Carlo parameter sampling approach and a surrogate model to examine aerosol effects on the East Asian Monsoon climate simulated in the Community Atmosphere Model (CAM5). A total of 256 CAM5 simulations are conducted to quantify the model responses to the uncertain parameters associated with cloud microphysics parameterizations and aerosol (e.g., sulfate, black carbon (BC), and dust) emission factors, and to their interactions. Results show that the interaction terms among parameters are important for quantifying the sensitivity of fields of interest, especially precipitation, to the parameters. The relative importance of cloud-microphysics parameters and emission factors depends on the evaluation metrics or model fields of interest, and the presence of uncertainty in cloud microphysics imposes an additional challenge in quantifying the impact of aerosols on cloud and climate. Due to their different optical and microphysical properties and spatial distributions, sulfate, BC, and dust aerosols have very different impacts on the East Asian Monsoon through aerosol-cloud-radiation interactions. The climatic effects of aerosols do not always respond monotonically to changes in emission factors. The spatial patterns of both the sign and the magnitude of aerosol-induced changes in radiative fluxes, cloud, and precipitation can differ, depending on the aerosol type, when parameters are sampled in different ranges of values. We also identify the cloud microphysical parameters that have the most significant impact on the climatic effects induced by sulfate, BC, and dust, respectively, in East Asia.
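A 256-member low-discrepancy design like the one described can be drawn with a quasi-Monte Carlo (Sobol') sequence and scaled to physical ranges. The parameter names and ranges below are hypothetical placeholders, not the actual CAM5 parameters or their uncertainty bounds.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical ranges for three uncertain parameters (illustrative only)
lows = np.array([0.1, 1e-4, 0.5])
highs = np.array([10.0, 1e-2, 2.0])

sampler = qmc.Sobol(d=3, scramble=True, seed=42)
unit = sampler.random_base2(m=8)       # 2**8 = 256 points in the unit cube
design = qmc.scale(unit, lows, highs)  # map to the physical parameter ranges
```

Each row of `design` would then define one perturbed-parameter model run; a surrogate (e.g., a regression or emulator) fitted to the 256 outputs allows the sensitivity and interaction terms to be estimated cheaply.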
Park, Sungsu
2014-12-12
The main goal of this project is to systematically quantify the major uncertainties in aerosol indirect effects due to the treatment of the moist turbulent processes that drive aerosol activation, cloud macrophysics, and microphysics in response to anthropogenic aerosol perturbations, using CAM5/CESM1. To achieve this goal, the P.I. hired a postdoctoral research scientist (Dr. Anna Fitch), who started work on November 1, 2012. The first task the postdoc and the P.I. undertook was to quantify the role of subgrid vertical velocity variance in the activation and nucleation of cloud liquid droplets and ice crystals, and its impact on the aerosol indirect effect in CAM5. First, we analyzed various LES cases (from dry stable to cloud-topped PBL) to check whether the isotropic turbulence assumption used in CAM5 is valid. It turned out that this assumption is not universally valid. Consequently, from the analysis of the LES, we derived an empirical formulation relaxing the isotropic turbulence assumption used for CAM5 aerosol activation and ice nucleation, implemented it in CAM5/CESM1, tested it in single-column and global simulation modes, and examined how it changed aerosol indirect effects. These results were reported in the poster session of the 18th Annual CESM Workshop held in Breckenridge, CO, June 17-20, 2013. Because the empirical formulation from the first task was obtained from a limited number of LES simulations, its general applicability was questionable. The second task was therefore to derive a more fundamental analytical formulation relating vertical velocity variance to TKE, starting from basic physical principles.
This was a challenging subject, but if done successfully it could be implemented directly in CAM5 as a practical parameterization and contribute substantially to achieving the project goal. Through intensive research over about one year, we found an appropriate mathematical formulation and worked to implement it in the CAM5 PBL and activation routines as practical parameterized numerical code. During this work, however, the postdoc applied for and accepted a position in Sweden, and left NCAR in August 2014. In Sweden, Dr. Anna Fitch is still working on this subject part time, and plans to finalize the research and write the paper in the near future.
Sahoo, N; Zhu, X; Zhang, X; Poenisch, F; Li, H; Wu, R; Lii, M; Umfleet, W; Gillin, M; Mahajan, A; Grosshans, D
2014-06-01
Purpose: To quantify the impact of range and setup uncertainties on various dosimetric indices that are used to assess normal tissue toxicities in patients receiving passive scattering proton beam therapy (PSPBT). Methods: A robust analysis of sample treatment plans of six brain cancer patients treated with PSPBT at our facility, for whom the maximum brain stem dose exceeded 5800 cGyE, was performed. The DVH of each plan was calculated in the Eclipse treatment planning system (TPS) version 11, applying ±3.5% range uncertainty and ±3 mm shifts of the isocenter in the x, y, and z directions to account for setup uncertainties. Worst-case dose indices for the brain stem and whole brain were compared to their values in the nominal plan to determine the average change in their values. For the brain stem, the maximum dose to 1 cc of volume, the dose to 10%, 50%, and 90% of the volume (D10, D50, D90), and the volume receiving 6000, 5400, 5000, 4500, and 4000 cGyE (V60, V54, V50, V45, V40) were evaluated. For the whole brain, the maximum dose to 1 cc of volume and the volume receiving 5400, 5000, 4500, 4000, and 3000 cGyE (V54, V50, V45, V40, V30) were assessed. Results: The average changes in the values of these indices in the worst-case scenarios relative to the nominal plan were as follows. Brain stem: maximum dose to 1 cc of volume: 1.1%, D10: 1.4%, D50: 8.0%, D90: 73.3%, V60: 116.9%, V54: 27.7%, V50: 21.2%, V45: 16.2%, V40: 13.6%. Whole brain: maximum dose to 1 cc of volume: 0.3%, V54: 11.4%, V50: 13.0%, V45: 13.6%, V40: 14.1%, V30: 13.5%. Conclusion: Modest to large changes in the dosimetric indices for the brain stem and whole brain relative to the nominal plan, due to range and setup uncertainties, were observed. Such potential changes should be taken into account when using any dosimetric parameters for outcome evaluation of patients receiving proton therapy.
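The DVH indices used in such analyses (VX, DX) reduce to simple threshold and percentile operations on the voxel dose array; a minimal sketch with illustrative toy dose values:

```python
import numpy as np

def V(dose, level):
    """Fractional volume receiving at least `level` (same units as dose)."""
    return float(np.mean(dose >= level))

def D(dose, pct):
    """Minimum dose received by the hottest `pct` percent of the volume (DVH D_pct)."""
    return float(np.percentile(dose, 100.0 - pct))

dose = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # toy voxel doses
v30 = V(dose, 30.0)  # fraction of voxels at or above 30
d50 = D(dose, 50.0)  # median (D50) dose
```

The worst-case robustness analysis then recomputes these indices for each perturbed plan (range and isocenter shifts) and reports the change relative to the nominal plan.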
Leray, O.; Hudelot, J. P.; Antony, M.; Doederlein, C.; Santamarina, A.; Bernard, D.; Vaglio-Gaudard, C.
2011-07-01
The new European material testing Jules Horowitz Reactor (JHR), currently under construction at the Cadarache center (CEA, France), will use LEU (20% enrichment in {sup 235}U) fuels (U{sub 3}Si{sub 2} for the start-up and UMoAl in the future), which are quite different from the industrial oxide fuel for which an extensive neutronics qualification database has been established. The HORUS3D/N neutronics calculation scheme, used for the design and safety studies of the JHR, is being developed within the framework of a rigorous verification-validation-qualification methodology. In this framework, the experimental VALMONT (Validation of Aluminium Molybdenum uranium fuel for Neutronics) program has been performed in the MINERVE facility of CEA Cadarache (France), in order to qualify the capability of HORUS3D/N to accurately calculate the reactivity of the JHR reactor. The MINERVE facility, using the oscillation technique, provides accurate measurements of the reactivity effects of samples. The VALMONT program includes oscillations of samples of UAl{sub x}/Al and UMo/Al with enrichments ranging from 0.2% to 20% and uranium densities from 2.2 to 8 g/cm{sup 3}. The geometry of the samples and the pitch of the experimental lattice ensure maximum representativeness of the neutron spectrum expected for the JHR. By comparing the effect of a sample with that of a known fuel specimen, the reactivity effect can be measured in absolute terms and compared to computational results. Special attention was paid to the rigorous determination and reduction of the experimental uncertainties. The computational analysis of the VALMONT results was performed with the French deterministic code APOLLO2. A comparison of the impact of the different calculation methods, data libraries, and energy meshes that were tested is presented.
The interpretation of the VALMONT experimental program allowed the qualification of the JHR fuel UMoAl8 (with an enrichment of 19.75% {sup 235}U) with the MINERVE-dedicated interpretation tool PIMS. The study of energy meshes and evaluations identified the JEFF3.1.1/SHEM scheme as leading to a better calculation of the reactivity effect of the VALMONT samples. Then, in order to quantify the impact of the uncertainties linked to the basic nuclear data, their propagation from the cross-section measurements to the final computational result was analysed rigorously using a nuclear data re-estimation method based on Gauss-Newton iterations. This study concludes that the prior uncertainties due to nuclear data (uranium, aluminium, beryllium, and water) on the reactivity at the Beginning Of Cycle (BOC) for the JHR core reach 1217 pcm at 2{sigma}. After re-estimation, the dominant uncertainty on the JHR reactivity is due to aluminium. (authors)
Keeling, V; Jin, H; Hossain, S; Ahmad, S; Ali, I
2014-06-15
Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, or 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the IPLAN treatment planning system, where optimized plans were generated in the stereotactic reference frame based on the localizer. The patients were set up initially with the infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray Correction) were registered to their corresponding digitally reconstructed radiographs, based on bony anatomy matching, to calculate 6D translational and rotational (lateral, longitudinal, vertical, pitch, roll, yaw) shifts. XC combines the systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray Verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D ExacTrac system and to evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (-15 to 4 mm; -2.5 to 2.5 degrees), obtained from the mean of XC for each patient. Setup uncertainty in IR positioning (0.97, 2.47, 1.62 mm; 0.65, 0.84, 0.96 degrees) was extracted from the standard deviation of XC. Combined systematic errors of the frame and localizer (0.32, -0.42, -1.21 mm; -0.27, 0.34, 0.26 degrees) were extracted from the mean of means of the XC distributions.
Final patient setup uncertainty was obtained from the standard deviations of XV (0.57, 0.77, 0.67 mm; 0.39, 0.35, 0.30 degrees). Conclusion: Statistical analysis was used to calculate cumulative and individual systematic errors for the different hardware and software components of the 6D ExacTrac system. Patients were treated with cumulative errors (<1 mm, <1 degree) under XV image guidance.
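The decomposition used in such setup-accuracy studies, per-patient means for individual systematic errors and per-patient standard deviations for random errors, with the mean of means and the spread of means summarizing the population, can be sketched as follows. The shift values below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical lateral shifts (mm) from repeated X-ray corrections, per patient
shifts = {
    "pt1": np.array([1.2, 0.8, 1.5, 1.1]),
    "pt2": np.array([-0.4, 0.1, -0.2]),
    "pt3": np.array([0.3, 0.6, 0.2, 0.5, 0.4]),
}

means = np.array([v.mean() for v in shifts.values()])      # individual systematic
stds = np.array([v.std(ddof=1) for v in shifts.values()])  # individual random

M = means.mean()                    # group systematic error (mean of means)
Sigma = means.std(ddof=1)           # population spread of systematic errors
sigma = np.sqrt((stds**2).mean())   # pooled random error
```

The same bookkeeping, applied per axis (translational and rotational), yields the component-wise numbers reported in the abstract.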
Langner, R.; Hendron, B.; Bonnema, E.
2014-08-01
The small buildings and small portfolios (SBSP) sector faces a number of barriers that inhibit SBSP owners from adopting energy efficiency solutions. This pilot project focused on overcoming two of the largest barriers to financing energy efficiency in small buildings: disproportionately high transaction costs and unknown or unacceptable risk. Solutions to these barriers can often be at odds, because inexpensive turnkey solutions are often not sufficiently tailored to the unique circumstances of each building, reducing confidence that the expected energy savings will be achieved. To address these barriers, NREL worked with two innovative, forward-thinking lead partners, Michigan Saves and Energi, to develop technical solutions that provide a quick and easy process to encourage energy efficiency investments while managing risk. The pilot project was broken into two stages: the first stage focused on reducing transaction costs, and the second stage focused on reducing performance risk. In the first stage, NREL worked with the non-profit organization Michigan Saves to analyze the effects of 8 energy efficiency measures (EEMs) on 81 different baseline small office building models in Holland, Michigan (climate zone 5A). The results of this analysis (totaling over 30,000 cases) are summarized in a simple spreadsheet tool that enables users to easily sort through the results and find appropriate small office EEM packages that meet a particular energy savings threshold and are likely to be cost-effective.
Daley, Thomas M.; Hendrickson, Joel; Queen, John H.
2014-12-31
A time-lapse Offset Vertical Seismic Profile (OVSP) data set was acquired as part of a subsurface monitoring program for geologic sequestration of CO_{2}. The storage site at Cranfield, near Natchez, Mississippi, is part of a detailed area study (DAS) site for geologic carbon sequestration operated by the U.S. Dept. of Energy's Southeast Regional Carbon Sequestration Partnership (SECARB). The DAS site includes three boreholes, an injection well and two monitoring wells. The project team selected the DAS site to examine CO_{2} sequestration multiphase fluid flow and pressure at the interwell scale in a brine reservoir. The time-lapse (TL) OVSP was part of an integrated monitoring program that included well logs, crosswell seismic, electrical resistance tomography and 4D surface seismic. The goals of the OVSP were to detect the CO_{2}-induced change in seismic response, give information about the spatial distribution of CO_{2} near the injection well, and help tie the high-resolution borehole monitoring to the 4D surface data. The VSP data were acquired in well CFU 31-F1, which is the ~3200 m deep CO_{2} injection well at the DAS site. A preinjection survey was recorded in late 2009, with injection beginning in December 2009, and a post-injection survey was conducted in November 2010 following injection of about 250 kT of CO_{2}. The sensor array for both surveys was a 50-level, 3-component Sercel MaxiWave system with 15 m (49 ft) spacing between levels. The source for both surveys was an accelerated weight drop, with different source trucks used for the two surveys. Consistent time-lapse processing was applied to both data sets. Time-lapse processing generated difference corridor stacks to investigate CO_{2}-induced reflection amplitude changes from each source point. Corridor stacks were used for amplitude analysis to maximize the signal-to-noise ratio (S/N) for each shot point.
Spatial variation in reflectivity (used to map the plume) was similar in magnitude to the corridor stacks but, due to relatively lower S/N, the results were less consistent and more sensitive to processing and therefore are not presented. We examined the overall time-lapse repeatability of the OVSP data using three methods, the NRMS and Predictability (Pred) measures of Kragh and Christie (2002) and the signal-to-distortion ratio (SDR) method of Cantillo (2011). Because time-lapse noise was comparable to the observed change, multiple methods were used to analyze data reliability. The reflections from the top and base reservoir were identified on the corridor stacks by correlation with a synthetic response generated from the well logs. A consistent change in the corridor stack amplitudes from pre- to post-CO_{2} injection was found for both the top and base reservoir reflections on all ten shot locations analyzed. In addition to the well-log synthetic response, a finite-difference elastic wave propagation model was built based on rock/fluid properties obtained from well logs, with CO_{2} induced changes guided by time-lapse crosswell seismic tomography (Ajo-Franklin, et al., 2013) acquired at the DAS site. Time-lapse seismic tomography indicated that two reservoir zones were affected by the flood. The modeling established that interpretation of the VSP trough and peak event amplitudes as reflectivity from the top and bottom of reservoir is appropriate even with possible tuning effects. Importantly, this top/base change gives confidence in an interpretation that these changes arise from within the reservoir, not from bounding lithology. The modeled time-lapse change and the observed field data change from 10 shotpoints are in agreement for both magnitude and polarity of amplitude change for top and base of reservoir. Therefore, we conclude the stored CO_{2} has been successfully detected and, furthermore, the observed seismic reflection change can
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Daley, Thomas M.; Hendrickson, Joel; Queen, John H.
2014-12-31
A time-lapse Offset Vertical Seismic Profile (OVSP) data set was acquired as part of a subsurface monitoring program for geologic sequestration of CO2. The storage site at Cranfield, near Natchez, Mississippi, is part of a detailed area study (DAS) site for geologic carbon sequestration operated by the U.S. Dept. of Energy’s Southeast Regional Carbon Sequestration Partnership (SECARB). The DAS site includes three boreholes: an injection well and two monitoring wells. The project team selected the DAS site to examine CO2 sequestration, multiphase fluid flow, and pressure at the interwell scale in a brine reservoir. The time-lapse (TL) OVSP was part of an integrated monitoring program that included well logs, crosswell seismic, electrical resistance tomography, and 4D surface seismic. The goals of the OVSP were to detect the CO2-induced change in seismic response, to give information about the spatial distribution of CO2 near the injection well, and to help tie the high-resolution borehole monitoring to the 4D surface data. The VSP data were acquired in well CFU 31-F1, the ~3200 m deep CO2 injection well at the DAS site. A preinjection survey was recorded in late 2009, with injection beginning in December 2009; a post-injection survey was conducted in November 2010, following injection of about 250 kT of CO2. The sensor array for both surveys was a 50-level, 3-component Sercel MaxiWave system with 15 m (49 ft) spacing between levels. The source for both surveys was an accelerated weight drop, with different source trucks used for the two surveys. Consistent time-lapse processing was applied to both data sets, generating difference corridor stacks to investigate CO2-induced reflection amplitude changes from each source point. Corridor stacks were used for amplitude analysis to maximize the signal-to-noise ratio (S/N) for each shot point.
Spatial variation in reflectivity (used to ‘map’ the plume) was similar in magnitude to the corridor stacks, but, due to relatively lower S/N, the results were less consistent and more sensitive to processing and therefore are not presented. We examined the overall time-lapse repeatability of the OVSP data using three methods: the NRMS and predictability (PRED) measures of Kragh and Christie (2002) and the signal-to-distortion ratio (SDR) method of Cantillo (2011). Because time-lapse noise was comparable to the observed change, multiple methods were used to assess data reliability. The reflections from the top and base of the reservoir were identified on the corridor stacks by correlation with a synthetic response generated from the well logs. A consistent change in corridor stack amplitudes from pre- to post-CO2 injection was found for both the top and base reservoir reflections at all ten shot locations analyzed. In addition to the well-log synthetic response, a finite-difference elastic wave propagation model was built from rock/fluid properties obtained from well logs, with CO2-induced changes guided by time-lapse crosswell seismic tomography (Ajo-Franklin et al., 2013) acquired at the DAS site. The time-lapse tomography indicated that two reservoir zones were affected by the flood. The modeling established that interpreting the VSP trough and peak event amplitudes as reflectivity from the top and base of the reservoir is appropriate even with possible tuning effects. Importantly, this matched top/base change supports the interpretation that the changes arise from within the reservoir, not from the bounding lithology. The modeled time-lapse change and the observed field-data change from the 10 shotpoints agree in both magnitude and polarity of amplitude change for the top and base of the reservoir.
Therefore, we conclude the stored CO2 has been successfully detected and, furthermore, the observed seismic reflection change can be applied to Cranfield’s 4D surface seismic for spatially delineating the CO2/brine interface.
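The repeatability measures cited in the abstract (Kragh and Christie, 2002) have standard definitions that can be sketched compactly. Below is a minimal Python/NumPy illustration, assuming single-trace inputs of equal length within a common analysis window; the function names are ours, not the paper's, and this is not the authors' processing code.

```python
import numpy as np

def nrms(base, monitor):
    """Normalized RMS difference (Kragh & Christie, 2002), in percent:
    200 * RMS(monitor - base) / (RMS(base) + RMS(monitor)).
    0% for identical traces; ~141% for uncorrelated traces of equal power;
    200% for traces of opposite polarity."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 200.0 * rms(monitor - base) / (rms(base) + rms(monitor))

def predictability(base, monitor):
    """Predictability (PRED), in percent: sum of the squared cross-correlation
    divided by the sum of the product of the two autocorrelations.
    100% for identical traces; near 0% for uncorrelated traces."""
    xc = np.correlate(base, monitor, mode="full")
    ac_b = np.correlate(base, base, mode="full")
    ac_m = np.correlate(monitor, monitor, mode="full")
    return 100.0 * np.sum(xc * xc) / np.sum(ac_b * ac_m)
```

In practice these metrics are evaluated trace-by-trace in a sliding time window; the single-window form above is enough to show why NRMS is sensitive to both amplitude and time-shift differences while PRED is insensitive to static shifts and scaling.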
Microsoft Word - Price Uncertainty Supplement.doc
Annual Energy Outlook [U.S. Energy Information Administration (EIA)]
Market participants follow the evolution of storage economics via inter-month spreads between futures contracts. That is, the value of buying a prompt-delivered futures contract ...
Uncertainty in verification and validation: recent perspective...
Office of Scientific and Technical Information (OSTI)
Country of Publication: United States Language: English Subject: 99 GENERAL AND MISCELLANEOUS/MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; VALIDATION; VERIFICATION; COMPUTER ...
PMU Uncertainty Quantification in Voltage Stability Analysis...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Year of Publication 2015 Authors Chen, C, Wang, J, Li, Z, Sun, H, Wang, Z Journal IEEE Transactions on Power Systems Volume 30 Start Page 2196 Issue 4 Pagination 2 Date...
Microsoft Word - Price Uncertainty Supplement.doc
Gasoline and Diesel Fuel Update (EIA)
monthly supplement to the EIA Short-Term Energy Outlook. (http://www.eia.doe.gov/emeu... This statistic is calculated using a cumulative normal density (CND) function. It can be ...
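The snippet above notes that EIA's price-uncertainty statistic is computed with a cumulative normal density (CND) function. As a rough illustration only (the exact EIA formula is not reproduced here), the probability that a lognormally distributed futures price finishes above a threshold can be obtained from a normal CDF of a standardized log-price; `prob_above` and its parameter names are hypothetical.

```python
from math import erf, log, sqrt

def prob_above(forward, threshold, implied_vol, t_years):
    """Sketch: P(price > threshold) for a lognormal price whose mean equals
    `forward`, with annualized volatility `implied_vol` over `t_years`.
    The CND is evaluated via the error function."""
    d = (log(forward / threshold) - 0.5 * implied_vol ** 2 * t_years) / (
        implied_vol * sqrt(t_years)
    )
    # Cumulative normal density of d
    return 0.5 * (1.0 + erf(d / sqrt(2.0)))
```

For example, with the forward price equal to the threshold the probability sits slightly below 50% (the lognormal median lies below the mean), and it rises as the forward moves above the threshold.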
Approaches for Uncertainty Quantification and Sensitivity Analysis |
Office of Environmental Management (EM)
| Department of Energy. Applying Risk Communication to the Transportation of Radioactive Materials. Participants should expect to gain the following skills: how to recognize how stakeholders prefer to receive information; how to integrate risk communication principles into individual communication; how to recognize the importance of earning trust and credibility; how to identify stakeholders; how to answer questions
Characterizing Uncertainties in Ice Particle Size Distributions
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
and image(s), see ARM Research Highlights http://www.arm.gov/science/highlights Research Highlight In many parameterization schemes for numerical models or remote sensing...