National Library of Energy BETA

Sample records for methodologies model structures

  1. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect (OSTI)

    Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
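The combination of parameter uncertainty and model-form uncertainty described above can be illustrated with a minimal Monte Carlo sketch, here using Bayesian-model-averaging-style weights over two hypothetical model forms (the functions, weights, and distributions below are invented for illustration, not taken from the report):

```python
import random
import statistics

# Two hypothetical candidate model forms for the same physical response.
def model_a(x):
    return 2.0 * x                    # e.g., a linear constitutive assumption

def model_b(x):
    return 2.0 * x + 0.1 * x ** 2     # e.g., a weakly nonlinear alternative

# Posterior model weights, as might come from Bayesian inference (assumed values).
weights = {"a": 0.7, "b": 0.3}

random.seed(1)
samples = []
for _ in range(10_000):
    x = random.gauss(1.0, 0.1)        # parameter uncertainty
    form = random.choices(["a", "b"], weights=[weights["a"], weights["b"]])[0]
    y = model_a(x) if form == "a" else model_b(x)   # model-form uncertainty
    samples.append(y)

# Total prediction uncertainty reflects both sources.
mean_y = statistics.mean(samples)
std_y = statistics.stdev(samples)
```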

  2. SASSI Methodology-Based Sensitivity Studies for Deeply Embedded Structures, Such As Small Modular Reactors (SMRs)

    Office of Environmental Management (EM)

    Dr. Dan M. Ghiocel, Ghiocel Predictive Technologies Inc. http://www.ghiocel-tech.com 2014 DOE

  3. Modeling of Diesel Exhaust Systems: A methodology to better simulate soot reactivity

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Discussed development of a methodology for creating accurate soot models for soot samples from various origins with minimal characterization

  4. Proposed Methodology for LEED Baseline Refrigeration Modeling (Presentation)

    SciTech Connect (OSTI)

    Deru, M.

    2011-02-01

    This PowerPoint presentation summarizes a proposed methodology for LEED baseline refrigeration modeling. The presentation discusses why refrigeration modeling is important, the inputs of energy models, resources, reference building model cases, baseline model highlights, example savings calculations and results.

  5. Tornado missile simulation and design methodology. Volume 2: model verification and data base updates. Final report

    SciTech Connect (OSTI)

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments.
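The Monte Carlo structure of such a missile-risk simulation can be sketched in a few lines. Everything here (the wind-speed distribution, the toy hit model, the target area) is an invented placeholder; the actual TORMIS models are far more detailed:

```python
import random

random.seed(42)

def missile_hits_target(wind_speed, target_area_m2=100.0):
    """Toy hit model: hit probability grows with wind speed (illustrative only)."""
    p_hit = min(1.0, (target_area_m2 / 1.0e4) * (wind_speed / 100.0))
    return random.random() < p_hit

n_tornadoes = 100_000
hits = 0
for _ in range(n_tornadoes):
    # Sampled near-ground wind speed in m/s; the Weibull scale/shape are assumed.
    wind = random.weibullvariate(70.0, 2.0)
    if missile_hits_target(wind):
        hits += 1

# Annualized risk would further multiply by the tornado strike frequency.
p_impact = hits / n_tornadoes
```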

  6. Prototype integration of the joint munitions assessment and planning model with the OSD threat methodology

    SciTech Connect (OSTI)

    Lynn, R.Y.S.; Bolmarcich, J.J.

    1994-06-01

    The purpose of this Memorandum is to propose a prototype procedure which the Office of Munitions might employ to exercise, in a supportive joint fashion, two of its High Level Conventional Munitions Models, namely, the OSD Threat Methodology and the Joint Munitions Assessment and Planning (JMAP) model. The joint application of JMAP and the OSD Threat Methodology provides a tool to optimize munitions stockpiles. The remainder of this Memorandum comprises five parts. The first is a description of the structure and use of the OSD Threat Methodology. The second is a description of JMAP and its use. The third discusses the concept of the joint application of JMAP and OSD Threat Methodology. The fourth displays sample output of the joint application. The fifth is a summary and epilogue. Finally, three appendices contain details of the formulation, data, and computer code.

  7. Application of Random Vibration Theory Methodology for Seismic Soil-Structure Interaction Analysis

    Broader source: Energy.gov [DOE]

    Application of Random Vibration Theory Methodology for Seismic Soil-Structure Interaction Analysis Farhang Ostadan Nan Deng Lisa Anderson Bechtel National, Inc. USDOE NPH Workshop October 2014

  8. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    SciTech Connect (OSTI)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF, the fraction of respirable material that escapes a facility into the outside environment) implicit in the scenario. The LPF evaluation analyzes the basis and applicability of the standard assumed multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and upon other pathways from the building, such as doorways, both open and closed. The study shows how the multiple LPFs within the building can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
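The combinatory evaluation of multiple LPFs that the study describes amounts to multiplying pathway-specific factors rather than assuming a blanket 0.5 × 0.5. A minimal sketch, with invented pathway values:

```python
# The standard blanket assumption: 0.5 of the material leaves each of two areas.
assumed_lpf = 0.5 * 0.5              # = 0.25

# A combinatory evaluation replaces the blanket 0.5s with pathway-specific
# factors, e.g. derived from MELCOR results (the numbers below are hypothetical):
pathway_factors = [0.35, 0.6, 0.9]   # room -> corridor -> filtered exhaust
total_lpf = 1.0
for f in pathway_factors:
    total_lpf *= f

# Respirable source term scales the (lumped) airborne material by the total LPF.
material_at_risk = 1.0e-3            # lumped MAR x ARF x RF, illustrative
source_term = material_at_risk * total_lpf
```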

  9. Modeling of Diesel Exhaust Systems: A methodology to better simulate soot reactivity

    Broader source: Energy.gov [DOE]

    Discussed development of a methodology for creating accurate soot models for soot samples from various origins with minimal characterization

  10. SPAR Model Structural Efficiencies

    SciTech Connect (OSTI)

    John Schroeder; Dan Henry

    2013-04-01

    The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues judged to have the most significant influence on the baseline core damage frequency of the NRC’s Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were:
    • Development of a standard methodology and implementation of support system initiating events
    • Treatment of loss of offsite power
    • Development of a standard approach for emergency core cooling following containment failure
    Some of the related issues were not fully resolved, and this project continues the effort to resolve the outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other, higher-priority initiatives to support. This project has therefore addressed SPAR modeling issues:
    • SPAR model transparency
    • Common cause failure modeling deficiencies and approaches
    • AC and DC power modeling deficiencies and approaches
    • Instrumentation and control system modeling deficiencies and approaches
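One of the listed issues, common cause failure (CCF) modeling, is often handled with simple parametric models. A sketch of the standard beta-factor model for a two-train system (parameter values are illustrative, not taken from the SPAR models):

```python
import math

# Beta-factor common cause failure model for a two-train system.
lam_total = 1.0e-5    # total failure rate of one train, per hour (assumed)
beta = 0.05           # assumed fraction of failures that are common cause

lam_ccf = beta * lam_total              # rate of simultaneous failure of both trains
lam_indep = (1.0 - beta) * lam_total    # independent failure rate per train

t = 24.0                                # mission time, hours
p_one = 1.0 - math.exp(-lam_indep * t)  # one train fails independently
p_both = p_one ** 2 + (1.0 - math.exp(-lam_ccf * t))

# The common cause term dominates: two "independent" failures are far less
# likely than one shared-cause failure of the group.
```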

  11. Mathematical Modeling of Microbial Community Dynamics: A Methodological Review

    SciTech Connect (OSTI)

    Song, Hyun-Seob; Cannon, William R.; Beliaev, Alex S.; Konopka, Allan

    2014-10-17

    Microorganisms in nature form diverse communities that dynamically change in structure and function in response to environmental variations. As a complex adaptive system, microbial communities show higher-order properties that are not present in individual microbes, but arise from their interactions. Predictive mathematical models not only help to understand the underlying principles of the dynamics and emergent properties of natural and synthetic microbial communities, but also provide key knowledge required for engineering them. In this article, we provide an overview of mathematical tools that include not only current mainstream approaches, but also less traditional approaches that, in our opinion, can be potentially useful. We discuss a broad range of methods ranging from low-resolution supra-organismal to high-resolution individual-based modeling. Particularly, we highlight the integrative approaches that synergistically combine disparate methods. In conclusion, we provide our outlook for the key aspects that should be further developed to move microbial community modeling towards greater predictive power.
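Among the lower-resolution approaches such a review surveys, generalized Lotka-Volterra models are a common starting point. A minimal sketch for a two-member community, with invented growth rates and interaction coefficients:

```python
# Generalized Lotka-Volterra dynamics for a two-member community,
# integrated with forward Euler. All parameters are invented.
growth = [0.8, 0.5]              # intrinsic growth rates
interact = [[-1.0, -0.5],        # self-limitation (diagonal) and
            [-0.6, -1.0]]        # cross-inhibition (off-diagonal)

x = [0.1, 0.1]                   # initial abundances
dt = 0.01
for _ in range(50_000):          # integrate to t = 500
    rates = [growth[i] + sum(interact[i][j] * x[j] for j in range(2))
             for i in range(2)]
    x = [x[i] + x[i] * rates[i] * dt for i in range(2)]

# The community settles to a coexistence equilibrium (x1 ~ 0.786, x2 ~ 0.029),
# the solution of growth + interact @ x = 0.
```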

  12. Natural gas production problems : solutions, methodologies, and modeling.

    SciTech Connect (OSTI)

    Rautman, Christopher Arthur; Herrin, James M.; Cooper, Scott Patrick; Basinski, Paul M.; Olsson, William Arthur; Arnold, Bill Walter; Broadhead, Ronald F.; Knight, Connie D.; Keefe, Russell G.; McKinney, Curt; Holm, Gus; Holland, John F.; Larson, Rich; Engler, Thomas W.; Lorenz, John Clay

    2004-10-01

    Natural gas is a clean fuel that will be the most important domestic energy resource for the first half of the 21st century. Ensuring a stable supply is essential for our national energy security. The research we have undertaken will maximize the extractable volume of gas while minimizing the environmental impact of surface disturbances associated with drilling and production. This report describes a methodology for comprehensive evaluation and modeling of the total gas system within a basin, focusing on problematic horizontal fluid flow variability. This has been accomplished through extensive use of geophysical, core (rock sample), and outcrop data to interpret and predict directional flow and production trends. Side benefits include reduced environmental impact of drilling due to the reduced number of wells required for resource extraction. These results have been accomplished through a cooperative and integrated systems approach involving industry, government, academia, and a multi-organizational team within Sandia National Laboratories. Industry has provided essential in-kind support to this project in the form of extensive core data, production data, maps, seismic data, production analyses, engineering studies, plus equipment and staff for obtaining geophysical data. This approach provides innovative ideas and technologies to bring new resources to market and to reduce the overall environmental impact of drilling. More importantly, the products of this research are not location specific but can be extended to other areas of gas production throughout the Rocky Mountain area. Thus this project is designed to solve problems associated with natural gas production at developing sites, or at old sites under redevelopment.

  13. HIERARCHICAL METHODOLOGY FOR MODELING HYDROGEN STORAGE SYSTEMS PART II: DETAILED MODELS

    SciTech Connect (OSTI)

    Hardy, B.; Anton, D. L.

    2008-12-22

    There is significant interest in hydrogen storage systems that employ a medium which adsorbs, absorbs, or reacts with hydrogen in a nearly reversible manner. In any media-based storage system, the rate of hydrogen uptake and the system capacity are governed by a number of complex, coupled physical processes. To design and evaluate such storage systems, a comprehensive methodology was developed, consisting of a hierarchical sequence of models that range from scoping calculations to numerical models that couple reaction kinetics with heat and mass transfer for both the hydrogen charging and discharging phases. The scoping models were presented in Part I [1] of this two-part series of papers. This paper describes a detailed numerical model that integrates the phenomena occurring when hydrogen is charged and discharged. A specific application of the methodology is made to a system using NaAlH{sub 4} as the storage medium.
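The coupling of reaction kinetics to temperature that governs uptake can be illustrated with a first-order Arrhenius model (a common simplification; the parameters below are invented, not fitted to NaAlH{sub 4}):

```python
import math

# First-order uptake kinetics with an Arrhenius rate constant.
R = 8.314        # gas constant, J/(mol K)
Ea = 80.0e3      # activation energy, J/mol (assumed)
k0 = 1.0e8       # pre-exponential factor, 1/s (assumed)
w_max = 0.04     # saturation hydrogen mass fraction (assumed)

def charge(T, t_end, dt=0.1):
    """Integrate dw/dt = k(T) * (w_max - w) with forward Euler."""
    k = k0 * math.exp(-Ea / (R * T))
    w = 0.0
    for _ in range(int(t_end / dt)):
        w += k * (w_max - w) * dt
    return w

# Uptake after 10 minutes is strongly temperature dependent, which is why the
# detailed models couple kinetics to heat transfer.
w_cold = charge(T=350.0, t_end=600.0)
w_hot = charge(T=400.0, t_end=600.0)
```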

  14. Electronic Structure of Ligated CdSe Clusters: Dependence on DFT Methodology

    SciTech Connect (OSTI)

    Albert, VV; Ivanov, SA; Tretiak, S; Kilina, SV

    2011-07-07

    Simulations of ligated semiconductor quantum dots (QDs) and their physical properties, such as morphologies, QD-ligand interactions, electronic structures, and optical transitions, are expected to be very sensitive to computational methodology. We utilize Density Functional Theory (DFT) and systematically study how the choice of density functional, atom-localized basis set, and a solvent affects the physical properties of the Cd{sub 33}Se{sub 33} cluster ligated with a trimethyl phosphine oxide ligand. We have found that qualitative performance of all exchange-correlation (XC) functionals is relatively similar in predicting strong QD-ligand binding energy ({approx}1 eV). Additionally, all functionals predict shorter Cd-Se bond lengths on the QD surface than in its core, revealing the nature and degree of QD surface reconstruction. For proper modeling of geometries and QD-ligand interactions, however, augmentation of even a moderately sized basis set with polarization functions (e.g., LANL2DZ* and 6-31G*) is very important. A polar solvent has very significant implications for the ligand binding energy, decreasing it to 0.2-0.5 eV. However, the solvent model has a minor effect on the optoelectronic properties, resulting in persistent blue shifts up to {approx}0.3 eV of the low-energy optical transitions. For obtaining reasonable energy gaps and optical transition energies, hybrid XC functionals augmented by a long-range Hartree-Fock orbital exchange have to be applied.

  15. Structural system identification: Structural dynamics model validation

    SciTech Connect (OSTI)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  16. Methodology for the Incorporation of Passive Component Aging Modeling into the RAVEN/ RELAP-7 Environment

    SciTech Connect (OSTI)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua; Alfonsi, Andrea; Askin Guler; Tunc Aldemir

    2014-11-01

    Passive systems, structures, and components (SSCs) degrade over their operating life, and this degradation may reduce the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] does consider physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models, and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as RELAP 5 [4]. The overall methodology aims to: • Address multiple aging mechanisms involving a large number of components in a computationally feasible manner, where the sequencing of events is conditioned on the physical conditions predicted in a simulation

  17. Structural model of uramarsite

    SciTech Connect (OSTI)

    Rastsvetaeva, R. K.; Sidorenko, G. A.; Ivanova, A. G.; Chukanov, N. V.

    2008-09-15

    The structural model of uramarsite, a new mineral of the uran-mica family from the Bota-Burum deposit (South Kazakhstan), is determined using a single-crystal X-ray diffraction analysis. The parameters of the triclinic unit cell are as follows: a = 7.173(2) A, b = 7.167(5) A, c = 9.30(1) A, {alpha} = 90.13(7){sup o}, {beta} = 90.09(4){sup o}, {gamma} = 89.96(4){sup o}, and space group P1. The crystal chemical formula of uramarsite is: (UO{sub 2}){sub 2}[AsO{sub 4}][PO{sub 4},AsO{sub 4}][NH{sub 4}][H{sub 3}O] . 6H{sub 2}O (Z = 1). Uramarsite is the second ammonium-containing mineral of uranium and an arsenate analogue of uramphite. In the case of uramarsite, the lowering of the symmetry from tetragonal to triclinic, which is accompanied by a triclinic distortion of the tetragonal unit cell, is apparently caused by the ordering of the As and P atoms and the NH{sub 4}, H{sub 3}O, and H{sub 2}O groups.
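As a consistency check on reported lattice parameters, the triclinic unit-cell volume follows from the standard formula V = abc·sqrt(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ); with all angles this close to 90°, the cell is nearly orthogonal:

```python
import math

# Triclinic unit-cell volume from the lattice parameters reported for uramarsite.
a, b, c = 7.173, 7.167, 9.30              # angstroms
alpha, beta, gamma = 90.13, 90.09, 89.96  # degrees

ca, cb, cg = (math.cos(math.radians(ang)) for ang in (alpha, beta, gamma))
V = a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# With all angles within 0.15 degrees of 90, V is within a fraction of a
# percent of the orthogonal product a*b*c (~478 cubic angstroms, Z = 1).
```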

  18. Model Validation and Testing: The Methodological Foundation of ASHRAE Standard 140; Preprint

    SciTech Connect (OSTI)

    Judkoff, R.; Neymark, J.

    2006-07-01

    Ideally, whole-building energy simulation programs model all aspects of a building that influence energy use and thermal and visual comfort for the occupants. An essential component of the development of such computer simulation models is a rigorous program of validation and testing. This paper describes a methodology to evaluate the accuracy of whole-building energy simulation programs. The methodology is also used to identify and diagnose differences in simulation predictions that may be caused by algorithmic differences, modeling limitations, coding errors, or input errors. The methodology has been adopted by ANSI/ASHRAE Standard 140 (ANSI/ASHRAE 2001, 2004), Method of Test for the Evaluation of Building Energy Analysis Computer Programs. A summary of the method is included in the ASHRAE Handbook of Fundamentals (ASHRAE 2005). This paper describes the ANSI/ASHRAE Standard 140 method of test and its methodological basis. Also discussed are possible future enhancements to Standard 140 and related research recommendations.

  19. Model Validation and Testing: The Methodological Foundation of ASHRAE Standard 140

    SciTech Connect (OSTI)

    Judkoff, R.; Neymark, J.

    2006-01-01

    Ideally, whole-building energy simulation programs model all aspects of a building that influence energy use and thermal and visual comfort for the occupants. An essential component of the development of such computer simulation models is a rigorous program of validation and testing. This paper describes a methodology to evaluate the accuracy of whole-building energy simulation programs. The methodology is also used to identify and diagnose differences in simulation predictions that may be caused by algorithmic differences, modeling limitations, coding errors, or input errors. The methodology has been adopted by ANSI/ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2001a, 2004). A summary of the method is included in the 2005 ASHRAE Handbook--Fundamentals (ASHRAE 2005). This paper describes the ASHRAE Standard 140 method of test and its methodological basis. Also discussed are possible future enhancements to ASHRAE Standard 140 and related research recommendations.

  20. Fuel cycle assessment: A compendium of models, methodologies, and approaches

    SciTech Connect (OSTI)

    Not Available

    1994-07-01

    The purpose of this document is to profile analytical tools and methods which could be used in a total fuel cycle analysis. The information in this document provides a significant step towards: (1) Characterizing the stages of the fuel cycle. (2) Identifying relevant impacts which can feasibly be evaluated quantitatively or qualitatively. (3) Identifying and reviewing other activities that have been conducted to perform a fuel cycle assessment or some component thereof. (4) Reviewing the successes/deficiencies and opportunities/constraints of previous activities. (5) Identifying methods and modeling techniques/tools that are available, tested and could be used for a fuel cycle assessment.

  1. Methodology Development for Passive Component Reliability Modeling in a Multi-Physics Simulation Environment

    SciTech Connect (OSTI)

    Aldemir, Tunc; Denning, Richard; Catalyurek, Umit; Unwin, Stephen

    2015-01-23

    Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.

  2. Precarious Rock Methodology for Seismic Hazard: Physical Testing, Numerical Modeling and Coherence Studies

    SciTech Connect (OSTI)

    Anooshehpoor, Rasool; Purvance, Matthew D.; Brune, James N.; Preston, Leiph A.; Anderson, John G.; Smith, Kenneth D.

    2006-09-29

    This report covers the following projects: shake table tests of the precarious rock methodology, field tests of precarious rocks at Yucca Mountain and comparison of the results with PSHA predictions, a study of the coherence of the wave field in the ESF, and a limited survey of precarious rocks south of the proposed repository footprint. A series of shake table experiments has been carried out at the University of Nevada, Reno Large Scale Structures Laboratory. The bulk of the experiments involved scaling acceleration time histories (uniaxial forcing) from 0.1g to the point where the objects on the shake table overturned a specified number of times. The results of these experiments have been compared with numerical overturning predictions. Numerical predictions for toppling of large objects with simple contact conditions (e.g., I-beams with sharp basal edges) agree well with shake-table results. The numerical model slightly underpredicts the overturning of small rectangular blocks. It overpredicts the overturning PGA for asymmetric granite boulders with complex basal contact conditions. In general, the results confirm the approximate predictions of previous studies. Field testing of several rocks at Yucca Mountain has approximately confirmed the preliminary results from previous studies, suggesting that the PSHA predictions are too high, possibly because of the uncertainty in the mean of the attenuation relations. Study of the coherence of wavefields in the ESF has provided results that will be very important in the design of the canister distribution, in particular a preliminary estimate of the wavelengths at which the wavefields become incoherent. No evidence was found for extreme focusing by lens-like inhomogeneities. A limited survey for precarious rocks confirmed that they extend south of the repository, and one of these rocks has been field tested.
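The zeroth-order physics behind precarious-rock fragility is the quasi-static toppling criterion for a rigid block. A sketch with made-up dimensions (the dynamic rocking analysis used in such studies refines this estimate):

```python
# Quasi-static toppling threshold for a rigid block rocking about a basal edge:
# overturning starts when horizontal acceleration exceeds g * (b / h), where b
# is the lever arm from the rocking edge to the center of mass and h is the
# center-of-mass height. Dimensions below are invented for illustration.
g = 9.81          # m/s^2
b = 0.4           # m, horizontal distance from rocking edge to center of mass
h = 1.2           # m, height of the center of mass above the base

toppling_pga = g * b / h       # m/s^2
toppling_pga_g = b / h         # same threshold expressed in units of g

# A squat boulder (large b/h) survives shaking that topples a slender one,
# which is what makes long-standing precarious rocks usable as ground-motion
# constraints.
```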

  3. Snow Micro-Structure Model

    Energy Science and Technology Software Center (OSTI)

    2014-06-25

    PIKA is a MOOSE-based application for modeling micro-structure evolution of seasonal snow. The model will be useful for environmental, atmospheric, and climate scientists. Possible applications include energy balance models, ice sheet modeling, and avalanche forecasting. The model implements physics from published, peer-reviewed articles. The main purpose is to foster university and laboratory collaboration to build a larger multi-scale snow model using MOOSE. The main feature of the code is that it is implemented using the MOOSE framework, thus making features such as multiphysics coupling, adaptive mesh refinement, and parallel scalability native to the application. PIKA implements three equations: the phase-field equation for tracking the evolution of the ice-air interface within seasonal snow at the grain-scale; the heat equation for computing the temperature of both the ice and air within the snow; and the mass transport equation for monitoring the diffusion of water vapor in the pore space of the snow.
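Of the three equations PIKA implements, the heat equation is the simplest to sketch. Below is an explicit finite-difference analogue in one dimension (PIKA itself uses MOOSE's finite-element machinery; the diffusivity and boundary temperatures here are invented):

```python
# Explicit finite-difference solution of the 1-D heat equation
# dT/dt = D * d2T/dz2 for temperature within a snowpack.
D = 3.0e-7                 # effective thermal diffusivity of snow, m^2/s (assumed)
dz = 0.01                  # 1 cm grid spacing over a 20 cm snowpack
dt = 0.25 * dz**2 / D      # time step within the explicit stability limit

n = 21
T = [-10.0] * n            # initial snow temperature, deg C
T[0] = 0.0                 # warm ground at the base
T[-1] = -20.0              # cold air at the surface

r = D * dt / dz**2         # = 0.25
for _ in range(2000):      # ~46 hours of simulated time
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + r * (T[i-1] - 2.0 * T[i] + T[i+1])
    T = Tn

# The profile relaxes to the linear steady state between the two boundaries.
```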

  4. Dixie Valley Engineered Geothermal System Exploration Methodology Project, Baseline Conceptual Model Report

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Iovenitti, Joe

    FSR Part I presents (1) an assessment of the readily available public domain data and some proprietary data provided by Terra-Gen Power, LLC, (2) a re-interpretation of these data as required, (3) an exploratory geostatistical data analysis, (4) the baseline geothermal conceptual model, and (5) the EGS favorability/trust mapping. The conceptual model presented applies to both the hydrothermal system and EGS in the Dixie Valley region. FSR Part II presents (1) 278 new gravity stations; (2) enhanced gravity-magnetic modeling; (3) 42 new ambient seismic noise survey stations; (4) an integration of the new seismic noise data with a regional seismic network; (5) a new methodology and approach to interpret these data; (6) a novel method to predict rock type and temperature based on the newly interpreted data; (7) 70 new magnetotelluric (MT) stations; (8) an integrated interpretation of the enhanced MT data set; (9) the results of a 308-station soil CO2 gas survey; (10) new conductive thermal modeling in the project area; (11) new convective modeling in the Calibration Area; (12) pseudo-convective modeling in the Calibration Area; (13) enhanced data implications and qualitative geoscience correlations at three scales: (a) Regional, (b) Project, and (c) Calibration Area; (14) quantitative geostatistical exploratory data analysis; and (15) responses to nine questions posed in the proposal for this investigation. Enhanced favorability/trust maps were not generated because there was not a sufficient amount of new, fully-vetted (see below) rock type, temperature, and stress data. The enhanced seismic data did generate a new method to infer rock type and temperature. However, in the opinion of the Principal Investigator for this project, this new methodology needs to be tested and evaluated at other sites in the Basin and Range before it is used to generate the referenced maps. As in the baseline conceptual model, the enhanced findings can be applied to both the hydrothermal

  5. Dixie Valley Engineered Geothermal System Exploration Methodology Project, Baseline Conceptual Model Report

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Iovenitti, Joe

    2014-01-02

    FSR Part I presents (1) an assessment of the readily available public domain data and some proprietary data provided by Terra-Gen Power, LLC, (2) a re-interpretation of these data as required, (3) an exploratory geostatistical data analysis, (4) the baseline geothermal conceptual model, and (5) the EGS favorability/trust mapping. The conceptual model presented applies to both the hydrothermal system and EGS in the Dixie Valley region. FSR Part II presents (1) 278 new gravity stations; (2) enhanced gravity-magnetic modeling; (3) 42 new ambient seismic noise survey stations; (4) an integration of the new seismic noise data with a regional seismic network; (5) a new methodology and approach to interpret these data; (6) a novel method to predict rock type and temperature based on the newly interpreted data; (7) 70 new magnetotelluric (MT) stations; (8) an integrated interpretation of the enhanced MT data set; (9) the results of a 308-station soil CO2 gas survey; (10) new conductive thermal modeling in the project area; (11) new convective modeling in the Calibration Area; (12) pseudo-convective modeling in the Calibration Area; (13) enhanced data implications and qualitative geoscience correlations at three scales: (a) Regional, (b) Project, and (c) Calibration Area; (14) quantitative geostatistical exploratory data analysis; and (15) responses to nine questions posed in the proposal for this investigation. Enhanced favorability/trust maps were not generated because there was not a sufficient amount of new, fully-vetted (see below) rock type, temperature, and stress data. The enhanced seismic data did generate a new method to infer rock type and temperature. However, in the opinion of the Principal Investigator for this project, this new methodology needs to be tested and evaluated at other sites in the Basin and Range before it is used to generate the referenced maps. As in the baseline conceptual model, the enhanced findings can be applied to both the hydrothermal

  6. A methodology for assessing the market benefits of alternative motor fuels: The Alternative Fuels Trade Model

    SciTech Connect (OSTI)

    Leiby, P.N.

    1993-09-01

    This report describes a modeling methodology for examining the prospective economic benefits of displacing motor gasoline use by alternative fuels. The approach is based on the Alternative Fuels Trade Model (AFTM). AFTM development was undertaken by the US Department of Energy (DOE) as part of a longer term study of alternative fuels issues. The AFTM is intended to assist with evaluating how alternative fuels may be promoted effectively, and what the consequences of substantial alternative fuels use might be. Such an evaluation of policies and consequences of an alternative fuels program is being undertaken by DOE as required by Section 502(b) of the Energy Policy Act of 1992. Interest in alternative fuels is based on the prospective economic, environmental and energy security benefits from the substitution of these fuels for conventional transportation fuels. The transportation sector is heavily dependent on oil. Increased oil use implies increased petroleum imports, with much of the increase coming from OPEC countries. Conversely, displacement of gasoline has the potential to reduce US petroleum imports, thereby reducing reliance on OPEC oil and possibly weakening OPEC's ability to extract monopoly profits. The magnitude of US petroleum import reduction, the attendant fuel price changes, and the resulting US benefits, depend upon the nature of oil-gas substitution and the supply and demand behavior of other world regions. The methodology applies an integrated model of fuel market interactions to characterize these effects.

  7. On the Inclusion of Energy-Shifting Demand Response in Production Cost Models: Methodology and a Case Study

    SciTech Connect (OSTI)

    O'Connell, Niamh; Hale, Elaine; Doebber, Ian; Jorgenson, Jennie

    2015-07-20

    In the context of future power system requirements for additional flexibility, demand response (DR) is an attractive potential resource. Its proponents widely laud its prospective benefits, which include enabling higher penetrations of variable renewable generation at lower cost than alternative storage technologies, and improving economic efficiency. In practice, DR from the commercial and residential sectors is largely an emerging, not a mature, resource, and its actual costs and benefits need to be studied to determine promising combinations of physical DR resource, enabling controls and communications, power system characteristics, regulatory environments, market structures, and business models. The work described in this report focuses on the enablement of such analysis from the production cost modeling perspective. In particular, we contribute a bottom-up methodology for modeling load-shifting DR in production cost models. The resulting model is sufficiently detailed to reflect the physical characteristics and constraints of the underlying flexible load, and includes the possibility of capturing diurnal and seasonal variations in the resource. Nonetheless, the model is of low complexity and thus suitable for inclusion in conventional unit commitment and market clearing algorithms. The ability to simulate DR as an operational resource on a power system over a year facilitates an assessment of its time-varying value to the power system.

  8. Modeling and Analysis of The Pressure Die Casting Using Response Surface Methodology

    SciTech Connect (OSTI)

    Kittur, Jayant K.; Herwadkar, T. V. [KLS Gogte Institute of Technology, Belgaum -590 008, Karnataka (India); Parappagoudar, M. B. [Chhatrapati Shivaji Institute of Technology, Durg (C.G)-491001 (India)

    2010-10-26

    Pressure die casting is successfully used in the manufacture of aluminum alloy components for the automobile and many other industries. Die casting is a process involving many process parameters that have a complex relationship with the quality of the cast product. Though various process parameters influence the quality of the die cast component, the major influence comes from the die casting machine parameters and their proper settings. In the present work, non-linear regression models have been developed for making predictions and analyzing the effect of die casting machine parameters on the performance characteristics of the die casting process. Design of Experiments (DOE) with Response Surface Methodology (RSM) has been used to analyze the effect of the input parameters and their interactions on the response, and further to develop non-linear input-output relationships. Die casting machine parameters, namely, fast shot velocity, slow shot to fast shot change over point, intensification pressure, and holding time have been considered as the input variables. The quality characteristics of the cast product were determined by porosity, hardness, and surface roughness (outputs/responses). Design of experiments has been used to plan the experiments and analyze the impact of the variables on the quality of the casting. On the other hand, Response Surface Methodology (Central Composite Design) is utilized to develop the non-linear input-output relationships (regression models). The developed regression models have been tested for their statistical adequacy through an ANOVA test. The practical usefulness of these models has been tested with some test cases. These models can be used to make predictions about the different quality characteristics, for a known set of die casting machine parameters, without conducting the experiments.
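
    The fitting step described above, a second-order (quadratic) response surface estimated by least squares, can be sketched as follows. The two coded parameters, the coefficients, and the data are synthetic placeholders, not the paper's measurements:

```python
import numpy as np

# Hedged sketch of the response-surface step: fit a second-order model
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# for one response (say, porosity) against two coded machine parameters
# (say, fast shot velocity and intensification pressure).

rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, 40)   # coded fast shot velocity
x2 = rng.uniform(-1.0, 1.0, 40)   # coded intensification pressure

true = np.array([5.0, -1.2, 0.8, 0.5, -0.3, 0.4])   # assumed coefficients
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
y = X @ true                       # noise-free response for illustration

beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares fit
```

    In the paper's workflow the design points would come from a central composite design and the fit would be screened with an ANOVA test; both are omitted in this sketch.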

  9. Dixie Valley Engineered Geothermal System Exploration Methodology Project, Baseline Conceptual Model Report

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Iovenitti, Joe

    2014-01-02

    The Engineered Geothermal System (EGS) Exploration Methodology Project is developing an exploration approach for EGS through the integration of geoscientific data. The Project chose the Dixie Valley Geothermal System in Nevada as a field laboratory site for methodology calibration purposes because, in the public domain, it is a highly characterized geothermal system in the Basin and Range with a considerable amount of geoscience data and, most importantly, well data. The overall project area is 2500 km2, with the Calibration Area (Dixie Valley Geothermal Wellfield) being about 170 km2. The project was subdivided into five tasks: (1) collect and assess the existing public domain geoscience data; (2) design and populate a GIS database; (3) develop a baseline (existing data) geothermal conceptual model, evaluate geostatistical relationships, and generate baseline, coupled EGS favorability/trust maps from +1 km above sea level (asl) to -4 km asl for the Calibration Area at 0.5 km intervals to identify EGS drilling targets at a scale of 5 km x 5 km; (4) collect new geophysical and geochemical data; and (5) repeat Task 3 for the enhanced (baseline + new) data. Favorability maps were based on the integrated assessment of the three critical EGS exploration parameters of interest: rock type, temperature, and stress. A trust map was generated to complement the favorability maps by graphically illustrating the cumulative confidence in the data used in the favorability mapping. The Final Scientific Report (FSR) is submitted in two parts, with Part I describing the results of project Tasks 1 through 3 and Part II covering the results of project Tasks 4 and 5 plus answering nine questions posed in the proposal for the overall project. FSR Part I presents (1) an assessment of the readily available public domain data and some proprietary data provided by Terra-Gen Power, LLC, (2) a re-interpretation of these data as required, (3) an exploratory geostatistical data analysis, (4) the baseline geothermal conceptual model, and (5) the EGS favorability/trust mapping.

  10. WaterSense Program: Methodology for National Water Savings Analysis Model Indoor Residential Water Use

    SciTech Connect (OSTI)

    Whitehead, Camilla Dunham; McNeil, Michael; Letschert, Virginie; della Cava, Mirka

    2008-02-28

    The U.S. Environmental Protection Agency (EPA) influences the market for plumbing fixtures and fittings by encouraging consumers to purchase products that carry the WaterSense label, which certifies those products as performing at low flow rates compared to unlabeled fixtures and fittings. As consumers decide to purchase water-efficient products, water consumption will decline nationwide. Decreased water consumption should prolong the operating life of water and wastewater treatment facilities. This report describes the method used to calculate national water savings attributable to EPA's WaterSense program. A Microsoft Excel spreadsheet model, the National Water Savings (NWS) analysis model, accompanies this methodology report. Version 1.0 of the NWS model evaluates indoor residential water consumption. Two additional documents, a Users' Guide to the spreadsheet model and an Impacts Report, accompany the NWS model and this methodology document. Altogether, these four documents represent Phase One of this project. The Users' Guide leads policy makers through the spreadsheet options available for projecting the water savings that result from various policy scenarios. The Impacts Report shows national water savings that will result from differing degrees of market saturation of high-efficiency water-using products. This detailed methodology report describes the NWS analysis model, which examines the effects of WaterSense by tracking the shipments of products that WaterSense has designated as water-efficient. The model estimates market penetration of products that carry the WaterSense label. Market penetration is calculated for both existing and new construction. The NWS model estimates savings based on an accounting analysis of water-using products and of building stock. Estimates of future national water savings will help policy makers further direct the focus of WaterSense and calculate stakeholder impacts from the program. Calculating the total gallons of water the
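
    The shipments/stock accounting idea behind such a national-savings model can be sketched in a few lines. Every figure here (flat shipments, the penetration ramp, the per-unit savings) is an invented placeholder, not a WaterSense number:

```python
# Hypothetical sketch: each year, labeled shipments add to the stock of
# efficient fixtures in service; national savings scale with that stock
# times an assumed per-unit daily savings.

years = range(2007, 2012)
shipments = 1_000_000          # fixtures shipped per year (assumed flat)
penetration = {y: min(0.1 * (y - 2006), 1.0) for y in years}  # labeled share
per_unit_gpd = 1.5             # gallons/day saved per labeled fixture (assumed)

stock = 0.0                    # cumulative labeled fixtures in service
for y in years:
    stock += shipments * penetration[y]
    annual_savings_gal = stock * per_unit_gpd * 365
    print(y, f"{annual_savings_gal:,.0f}")
```

    The real model additionally distinguishes new construction from replacements in existing buildings and retires fixtures at end of life; those refinements are omitted here.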

  11. Dixie Valley Engineered Geothermal System Exploration Methodology Project, Baseline Conceptual Model Report

    SciTech Connect (OSTI)

    Iovenitti, Joe

    2013-05-15

    The Engineered Geothermal System (EGS) Exploration Methodology Project is developing an exploration approach for EGS through the integration of geoscientific data. The Project chose the Dixie Valley Geothermal System in Nevada as a field laboratory site for methodology calibration purposes because, in the public domain, it is a highly characterized geothermal system in the Basin and Range with a considerable amount of geoscience data and, most importantly, well data. This Baseline Conceptual Model report summarizes the results of the first three project tasks: (1) collect and assess the existing public domain geoscience data; (2) design and populate a GIS database; and (3) develop a baseline (existing data) geothermal conceptual model, evaluate geostatistical relationships, and generate baseline, coupled EGS favorability/trust maps from +1 km above sea level (asl) to -4 km asl for the Calibration Area (Dixie Valley Geothermal Wellfield) to identify EGS drilling targets at a scale of 5 km x 5 km. It presents (1) an assessment of the readily available public domain data and some proprietary data provided by Terra-Gen Power, LLC, (2) a re-interpretation of these data as required, (3) an exploratory geostatistical data analysis, (4) the baseline geothermal conceptual model, and (5) the EGS favorability/trust mapping. The conceptual model presented applies to both the hydrothermal system and EGS in the Dixie Valley region.

  12. Dixie Valley Engineered Geothermal System Exploration Methodology Project, Baseline Conceptual Model Report

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Iovenitti, Joe

    The Engineered Geothermal System (EGS) Exploration Methodology Project is developing an exploration approach for EGS through the integration of geoscientific data. The Project chose the Dixie Valley Geothermal System in Nevada as a field laboratory site for methodology calibration purposes because, in the public domain, it is a highly characterized geothermal system in the Basin and Range with a considerable amount of geoscience data and, most importantly, well data. This Baseline Conceptual Model report summarizes the results of the first three project tasks: (1) collect and assess the existing public domain geoscience data; (2) design and populate a GIS database; and (3) develop a baseline (existing data) geothermal conceptual model, evaluate geostatistical relationships, and generate baseline, coupled EGS favorability/trust maps from +1 km above sea level (asl) to -4 km asl for the Calibration Area (Dixie Valley Geothermal Wellfield) to identify EGS drilling targets at a scale of 5 km x 5 km. It presents (1) an assessment of the readily available public domain data and some proprietary data provided by Terra-Gen Power, LLC, (2) a re-interpretation of these data as required, (3) an exploratory geostatistical data analysis, (4) the baseline geothermal conceptual model, and (5) the EGS favorability/trust mapping. The conceptual model presented applies to both the hydrothermal system and EGS in the Dixie Valley region.

  13. Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests

    SciTech Connect (OSTI)

    Izzuddin, Nur; Sunarsih; Priyanto, Agoes

    2015-05-15

    A marine diesel engine simulator whose engine rotation is controlled and transmitted through the propeller shaft offers a new methodology for self-propulsion tests, tracking fuel savings in real time as a vessel operates in the open seas. Accordingly, this paper presents a real-time marine diesel engine simulator system that tracks the real performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate fuel rate, engine rotating speed, and propeller thrust and torque, and thus achieve the target vessel speed. The inputs and outputs form a real-time control system of fuel-saving rate and propeller rotating speed representing the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate fuel savings by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will help users analyze different vessel speed conditions to obtain better characteristics and hence optimize the fuel-saving rate.
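
    One core step inside such a simulator, finding the steady shaft speed at which engine torque balances the propeller demand torque, can be sketched as below. The engine map, the propeller-law constant, and the fuel index are invented for illustration, not the paper's model:

```python
# Hypothetical engine/propeller matching: steady rpm is where engine
# torque equals propeller demand torque (propeller law Q = k * n^2).

def engine_torque(n, fuel_index):
    # crude affine engine map: more fuel -> more torque, drooping with speed
    return fuel_index * 9000.0 - 2.0 * n

def propeller_torque(n, k=0.004):
    return k * n * n

def steady_rpm(fuel_index, lo=1.0, hi=3000.0, tol=1e-6):
    # bisection on the torque balance f(n) = Q_engine - Q_prop,
    # which is monotone decreasing in n for these curves
    f = lambda n: engine_torque(n, fuel_index) - propeller_torque(n)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rpm = steady_rpm(0.8)
```

    A time-domain simulator would instead integrate the shaft dynamics (inertia times acceleration equals torque imbalance); the bisection above just finds the equilibrium that integration would settle to.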

  14. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    SciTech Connect (OSTI)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-20

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such stochastic input models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problems of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R{sup n}. An isometric mapping F from M to a low-dimensional, compact, connected set A contained in R{sup d} (d << n) is then constructed. The methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M{yields}A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the
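
    The graph-based reduction described (a neighbourhood graph, geodesic distances along the manifold, then a near-isometric low-dimensional embedding) is essentially the Isomap construction. A minimal sketch on a toy 1-D manifold, a planar arc rather than a set of microstructures, might look like:

```python
import numpy as np

# Minimal Isomap-style sketch: k-nearest-neighbour graph -> shortest-path
# (geodesic) distances -> classical MDS embedding.

t = np.linspace(0.0, np.pi, 60)               # intrinsic 1-D coordinate
X = np.column_stack([np.cos(t), np.sin(t)])   # arc embedded in R^2

# 1. k-nearest-neighbour graph with Euclidean edge weights
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
k = 6
G = np.full_like(D, np.inf)
for i in range(len(X)):
    nbrs = np.argsort(D[i])[1:k + 1]
    G[i, nbrs] = D[i, nbrs]
    G[nbrs, i] = D[nbrs, i]
np.fill_diagonal(G, 0.0)

# 2. geodesic (shortest-path) distances via Floyd-Warshall
for m in range(len(X)):
    G = np.minimum(G, G[:, [m]] + G[[m], :])

# 3. classical MDS on the squared geodesic distances
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
B = -0.5 * J @ (G ** 2) @ J
w, V = np.linalg.eigh(B)
coord = V[:, -1] * np.sqrt(w[-1])             # leading 1-D coordinate
```

    The recovered coordinate tracks arc length along the curve (up to sign and offset), which is the sense in which the mapping is near-isometric; the paper's contribution lies in applying this machinery to microstructure sets and proving convergence, which the sketch does not address.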

  15. Methodology for modeling the devolatilization of refuse-derived fuel from thermogravimetric analysis of municipal solid waste components

    SciTech Connect (OSTI)

    Fritsky, K.J.; Miller, D.L.; Cernansky, N.P.

    1994-09-01

    A methodology was introduced for modeling the devolatilization characteristics of refuse-derived fuel (RDF) in terms of temperature-dependent weight loss. The basic premise of the methodology is that RDF is modeled as a combination of select municipal solid waste (MSW) components. Kinetic parameters are derived for each component from thermogravimetric analyzer (TGA) data measured at a specific set of conditions. These experimentally derived parameters, along with user-derived parameters, are input to model equations for the purpose of calculating thermograms for the components. The component thermograms are summed to create a composite thermogram that is an estimate of the devolatilization of the as-modeled RDF. The methodology has several attractive features as a thermal analysis tool for waste fuels. 7 refs., 10 figs., 3 tabs.
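
    The component-summation premise can be sketched as follows, assuming first-order Arrhenius devolatilization for each component under a constant heating rate. The kinetic constants, char yields, and mass fractions are invented, not the paper's TGA-derived values:

```python
import numpy as np

# Each MSW component loses weight by first-order Arrhenius kinetics;
# the RDF thermogram is the mass-weighted sum of component thermograms.

R = 8.314                      # gas constant, J/(mol K)
beta = 10.0 / 60.0             # heating rate, K/s (10 K/min)
T = np.arange(300.0, 900.0, 1.0)

def weight_fraction(A, E, char):
    # dalpha/dT = (A/beta) exp(-E/RT) (1 - alpha), stepped with the exact
    # exponential update so alpha stays in [0, 1]
    alpha = np.zeros_like(T)
    for i in range(1, len(T)):
        k = (A / beta) * np.exp(-E / (R * T[i]))
        alpha[i] = 1.0 - (1.0 - alpha[i - 1]) * np.exp(-k * (T[i] - T[i - 1]))
    return 1.0 - (1.0 - char) * alpha

# assumed components: (mass fraction, A [1/s], E [J/mol], char yield)
components = [(0.6, 1e10, 1.5e5, 0.1),   # paper/cellulosic-like
              (0.4, 1e8, 1.3e5, 0.3)]    # plastics-like

composite = sum(f * weight_fraction(A, E, c) for f, A, E, c in components)
```

    Summing weighted component curves this way is what produces the multi-step shape of a measured RDF thermogram from single-step component kinetics.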

  16. Electronic Structure Modeling of Electrochemical Reactions at...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Electronic Structure Modeling of Electrochemical Reactions at Electrode/Electrolyte Interfaces in Lithium Ion Batteries Citation Details In-Document Search Title: ...

  17. Continuous mutual improvement of macromolecular structure models...

    Office of Scientific and Technical Information (OSTI)

    ... Country of Publication: United States Language: English Subject: 59 BASIC BIOLOGICAL SCIENCES; 96 KNOWLEDGE MANAGEMENT AND PRESERVATION structure determination; model quality; data ...

  18. Human Factors Engineering Program Review Model (NUREG-0711), Revision 3: Update Methodology and Key Revisions

    SciTech Connect (OSTI)

    O'Hara, J.M.; Higgins, J.; Fleger, S.

    2012-07-22

    The U.S. Nuclear Regulatory Commission (NRC) reviews the human factors engineering (HFE) programs of applicants for nuclear power plant construction permits, operating licenses, standard design certifications, and combined operating licenses. The purpose of these safety reviews is to help ensure that personnel performance and reliability are appropriately supported. Detailed design review procedures and guidance for the evaluations are provided in three key documents: the Standard Review Plan (NUREG-0800), the HFE Program Review Model (NUREG-0711), and the Human-System Interface Design Review Guidelines (NUREG-0700). These documents were last revised in 2007, 2004, and 2002, respectively. The NRC is committed to the periodic update and improvement of the guidance to ensure that it remains a state-of-the-art design evaluation tool. To this end, the NRC is updating its guidance to stay current with recent research on human performance, advances in HFE methods and tools, and new technology being employed in plant and control room design. NUREG-0711 is the first document to be addressed. We present the methodology used to update NUREG-0711 and summarize the main changes made. Finally, we discuss the current status of the update program and future plans.

  19. Development and application of a statistical methodology to evaluate the predictive accuracy of building energy baseline models

    SciTech Connect (OSTI)

    Granderson, Jessica; Price, Phillip N.

    2014-03-01

    This paper documents the development and application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings. The methodology complements the principles addressed in resources such as ASHRAE Guideline 14 and the International Performance Measurement and Verification Protocol. It requires fitting a baseline model to data from a "training period" and using the model to predict total electricity consumption during a subsequent "prediction period." We illustrate the methodology by evaluating five baseline models using data from 29 buildings. The training period and prediction period were varied, and model predictions of daily, weekly, and monthly energy consumption were compared to meter data to determine model accuracy. Several metrics were used to characterize the accuracy of the predictions, and in some cases the best-performing model as judged by one metric was not the best performer when judged by another metric.
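
    Two accuracy metrics commonly used in this kind of M&V work, normalized mean bias error and CV(RMSE) as defined in resources like ASHRAE Guideline 14, can be computed as below. The meter and prediction series are invented, and these are not necessarily the exact metrics the paper uses:

```python
import numpy as np

# Compare baseline-model predictions against metered consumption.

def nmbe(measured, predicted):
    # normalized mean bias error: signed, detects systematic over/under-prediction
    return np.sum(measured - predicted) / (len(measured) * np.mean(measured))

def cvrmse(measured, predicted):
    # coefficient of variation of RMSE: unsigned scatter of the predictions
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return rmse / np.mean(measured)

measured = np.array([100.0, 110.0, 90.0, 105.0])   # invented meter data, kWh
predicted = np.array([98.0, 112.0, 91.0, 104.0])   # invented model output
```

    The contrast between the two is exactly the paper's point about metrics disagreeing: a model can have zero bias (NMBE of zero, as here) while still scattering around the meter data (nonzero CV(RMSE)).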

  20. Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul

    2015-03-11

    Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's "Lagrangian ingredients" (the Riemannian metric, the potential-energy function, the dissipation function, and the external force) and subsequently derives reduced-order equations of motion by applying the (forced) Euler--Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.
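
    The "Lagrangian ingredients" named above enter the standard forced Euler--Lagrange equation with Rayleigh dissipation; a generic textbook statement of the structure being preserved (with M(q) the metric/mass matrix, V the potential, F the dissipation function, and f_ext the external force, symbols assumed here rather than taken from the paper) is:

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot{q}}
  - \frac{\partial L}{\partial q}
  + \frac{\partial F}{\partial \dot{q}} = f_{\mathrm{ext}}(t),
\qquad
L(q,\dot{q}) = \tfrac{1}{2}\,\dot{q}^{\top} M(q)\,\dot{q} - V(q)
```

    Approximating M, V, F, and f_ext individually and then re-deriving the equations of motion is what keeps the reduced model in this same structural form, rather than reducing the assembled equations directly.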

  1. Cross-Linking and Mass Spectrometry Methodologies to Facilitate Structural Biology: Finding a Path through the Maze

    SciTech Connect (OSTI)

    Merkley, Eric D.; Cort, John R.; Adkins, Joshua N.

    2013-09-01

    Multiprotein complexes, rather than individual proteins, make up a large part of the biological macromolecular machinery of a cell. Understanding the structure and organization of these complexes is critical to understanding cellular function. Chemical cross-linking coupled with mass spectrometry is emerging as a complementary technique to traditional structural biology methods and can provide low-resolution structural information for a multitude of purposes, such as distance constraints in computational modeling of protein complexes. In this review, we discuss the experimental considerations for successful application of chemical cross-linking-mass spectrometry in biological studies and highlight three examples of such studies from the recent literature. These examples (as well as many others) illustrate the utility of a chemical cross-linking-mass spectrometry approach in facilitating structural analysis of large and challenging complexes.

  2. HANFORD DOUBLE SHELL TANK (DST) THERMAL & SEISMIC PROJECT ESTABLISHMENT OF METHODOLOGY FOR TIME DOMAIN SOIL STRUCTURE INTERACTION ANALYSIS OF HANFORD DST

    SciTech Connect (OSTI)

    MACKEY, T.C.

    2006-03-14

    domain, but frequency domain analysis is limited to systems with linear responses. The nonlinear character of the coupled SSI model and tank structural model requires that the seismic analysis be solved in the time domain. However, time domain SSI analysis is somewhat nontraditional and requires that the appropriate methodology be developed and demonstrated. Moreover, the analysis of seismically induced fluid-structure interaction between the explicitly modeled waste and the primary tank must be benchmarked against known solutions to simpler problems before being applied to the more complex analysis of the DSTs. The objective of this investigation is to establish the methodology necessary to perform the required SSI analysis of the DSTs in the time domain. Specifically, the analysis establishes the capabilities and limitations of the time domain codes ANSYS and Dytran for performing seismic SSI analysis of the DSTs. The benchmarking of the codes Dytran and ANSYS for performing seismically induced fluid-structure interaction (FSI) between the contained waste and the DST primary tank are documented in Abatt (2006) and Carpenter and Abatt (2006), respectively. The results of those two studies show that both codes have the capability to analyze the fluid-structure interaction behavior of the primary tank and contained waste. As expected, Dytran appears to have more robust capabilities for FSI analysis. The ANSYS model used in that study captures much of the FSI behavior, but does have some limitations for predicting the convective response of the waste and possibly the response of the waste in the knuckle region of the primary tank. While Dytran appears to have somewhat stronger capabilities for the analysis of the FSI behavior in the primary tank, it is more practical for the overall analysis to use ANSYS. 
Thus, Dytran served the purpose of helping to identify limitations in the ANSYS FSI analysis so that those limitations can be addressed in the structural evaluation of

  3. Revenue Requirements Modeling System (RRMS) documentation. Volume I. Methodology description and user's guide. Appendix A: model abstract; Appendix B: technical appendix; Appendix C: sample input and output. [Compustat

    SciTech Connect (OSTI)

    Not Available

    1986-03-01

    The Revenue Requirements Modeling System (RRMS) is a utility-specific financial modeling system used by the Energy Information Administration (EIA) to evaluate the impact on electric utilities of changes in the regulatory, economic, and tax environments. Included in the RRMS is a power plant life-cycle revenue requirements model designed to assess the comparative economic advantage of alternative generating plants. This report is Volume I of a 2-volume set and provides a methodology description and user's guide, a model abstract and technical appendix, and sample input and output for the models. Volume II provides an operator's manual and a program maintenance guide.

  4. Modeling Fission Product Sorption in Graphite Structures

    SciTech Connect (OSTI)

    Szlufarska, Izabela; Morgan, Dane; Allen, Todd

    2013-04-08

    The goal of this project is to determine changes in adsorption and desorption of fission products to/from nuclear-grade graphite in response to a changing chemical environment. First, the project team will employ first-principles calculations and thermodynamic analysis to predict the stability of fission products on graphite in the presence of structural defects commonly observed in very high-temperature reactor (VHTR) graphites. Desorption rates will be determined as a function of partial pressure of oxygen and iodine, relative humidity, and temperature. They will then carry out experimental characterization to determine the statistical distribution of structural features. This structural information will yield distributions of binding sites to be used as an input for a sorption model. Sorption isotherms calculated under this project will contribute to understanding of the physical bases of the source terms that are used in higher-level codes that model fission product transport and retention in graphite. The project will include the following tasks: Perform structural characterization of the VHTR graphite to determine crystallographic phases, defect structures and their distribution, volume fraction of coke, and amount of sp2 versus sp3 bonding; this information will be used as guidance for ab initio modeling and as input for sorptivity models. Perform ab initio calculations of binding energies to determine the stability of fission products on the different sorption sites present in nuclear graphite microstructures; the project will use density functional theory (DFT) methods to calculate binding energies in vacuum and in oxidizing environments, and the team will also calculate the stability of iodine complexes with fission products on graphite sorption sites. Model graphite sorption isotherms to quantify the concentration of fission products in graphite; the binding energies will be combined with a Langmuir isotherm statistical model to predict the sorbed concentration of fission products.
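
    The final task can be sketched with a site-weighted Langmuir isotherm, where each binding-site class gets an equilibrium constant set by its binding energy. The energies, site fractions, prefactor, and temperature below are invented placeholders, not DFT results:

```python
import numpy as np

# Site-distribution Langmuir sketch: total sorbed coverage is the
# site-weighted sum of Langmuir isotherms, one per binding-site class,
# with K = K0 * exp(-Eb / kB T) for binding energy Eb (negative = bound).

kB = 8.617e-5                        # Boltzmann constant, eV/K

def coverage(p, Eb, T, K0=1e-6):
    K = K0 * np.exp(-Eb / (kB * T))  # equilibrium constant for this site class
    return K * p / (1.0 + K * p)     # Langmuir isotherm

# assumed site classes: (fraction of sites, binding energy in eV)
sites = [(0.7, -1.0),                # e.g. basal-plane-like sites
         (0.3, -1.5)]                # e.g. defect sites (more strongly binding)
T = 1000.0                           # temperature, K
p = np.logspace(-10, 0, 11)          # partial pressure (arbitrary units)

total = sum(f * coverage(p, Eb, T) for f, Eb in sites)
```

    The shape is the expected one: the strongly binding minority sites saturate first as pressure rises, and total coverage approaches but never exceeds full occupancy.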

  5. Model of Electronic Structure and Superconductivity in Orbitally...

    Office of Scientific and Technical Information (OSTI)

    Model of Electronic Structure and Superconductivity in Orbitally Ordered FeSe Title: Model of Electronic Structure and Superconductivity in Orbitally Ordered FeSe Authors: ...

  6. Simplified Protein Models: Predicting Folding Pathways and Structure...

    Office of Scientific and Technical Information (OSTI)

    Simplified Protein Models: Predicting Folding Pathways and Structure Using Amino Acid Sequences Title: Simplified Protein Models: Predicting Folding Pathways and Structure Using ...

  7. Model of Electronic Structure and Superconductivity in Orbitally...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Model of Electronic Structure and Superconductivity in Orbitally Ordered FeSe Citation Details In-Document Search Title: Model of Electronic Structure and ...

  8. Flavor structure of warped extra dimension models

    SciTech Connect (OSTI)

    Agashe, Kaustubh; Perez, Gilad; Soni, Amarjit

    2005-01-01

    We recently showed that warped extra-dimensional models with bulk custodial symmetry and few TeV Kaluza-Klein (KK) masses lead to striking signals at B factories. In this paper, using a spurion analysis, we systematically study the flavor structure of models that belong to the above class. In particular we find that the profiles of the zero modes, which are similar in all these models, essentially control the underlying flavor structure. This implies that our results are robust and model independent in this class of models. We discuss in detail the origin of the signals in B physics. We also briefly study other new physics signatures that arise in rare K decays (K{yields}{pi}{nu}{nu}), in rare top decays [t{yields}c{gamma}(Z,gluon)], and the possibility of CP asymmetries in D{sup 0} decays to CP eigenstates such as K{sub S}{pi}{sup 0} and others. Finally we demonstrate that with light KK masses, {approx}3 TeV, the above class of models with anarchic 5D Yukawas has a 'CP problem' since contributions to the neutron electric dipole moment are roughly 20 times larger than the current experimental bound. Using AdS/CFT correspondence, these extra-dimensional models are dual to a purely 4D strongly coupled conformal Higgs sector thus enhancing their appeal.

  9. Flavor Structure of Warped Extra Dimension Models

    SciTech Connect (OSTI)

    Agashe, Kaustubh; Perez, Gilad; Soni, Amarjit

    2004-08-10

    We recently showed, in hep-ph/0406101, that warped extra dimensional models with bulk custodial symmetry and few TeV KK masses lead to striking signals at B-factories. In this paper, using a spurion analysis, we systematically study the flavor structure of models that belong to the above class. In particular we find that the profiles of the zero modes, which are similar in all these models, essentially control the underlying flavor structure. This implies that our results are robust and model independent in this class of models. We discuss in detail the origin of the signals in B-physics. We also briefly study other NP signatures that arise in rare K decays (K {yields} {pi}{nu}{nu}), in rare top decays [t {yields} c{gamma}(Z, gluon)] and the possibility of CP asymmetries in D{sup 0} decays to CP eigenstates such as K{sub s}{pi}{sup 0} and others. Finally we demonstrate that with light KK masses, {approx} 3 TeV, the above class of models with anarchic 5D Yukawas has a ''CP problem'' since contributions to the neutron electric dipole moment are roughly 20 times larger than the current experimental bound. Using AdS/CFT correspondence, these extra-dimensional models are dual to a purely 4D strongly coupled conformal Higgs sector thus enhancing their appeal.

  10. Numerical modeling of the groundwater contaminant transport for the Lake Karachai Area: The methodological approach and the basic two- dimensional regional model

    SciTech Connect (OSTI)

    Petrov, A.V.; Samsonova, L.M.; Vasil'kova, N.A.; Zinin, A.I.; Zinina, G.A.

    1994-06-01

    Methodological aspects of the numerical modeling of groundwater contaminant transport for the Lake Karachai area are discussed. The main features of the task are the highly non-uniform aquifer in the fractured rock massif, the high density of the waste solutions, and the large volume of input data, both for aquifer parameters (numerous pump tests) and for observations of the processes themselves (long-term records from the monitoring well grid). The process of constructing the two-dimensional regional model is described, and this model is presented as the basis for subsequent full three-dimensional modeling in sub-areas of interest. An original, powerful mathematical apparatus and computer codes for finite-difference numerical modeling are used.

  11. Analysis of Wind Turbine Simulation Models: Assessment of Simplified versus Complete Methodologies: Preprint

    SciTech Connect (OSTI)

    Honrubia-Escribano, A.; Jimenez-Buendia, F.; Molina-Garcia, A.; Fuentes-Moreno, J. A.; Muljadi, Eduard; Gomez-Lazaro, E.

    2015-09-14

    This paper presents the current status of simplified wind turbine models used for power system stability analysis. The work is based on ongoing development within IEC 61400-27. This international standard, for which a technical committee was convened in October 2009, is focused on defining generic (also known as simplified) simulation models for both wind turbines and wind power plants. The results of the paper provide an improved understanding of the usability of generic models to conduct power system simulations.

  12. GREET 1.0 -- Transportation fuel cycles model: Methodology and use

    SciTech Connect (OSTI)

    Wang, M.Q.

    1996-06-01

    This report documents the development and use of the Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) model. The model, developed in a spreadsheet format, estimates the full fuel-cycle emissions and energy use associated with various transportation fuels for light-duty vehicles. The model calculates fuel-cycle emissions of five criteria pollutants (volatile organic compounds, CO, NOx, SOx, and particulate matter measuring 10 microns or less) and three greenhouse gases (carbon dioxide, methane, and nitrous oxide). The model also calculates the total fuel-cycle energy consumption, fossil fuel consumption, and petroleum consumption using various transportation fuels. The GREET model includes 17 fuel cycles: petroleum to conventional gasoline, reformulated gasoline, clean diesel, liquefied petroleum gas, and electricity via residual oil; natural gas to compressed natural gas, liquefied petroleum gas, methanol, hydrogen, and electricity; coal to electricity; uranium to electricity; renewable energy (hydropower, solar energy, and wind) to electricity; corn, woody biomass, and herbaceous biomass to ethanol; and landfill gases to methanol. This report presents fuel-cycle energy use and emissions for a 2000 model-year car powered by each of the fuels that are produced from the primary energy sources considered in the study.
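The accounting GREET performs can be illustrated in miniature: full fuel-cycle totals are the sums of per-stage emission factors across every stage of a fuel cycle. The sketch below is not GREET itself; the stage names and factor values are invented placeholders, and real GREET stages and factors differ.

```python
# Illustrative only (not GREET): sum per-stage emission factors across a
# fuel cycle to get full fuel-cycle (well-to-wheel) totals per pollutant.
# All stage names and numeric values below are hypothetical.

def fuel_cycle_emissions(stages):
    """Sum grams-per-mile emission factors over all fuel-cycle stages."""
    pollutants = {p for factors in stages.values() for p in factors}
    return {p: sum(factors.get(p, 0.0) for factors in stages.values())
            for p in pollutants}

stages = {
    "feedstock_recovery": {"CO2": 40.0, "NOx": 0.10},
    "fuel_production":    {"CO2": 90.0, "NOx": 0.25},
    "vehicle_operation":  {"CO2": 360.0, "NOx": 0.30, "CO": 3.5},
}

totals = fuel_cycle_emissions(stages)
print(totals["CO2"])  # 490.0 g/mile in this made-up example
```

Comparing such totals across the 17 fuel cycles, each with its own stage list, is what lets a model like this rank transportation fuels.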

  13. Comparison of Two Gas Selection Methodologies: An Application of Bayesian Model Averaging

    SciTech Connect (OSTI)

    Renholds, Andrea S.; Thompson, Sandra E.; Anderson, Kevin K.; Chilton, Lawrence K.

    2006-03-31

    One goal of hyperspectral imagery analysis is the detection and characterization of plumes. Characterization includes identifying the gases in the plumes, which is a model selection problem. Two gas selection methods compared in this report are Bayesian model averaging (BMA) and minimum Akaike information criterion (AIC) stepwise regression (SR). Simulated spectral data from a three-layer radiance transfer model were used to compare the two methods. Test gases were chosen to span the types of spectra observed, which exhibit peaks ranging from broad to sharp. The size and complexity of the search libraries were varied. Background materials were chosen to either replicate a remote area of eastern Washington or feature many common background materials. For many cases, BMA and SR performed the detection task comparably in terms of the receiver operating characteristic curves. For some gases, BMA performed better than SR when the size and complexity of the search library increased. This is encouraging because we expect improved BMA performance upon incorporation of prior information on background materials and gases.
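The two selection methods compared above both score candidate gas sets with an information criterion. A minimal sketch of the averaging idea, assuming hypothetical AIC scores for three candidate gas models (BIC-based weights approximate posterior model probabilities; the numbers are invented):

```python
import math

# Sketch of information-criterion model weights, in the spirit of Bayesian
# model averaging. The three AIC scores below are hypothetical.

def ic_weights(scores):
    """Convert AIC/BIC scores into normalized model weights."""
    best = min(scores)
    rel = [math.exp(-0.5 * (s - best)) for s in scores]
    total = sum(rel)
    return [r / total for r in rel]

# Lower score = better model; the best model dominates the average.
weights = ic_weights([102.3, 104.3, 110.0])
print([round(w, 3) for w in weights])
```

Averaging predictions with these weights, rather than committing to the single minimum-AIC model as stepwise regression does, is what lets BMA hedge across gas sets when the search library grows.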

  14. Emulating a System Dynamics Model with Agent-Based Models: A Methodological Case Study in Simulation of Diabetes Progression

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Schryver, Jack; Nutaro, James; Shankar, Mallikarjun

    2015-10-30

    An agent-based simulation model hierarchy emulating disease states and behaviors critical to the progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. That system dynamics model, which mimics diabetes progression over an aggregated U.S. population, was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. The four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating the impacts of an elderliness factor, an obesity factor, and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and the diffusion of social norms spreading over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translating complex system dynamics models into agent-based alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.
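The behavioral component described above, intention as a joint function of individual attitude and diffusing social norms, can be sketched at agent level. This is an assumption-laden toy, not the authors' DEVS code: the update rule, weights, and network wiring are all invented for illustration.

```python
import random

# Toy Theory-of-Planned-Behavior agent: intention mixes personal attitude
# with a social norm that diffuses over network neighbors. All parameter
# values and the wiring scheme are hypothetical.

random.seed(1)

class Agent:
    def __init__(self, attitude):
        self.attitude = attitude   # fixed personal attitude in [0, 1]
        self.norm = attitude       # norm state, updated by diffusion
        self.neighbors = []

    def update_norm(self, rate=0.2):
        """Relax this agent's norm toward the mean of its neighbors."""
        if self.neighbors:
            mean = sum(n.norm for n in self.neighbors) / len(self.neighbors)
            self.norm += rate * (mean - self.norm)

    def adopts_behavior(self, w_attitude=0.6):
        """Bernoulli draw on the attitude/norm-weighted intention."""
        intention = w_attitude * self.attitude + (1 - w_attitude) * self.norm
        return random.random() < intention

agents = [Agent(random.random()) for _ in range(50)]
for a in agents:  # random 4-neighbor wiring
    a.neighbors = random.sample([b for b in agents if b is not a], 4)
for _ in range(10):  # let social norms diffuse
    for a in agents:
        a.update_norm()
adopters = sum(a.adopts_behavior() for a in agents)
print(adopters, "of", len(agents), "agents adopt the behavior this step")
```

Coupling such a draw to disease-state transitions is what lets an agent model replicate the stock counts of an aggregate system dynamics model.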

  15. Modeling threat assessments of water supply systems using markov latent effects methodology.

    SciTech Connect (OSTI)

    Silva, Consuelo Juanita

    2006-12-01

    Recent amendments to the Safe Drinking Water Act emphasize efforts toward safeguarding our nation's water supplies against attack and contamination. Specifically, the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 established requirements for each community water system serving more than 3300 people to conduct an assessment of the vulnerability of its system to a terrorist attack or other intentional acts. Integral to evaluating system vulnerability is the threat assessment, which is the process by which the credibility of a threat is quantified. Unfortunately, full probabilistic assessment is generally not feasible, as there is insufficient experience and/or data to quantify the associated probabilities. For this reason, an alternative approach is proposed based on Markov Latent Effects (MLE) modeling, which provides a framework for quantifying imprecise subjective metrics through possibilistic or fuzzy mathematics. Here, an MLE model for water systems is developed and demonstrated to determine threat assessments for different scenarios identified by assailant, asset, and means. Scenario assailants include terrorists, insiders, and vandals. Assets include a water treatment plant, water storage tank, node, pipeline, well, and a pump station. Means used in attacks include contamination (onsite chemicals, biological and chemical agents), explosives, and vandalism. The results demonstrated that the highest threats are vandalism events and that the least likely events are those performed by a terrorist.

  16. INTERLINE 5.0 -- An expanded railroad routing model: Program description, methodology, and revised user's manual

    SciTech Connect (OSTI)

    Johnson, P.E.; Joy, D.S.; Clarke, D.B.; Jacobi, J.M.

    1993-03-01

    A rail routing model, INTERLINE, has been developed at the Oak Ridge National Laboratory to investigate potential routes for transporting radioactive materials. In Version 5.0, the INTERLINE routing algorithms have been enhanced to include the ability to predict alternative routes, barge routes, and population statistics for any route. The INTERLINE railroad network is essentially a computerized rail atlas describing the US railroad system. All rail lines, with the exception of industrial spurs, are included in the network. Inland waterways and deep water routes, along with their interchange points with the US railroad system, are also included. The network contains over 15,000 rail and barge segments (links) and over 13,000 stations, interchange points, ports, and other locations (nodes). The INTERLINE model has been converted to operate on an IBM-compatible personal computer. At least a 286 computer with a hard disk containing approximately 6 MB of free space is recommended. Enhanced program performance will be obtained by using a random-access memory drive on a 386 or 486 computer.
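At its core, a link/node routing model of this kind searches a weighted network for least-impedance paths. A minimal sketch (not the INTERLINE algorithm itself; the node names and link weights are hypothetical):

```python
import heapq

# Illustrative shortest-path search over a small dict-of-dicts rail
# network. Node names and link impedances below are invented.

def shortest_route(graph, origin, dest):
    """Dijkstra's algorithm; returns (total cost, node path)."""
    queue = [(0, origin, [origin])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dest:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

rail = {
    "Oak Ridge": {"Knoxville": 25, "Chattanooga": 110},
    "Knoxville": {"Chattanooga": 95, "Nashville": 180},
    "Chattanooga": {"Nashville": 130},
    "Nashville": {},
}
cost, path = shortest_route(rail, "Oak Ridge", "Nashville")
print(cost, path)  # 205 ['Oak Ridge', 'Knoxville', 'Nashville']
```

Alternative-route prediction, as in Version 5.0, typically reruns such a search after penalizing or removing links on the best path.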

  17. Fragility Analysis Methodology for Degraded Structures and Passive Components in Nuclear Power Plants - Illustrated using a Condensate Storage Tank

    SciTech Connect (OSTI)

    Nie, J.; Braverman, J.; Hofmayer, C.; Choun, Y.; Kim, M.; Choi, I.

    2010-06-30

    The Korea Atomic Energy Research Institute (KAERI) is conducting a five-year research project to develop a realistic seismic risk evaluation system which includes the consideration of aging of structures and components in nuclear power plants (NPPs). The KAERI research project includes three specific areas that are essential to seismic probabilistic risk assessment (PRA): (1) probabilistic seismic hazard analysis, (2) seismic fragility analysis including the effects of aging, and (3) a plant seismic risk analysis. Since 2007, Brookhaven National Laboratory (BNL) has entered into a collaboration agreement with KAERI to support its development of seismic capability evaluation technology for degraded structures and components. The collaborative research effort is intended to continue over a five year period. The goal of this collaboration endeavor is to assist KAERI to develop seismic fragility analysis methods that consider the potential effects of age-related degradation of structures, systems, and components (SSCs). The research results of this multi-year collaboration will be utilized as input to seismic PRAs. In the Year 1 scope of work, BNL collected and reviewed degradation occurrences in US NPPs and identified important aging characteristics needed for the seismic capability evaluations. This information is presented in the Annual Report for the Year 1 Task, identified as BNL Report-81741-2008 and also designated as KAERI/RR-2931/2008. The report presents results of the statistical and trending analysis of this data and compares the results to prior aging studies. In addition, the report provides a description of U.S. current regulatory requirements, regulatory guidance documents, generic communications, industry standards and guidance, and past research related to aging degradation of SSCs. In the Year 2 scope of work, BNL carried out a research effort to identify and assess degradation models for the long-term behavior of dominant materials that are

  18. Developing a Cost Model and Methodology to Estimate Capital Costs for Thermal Energy Storage

    SciTech Connect (OSTI)

    Glatzmaier, G.

    2011-12-01

    This report provides an update on the previous cost model for thermal energy storage (TES) systems. The update allows NREL to estimate the costs of such systems that are compatible with the higher operating temperatures associated with advanced power cycles. The goal of the Department of Energy (DOE) Solar Energy Technology Program is to develop solar technologies that can make a significant contribution to the United States domestic energy supply. The recent DOE SunShot Initiative sets a very aggressive cost goal to reach a Levelized Cost of Energy (LCOE) of 6 cents/kWh by 2020 with no incentives or credits for all solar-to-electricity technologies. As this goal is reached, the share of utility power generation that is provided by renewable energy sources is expected to increase dramatically. Because Concentrating Solar Power (CSP) is currently the only renewable technology that is capable of integrating cost-effective energy storage, it is positioned to play a key role in providing renewable, dispatchable power to utilities as the share of power generation from renewable sources increases. Because of this role, future CSP plants will likely have as much as 15 hours of Thermal Energy Storage (TES) included in their design and operation. As such, the cost and performance of the TES system is critical to meeting the SunShot goal for solar technologies. The cost of electricity from a CSP plant depends strongly on its overall efficiency, which is a product of two components - the collection and conversion efficiencies. The collection efficiency determines the portion of incident solar energy that is captured as high-temperature thermal energy. The conversion efficiency determines the portion of thermal energy that is converted to electricity. The operating temperature at which the overall efficiency reaches its maximum depends on many factors, including material properties of the CSP plant components. Increasing the operating temperature of the power generation

  19. Application of Random Vibration Theory Methodology for Seismic...

    Energy Savers [EERE]

    Application of Random Vibration Theory Methodology for Seismic Soil-Structure Interaction Analysis

  20. Adaptation of methodology to select structural alternatives of one-way slab in residential building to the guidelines of the European Committee for Standardization (CEN/TC 350)

    SciTech Connect (OSTI)

    Fraile-Garcia, Esteban; Ferreiro-Cabello, Javier; Martinez-Camara, Eduardo; Jimenez-Macias, Emilio

    2015-11-15

    The European Committee for Standardization (CEN), through its Technical Committee CEN/TC-350, is developing a series of standards for assessing building sustainability at both the product and building levels. The practical application of the selection (decision making) of structural alternatives built with one-way slabs leads to an intermediate level between the product and the building. Thus the present study addresses this problem of decision making, following the CEN guidelines and incorporating relevant aspects of architectural design into residential construction. A life cycle assessment (LCA) is developed in order to obtain valid information for the decision-making process (the LCA was developed applying the CML methodology, although Ecoindicator99 was used in order to facilitate the comparison of the values); this information (the carbon footprint values) is contrasted with other databases and with the information from the Environmental Product Declaration (EPD) of one of the lightening materials (expanded polystyrene), in order to validate the results. Solutions of different column dispositions and geometries are evaluated on the three pillars of sustainable residential construction: social, economic and environmental. The quantitative analysis of the variables used in this study enables and facilitates an objective comparison in the design stage by a responsible technician; the application of the proposed methodology reduces the possible solutions to be evaluated by the expert to 12.22% of the options in the case of low values of the column index and to 26.67% for the highest values. - Highlights: • Methodology for selection of structural alternatives in buildings with one-way slabs • Adapted to CEN guidelines (CEN/TC-350) for assessing building sustainability • LCA is developed in order to obtain valid information for the decision-making process • Results validated by comparing carbon footprints, databases and Environmental Product Declarations

  1. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    SciTech Connect (OSTI)

    Smith, N. A. S.; Correia, T. M.; Rokosz, M. K. (E-mail: maciej.rokosz@npl.co.uk)

    2014-07-28

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.
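The structure of such a model, transient heat conduction plus a source term for the electrocaloric contribution, can be sketched in one dimension with explicit finite differences. This is a drastic simplification of the paper's 2D finite element model, and every numeric value below is invented.

```python
import numpy as np

# 1D explicit finite-difference transient heat conduction with a source
# term standing in for electrocaloric heating. All parameters are made up;
# the real model is 2D and also includes radiative/convective losses.

nx, dx, dt = 50, 1e-4, 1e-3           # grid points, spacing (m), step (s)
alpha = 1e-7                           # thermal diffusivity (m^2/s)
T = np.full(nx, 300.0)                 # initial temperature (K)
source = np.zeros(nx)
source[20:30] = 5.0                    # electrocaloric-like heating (K/s)

r = alpha * dt / dx**2                 # explicit stability needs r <= 0.5
assert r <= 0.5
for _ in range(1000):                  # march 1 s of simulated time
    T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2]) + dt * source[1:-1]
    T[0], T[-1] = 300.0, 300.0         # fixed-temperature boundaries

print(round(T[25] - 300.0, 2))         # temperature rise in the heated zone
```

A 2D version replaces the second-difference stencil with a Laplacian over both directions and adds boundary flux terms for radiation and convection.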

  2. Resolving the structure of Ti3C2Tx MXenes through multilevel structural modeling of the atomic pair distribution function

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wesolowski, David J.; Wang, Hsiu -Wen; Page, Katharine L.; Naguib, Michael; Gogotsi, Yury

    2015-12-08

    MXenes are a recently discovered family of two-dimensional (2D) early transition metal carbides and carbonitrides, which have already shown many attractive properties and a great promise in energy storage and many other applications. However, a complex surface chemistry and small coherence length has been an obstacle in some applications of MXenes, also limiting accuracy of predictions of their properties. In this study, we describe and benchmark a novel way of modeling layered materials with real interfaces (diverse surface functional groups and stacking order between the adjacent monolayers) against experimental data. The structures of three kinds of Ti3C2Tx MXenes (T stands for surface terminating species, including O, OH, and F) produced under different synthesis conditions were resolved for the first time using atomic pair distribution function obtained by high-quality neutron total scattering. The true nature of the material can be easily captured with the sensitivity of neutron scattering to the surface species of interest and the detailed third-generation structure model we present. The modeling approach leads to new understanding of MXene structural properties and can replace the currently used idealized models in predictions of a variety of physical, chemical and functional properties of Ti3C2-based MXenes. Furthermore, the developed models can be employed to guide the design of new MXene materials with selected surface termination and controlled contact angle, catalytic, optical, electrochemical and other properties. We suggest that the multi-level structural modeling should form the basis for a generalized methodology on modeling diffraction and pair distribution function data for 2D and layered materials.

  3. A Structural Model Guide For Geothermal Exploration In Ancestral...

    Open Energy Info (EERE)

    traverse the base of the AMB volcano. This master fault induced fracture-controlled permeability where fluids in the Tongonan Geothermal Field circulate. The structural model...

  4. UNDERSTANDING PHYSICAL CONDITIONS IN HIGH-REDSHIFT GALAXIES THROUGH C I FINE STRUCTURE LINES: DATA AND METHODOLOGY

    SciTech Connect (OSTI)

    Jorgenson, Regina A.; Wolfe, Arthur M.; Prochaska, J. Xavier

    2010-10-10

    We probe the physical conditions in high-redshift galaxies, specifically, the damped Ly{alpha} systems (DLAs) using neutral carbon (C I) fine structure lines and molecular hydrogen (H{sub 2}). We report five new detections of C I and analyze the C I in an additional two DLAs with previously published data. We also present one new detection of H{sub 2} in a DLA. We present a new method of analysis that simultaneously constrains both the volume density and the temperature of the gas, as opposed to previous studies that a priori assumed a gas temperature. We use only the column density of C I measured in the fine structure states and the assumption of ionization equilibrium in order to constrain the physical conditions in the gas. We present a sample of 11 C I velocity components in six DLAs and compare their properties to those derived by the global C II* technique. The resulting median values for this sample are (n(H I)) = 69 cm{sup -3}, (T) = 50 K, and (log(P/k)) = 3.86 cm{sup -3} K, with standard deviations, {sigma}{sub n(H{sub i})} = 134 cm{sup -3}, {sigma}{sub T} = 52 K, and {sigma}{sub log(P/k)} = 3.68 cm{sup -3} K. This can be compared with the integrated median values for the same DLAs: (n(H I)) = 2.8 cm{sup -3}, (T) = 139 K, and (log(P/k)) = 2.57 cm{sup -3} K, with standard deviations {sigma}{sub n(H{sub i})} = 3.0 cm{sup -3}, {sigma}{sub T} = 43 K, and {sigma}{sub log(P/k)} = 0.22 cm{sup -3} K. Interestingly, the pressures measured in these high-redshift C I clouds are similar to those found in the Milky Way. We conclude that the C I gas is tracing a higher-density, higher-pressure region, possibly indicative of post-shock gas or a photodissociation region on the edge of a molecular cloud. We speculate that these clouds may be direct probes of the precursor sites of star formation in normal galaxies at high redshift.

  5. Tornado missile simulation and design methodology. Volume 1: simulation methodology, design applications, and TORMIS computer code. Final report

    SciTech Connect (OSTI)

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments. Sensitivity analyses have been performed on both the individual models and the integrated methodology, and risk has been assessed for a hypothetical nuclear power plant design case study.
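The sequenced Monte Carlo structure described above, sampling each event in the hazard chain and tallying outcomes, can be shown in miniature. This is not TORMIS; the event chain is collapsed to three stages and all rates and fractions are invented for illustration.

```python
import random

# Toy Monte Carlo risk estimate in the style of a tornado-missile
# simulation: sample a chain of events per trial and count impacts.
# strike_rate, launch_prob, and target_frac are hypothetical values.

random.seed(42)

def annual_impact_probability(trials=100_000,
                              strike_rate=1e-3,   # tornado strikes site/yr
                              launch_prob=0.05,   # missile becomes airborne
                              target_frac=0.01):  # trajectory hits target
    impacts = 0
    for _ in range(trials):
        if (random.random() < strike_rate and
                random.random() < launch_prob and
                random.random() < target_frac):
            impacts += 1
    return impacts / trials

p = annual_impact_probability()
print(p)  # analytically 5e-7 here, so most 1e5-trial runs report 0.0
```

Rare-event chains like this one are why production codes use far more trials, variance-reduction techniques, or direct convolution of the event-model distributions rather than naive sampling.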

  6. SASSI Methodology-Based Sensitivity Studies for Deeply Embedded...

    Office of Environmental Management (EM)

    SASSI Methodology-Based Sensitivity Studies for Deeply Embedded Structures, Such As Small Modular Reactors (SMRs)

  7. MODELING UNDERGROUND STRUCTURE VULNERABILITY IN JOINTED ROCK

    SciTech Connect (OSTI)

    R. SWIFT; D. STEEDMAN

    2001-02-01

    The vulnerability of underground structures and openings in deep jointed rock to ground shock attack is of chief concern to military planning and security. Damage and/or loss of stability to a structure in jointed rock, often manifested as brittle failure and accompanied by block movement, can depend significantly on joint properties, such as spacing, orientation, strength, and block character. We apply a hybrid Discrete Element Method combined with the Smooth Particle Hydrodynamics approach to simulate the MIGHTY NORTH event, a definitive high-explosive test performed on an aluminum lined cylindrical opening in jointed Salem limestone. Representing the limestone with discrete elements having elastic equivalence and explicit brittle tensile behavior, and the liner as an elastic-plastic continuum, provides good agreement with the experiment and with damage obtained from finite-element simulations. Extending the approach to parameter variations shows that damage is substantially altered by differences in joint geometry and liner properties.

  8. Three Dimensional Response Spectrum Soil Structure Modeling Versus Conceptual Understanding To Illustrate Seismic Response Of Structures

    SciTech Connect (OSTI)

    Touqan, Abdul Razzaq

    2008-07-08

    Present methods of analysis and mathematical modeling contain so many assumptions separating them from reality that they represent a defect in design and make it difficult to analyze the reasons for failure. Three-dimensional (3D) modeling is far superior to 1D or 2D modeling; static analysis deviates from the true nature of earthquake load, which is 'a dynamic punch'; and conflicting assumptions exist between structural engineers (who assume flexible structures on rigid block foundations) and geotechnical engineers (who assume flexible foundations supporting rigid structures). Thus 3D dynamic soil-structure interaction is a step that removes many of these assumptions and comes closer to reality. However, such a model cannot be solved analytically; we need to anatomize and analogize it. The paper presents a conceptual (analogical) 1D model for soil-structure interaction and clarifies it by comparing its outcome with 3D dynamic soil-structure finite element analyses of two structures. The aim is to focus on how to calculate the period of the structure and to investigate the effect of stiffness variation on soil-structure interaction.

  9. Scientists model brain structure to help computers recognize...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The team tried developing a computer model based on human neural structure and function, ... Introspectively, we know that the human brain solves this problem very well. We only have ...

  10. Climate Change Modeling and Downscaling Issues and Methodological Perspectives for the U.S. National Climate Assessment

    SciTech Connect (OSTI)

    Janetos, Anthony C.; Collins, William D.; Wuebbles, D.J.; Diffenbaugh, Noah; Hayhoe, Katharine; Hibbard, Kathleen A.; Hurtt, George

    2012-03-31

    This is the full workshop report for the modeling workshop we did for the National Climate Assessment, with DOE support.

  11. Energy Intensity Indicators: Methodology | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Intensity Indicators: Methodology. The files listed below contain methodology documentation and related studies that support the information presented on this website. The files are available to view and/or download as Adobe Acrobat PDF files. 2003. Energy Indicators System: Index Construction Methodology. 2004. Changing the Base Year for the Index. Boyd GA, and JM Roop. 2004. "A Note on the Fisher Ideal Index Decomposition for Structural Change in Energy Intensity."

  12. Structure formation in a nonlocally modified gravity model

    SciTech Connect (OSTI)

    Park, Sohyun; Dodelson, Scott

    2013-01-01

    We study a nonlocally modified gravity model proposed by Deser and Woodard which gives an explanation for current cosmic acceleration. By deriving and solving the equations governing the evolution of the structure in the Universe, we show that this model predicts a pattern of growth that differs from standard general relativity (+dark energy) at the 10-30% level. These differences will be easily probed by the next generation of galaxy surveys, so the model should be tested shortly.

  13. Cement-aggregate compatibility and structure property relationships including modelling

    SciTech Connect (OSTI)

    Jennings, H.M.; Xi, Y.

    1993-07-15

    The role of aggregate, and its interface with cement paste, is discussed with a view toward establishing models that relate structure to properties. Both short (nm) and long (mm) range structure must be considered. The short range structure of the interface depends not only on the physical distribution of the various phases, but also on moisture content and reactivity of aggregate. Changes that occur on drying, i.e. shrinkage, may alter the structure which, in turn, feeds back to alter further drying and shrinkage. The interaction is dynamic, even without further hydration of cement paste, and the dynamic characteristic must be considered in order to fully understand and model its contribution to properties. Microstructure and properties are two subjects which have been pursued somewhat separately. This review discusses both disciplines with a view toward finding common research goals in the future. Finally, comment is made on possible chemical reactions which may occur between aggregate and cement paste.

  14. Advances on statistical/thermodynamical models for unpolarized structure functions

    SciTech Connect (OSTI)

    Trevisan, Luis A.; Mirez, Carlos; Tomio, Lauro

    2013-03-25

    During the eighties and nineties, many statistical/thermodynamical models were proposed to describe the nucleons' structure functions and the distribution of quarks in hadrons. Most of these models describe the quarks and gluons inside the nucleon as a Fermi/Bose gas, respectively, confined in an MIT bag with continuous energy levels. Other models consider a discrete spectrum. Some interesting features of the nucleons are obtained by these models, like the sea asymmetries {sup -}d/{sup -}u and {sup -}d-{sup -}u.

  15. PHASE STRUCTURE OF TWISTED EGUCHI-KAWAI MODEL.

    SciTech Connect (OSTI)

    ISHIKAWA,T.; AZEYANAGI, T.; HANADA, M.; HIRATA, T.

    2007-07-30

    We study the phase structure of the four-dimensional twisted Eguchi-Kawai model using numerical simulations. This model is an effective tool for studying SU(N) gauge theory in the large-N limit and provides a nonperturbative formulation of the gauge theory on noncommutative spaces. Recently it was found that its Z{sub n}{sup 4} symmetry, which is crucial for the validity of this model, can break spontaneously in the intermediate coupling region. We investigate in detail the symmetry breaking point from the weak coupling side. Our simulation results show that the continuum limit of this model cannot be taken.

  16. Mechanical modeling of the growth of salt structures

    SciTech Connect (OSTI)

    Alfaro, R.A.M.

    1993-05-01

    A 2D numerical model for studying the morphology and history of salt structures by way of computer simulations is presented. The model is based on conservation laws for physical systems, a fluid marker equation to keep track of the salt/sediments interface, and two constitutive laws for rocksalt. When buoyancy alone is considered, the fluid-assisted diffusion model predicts evolution of salt structures 2.5 times faster than the power-law creep model. Both rheological laws predict strain rates of the order of 4.0 {times} 10{sup {minus}15}s{sup {minus}1} for similar structural maturity level of salt structures. Equivalent stresses and viscosities predicted by the fluid-assisted diffusion law are 10{sup 2} times smaller than those predicted by the power-law creep rheology. Use of East Texas Basin sedimentation rates and power-law creep rheology indicate that differential loading is an effective mechanism to induce perturbations that amplify and evolve to mature salt structures, similar to those observed under natural geological conditions.

  17. Structure and thermodynamics of core-softened models for alcohols

    SciTech Connect (OSTI)

    Munaò, Gianmarco; Urbic, Tomaz

    2015-06-07

    The phase behavior and the fluid structure of coarse-grain models for alcohols are studied by means of reference interaction site model (RISM) theory and Monte Carlo simulations. Specifically, we model ethanol and 1-propanol as linear rigid chains constituted by three (trimers) and four (tetramers) partially fused spheres, respectively. Thermodynamic properties of these models are examined in the RISM context, by employing closed formulæ for the calculation of free energy and pressure. Gas-liquid coexistence curves for trimers and tetramers are reported and compared with already existing data for a dimer model of methanol. Critical temperatures slightly increase with the number of CH{sub 2} groups in the chain, while critical pressures and densities decrease. Such a behavior qualitatively reproduces the trend observed in experiments on methanol, ethanol, and 1-propanol and suggests that our coarse-grain models, despite their simplicity, can reproduce the essential features of the phase behavior of such alcohols. The fluid structure of these models is investigated by computing radial distribution function g{sub ij}(r) and static structure factor S{sub ij}(k); the latter shows the presence of a low−k peak at intermediate-high packing fractions and low temperatures, suggesting the presence of aggregates for both trimers and tetramers.

  18. Model of evolution of surface grain structure under ion bombardment

    SciTech Connect (OSTI)

    Knyazeva, Anna G.; Kryukova, Olga N.

    2014-11-14

    Diffusion and chemical reactions in multicomponent systems play an important role in numerous technology applications. For example, surface treatment of materials and coatings by a particle beam changes the chemical composition and grain structure. To investigate the thermal-diffusion and chemical processes affecting the evolution of surface structure, mathematical modeling is an efficient addition to experiment. In this paper a two-dimensional model is discussed that describes the evolution of a titanium nitride coating on an iron substrate under implantation of boron and carbon. The equations for the diffusion fluxes and the reaction rate are obtained by expanding the Gibbs energy into a series with respect to the concentrations and their gradients.

  19. DOE Challenge Home Label Methodology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE Challenge Home Label Methodology, October 2012. Contents: Background; Methodology; Comfort/Quiet

  20. Shell model description of band structure in 48Cr

    SciTech Connect (OSTI)

    Vargas, Carlos E.; Velazquez, Victor M.

    2007-02-12

    The band structure for normal and abnormal parity bands in 48Cr is described using the m-scheme shell model. In addition to the full fp shell, two particles in the 1d3/2 orbital are allowed in order to describe intruder states. The interaction includes fp-, sd-, and mixed matrix elements.

  1. Modeling Blast Loading on Buried Reinforced Concrete Structures with Zapotec

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bessette, Greg C.

    2008-01-01

    A coupled Euler-Lagrange solution approach is used to model the response of a buried reinforced concrete structure subjected to a close-in detonation of a high explosive charge. The coupling algorithm is discussed along with a set of benchmark calculations involving detonations in clay and sand.

  2. INTERLINE 5.0 -- An expanded railroad routing model: Program description, methodology, and revised user`s manual

    SciTech Connect (OSTI)

    Johnson, P.E.; Joy, D.S.; Clarke, D.B.; Jacobi, J.M.

    1993-03-01

    A rail routing model, INTERLINE, has been developed at the Oak Ridge National Laboratory to investigate potential routes for transporting radioactive materials. In Version 5.0, the INTERLINE routing algorithms have been enhanced to include the ability to predict alternative routes, barge routes, and population statistics for any route. The INTERLINE railroad network is essentially a computerized rail atlas describing the US railroad system. All rail lines, with the exception of industrial spurs, are included in the network. Inland waterways and deep water routes, along with their interchange points with the US railroad system, are also included. The network contains over 15,000 rail and barge segments (links) and over 13,000 stations, interchange points, ports, and other locations (nodes). The INTERLINE model has been converted to operate on an IBM-compatible personal computer. At least a 286 computer with a hard disk containing approximately 6 MB of free space is recommended. Enhanced program performance will be obtained by using a random-access memory drive on a 386 or 486 computer.
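Routing over a link/node network like INTERLINE's is, at its core, a shortest-path computation. The sketch below shows the idea with Dijkstra's algorithm on a toy network; INTERLINE's actual algorithms and data model are of course far more elaborate (alternative routes, barge interchanges, population overlays), and the node names and mileages here are invented.

```python
import heapq

def shortest_route(links, origin, dest):
    """Dijkstra shortest path over a rail network given as an adjacency
    map {node: [(neighbor, miles), ...]}; returns (total_miles, path)."""
    pq = [(0.0, origin, [origin])]
    visited = set()
    while pq:
        dist, node, path = heapq.heappop(pq)
        if node == dest:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, miles in links.get(node, ()):
            if nbr not in visited:
                heapq.heappush(pq, (dist + miles, nbr, path + [nbr]))
    return float("inf"), []  # destination unreachable

# Toy network: a direct 400-mile link vs. a 270-mile route via B
rails = {"A": [("B", 120), ("C", 400)], "B": [("C", 150)]}
dist, path = shortest_route(rails, "A", "C")
```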

  3. Comparison of {gamma}Z-structure function models

    SciTech Connect (OSTI)

    Rislow, Benjamin C.

    2013-11-01

    The {gamma}Z-box is an important contribution to the proton's weak charge. The {gamma}Z-box is calculated dispersively and depends on {gamma}Z-structure functions, F{sub {gamma}Z1,2,3}(x,Q{sup 2}) . At present there is no data for these structure functions and they must be modeled by modifying existing fits to electromagnetic data. Each group that has studied the {gamma}Z-box used different modifications. The results of the PVDIS experiment at Jefferson Lab may provide a first test of the validity of each group's models. I present details of the different models and their predictions for the PVDIS result.

  4. Quantitative Analysis of Variability and Uncertainty in Environmental Data and Models. Volume 1. Theory and Methodology Based Upon Bootstrap Simulation

    SciTech Connect (OSTI)

    Frey, H. Christopher; Rhodes, David S.

    1999-04-30

    This is Volume 1 of a two-volume set of reports describing work conducted at North Carolina State University sponsored by Grant Number DE-FG05-95ER30250 by the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.
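The bootstrap methodology named in the title resamples the observed data with replacement to characterize variability in a statistic. A minimal percentile-bootstrap sketch, not the report's implementation:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap (1 - alpha) confidence interval for stat(data).

    Draws n_boot resamples (with replacement, same size as data), computes
    the statistic on each, and reads the interval off the sorted replicates."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int(n_boot * (alpha / 2))]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# 95% CI for the mean of the integers 0..9 (true mean 4.5)
low, high = bootstrap_ci(list(range(10)))
```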

  5. Computational Structural Mechanics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

  6. Modeling the initiation and growth of delaminations in composite structures

    SciTech Connect (OSTI)

    Reedy, E.D. Jr.; Mello, F.J.; Guess, T.R.

    1996-01-01

    A method for modeling the initiation and growth of discrete delaminations in shell-like composite structures is presented. The laminate is divided into two or more sublaminates, with each sublaminate modeled with 4-noded quadrilateral shell elements. A special, 8-noded hex constraint element connects the sublaminates and makes them act as a single laminate until a prescribed failure criterion is attained. When the failure criterion is reached, the connection is broken, and a discrete delamination is initiated or grows. This approach has been implemented in a three-dimensional, finite element code. This code uses explicit time integration, and can analyze shell-like structures subjected to large deformations and complex contact conditions. Tensile, compressive, and shear laminate failures are also modeled. This paper describes the 8-noded hex constraint element used to model the initiation and growth of a delamination, and discusses associated implementation issues. In addition, calculated results for double cantilever beam and end notched flexure specimens are presented and compared to measured data to assess the ability of the present approach to reproduce observed behavior. Results are also presented for a diametrally compressed ring to demonstrate the capacity to analyze progressive failure in a highly deformed composite structure.
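The abstract does not spell out the prescribed failure criterion that breaks the hex constraint element. A common choice for delamination onset is a quadratic stress interaction, sketched below purely as an illustration; the criterion and the strength values Zn, Zs are assumptions, not taken from the paper.

```python
def delamination_onset(sigma_n, tau, z_n, z_s):
    """Quadratic stress-interaction criterion (an assumed, common choice):
    (<sigma_n>/Zn)^2 + (tau/Zs)^2 >= 1 triggers interface failure, where
    <.> keeps only tensile (opening) normal stress, since compression
    does not drive delamination opening."""
    opening = max(sigma_n, 0.0)
    return (opening / z_n) ** 2 + (tau / z_s) ** 2 >= 1.0
```

Once this returns True for an interface point, the sublaminate connection would be released and the discrete delamination allowed to grow.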

  7. Structure formation in inhomogeneous Early Dark Energy models

    SciTech Connect (OSTI)

    Batista, R.C.; Pace, F. E-mail: francesco.pace@port.ac.uk

    2013-06-01

    We study the impact of Early Dark Energy fluctuations in the linear and non-linear regimes of structure formation. In these models the energy density of dark energy is non-negligible at high redshifts and the fluctuations in the dark energy component can have the same order of magnitude of dark matter fluctuations. Since two basic approximations usually taken in the standard scenario of quintessence models, that both dark energy density during the matter dominated period and dark energy fluctuations on small scales are negligible, are not valid in such models, we first study approximate analytical solutions for dark matter and dark energy perturbations in the linear regime. This study is helpful to find consistent initial conditions for the system of equations and to analytically understand the effects of Early Dark Energy and its fluctuations, which are also verified numerically. In the linear regime we compute the matter growth and variation of the gravitational potential associated with the Integrated Sachs-Wolfe effect, showing that these observables present important modifications due to Early Dark Energy fluctuations, though making them more similar to the ΛCDM model. We also make use of the Spherical Collapse model to study the influence of Early Dark Energy fluctuations in the nonlinear regime of structure formation, especially on the δ{sub c} parameter, and their contribution to the halo mass, which we show can be of the order of 10%. We finally compute how the number density of halos is modified in comparison to the ΛCDM model and address the problem of how to correct the mass function in order to take into account the contribution of clustered dark energy. We conclude that the inhomogeneous Early Dark Energy models are more similar to the ΛCDM model than their homogeneous counterparts.

  8. Lattice and off-lattice side chain models of protein folding: Linear time structure prediction better than 86% of optimal

    SciTech Connect (OSTI)

    Hart, W.E.; Istrail, S. [Sandia National Labs., Albuquerque, NM (United States). Algorithms and Discrete Mathematics Dept.

    1996-08-09

    This paper considers the protein structure prediction problem for lattice and off-lattice protein folding models that explicitly represent side chains. Lattice models of proteins have proven extremely useful tools for reasoning about protein folding in unrestricted continuous space through analogy. This paper provides the first illustration of how rigorous algorithmic analyses of lattice models can lead to rigorous algorithmic analyses of off-lattice models. The authors consider two side chain models: a lattice model that generalizes the HP model (Dill 85) to explicitly represent side chains on the cubic lattice, and a new off-lattice model, the HP Tangent Spheres Side Chain model (HP-TSSC), that generalizes this model further by representing the backbone and side chains of proteins with tangent spheres. They describe algorithms for both of these models with mathematically guaranteed error bounds. In particular, the authors describe a linear time performance guaranteed approximation algorithm for the HP side chain model that constructs conformations whose energy is better than 86% of optimal in a face centered cubic lattice, and they demonstrate how this provides a 70% performance guarantee for the HP-TSSC model. This is the first algorithm in the literature for off-lattice protein structure prediction that has a rigorous performance guarantee. The analysis of the HP-TSSC model builds on the work of Dancik and Hannenhalli, who have developed a 16/30 approximation algorithm for the HP model on the hexagonal close packed lattice. Further, the analysis provides a mathematical methodology for transferring performance guarantees on lattices to off-lattice models. These results partially answer the open question of Karplus et al. concerning the complexity of protein folding models that include side chains.
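For readers unfamiliar with the HP model: a conformation's energy is simply minus the number of hydrophobic (H) contacts between monomers that are lattice neighbors but not chain neighbors, and the approximation guarantees above are stated against the optimum of this energy. A basic cubic/square-lattice energy evaluation:

```python
def hp_energy(sequence, coords):
    """HP-model energy: -1 for each pair of H monomers that are lattice
    neighbors (unit Manhattan distance) but not adjacent along the chain.

    sequence: string over {"H", "P"}; coords: one lattice point per monomer."""
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):  # j = i + 1 is a chain neighbor
            if sequence[i] == "H" and sequence[j] == "H":
                if sum(abs(a - b) for a, b in zip(coords[i], coords[j])) == 1:
                    energy -= 1
    return energy

# A 4-mer folded into a unit square: one non-bonded H-H contact
e = hp_energy("HHHH", [(0, 0), (1, 0), (1, 1), (0, 1)])
```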

  9. Reduced order modeling of fluid/structure interaction.

    SciTech Connect (OSTI)

    Barone, Matthew Franklin; Kalashnikova, Irina; Segalman, Daniel Joseph; Brake, Matthew Robert

    2009-11-01

    This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.
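A common way to build the snapshot-based ROMs described here is proper orthogonal decomposition (POD) followed by Galerkin projection. The sketch below extracts only the leading POD mode, via power iteration on the snapshot correlation matrix; the report's actual construction (and its stability-preserving projection and boundary treatment) is considerably more involved.

```python
def pod_leading_mode(snapshots, iters=200):
    """Leading POD mode by power iteration on C = sum_k s_k s_k^T,
    applied matrix-free: C v = sum_k (s_k . v) s_k.

    snapshots: list of equal-length state vectors (lists of floats)."""
    n = len(snapshots[0])
    v = [1.0] * n
    for _ in range(iters):
        coeffs = [sum(s[i] * v[i] for i in range(n)) for s in snapshots]
        v = [sum(c * s[i] for c, s in zip(coeffs, snapshots)) for i in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

# Snapshots dominated by the first coordinate -> mode converges to (1, 0)
mode = pod_leading_mode([[1.0, 0.0], [2.0, 0.0], [0.0, 0.1]])
```

A Galerkin ROM would then project the governing equations onto the span of the first few such modes.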

  10. Modeling of fracture of protective concrete structures under impact loads

    SciTech Connect (OSTI)

    Radchenko, P. A. Batuev, S. P.; Radchenko, A. V.; Plevkov, V. S.

    2015-10-27

    This paper presents results of numerical simulation of interaction between a Boeing 747-400 aircraft and the protective shell of a nuclear power plant. The shell is presented as a complex multilayered cellular structure consisting of layers of concrete and fiber concrete bonded with steel trusses. Numerical simulation was performed three-dimensionally using the original algorithm and software taking into account algorithms for building grids of complex geometric objects and parallel computations. Dynamics of the stress-strain state and fracture of the structure were studied. Destruction is described using a two-stage model that allows taking into account anisotropy of elastic and strength properties of concrete and fiber concrete. It is shown that wave processes initiate destruction of the cellular shell structure; cells start to destruct in an unloading wave originating after the compression wave arrival at free cell surfaces.

  11. Scaling issues associated with thermal and structural modeling and testing

    SciTech Connect (OSTI)

    Thomas, R.K.; Moya, J.L.; Skocypec, R.D.

    1993-10-01

    Sandia National Laboratories (SNL) is actively engaged in research to characterize abnormal environments, and to improve our capability to accurately predict the response of engineered systems to thermal and structural events. Abnormal environments, such as impact and fire, are complex and highly nonlinear phenomena which are difficult to model by computer simulation. Validation of computer results with full scale, high fidelity test data is required. The number of possible abnormal environments and the range of initial conditions are very large. Because full-scale tests are very costly, only a minimal number have been conducted. Scale model tests are often performed to span the range of abnormal environments and initial conditions unobtainable by full-scale testing. This paper will discuss testing capabilities at SNL, issues associated with thermal and structural scaling, and issues associated with extrapolating scale model data to full-scale system response. Situated a few minutes from Albuquerque, New Mexico, are the unique test facilities of Sandia National Laboratories. The testing complex is comprised of over 40 facilities which occupy over 40 square miles. Many of the facilities have been designed and built by SNL to simulate complex problems encountered in engineering analysis and design. The facilities can provide response measurements, under closely controlled conditions, to both verify mathematical models of engineered systems and satisfy design specifications.

  12. Ultrafast Structural Dynamics in Combustion Relevant Model Systems

    SciTech Connect (OSTI)

    Weber, Peter M.

    2014-03-31

    The research project explored the time resolved structural dynamics of important model reaction systems using an array of novel methods that were developed specifically for this purpose. They include time resolved electron diffraction, time resolved relativistic electron diffraction, and time resolved Rydberg fingerprint spectroscopy. Toward the end of the funding period, we also developed time-resolved x-ray diffraction, which uses ultrafast x-ray pulses at LCLS. Those experiments are just now blossoming, as the funding period expired. In the following, time resolved Rydberg fingerprint spectroscopy is discussed in some detail, as it has been a very productive method. The binding energy of an electron in a Rydberg state, that is, the energy difference between the Rydberg level and the ground state of the molecular ion, has been found to be a uniquely powerful tool to characterize the molecular structure. To rationalize the structure sensitivity we invoke a picture from electron diffraction: when it passes the molecular ion core, the Rydberg electron experiences a phase shift compared to an electron in a hydrogen atom. This phase shift requires an adjustment of the binding energy of the electron, which is measurable. As in electron diffraction, the phase shift depends on the molecular geometrical structure, so that a measurement of the electron binding energy can be interpreted as a measurement of the molecule's structure. Building on this insight, we have developed a structurally sensitive spectroscopy: the molecule is first elevated to the Rydberg state, and the binding energy is then measured using photoelectron spectroscopy. The molecule's structure is read out as the binding energy spectrum. Since the photoionization can be done with ultrafast laser pulses, the technique is inherently capable of a time resolution in the femtosecond regime. For the purpose of identifying the structures of molecules during chemical reactions, and for the analysis of
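The link between binding energy and the core-induced phase shift is conventionally expressed through a quantum defect δ, with E_b = Ry/(n - δ)². A minimal numeric sketch; the quantum-defect values are structure dependent and purely hypothetical here:

```python
RYDBERG_EV = 13.605693  # Rydberg constant in eV

def rydberg_binding_energy(n, delta):
    """Binding energy (eV) of a Rydberg level: E_b = Ry / (n - delta)^2.

    The quantum defect delta absorbs the core-induced phase shift the
    abstract describes; its value here is a hypothetical input."""
    return RYDBERG_EV / (n - delta) ** 2

# A larger phase shift (quantum defect) binds the n = 3 level more tightly
e_atomiclike = rydberg_binding_energy(3, 0.0)  # hydrogen-like, ~1.51 eV
e_shifted = rydberg_binding_energy(3, 0.5)     # core-shifted, ~2.18 eV
```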

  13. Nonlinear structure formation in the cubic Galileon gravity model

    SciTech Connect (OSTI)

    Barreira, Alexandre; Li, Baojiu; Hellwing, Wojciech A.; Baugh, Carlton M.; Pascoli, Silvia E-mail: baojiu.li@durham.ac.uk E-mail: c.m.baugh@durham.ac.uk

    2013-10-01

    We model the linear and nonlinear growth of large scale structure in the Cubic Galileon gravity model, by running a suite of N-body cosmological simulations using the ECOSMOG code. Our simulations include the Vainshtein screening effect, which reconciles the Cubic Galileon model with local tests of gravity. In the linear regime, the amplitude of the matter power spectrum increases by ~20% with respect to the standard ΛCDM model today. The modified expansion rate accounts for ~15% of this enhancement, while the fifth force is responsible for only ~5%. This is because the effective unscreened gravitational strength deviates from standard gravity only at late times, even though it can be twice as large today. In the nonlinear regime (k ≳ 0.1h Mpc{sup -1}), the fifth force leads to only a modest increase (≲8%) in the clustering power on all scales due to the very efficient operation of the Vainshtein mechanism. Such a strong effect is typically not seen in other models with the same screening mechanism. The screening also results in the fifth force increasing the number density of halos by less than 10%, on all mass scales. Our results show that the screening does not ruin the validity of linear theory on large scales, which anticipates very strong constraints from galaxy clustering data. We also show that, whilst the model gives an excellent match to CMB data on small angular scales (l ≳ 50), the predicted integrated Sachs-Wolfe effect is in tension with Planck/WMAP results.

  14. Numerical modeling of solar magnetostatic structures bounded by current sheets

    SciTech Connect (OSTI)

    Pizzo, V.J.

    1990-12-01

    A numerical method for efficiently determining the magnetostatic equilibrium configuration of erupted solar flux concentrations, such as sunspots and flux tubes, is presented. The magnetic structures are taken to be approximately vertically oriented and axisymmetric in the surface layers and are assumed to be isolated from the surrounding photosphere by a vanishingly thin current sheet. Since the location of the current sheet is initially unknown, the final structure is generated iteratively as a free-surface problem, with the magnetic configuration for each iterate being obtained from the horizontal force balance equation, subject to the appropriate boundary conditions. Multigrid methods are used at each stage to solve the equilibrium equation, which is mapped algebraically into a body-fitted coordinate system via transfinite interpolation techniques. Several model flux tubes and sunspots are computed to illustrate the procedure, and the accuracy of the numerical method is assessed against exact analytic solutions. 32 refs.

  15. 2008 ASC Methodology Errata

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BONNEVILLE POWER ADMINISTRATION'S ERRATA CORRECTIONS TO THE 2008 AVERAGE SYSTEM COST METHODOLOGY September 12, 2008 I. DESCRIPTION OF ERRATA CORRECTIONS A. Attachment A, ASC...

  16. Draft Tiered Rate Methodology

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    For Regional Dialogue Discussion Purposes Only Pre-Decisional Draft Tiered Rates Methodology March 7, 2008 Pre-decisional, Deliberative, For Discussion Purposes Only March 7,...

  17. Phase structure in a chiral model of nuclear matter

    SciTech Connect (OSTI)

    Phat, Tran Huu; Anh, Nguyen Tuan; Tam, Dinh Thanh

    2011-08-15

    The phase structure of symmetric nuclear matter in the extended Nambu-Jona-Lasinio (ENJL) model is studied by means of the effective potential in the one-loop approximation. It is found that chiral symmetry gets restored at high nuclear density and a typical first-order phase transition of the liquid-gas transition occurs at zero temperature, T=0, which weakens as T grows and eventually ends up with a second-order critical point at T=20 MeV. This phase transition scenario is confirmed by investigating the evolution of the effective potential versus the effective nucleon mass and the equation of state.

  18. The growth of structure in interacting dark energy models

    SciTech Connect (OSTI)

    Caldera-Cabral, Gabriela; Maartens, Roy; Schaefer, Bjoern Malte E-mail: roy.maartens@port.ac.uk

    2009-07-01

    If dark energy interacts with dark matter, there is a change in the background evolution of the universe, since the dark matter density no longer evolves as a{sup -3}. In addition, the non-gravitational interaction affects the growth of structure. In principle, these changes allow us to detect and constrain an interaction in the dark sector. Here we investigate the growth factor and the weak lensing signal for a new class of interacting dark energy models. In these models, the interaction generalises the simple cases where one dark fluid decays into the other. In order to calculate the effect on structure formation, we perform a careful analysis of the perturbed interaction and its effect on peculiar velocities. Assuming a normalization to today's values of dark matter density and overdensity, the signal of the interaction is an enhancement (suppression) of both the growth factor and the lensing power, when the energy transfer in the background is from dark matter to dark energy (dark energy to dark matter).
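The background modification mentioned in the first sentence can be illustrated by letting the dark matter density dilute as a^(-3+ε) instead of a^(-3), with ε parameterizing the energy transfer. The parameter ε here is a generic placeholder for exposition, not the paper's specific coupling.

```python
def dm_density(a, rho0, epsilon=0.0):
    """Dark matter density vs. scale factor a when an interaction makes
    it dilute as a^(-3 + epsilon); epsilon = 0 recovers the standard
    a^-3 scaling of non-interacting dark matter.  epsilon is a generic
    stand-in for the energy-transfer rate, not the paper's model."""
    return rho0 * a ** (-3.0 + epsilon)

# At half today's scale factor, standard dark matter is 8x denser than today;
# a nonzero epsilon tilts that scaling.
standard = dm_density(0.5, 1.0)        # exactly 8.0
interacting = dm_density(0.5, 1.0, 0.1)
```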

  19. Light Water Reactor Sustainability Program Advanced Seismic Soil Structure Modeling

    SciTech Connect (OSTI)

    Bolisetti, Chandrakanth; Coleman, Justin Leigh

    2015-06-01

    Risk calculations should focus on providing best estimate results, and associated insights, for evaluation and decision-making. Specifically, seismic probabilistic risk assessments (SPRAs) are intended to provide best estimates of the various combinations of structural and equipment failures that can lead to a seismic induced core damage event. However, in some instances the current SPRA approach has large uncertainties, and potentially masks other important events (for instance, it was not the seismic motions that caused the Fukushima core melt events, but the tsunami ingress into the facility). SPRAs are performed by convolving the seismic hazard (the estimate of all likely damaging earthquakes at the site of interest) with the seismic fragility (the conditional probability of failure of a structure, system, or component given the occurrence of earthquake ground motion). In this calculation, there are three main pieces to seismic risk quantification: 1) the seismic hazard and the nuclear power plant (NPP) response to the hazard, 2) the fragility or capacity of structures, systems and components (SSCs), and 3) systems analysis. Two areas where nonlinear soil-structure interaction (NLSSI) effects may be important in SPRA calculations are 1) calculating in-structure response at the area of interest, and 2) calculating seismic fragilities (current fragility calculations assume a lognormal distribution for the probability of failure of components). Important effects when using NLSSI in the SPRA calculation process include 1) gapping and sliding, 2) inclined seismic waves coupled with gapping and sliding of foundations atop soil, 3) inclined seismic waves coupled with gapping and sliding of deeply embedded structures, 4) soil dilatancy, 5) soil liquefaction, 6) surface waves, 7) buoyancy, 8) concrete cracking, and 9) seismic isolation. The focus of the research task presented herein is on implementation of NLSSI into the SPRA calculation process when calculating in-structure response at the area of interest.
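The hazard-fragility convolution described above can be sketched numerically: discretize the hazard curve into ground-motion bins, and weight each bin's annual occurrence frequency by a lognormal fragility evaluated at that motion level. All numbers below are illustrative, not plant-specific.

```python
import math

def lognormal_cdf(x, median, beta):
    """P(capacity <= x) for a lognormal fragility with the given median
    capacity and composite log-standard deviation beta."""
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (beta * math.sqrt(2.0))))

def annual_failure_frequency(hazard, median, beta):
    """Convolve a discretized hazard curve with a lognormal fragility.

    hazard: [(pga_g, annual_exceedance_frequency), ...], pga increasing.
    Each bin's occurrence frequency (h0 - h1) is weighted by the fragility
    at the bin midpoint.  A simplified SPRA quantification step."""
    total = 0.0
    for (a0, h0), (a1, h1) in zip(hazard, hazard[1:]):
        total += (h0 - h1) * lognormal_cdf(0.5 * (a0 + a1), median, beta)
    return total

# Illustrative 3-point hazard curve, median capacity 0.9 g, beta 0.4
hazard = [(0.1, 1e-2), (0.5, 1e-3), (1.0, 1e-4)]
p_fail = annual_failure_frequency(hazard, median=0.9, beta=0.4)
```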

  20. Refinement of Modeling Techniques for the Structural Evaluation of Hanford Single-Shell Nuclear Waste Storage Tanks

    SciTech Connect (OSTI)

    Karri, Naveen K.; Rinker, Michael W.; Johnson, Kenneth I.; Bapanapalli, Satish K.

    2012-11-10

    Several tanks at the Hanford Site (in Washington State, USA) belong to the first generation of underground nuclear waste storage tanks, known as single-shell tanks (SSTs). These tanks were constructed between 1943 and 1964 and are well beyond their design life. This article discusses the structural analysis approach and modeling challenges encountered during the ongoing analysis of record (AOR) for evaluating the structural integrity of the SSTs. There are several geometrical and material nonlinearities and uncertainties to be dealt with while performing the modern finite element analysis of these tanks. The analysis takes into account the temperature history of the tanks and the allowable mechanical operating loads of these tanks for proper estimation of creep strains and thermal degradation of material properties. The loads prescribed in the AOR models also include anticipated loads that these tanks may see during waste retrieval and closure. Due to uncertainty in a number of inputs to the models, sensitivity studies were conducted to address questions related to the boundary conditions that realistically or conservatively represent the influence of surrounding tanks in a tank farm, the influence of backfill excavation slope, the extent of backfill, and the total extent of undisturbed soil surrounding the backfill. Because of the limited availability of data on the thermal and operating history for many of the individual tanks, some of the data was assumed or interpolated. However, the models developed for the analysis of record represent the bounding scenarios and include the loading conditions that the tanks were subjected to or anticipated. The modeling refinement techniques followed in the AOR resulted in conservative estimates for force and moment demands at various sections in the concrete tanks. This article discusses the modeling aspects related to Type-II and Type-III SSTs. The modeling techniques, methodology and evaluation criteria developed for

  1. Used Nuclear Fuel Loading and Structural Performance Under Normal Conditions of Transport - Modeling, Simulation and Experimental Integration RD&D Plan

    SciTech Connect (OSTI)

    Adkins, Harold E.

    2013-04-01

    Under current U.S. Nuclear Regulatory Commission regulation, it is not sufficient for used nuclear fuel (UNF) to simply maintain its integrity during the storage period, it must maintain its integrity in such a way that it can withstand the physical forces of handling and transportation associated with restaging the fuel and moving it to treatment or recycling facilities, or a geologic repository. Hence it is necessary to understand the performance characteristics of aged UNF cladding and ancillary components under loadings stemming from transport initiatives. Researchers would like to demonstrate that enough information, including experimental support and modeling and simulation capabilities, exists to establish a preliminary determination of UNF structural performance under normal conditions of transport (NCT). This research, development and demonstration (RD&D) plan describes a methodology, including development and use of analytical models, to evaluate loading and associated mechanical responses of UNF rods and key structural components. This methodology will be used to provide a preliminary assessment of the performance characteristics of UNF cladding and ancillary components under rail-related NCT loading. The methodology couples modeling and simulation and experimental efforts currently under way within the Used Fuel Disposition Campaign (UFDC). The methodology will involve limited uncertainty quantification in the form of sensitivity evaluations focused around available fuel and ancillary fuel structure properties exclusively. The work includes collecting information via literature review, soliciting input/guidance from subject matter experts, performing computational analyses, planning experimental measurement and possible execution (depending on timing), and preparing a variety of supporting documents that will feed into and provide the basis for future initiatives. The methodology demonstration will focus on structural performance evaluation of

  2. Energy Intensity Indicators: Methodology Downloads | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    The files listed below contain methodology documentation and related studies that support the information presented on this website. The files are available to view and/or download as Adobe Acrobat PDF files. Energy Indicators System: Index Construction Methodology (101.17 KB); Changing the Base Year for the Index (23.98 KB); "A Note on the Fisher Ideal Index Decomposition for Structural Change in Energy

  3. Regional Shelter Analysis Methodology

    SciTech Connect (OSTI)

    Dillon, Michael B.; Dennison, Deborah; Kane, Jave; Walker, Hoyt; Miller, Paul

    2015-08-01

    The fallout from a nuclear explosion has the potential to injure or kill 100,000 or more people through exposure to external gamma (fallout) radiation. Existing buildings can reduce radiation exposure by placing material between fallout particles and exposed people. Lawrence Livermore National Laboratory was tasked with developing an operationally feasible methodology that could improve fallout casualty estimates. The methodology, called a Regional Shelter Analysis, combines the fallout protection that existing buildings provide civilian populations with the distribution of people in various locations. The Regional Shelter Analysis method allows the consideration of (a) multiple building types and locations within buildings, (b) country specific estimates, (c) population posture (e.g., unwarned vs. minimally warned), and (d) the time of day (e.g., night vs. day). The protection estimates can be combined with fallout predictions (or measurements) to (a) provide a more accurate assessment of exposure and injury and (b) evaluate the effectiveness of various casualty mitigation strategies. This report describes the Regional Shelter Analysis methodology, highlights key operational aspects (including demonstrating that the methodology is compatible with current tools), illustrates how to implement the methodology, and provides suggestions for future work.
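The core bookkeeping described above can be sketched in a few lines: the mean fraction of the outdoor gamma dose a population receives is the occupancy-weighted sum of 1/PF over shelter classes. The building classes, protection factors (PFs), and occupancy fractions below are hypothetical placeholders, not values from the report.

```python
# Hedged sketch of a Regional-Shelter-Analysis-style combination of building
# protection with population distribution. All numbers are illustrative.

def population_weighted_dose_fraction(shelter_mix):
    """shelter_mix: list of (population_fraction, protection_factor) pairs.

    Returns the mean fraction of the unsheltered (outdoor) dose received;
    a person in a shelter with protection factor PF receives dose/PF.
    """
    total = sum(fraction for fraction, _ in shelter_mix)
    if abs(total - 1.0) > 1e-9:
        raise ValueError("population fractions must sum to 1")
    return sum(fraction / pf for fraction, pf in shelter_mix)

# Hypothetical night posture: wood-frame homes, basements, outdoors.
night_mix = [(0.7, 10.0), (0.2, 40.0), (0.1, 1.0)]
print(population_weighted_dose_fraction(night_mix))
```

Different population postures (warned vs. unwarned) and times of day would simply swap in different occupancy mixes before combining with a fallout dose field.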

  4. DOE Systems Engineering Methodology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Systems Engineering Methodology (SEM), Computer System Retirement Guidelines, Version 3, September 2002. U.S. Department of Energy, Office of the Chief Information Officer.

  5. FINITE ELEMENT MODELS FOR COMPUTING SEISMIC INDUCED SOIL PRESSURES ON DEEPLY EMBEDDED NUCLEAR POWER PLANT STRUCTURES.

    SciTech Connect (OSTI)

    XU, J.; COSTANTINO, C.; HOFMAYER, C.

    2006-06-26

    This paper discusses computations of seismically induced soil pressures using finite element models for deeply embedded and/or buried stiff structures such as those appearing in the conceptual designs of structures for advanced reactors.

  6. Analysis Methodologies | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Systems Analysis » Analysis Methodologies. A spectrum of analysis methodologies is used in combination to provide a sound understanding of hydrogen and fuel cell systems and developing markets, as follows: Resource Analysis; Technological Feasibility and Cost Analysis; Environmental Analysis; Delivery Analysis; Infrastructure Development and Financial Analysis; Energy Market Analysis. In general, each methodology builds on previous efforts to quantify the benefits, drawbacks,

  7. Methodologies for Reservoir Characterization Using Fluid Inclusion...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry ...

  8. Refinement of Modeling Techniques for the Structural Evaluation of Hanford Single-Shell Nuclear Waste Storage Tanks - 12288

    SciTech Connect (OSTI)

    Karri, Naveen K.; Rinker, Michael W.; Johnson, Kenneth I.; Bapanapalli, Satish K.

    2012-07-01

    The single-shell tanks at the Hanford Site (in Washington State, USA) were constructed between 1943 and 1964 and are well beyond their estimated 25 year design life. This article discusses the structural analysis approach and modeling challenges encountered during the ongoing analysis of record for evaluating the structural integrity of the single-shell tanks. There are several geometrical and material nonlinearities and uncertainties to be dealt with while performing the modern finite element analysis of these tanks. The analysis takes into account the temperature history of the tanks and allowable mechanical operating loads for proper estimation of creep strains and thermal degradation of material properties. The loads prescribed in the analysis of record models also include anticipated loads that may occur during waste retrieval and closure. Due to uncertainty in a number of modeling details, sensitivity studies were conducted to address questions related to boundary conditions that realistically or conservatively represent the influence of surrounding tanks in a tank farm, the influence of backfill excavation slope, the extent of backfill and the total extent of undisturbed soil surrounding the backfill. Because of the limited availability of data on the thermal and operating history for many of the individual tanks, some of the data was assumed or interpolated. However, the models developed for the analysis of record represent the bounding scenarios and include the loading conditions that the tanks were subjected to or anticipated. The modeling refinement techniques followed in the analysis of record resulted in conservative estimates for force and moment demands at various sections in the concrete tanks. This article discusses the modeling aspects related to Type-II and Type-III single-shell tanks. The modeling techniques, methodology and evaluation criteria developed for evaluating the structural integrity of single-shell tanks at Hanford are in general

  9. Lifecycle Model

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    1997-05-21

    This chapter describes the lifecycle model used for the Departmental software engineering methodology.

  10. Beyond the Lone-Pair Model for Structurally Distorted Metal Oxides

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Beyond the Lone-Pair Model for Structurally Distorted Metal Oxides (Wednesday, 28 February 2007). "Ferroelectricity," by analogy to ferromagnetism, is defined as the presence of spontaneous electrical polarization in a material, often arising from distortions in the material's crystal structure. In oxides of the metals lead and bismuth, such distortions were for many years attributed to the existence of

  11. Emergency exercise methodology

    SciTech Connect (OSTI)

    Klimczak, C.A.

    1993-03-01

    Competence for proper response to hazardous materials emergencies is enhanced and effectively measured by exercises which test plans and procedures and validate training. Emergency exercises are most effective when realistic criteria are used and a sequence of events is followed. The scenario is developed from pre-determined exercise objectives based on hazard analyses, actual plans and procedures. The scenario should address findings from previous exercises and actual emergencies. Exercise rules establish the extent of play and address contingencies during the exercise. All exercise personnel are assigned roles as players, controllers or evaluators. These participants should receive specialized training in advance. A methodology for writing an emergency exercise plan will be detailed.

  12. Emergency exercise methodology

    SciTech Connect (OSTI)

    Klimczak, C.A.

    1993-01-01

    Competence for proper response to hazardous materials emergencies is enhanced and effectively measured by exercises which test plans and procedures and validate training. Emergency exercises are most effective when realistic criteria are used and a sequence of events is followed. The scenario is developed from pre-determined exercise objectives based on hazard analyses, actual plans and procedures. The scenario should address findings from previous exercises and actual emergencies. Exercise rules establish the extent of play and address contingencies during the exercise. All exercise personnel are assigned roles as players, controllers or evaluators. These participants should receive specialized training in advance. A methodology for writing an emergency exercise plan will be detailed.

  13. Refinement of Modeling Techniques for the Structural Evaluation of Hanford Single-Shell Nuclear Waste Storage Tanks

    SciTech Connect (OSTI)

    Karri, Naveen K.; Rinker, Michael W.; Johnson, Kenneth I.; Bapanapalli, Satish K.

    2012-03-01

    soil surrounding the backfill. The article also discusses the criteria and design standards used for evaluating the structural integrity of these underground concrete tanks. Because complete data on the thermal and operating history were not available for many of the individual tanks, some of the data were assumed or interpolated. However, the models developed for the analysis of record represent the bounding scenarios and include the worst and extreme loading cases that the tanks were subjected to or anticipated. The modeling refinement techniques followed in the AOR resulted in conservative estimates for force and moment demands at various sections in the concrete tanks. The SSTs are classified into four types according to their configuration and capacity. This article discusses the modeling aspects related to the two types of SSTs that have been analyzed to date. The TOLA results, combined with seismic demands from the seismic analysis for the analysis of record, indicate that the tanks analyzed are structurally stable per the established evaluation criteria. These results are presented in a separate article. The modeling techniques, methodology and evaluation criteria developed for evaluating the structural integrity of SSTs at Hanford are in general applicable to any similar tanks or underground concrete storage structures.

  14. Resolving the structure of Ti3C2Tx MXenes through multilevel structural modeling of the atomic pair distribution function

    SciTech Connect (OSTI)

    Wesolowski, David J.; Wang, Hsiu -Wen; Page, Katharine L.; Naguib, Michael; Gogotsi, Yury

    2015-12-08

    MXenes are a recently discovered family of two-dimensional (2D) early transition metal carbides and carbonitrides, which have already shown many attractive properties and great promise in energy storage and many other applications. However, a complex surface chemistry and small coherence length have been obstacles in some applications of MXenes, also limiting the accuracy of predictions of their properties. In this study, we describe and benchmark a novel way of modeling layered materials with real interfaces (diverse surface functional groups and stacking order between adjacent monolayers) against experimental data. The structures of three kinds of Ti3C2Tx MXenes (T stands for surface-terminating species, including O, OH, and F) produced under different synthesis conditions were resolved for the first time using the atomic pair distribution function obtained by high-quality neutron total scattering. The true nature of the material can be captured easily with the sensitivity of neutron scattering to the surface species of interest and the detailed third-generation structure model we present. The modeling approach leads to new understanding of MXene structural properties and can replace the currently used idealized models in predictions of a variety of physical, chemical, and functional properties of Ti3C2-based MXenes. Furthermore, the developed models can be employed to guide the design of new MXene materials with selected surface termination and controlled contact-angle, catalytic, optical, electrochemical, and other properties. We suggest that this multi-level structural modeling should form the basis for a generalized methodology for modeling diffraction and pair distribution function data for 2D and layered materials.
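The quantity being fitted above, the atomic pair distribution function, is built from the histogram of interatomic distances. A real PDF refinement against neutron total scattering also needs scattering lengths, periodicity, and instrument damping; the toy sketch below only bins pair distances for a small hypothetical atom cluster.

```python
import math
from itertools import combinations

# Toy pair-distance histogram, the raw ingredient of a pair distribution
# function. The four-atom "unit square" cluster below is hypothetical and
# serves only to show the binning; it is not an MXene structure.

def distance_histogram(atoms, r_max, dr):
    """Count pair distances r_ij < r_max into bins of width dr (both i-j
    and j-i counted, as in the standard PDF double sum)."""
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for a, b in combinations(atoms, 2):
        r = math.dist(a, b)
        if r < r_max:
            hist[int(r / dr)] += 2
    return hist

square = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]  # edge length 1
print(distance_histogram(square, 2.0, 0.25))  # -> [0, 0, 0, 0, 8, 4, 0, 0]
```

The two peaks (nearest-neighbor edges at r = 1, diagonals at r = sqrt(2)) illustrate how surface terminations and stacking order leave fingerprints in distinct r-ranges of the PDF.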

  15. Data Collection Handbook to Support Modeling Impacts of Radioactive Material in Soil and Building Structures

    SciTech Connect (OSTI)

    Yu, Charley; Kamboj, Sunita; Wang, Cheng; Cheng, Jing-Jy

    2015-09-01

    This handbook is an update of the 1993 version of the Data Collection Handbook and the Radionuclide Transfer Factors Report to support modeling the impact of radioactive material in soil. Many new parameters have been added to the RESRAD Family of Codes, and new measurement methodologies are available. A detailed review of available parameter databases was conducted in preparation of this new handbook. This handbook is a companion document to the user manuals for the RESRAD (onsite) and RESRAD-OFFSITE codes. It can also be used with the RESRAD-BUILD code because some of the building-related parameters are included in this handbook. The RESRAD (onsite) code has been developed for implementing U.S. Department of Energy Residual Radioactive Material Guidelines. Hydrogeological, meteorological, geochemical, geometrical (size, area, depth), crop and livestock, human intake, source characteristic, and building characteristic parameters are used in the RESRAD (onsite) code. The RESRAD-OFFSITE code is an extension of the RESRAD (onsite) code and can also model the transport of radionuclides to locations outside the footprint of the primary contamination. This handbook discusses parameter definitions, typical ranges, variations, and measurement methodologies. It also provides references for sources of additional information. Although this handbook was developed primarily to support the application of the RESRAD Family of Codes, the discussions and values are valid for use with other pathway analysis models and codes.

  16. Modeling direct interband tunneling. II. Lower-dimensional structures

    SciTech Connect (OSTI)

    Pan, Andrew; Chui, Chi On

    2014-08-07

    We investigate the applicability of the two-band Hamiltonian and the widely used Kane analytical formula to interband tunneling along unconfined directions in nanostructures. Through comparisons with k·p and tight-binding calculations and quantum transport simulations, we find that the primary correction is the change in effective band gap. For both constant fields and realistic tunnel field-effect transistors, dimensionally consistent band-gap scaling of the Kane formula allows analytical and numerical device simulations to approximate non-equilibrium Green's function current characteristics without arbitrary fitting. This allows efficient first-order calibration of semiclassical models for interband tunneling in nanodevices.
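The band-gap scaling idea can be illustrated with one common form of Kane's direct band-to-band tunneling rate, G(F) = A·F²·exp(−B/F), where both A and B depend on the gap. Prefactor conventions differ between derivations, and the parameter values below are illustrative assumptions, not the authors' calibration; the abstract's proposal amounts to substituting a dimensionally scaled effective gap for the bulk gap `eg`.

```python
import math

# One common form of Kane's direct band-to-band tunneling generation rate.
# Treat this as a hedged illustration: prefactor conventions vary between
# derivations, and all parameter values below are hypothetical.

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
Q = 1.602176634e-19     # elementary charge, C

def kane_btbt_rate(field, m_r, eg):
    """field: electric field in V/m, m_r: reduced tunneling mass in kg,
    eg: (effective) band gap in J. Returns a generation rate (1/m^3/s)."""
    prefactor = (Q**2 * field**2 * math.sqrt(2.0 * m_r)
                 / (4.0 * math.pi**3 * HBAR**2 * math.sqrt(eg)))
    exponent = (-math.pi * math.sqrt(m_r) * eg**1.5
                / (2.0 * math.sqrt(2.0) * Q * HBAR * field))
    return prefactor * math.exp(exponent)

m_r = 0.05 * 9.109e-31  # hypothetical reduced mass (0.05 m0)
eg_bulk = 0.5 * Q       # hypothetical 0.5 eV bulk gap, in joules
eg_wide = 0.6 * Q       # effective gap widened by confinement
for f in (1e8, 2e8):    # fields in V/m
    print(f, kane_btbt_rate(f, m_r, eg_bulk), kane_btbt_rate(f, m_r, eg_wide))
```

The exponential dominates: widening the effective gap suppresses the rate at every field, which is why the gap correction is the leading effect in confined directions.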

  17. Reflood completion report: Volume 1. A phenomenological thermal-hydraulic model of hot rod bundles experiencing simultaneous bottom and top quenching and an optimization methodology for closure development

    SciTech Connect (OSTI)

    Nelson, R.A. Jr.; Pimentel, D.A.; Jolly-Woodruff, S.; Spore, J.

    1998-04-01

    In this report, a phenomenological model of simultaneous bottom-up and top-down quenching is developed and discussed. The model was implemented in the TRAC-PF1/MOD2 computer code. Two sets of closure relationships were compared within the study, the Absolute set and the Conditional set. The Absolute set of correlations is frequently viewed as the pure set because the correlations utilize their original coefficients as suggested by the developer. The Conditional set is a modified set of correlations with changes to the correlation coefficients only. Results for these two sets are quite similar. This report also summarizes initial results of an effort to investigate nonlinear optimization techniques applied to closure model development. Results suggest that such techniques can provide advantages for future model development work, but that extensive expertise is required to utilize them (i.e., the model developer must fully understand both the physics of the process being represented and the computational techniques being employed). The computer may then be used to improve the correlation of computational results with experiments.

  18. Application of a New Structural Model & Exploration Technologies to Define

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Application of a New Structural Model & Exploration Technologies to Define a Blind Geothermal System: A Viable Alternative to Grid Drilling for Geothermal Exploration: McCoy, Churchill County, NV | Department of Energy

  19. Automated Eukaryotic Gene Structure Annotation Using EVidenceModeler and the Program to Assemble Spliced Alignments

    SciTech Connect (OSTI)

    Haas, B J; Salzberg, S L; Zhu, W; Pertea, M; Allen, J E; Orvis, J; White, O; Buell, C R; Wortman, J R

    2007-12-10

    EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation.
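The weighted-consensus idea can be sketched in miniature: each candidate gene structure is scored by the evidence tracks that support its introns, with per-track weights. The track names, weights, and intron coordinates below are hypothetical, chosen only to show the scoring mechanics; they are not EVM's actual configuration or algorithm.

```python
# Minimal sketch of weighted-consensus scoring in the spirit of EVM.
# All track names, weights, and coordinates are hypothetical.

def consensus_score(candidate, evidence, weights):
    """candidate: set of intron (start, end) pairs.
    evidence: {track_name: set of introns}; weights: {track_name: float}.
    Score = sum over tracks of weight * number of supported introns."""
    score = 0.0
    for track, introns in evidence.items():
        score += weights.get(track, 0.0) * len(candidate & introns)
    return score

evidence = {
    "ab_initio": {(100, 200), (300, 400)},
    "protein_alignment": {(100, 200)},
    "transcript_alignment": {(100, 200), (300, 400), (500, 600)},
}
weights = {"ab_initio": 0.3, "protein_alignment": 1.0, "transcript_alignment": 2.0}

candidates = [{(100, 200), (300, 400)}, {(100, 200), (500, 600)}]
best = max(candidates, key=lambda c: consensus_score(c, evidence, weights))
print(sorted(best))  # -> [(100, 200), (300, 400)]
```

Weighting transcript alignments above ab initio predictions, as here, mirrors the general principle that direct experimental evidence should dominate the consensus.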

  20. The Energy Interaction Model: A promising new methodology for projecting GPHS-RTG cladding failures, release amounts & respirable release fractions for postulated pre-launch, launch, and post-reentry earth impact accidents

    SciTech Connect (OSTI)

    Coleman, J.R.; Sholtis, J.A. Jr.; McCulloch, W.H.

    1998-01-01

    Safety analyses and evaluations must be scrutable, defensible, and credible. This is particularly true when nuclear systems are involved, with their attendant potential for releases of radioactive materials (source terms) to the unrestricted environment. Analytical projections of General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) source terms, for safety analyses conducted to date, have relied upon generic data correlations using a single parameter of cladding damage, termed "distortion." However, distortion is not an unequivocal measure of cladding insult, failure, or release. Furthermore, the analytical foundation, applicability, and broad use of distortion are debatable and, thus, somewhat troublesome. In an attempt to avoid the complications associated with the use of distortion, a new methodology, referred to as the Energy Interaction Model (EIM), has been preliminarily developed. This new methodology is based upon the physical principles of energy and energy exchange during mechanical interactions. Specifically, the EIM considers the energy imparted to GPHS-RTG components (bare fueled clads, GPHS modules, and full GPHS-RTGs) when exposed to mechanical threats (blast/overpressure, shrapnel and fragment impacts, and Earth surface impacts) posed by the full range of potential accidents. Expected forms are developed for equations intended to project cladding failure probabilities, the number of cladding failures expected, release amounts, and the fraction released as respirable particles. The coefficients of the equations developed are then set to fit the GPHS-RTG test data, ensuring good agreement with the experimental database. This assured, fitted agreement with the test database, along with the foundation of the EIM in first principles, provides confidence in the model's projections beyond the available database. In summary, the newly developed EIM methodology is

  1. Hydrologic characterization of fractured rocks: An interdisciplinary methodology

    SciTech Connect (OSTI)

    Long, J.C.S.; Majer, E.L.; Martel, S.J.; Karasaki, K.; Peterson, J.E. Jr.; Davey, A.; Hestir, K.

    1990-11-01

    The characterization of fractured rock is a critical problem in the development of nuclear waste repositories in geologic media. A good methodology for characterizing these systems should focus on the large, important features first and concentrate on building numerical models which can reproduce the observed hydrologic behavior of the fracture system. In many rocks, fracture zones dominate the behavior. These can be described using the tools of geology and geomechanics in order to understand what kind of features might be important hydrologically and to qualitatively describe the way flow might occur in the rock. Geophysics can then be employed to locate these features between boreholes. Then well testing can be used to see if the identified features are in fact important. Given this information, a conceptual model of the system can be developed which honors the geologic description, the tomographic data, and the evidence of high permeability. Such a model can then be modified through an inverse process, such as simulated annealing, until it reproduces the cross-hole well test behavior which has been observed in situ. Other possible inversion techniques might take advantage of self-similar structure. Once a model is constructed, we need to see how well the model makes predictions. We can use a cross-validation technique which sequentially puts aside parts of the data and uses the model to predict that part in order to calculate the prediction error. This approach combines many types of information in a methodology which can be modified to fit a particular field site. 114 refs., 81 figs., 7 tabs.
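The cross-validation step described at the end can be sketched generically: hold out each observation in turn, recondition the model on the rest, and accumulate the prediction error. The stand-in "model" below just predicts the mean of the retained data, and the response values are hypothetical; the paper's actual models are annealed fracture-network simulations.

```python
# Leave-one-out cross-validation sketch: sequentially put aside one datum,
# refit on the rest, and score the prediction. The default "fit" (predict
# the retained mean) and the data below are hypothetical stand-ins.

def loo_prediction_error(observations, fit=lambda data: sum(data) / len(data)):
    """Mean squared leave-one-out prediction error."""
    errors = []
    for i, held_out in enumerate(observations):
        retained = observations[:i] + observations[i + 1:]
        prediction = fit(retained)
        errors.append((prediction - held_out) ** 2)
    return sum(errors) / len(errors)

drawdowns = [1.2, 0.9, 1.1, 1.4, 1.0]  # hypothetical cross-hole responses
print(loo_prediction_error(drawdowns))
```

Swapping in a real conditioning step for `fit` (e.g., re-running the annealing inversion without the held-out well test) turns this into the validation loop the abstract describes.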

  2. Waste Package Design Methodology Report

    SciTech Connect (OSTI)

    D.A. Brownson

    2001-09-28

    The objective of this report is to describe the analytical methods and processes used by the Waste Package Design Section to establish the integrity of the various waste package designs, the emplacement pallet, and the drip shield. The scope of this report shall be the methodology used in criticality, risk-informed, shielding, source term, structural, and thermal analyses. The basic features and appropriateness of the methods are illustrated, and the processes are defined whereby input values and assumptions flow through the application of those methods to obtain designs that ensure defense-in-depth as well as satisfy requirements on system performance. Such requirements include those imposed by federal regulation, from both the U.S. Department of Energy (DOE) and U.S. Nuclear Regulatory Commission (NRC), and those imposed by the Yucca Mountain Project to meet repository performance goals. The report is to be used, in part, to describe the waste package design methods and techniques to be used for producing input to the License Application Report.

  3. Structural models of the membrane anchors of envelope glycoproteins E1 and E2 from pestiviruses

    SciTech Connect (OSTI)

    Wang, Jimin; Li, Yue; Modis, Yorgo

    2014-04-15

    The membrane anchors of viral envelope proteins play essential roles in cell entry. Recent crystal structures of the ectodomain of envelope protein E2 from a pestivirus suggest that E2 belongs to a novel structural class of membrane fusion machinery. Based on geometric constraints from the E2 structures, we generated atomic models of the E1 and E2 membrane anchors using computational approaches. The E1 anchor contains two amphipathic perimembrane helices and one transmembrane helix; the E2 anchor contains a short helical hairpin stabilized in the membrane by an arginine residue, similar to flaviviruses. A pair of histidine residues in the E2 ectodomain may participate in pH sensing. The proposed atomic models point to Cys987 in E2 as the site of disulfide bond linkage with E1 to form E1–E2 heterodimers. The membrane anchor models provide structural constraints for the disulfide bonding pattern and overall backbone conformation of the E1 ectodomain. Highlights: • Structures of pestivirus E2 proteins impose constraints on the E1 and E2 membrane anchors. • Atomic models of the E1 and E2 membrane anchors were generated in silico. • A "snorkeling" arginine completes the short helical hairpin in the E2 membrane anchor. • Roles in pH sensing and E1–E2 disulfide bond formation are proposed for E1 residues. • Implications for E1 ectodomain structure and disulfide bonding pattern are discussed.

  4. Importance of Lorentz structure in the parton model: Target mass corrections, transverse momentum dependence, positivity bounds

    SciTech Connect (OSTI)

    D'Alesio, U.; Leader, E.; Murgia, F.

    2010-02-01

    We show that respecting the underlying Lorentz structure in the parton model has very strong consequences. Failure to insist on the correct Lorentz covariance is responsible for the existence of contradictory results in the literature for the polarized structure function g_2(x), whereas with the correct imposition we are able to derive the Wandzura-Wilczek relation for g_2(x) and the target-mass corrections for polarized deep inelastic scattering without recourse to the operator product expansion. We comment briefly on the problem of threshold behavior in the presence of target-mass corrections. Careful attention to the Lorentz structure also has profound implications for the structure of the transverse momentum dependent parton densities often used in parton model treatments of hadron production, allowing the k_T dependence to be derived explicitly. It also leads to stronger positivity and Soffer-type bounds than usually utilized for the collinear densities.
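For reference, the Wandzura-Wilczek relation mentioned above is usually quoted (in the absence of target-mass corrections) as:

```latex
% Twist-2 (Wandzura-Wilczek) part of the polarized structure function g_2:
g_2^{\mathrm{WW}}(x) \;=\; -\,g_1(x) \;+\; \int_x^1 \frac{g_1(y)}{y}\,\mathrm{d}y .
```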

  5. Computational modeling of structure of metal matrix composite in centrifugal casting process

    SciTech Connect (OSTI)

    Zagorski, Roman [Department of Electrotechnology, Faculty of Materials Science and Metallurgy, Silesian University of Technology, ul. Krasinskiego 8, 40-019, Katowice (Poland)

    2007-04-07

    The structure of an alumina matrix composite reinforced with crystalline particles obtained during a centrifugal casting process is studied. Several parameters of the casting process (pouring temperature, temperature, rotating speed, and size of the casting mould) that influence the structure of the composite are examined. Segregation of the crystalline particles depends on other factors, such as the density gradient between the liquid matrix and the reinforcement, thermal processes connected with solidification of the cast, and processes leading to changes in the physical and structural properties of the liquid composite; these are also investigated. All simulations are carried out with the CFD program Fluent. Numerical simulations are performed using the FLUENT two-phase free-surface (air and matrix) unsteady flow model (volume of fluid model, VOF) and the discrete phase model (DPM).

  6. Beyond the Lone-Pair Model for Structurally Distorted Metal Oxides

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Beyond the Lone-Pair Model for Structurally Distorted Metal Oxides Print "Ferroelectricity," by analogy to ferromagnetism, is defined as the presence of spontaneous electrical polarization in a material, often arising from distortions in the material's crystal structure. In oxides of the metals lead and bismuth, such distortions were for many years attributed to the existence of "lone pair" electrons: pairs of chemically inert, nonbonding valence electrons in hybrid orbitals

  8. Selecting the best defect reduction methodology

    SciTech Connect (OSTI)

    Hinckley, C.M.; Barkan, P.

    1994-04-01

    Defect rates less than 10 parts per million, unimaginable a few years ago, have become the standard of world-class quality. To reduce defects, companies are aggressively implementing various quality methodologies, such as Statistical Quality Control, Motorola's Six Sigma, or Shingo's poka-yoke. Although each quality methodology reduces defects, selection has been based on an intuitive sense, without an understanding of their relative effectiveness in each application. A missing link in developing superior defect reduction strategies has been the lack of a general defect model that clarifies the unique focus of each method. Toward the goal of efficient defect reduction, we have developed an event tree which addresses a broad spectrum of quality factors and two defect sources, namely, error and variation. The Quality Control Tree (QCT) predictions are more consistent with production experience than those obtained by the other methodologies considered independently. The QCT demonstrates that world-class defect rates cannot be achieved through focusing on a single defect source or quality control factor, a common weakness of many methodologies. We have shown that the most efficient defect reduction strategy depends on the relative strengths and weaknesses of each organization. The QCT can help each organization identify the most promising defect reduction opportunities for achieving its goals.
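The two-source argument can be made concrete with a toy event-tree calculation: defects arise from mistakes (errors) and from excessive variation, and each control intercepts only a fraction of one source. The rates and interception fractions below are hypothetical illustrations, not the QCT's actual structure or data; they only show why a single-source focus plateaus well above parts-per-million levels.

```python
# Toy two-source defect escape model in the spirit of an event tree.
# All rates and interception fractions are hypothetical.

def outgoing_defect_rate(error_rate, variation_rate, error_catch, variation_catch):
    """Defects per unit that escape: each source is attenuated by the
    fraction of its defects that the controls intercept."""
    escaped_errors = error_rate * (1.0 - error_catch)
    escaped_variation = variation_rate * (1.0 - variation_catch)
    return escaped_errors + escaped_variation

# SPC alone: strong on variation, weak on mistakes (illustrative fractions).
spc_only = outgoing_defect_rate(500e-6, 2000e-6, 0.10, 0.99)
# Mistake-proofing added on top of SPC.
combined = outgoing_defect_rate(500e-6, 2000e-6, 0.95, 0.99)
print(round(spc_only * 1e6, 3), round(combined * 1e6, 3))  # defects per million
```

Even with 99% of variation defects intercepted, the un-addressed error source dominates the escape rate, which is the QCT's point about single-source strategies.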

  9. Structure of the Kinase Domain of CaMKII and Modeling the Holoenzyme

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Structure of the Kinase Domain of CaMKII and Modeling the Holoenzyme (Wednesday, 31 May 2006). The rate and intensity of calcium (Ca2+) currents that oscillate through the plasma membrane around a cell affect such diverse phenomena as fertilization, the cardiac rhythm, and even the formation of memories. How does the cell sense these digital oscillations and transduce them into a cellular signal, such as changes in

  10. Simulating Cellulose Structure, Properties, Thermodynamics, Synthesis, and Deconstruction with Atomistic and Coarse-Grain Models

    SciTech Connect (OSTI)

    Crowley, M. F.; Matthews, J.; Beckham, G.; Bomble, Y.; Hynninen, A. P.; Ciesielski, P. F.

    2012-01-01

    Cellulose is still a mysterious polymer in many ways: the structure of its microfibrils, the thermodynamics of its synthesis and degradation, and its interactions with other plant cell wall components. Our aim is to uncover the details and mechanisms of cellulose digestion and synthesis. We report the details of the structure of cellulose I-beta under several temperature conditions, connect the results of these studies to experimental measurements, and measure in silico the free energy of decrystallization of several morphologies of cellulose. In spatially large modeling, we show the most recent work of mapping atomistic and coarse-grain models onto tomographic images of cellulose and extreme coarse-grain modeling of the interactions of large cellulase complexes with microfibrils. We discuss the difficulties of modeling cellulose and suggest future work, both experimental and theoretical, to increase our understanding of cellulose and our ability to use it as a raw material for fuels and materials.

  11. eGallon-methodology-final

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    traditional gallon of unleaded fuel -- the dominant fuel choice for vehicles in the U.S. eGallon Methodology The eGallon is measured as an "implicit" cost of a gallon of gasoline. ...

  12. Weekly Coal Production Estimation Methodology

    Gasoline and Diesel Fuel Update (EIA)

    Weekly Coal Production Estimation Methodology Step 1 (Estimate total amount of weekly U.S. coal production) U.S. coal production for the current week is estimated using a ratio ...

  13. Application of viscous and Iwan modal damping models to experimental measurements from bolted structures

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Deaner, Brandon J.; Allen, Matthew S.; Starr, Michael James; Segalman, Daniel J.; Sumali, Hartono

    2015-01-20

    Measurements are presented from a two-beam structure with several bolted interfaces in order to characterize the nonlinear damping introduced by the joints. The measurements (all at force levels below macroslip) reveal that each underlying mode of the structure is well approximated by a single degree-of-freedom (SDOF) system with a nonlinear mechanical joint. At low enough force levels, the measurements show dissipation that scales as the second power of the applied force, agreeing with theory for a linear viscously damped system. This is attributed to linear viscous behavior of the material and/or damping provided by the support structure. At larger force levels, the damping is observed to behave nonlinearly, suggesting that damping from the mechanical joints is dominant. A model is presented that captures these effects, consisting of a spring and viscous damping element in parallel with a four-parameter Iwan model. As a result, the parameters of this model are identified for each mode of the structure and comparisons suggest that the model captures the stiffness and damping accurately over a range of forcing levels.
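The reported force scaling can be illustrated with a toy per-mode dissipation model: a quadratic viscous term plus a power-law microslip term, the behavior a four-parameter Iwan element exhibits below macroslip. The coefficients and the Iwan exponent chi here are assumed values for illustration, not the parameters identified in the paper.

```python
def dissipation_per_cycle(F, c_viscous, c_joint, chi=-0.5):
    """Energy dissipated per cycle for one mode: a linear viscous term
    (scales as F^2) plus a microslip joint term (scales as F^(chi+3) in
    the power-law regime of an Iwan element). Illustrative coefficients."""
    return c_viscous * F**2 + c_joint * F**(chi + 3.0)

# At low force the quadratic viscous term dominates; at high force the
# joint term (exponent chi + 3 = 2.5 here) takes over, mimicking the
# transition from linear to joint-dominated damping seen in the tests.
low = dissipation_per_cycle(0.01, c_viscous=1.0, c_joint=1.0)
high = dissipation_per_cycle(100.0, c_viscous=1.0, c_joint=1.0)
```

Doubling the force roughly quadruples dissipation at low amplitude (slope 2 on a log-log plot) but more than quadruples it at high amplitude, which is the experimental signature the abstract describes.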

  14. Application of viscous and Iwan modal damping models to experimental measurements from bolted structures

    SciTech Connect (OSTI)

    Deaner, Brandon J.; Allen, Matthew S.; Starr, Michael James; Segalman, Daniel J.; Sumali, Hartono

    2015-01-20

    Measurements are presented from a two-beam structure with several bolted interfaces in order to characterize the nonlinear damping introduced by the joints. The measurements (all at force levels below macroslip) reveal that each underlying mode of the structure is well approximated by a single degree-of-freedom (SDOF) system with a nonlinear mechanical joint. At low enough force levels, the measurements show dissipation that scales as the second power of the applied force, agreeing with theory for a linear viscously damped system. This is attributed to linear viscous behavior of the material and/or damping provided by the support structure. At larger force levels, the damping is observed to behave nonlinearly, suggesting that damping from the mechanical joints is dominant. A model is presented that captures these effects, consisting of a spring and viscous damping element in parallel with a four-parameter Iwan model. As a result, the parameters of this model are identified for each mode of the structure and comparisons suggest that the model captures the stiffness and damping accurately over a range of forcing levels.

  15. Experiments to Populate and Validate a Processing Model for Polyurethane Foam: Additional Data for Structural Foams.

    SciTech Connect (OSTI)

    Rao, Rekha R.; Celina, Mathias C.; Giron, Nicholas Henry; Long, Kevin Nicholas; Russick, Edward M.

    2015-01-01

    We are developing computational models to help understand manufacturing processes, final properties, and aging of structural foam, polyurethane PMDI. The resulting model predictions of density and cure gradients from the manufacturing process will be used as input to foam heat transfer and mechanical models. BKC 44306 PMDI-10 and BKC 44307 PMDI-18 are the most prevalent foams used in structural parts. Experiments needed to parameterize models of the reaction kinetics and the equations of motion during the foam blowing stages were described for BKC 44306 PMDI-10 in the first of this report series (Mondy et al. 2014). BKC 44307 PMDI-18 is a new foam that will be used to make relatively dense structural supports via over packing. It uses a different catalyst than those in the BKC 44306 family of foams; hence, we expect that the reaction kinetics models must be modified. Here we detail the experiments needed to characterize the reaction kinetics of BKC 44307 PMDI-18 and suggest parameters for the model based on these experiments. In addition, the second part of this report describes data taken to provide input to the preliminary nonlinear viscoelastic structural response model developed for BKC 44306 PMDI-10 foam. We show that the standard cure schedule used by KCP does not fully cure the material, and, upon temperature elevation above 150 °C, oxidation or decomposition reactions occur that alter the composition of the foam. These findings suggest that achieving a fully cured foam part with this formulation may not be possible through thermal curing. As such, viscoelastic characterization procedures developed for curing thermosets can provide only approximate material properties, since the state of the material continuously evolves during tests.

  16. Structure Based Drug Design for HIV Protease: From Molecular Modeling to Cheminformatics

    SciTech Connect (OSTI)

    Volarath, Patra; Weber, Irene T.; Harrison, Robert W.

    2008-06-06

    Significant progress over the past decade in virtual representations of molecules and their physicochemical properties has produced new drugs from virtual screening of the structures of single protein molecules by conventional modeling methods. The development of clinical antiviral drugs from structural data for HIV protease has been a major success in structure based drug design. Techniques for virtual screening involve the ranking of the affinity of potential ligands for the target site on a protein. Two main alternatives have been developed: modeling of the target protein with a series of related ligand molecules, and docking molecules from a database to the target protein site. The computational speed and prediction accuracy will depend on the representation of the molecular structure and chemistry, the search or simulation algorithm, and the scoring function to rank the ligands. Moreover, the general challenges in modern computational drug design arise from the profusion of data, including whole genomes of DNA, protein structures, chemical libraries, affinity and pharmacological data. Therefore, software tools are being developed to manage and integrate diverse data, and extract and visualize meaningful relationships. Current areas of research include the development of searchable chemical databases, which requires new algorithms to represent molecules and search for structurally or chemically similar molecules, and the incorporation of machine learning techniques for data mining to improve the accuracy of predictions. Examples will be presented for the virtual screening of drugs that target HIV protease.

  17. Microscopic model for intersubband gain from electrically pumped quantum-dot structures

    SciTech Connect (OSTI)

    Michael, Stephan; Chow, Weng Wah; Schneider, Han Christian

    2014-10-03

    We study theoretically the performance of electrically pumped self-organized quantum dots as a gain material in the mid-infrared range at room temperature. We analyze an AlGaAs/InGaAs based structure composed of dots-in-a-well sandwiched between two quantum wells. We numerically analyze a comprehensive model by combining a many-particle approach for electronic dynamics with a realistic modeling of the electronic states in the whole structure. We investigate the gain both for quasi-equilibrium conditions and current injection. We find, comparing different structures, that steady-state gain can only be realized by an efficient extraction process, which prevents an accumulation of electrons in continuum states that would otherwise make the available scattering pathways through the quantum-dot active region too fast to sustain inversion.

  18. Performance of corrosion inhibiting admixtures for structural concrete -- assessment methods and predictive modeling

    SciTech Connect (OSTI)

    Yunovich, M.; Thompson, N.G.

    1998-12-31

    During the past fifteen years corrosion inhibiting admixtures (CIAs) have become increasingly popular for protection of reinforced components of highway bridges and other structures from damage induced by chlorides. However, there remains considerable debate about the benefits of CIAs in concrete. A variety of testing methods to assess the performance of CIAs have been reported in the literature, ranging from tests in simulated pore solutions to long-term exposures of concrete slabs. The paper reviews the published techniques and recommends the methods which would make up a comprehensive CIA effectiveness testing program. The results of this set of tests would provide the data which can be used to rank the presently commercially available CIAs and future candidate formulations utilizing a proposed predictive model. The model is based on relatively short-term laboratory testing and considers several phases of the service life of a structure (corrosion initiation, corrosion propagation without damage, and damage to the structure).
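The phased service-life idea in the abstract (initiation, then propagation, then damage) can be sketched in a few lines. This is a hedged illustration, not the paper's proposed model: initiation time comes from the standard error-function solution of Fick's second law for one-dimensional chloride diffusion, solved by bisection, and a fixed propagation period is added. All parameter values below are illustrative.

```python
from math import erf, sqrt

def service_life_years(cover_mm, diffusion_mm2_per_yr, surface_cl,
                       threshold_cl, propagation_yr):
    """Initiation time: solve Cs*(1 - erf(x / (2*sqrt(D*t)))) = Cth
    for t by bisection, then add a propagation period before damage."""
    lo, hi = 1e-3, 1e4   # search window in years
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        conc = surface_cl * (1.0 - erf(
            cover_mm / (2.0 * sqrt(diffusion_mm2_per_yr * mid))))
        if conc < threshold_cl:
            lo = mid   # chloride has not yet reached threshold at depth
        else:
            hi = mid
    return 0.5 * (lo + hi) + propagation_yr

# Illustrative inputs: 50 mm cover, D = 10 mm^2/yr, surface chloride 0.6%,
# threshold 0.05% (by mass of concrete), 5-year propagation phase.
t_base = service_life_years(50.0, 10.0, 0.6, 0.05, 5.0)
```

Because initiation time scales with the square of the cover depth in this solution, doubling the cover roughly quadruples the initiation phase, which is the kind of ranking insight such a model provides.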

  19. Microscopic model for intersubband gain from electrically pumped quantum-dot structures

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Michael, Stephan; Chow, Weng Wah; Schneider, Han Christian

    2014-10-03

    We study theoretically the performance of electrically pumped self-organized quantum dots as a gain material in the mid-infrared range at room temperature. We analyze an AlGaAs/InGaAs based structure composed of dots-in-a-well sandwiched between two quantum wells. We numerically analyze a comprehensive model by combining a many-particle approach for electronic dynamics with a realistic modeling of the electronic states in the whole structure. We investigate the gain both for quasi-equilibrium conditions and current injection. We find, comparing different structures, that steady-state gain can only be realized by an efficient extraction process, which prevents an accumulation of electrons in continuum states that would otherwise make the available scattering pathways through the quantum-dot active region too fast to sustain inversion.

  20. Introducing improved structural properties and salt dependence into a coarse-grained model of DNA

    SciTech Connect (OSTI)

    Snodin, Benedict E. K.; Mosayebi, Majid; Schreck, John S.; Romano, Flavio; Doye, Jonathan P. K.; Randisi, Ferdinando; Šulc, Petr; Ouldridge, Thomas E.; Tsukanov, Roman; Nir, Eyal; Louis, Ard A.

    2015-06-21

    We introduce an extended version of oxDNA, a coarse-grained model of deoxyribonucleic acid (DNA) designed to capture the thermodynamic, structural, and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures, such as DNA origami, which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na+] = 0.5 M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.
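Salt dependence in coarse-grained DNA models of this kind is typically introduced through a Debye-Hückel screening term, whose key quantity is the Debye length. As a hedged aside (this computes the standard physics, not oxDNA2's specific parameterisation), the screening length for a 1:1 electrolyte can be evaluated directly:

```python
from math import sqrt

def debye_length_nm(molar_salt, temp_k=298.15):
    """Debye screening length in nm for a 1:1 electrolyte (e.g. NaCl)
    in water. SI constants; relative permittivity of water ~78.4."""
    e = 1.602176634e-19            # elementary charge, C
    kB = 1.380649e-23              # Boltzmann constant, J/K
    eps = 78.4 * 8.8541878128e-12  # water permittivity, F/m (approx.)
    NA = 6.02214076e23
    n = molar_salt * 1000.0 * NA   # number density of each ion species, m^-3
    kappa_sq = 2.0 * e**2 * n / (eps * kB * temp_k)
    return 1e9 / sqrt(kappa_sq)

lam = debye_length_nm(0.15)   # near-physiological salt: ~0.8 nm
```

The steep drop of the screening length with salt concentration is why electrostatic effects on DNA structure are so sensitive to the [Na+] range the extended model now covers.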

  1. Implementation of New Process Models for Tailored Polymer Composite Structures into Processing Software Packages

    SciTech Connect (OSTI)

    Nguyen, Ba Nghiep; Jin, Xiaoshi; Wang, Jin; Phelps, Jay; Tucker III, Charles L.; Kunc, Vlastimil; Bapanapalli, Satish K.; Smith, Mark T.

    2010-02-23

    This report describes the work conducted under the Cooperative Research and Development Agreement (CRADA) (Nr. 260) between the Pacific Northwest National Laboratory (PNNL) and Autodesk, Inc. to develop and implement process models for injection-molded long-fiber thermoplastics (LFTs) in processing software packages. The structure of this report is organized as follows. After the Introduction Section (Section 1), Section 2 summarizes the current fiber orientation models developed for injection-molded short-fiber thermoplastics (SFTs). Section 3 provides an assessment of these models to determine their capabilities and limitations, and the developments needed for injection-molded LFTs. Section 4 then focuses on the development of a new fiber orientation model for LFTs. This model is termed the anisotropic rotary diffusion - reduced strain closure (ARD-RSC) model as it explores the concept of anisotropic rotary diffusion to capture the fiber-fiber interaction in long-fiber suspensions and uses the reduced strain closure method of Wang et al. to slow down the orientation kinetics in concentrated suspensions. In contrast to fiber orientation modeling, before this project, no standard model had been developed to predict the fiber length distribution in molded fiber composites. Section 5 is therefore devoted to the development of a fiber length attrition model in the mold. Sections 6 and 7 address the implementations of the models in AMI, and the conclusions drawn from this work are presented in Section 8.

  2. Surface structural ion adsorption modeling of competitive binding of oxyanions by metal (hydr)oxides

    SciTech Connect (OSTI)

    Hiemstra, T.; Riemsdijk, W.H. van

    1999-02-01

    An important challenge in surface complexation models (SCM) is to connect the molecular microscopic reality to macroscopic adsorption phenomena. This study elucidates the primary factor controlling the adsorption process by analyzing the adsorption and competition of PO4, AsO4, and SeO3. The authors show that the structure of the surface complex acting in the dominant electrostatic field can be ascertained as the primary controlling adsorption factor. The surface species of arsenate are identical with those of phosphate and the adsorption behavior is very similar. On the basis of the selenite adsorption, the authors show that the commonly used 1pK models are incapable of incorporating into the adsorption modeling the correct bidentate binding mechanism found by spectroscopy. The use of the bidentate mechanism leads to a proton-oxyanion ratio and corresponding pH dependence that are too large. The inappropriate intrinsic charge attribution to the primary surface groups and the condensation of the inner-sphere surface complex to a point charge are responsible for this behavior of commonly used 2pK models. Both key factors are defined differently in the charge distribution multi-site complexation (CD-MUSIC) model and are based in this model on a surface structural approach. The CD-MUSIC model can successfully describe the macroscopic adsorption phenomena using the surface speciation and binding mechanisms as found by spectroscopy. The model is also able to predict the anion competition well. The charge distribution in the interface is in agreement with the observed structure of surface complexes.

  3. Target Allocation Methodology for China's Provinces: Energy Intensity in the 12th Five-Year Plan

    SciTech Connect (OSTI)

    Ohshita, Stephanie; Price, Lynn

    2011-03-21

    Experience with China's 20% energy intensity improvement target during the 11th Five-Year Plan (FYP) (2006-2010) has shown the challenges of rapidly setting targets and implementing measures to meet them. For the 12th FYP (2011-2015), there is an urgent need for a more scientific methodology to allocate targets among the provinces and to track physical and economic indicators of energy and carbon saving progress. This report provides a sectoral methodology for allocating a national energy intensity target - expressed as percent change in energy per unit gross domestic product (GDP) - among China's provinces in the 12th FYP. Drawing on international experience - especially the European Union (EU) Triptych approach for allocating Kyoto carbon targets among EU member states - the methodology here makes important modifications to the EU approach to address an energy intensity rather than a CO2 emissions target, and the wider variation in provincial energy and economic structure in China. The methodology combines top-down national target projections and bottom-up provincial and sectoral projections of energy and GDP to determine the allocation of energy intensity targets. Total primary energy consumption is separated into three end-use sectors - industrial, residential, and other energy. Sectoral indicators are used to differentiate the potential for energy saving among the provinces. This sectoral methodology is utilized to allocate provincial-level targets for a national target of 20% energy intensity improvement during the 12th FYP; the official target is determined by the National Development and Reform Commission. Energy and GDP projections used in the allocations were compared with other models, and several allocation scenarios were run to test sensitivity. The resulting allocations for the 12th FYP offer insight into past performance and yield somewhat different distributions of provincial targets compared to the 11th FYP.
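The core allocation step can be sketched compactly. This is a hedged, simplified illustration of the Triptych-style idea described above, not the report's actual methodology or data: a national intensity-reduction target is shared among provinces in proportion to an indicator of saving potential, weighted by each province's share of national energy use so the aggregate target is preserved. Province names and numbers are made up.

```python
def allocate_targets(national_cut_pct, provinces):
    """Share a national energy-intensity cut among provinces in
    proportion to a saving-potential indicator, normalized by the
    energy-weighted mean potential so the aggregate is preserved."""
    total_energy = sum(p["energy"] for p in provinces.values())
    weighted_potential = sum(
        p["energy"] * p["potential"] for p in provinces.values()) / total_energy
    return {
        name: national_cut_pct * p["potential"] / weighted_potential
        for name, p in provinces.items()
    }

provinces = {
    "A": {"energy": 100.0, "potential": 1.2},  # heavy-industry province
    "B": {"energy": 50.0, "potential": 0.8},   # service-oriented province
}
targets = allocate_targets(20.0, provinces)
```

By construction, the energy-weighted average of the provincial targets equals the 20% national target, while provinces with greater saving potential receive proportionally tougher targets.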

  4. Maintenance personnel performance simulation (MAPPS): a model for predicting maintenance performance reliability in nuclear power plants

    SciTech Connect (OSTI)

    Knee, H.E.; Krois, P.A.; Haas, P.M.; Siegel, A.I.; Ryan, T.G.

    1983-01-01

    The NRC has developed a structured, quantitative, predictive methodology in the form of a computerized simulation model for assessing maintainer task performance. The objective of the overall program is to develop, validate, and disseminate a practical, useful, and acceptable methodology for the quantitative assessment of NPP maintenance personnel reliability. The program was organized into four phases: (1) scoping study, (2) model development, (3) model evaluation, and (4) model dissemination. The program is currently nearing completion of Phase 2, Model Development.

  5. Chemical incident economic impact analysis methodology. (Technical...

    Office of Scientific and Technical Information (OSTI)

    Chemical incident economic impact analysis methodology. Citation Details In-Document Search Title: Chemical incident economic impact analysis methodology. You are accessing a ...

  6. Measuring the Impact of Benchmarking & Transparency - Methodologies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Measuring the Impact of Benchmarking & Transparency - Methodologies and the NYC Example Measuring the Impact of Benchmarking & Transparency - Methodologies and the NYC Example ...

  7. Data development technical support document for the aircraft crash risk analysis methodology (ACRAM) standard

    SciTech Connect (OSTI)

    Kimura, C.Y.; Glaser, R.E.; Mensing, R.W.; Lin, T.; Haley, T.A.; Barto, A.B.; Stutzke, M.A.

    1996-08-01

    The Aircraft Crash Risk Analysis Methodology (ACRAM) Panel has been formed by the US Department of Energy Office of Defense Programs (DOE/DP) for the purpose of developing a standard methodology for determining the risk from aircraft crashes onto DOE ground facilities. In order to accomplish this goal, the ACRAM Panel has been divided into four teams: the data development team, the model evaluation team, the structural analysis team, and the consequence team. Each team, consisting of at least one member of the ACRAM Panel plus additional DOE and DOE contractor personnel, specializes in the development of the methodology assigned to that team. This report documents the work performed by the data development team and provides the technical basis for the data used by the ACRAM Standard for determining the aircraft crash frequency. This report should be used to provide the generic data needed to calculate the aircraft crash frequency into the facility under consideration as part of the process for determining the aircraft crash risk to ground facilities as given by the DOE Standard Aircraft Crash Risk Assessment Methodology (ACRAM). Some broad guidance is presented on how to obtain the needed site-specific and facility-specific data, but this data is not provided by this document.
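The crash-frequency calculation the data feed into has the familiar multiplicative form used in DOE aircraft-crash risk assessment: frequency is summed over flight sources as the product of operations, per-operation crash probability, a site-specific crash-location distribution, and the facility's effective area. The sketch below is a hedged illustration of that structure only; every input value is invented, and the real standard defines the factors far more finely (by aircraft category, flight phase, and direction).

```python
def crash_frequency(sources):
    """Four-factor summation: for each flight source, frequency =
    N (operations/yr) * P (crash probability per operation) *
    f (crash location distribution at the site, per mi^2) *
    A (facility effective area, mi^2). Illustrative values only."""
    return sum(s["N"] * s["P"] * s["f"] * s["A"] for s in sources)

sources = [
    {"N": 50000, "P": 1e-7, "f": 0.05, "A": 0.01},  # hypothetical airport ops
    {"N": 2000, "P": 5e-7, "f": 0.01, "A": 0.01},   # hypothetical overflights
]
freq = crash_frequency(sources)   # expected crashes per year at the facility
```

Summing per-source contributions this way makes it easy to see which flight source dominates the risk and therefore where better data matters most.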

  8. Analytical modeling and structural response of a stretched-membrane reflective module

    SciTech Connect (OSTI)

    Murphy, L.M.; Sallis, D.V.

    1984-06-01

    The optical and structural load deformation response behavior of a uniform pressure-loaded stretched-membrane reflective module subject to nonaxisymmetric support constraints is studied in this report. To aid in the understanding of this behavior, an idealized analytical model is developed and implemented and predictions are compared with predictions based on the detailed structural analysis code NASTRAN. Single structural membrane reflector modules are studied in this analysis. In particular, the interaction of the frame-membrane combination and variations in membrane pressure loading and tension are studied in detail. Variations in the resulting lateral shear load on the frame, frame lateral support, and frame twist as a function of distance between the supports are described as are the resulting optical effects. Results indicate the need to consider the coupled deformation problem as the lateral frame deformations are amplified by increasing the membrane tension. The importance of accurately considering the effects of different membrane attachment approaches is also demonstrated.

  9. Structure of the Kinase Domain of CaMKII and Modeling the Holoenzyme

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Structure of the Kinase Domain of CaMKII and Modeling the Holoenzyme The rate and intensity of calcium (Ca2+) currents that oscillate through the plasma membrane around a cell affect such diverse phenomena as fertilization, the cardiac rhythm, and even the formation of memories. How does the cell sense these digital oscillations and transduce them into a cellular signal, such as changes in phosphorylation (addition of a phosphate group to a protein) or gene transcription? A group from the

  11. Multiscale modeling of thermal conductivity of high burnup structures in UO2 fuels

    SciTech Connect (OSTI)

    Bai, Xian -Ming; Tonks, Michael R.; Zhang, Yongfeng; Hales, Jason D.

    2015-12-22

    The high burnup structure forming at the rim region in UO2 based nuclear fuel pellets has interesting physical properties such as improved thermal conductivity, even though it contains a high density of grain boundaries and micron-size gas bubbles. To understand this counterintuitive phenomenon, mesoscale heat conduction simulations with inputs from atomistic simulations and experiments were conducted to study the thermal conductivities of a small-grain high burnup microstructure and two large-grain unrestructured microstructures. We concluded that the phonon scattering effects caused by small point defects such as dispersed Xe atoms in the grain interior must be included in order to correctly predict the thermal transport properties of these microstructures. In extreme cases, even a small concentration of dispersed Xe atoms such as 10^-5 can result in a lower thermal conductivity in the large-grain unrestructured microstructures than in the small-grain high burnup structure. The high-density grain boundaries in a high burnup structure act as defect sinks and can reduce the concentration of point defects in its grain interior and improve its thermal conductivity in comparison with its large-grain counterparts. Furthermore, an analytical model was developed to describe the thermal conductivity at different concentrations of dispersed Xe, bubble porosities, and grain sizes. Upon calibration, the model is robust and agrees well with independent heat conduction modeling over a wide range of microstructural parameters.
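The competition the abstract describes (point-defect scattering vs. grain-boundary resistance vs. porosity) can be sketched with a toy analytical form. This is a hedged illustration of the qualitative picture, not the paper's calibrated model: point-defect scattering degrades conductivity as 1/(1 + A*c), grain boundaries add a series (Kapitza-type) resistance per boundary, and bubble porosity applies a Maxwell-Eucken-style factor. All coefficients are assumed for illustration.

```python
def conductivity(k_bulk, xe_fraction, grain_size_m, porosity,
                 defect_coeff=5.0e4, gb_resistance=1.0e-9):
    """Toy UO2-like thermal conductivity (W/m-K) combining three
    degradation mechanisms. Coefficients are illustrative, not calibrated."""
    # Point-defect scattering from dispersed Xe: k ~ k_bulk / (1 + A*c).
    k = k_bulk / (1.0 + defect_coeff * xe_fraction)
    # Grain boundaries as thermal resistances in series, 1/d boundaries per m.
    k = 1.0 / (1.0 / k + gb_resistance / grain_size_m)
    # Bubble porosity via a Maxwell-Eucken factor for insulating pores.
    return k * (1.0 - porosity) / (1.0 + porosity / 2.0)

# High-burnup structure: tiny grains, but GB sinks keep dispersed Xe low.
k_hbs = conductivity(4.5, xe_fraction=1e-6, grain_size_m=0.2e-6, porosity=0.10)
# Unrestructured: large grains, but more Xe retained in the lattice.
k_unrestructured = conductivity(4.5, xe_fraction=1e-5, grain_size_m=10e-6,
                                porosity=0.02)
```

Even with these made-up coefficients, the small-grain case comes out more conductive than the large-grain case, reproducing the counterintuitive ordering the abstract explains: dispersed Xe in the lattice hurts conductivity more than grain boundaries do.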

  12. Multiscale modeling of thermal conductivity of high burnup structures in UO2 fuels

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bai, Xian -Ming; Tonks, Michael R.; Zhang, Yongfeng; Hales, Jason D.

    2015-12-22

    The high burnup structure forming at the rim region in UO2 based nuclear fuel pellets has interesting physical properties such as improved thermal conductivity, even though it contains a high density of grain boundaries and micron-size gas bubbles. To understand this counterintuitive phenomenon, mesoscale heat conduction simulations with inputs from atomistic simulations and experiments were conducted to study the thermal conductivities of a small-grain high burnup microstructure and two large-grain unrestructured microstructures. We concluded that the phonon scattering effects caused by small point defects such as dispersed Xe atoms in the grain interior must be included in order to correctly predict the thermal transport properties of these microstructures. In extreme cases, even a small concentration of dispersed Xe atoms such as 10^-5 can result in a lower thermal conductivity in the large-grain unrestructured microstructures than in the small-grain high burnup structure. The high-density grain boundaries in a high burnup structure act as defect sinks and can reduce the concentration of point defects in its grain interior and improve its thermal conductivity in comparison with its large-grain counterparts. Furthermore, an analytical model was developed to describe the thermal conductivity at different concentrations of dispersed Xe, bubble porosities, and grain sizes. Upon calibration, the model is robust and agrees well with independent heat conduction modeling over a wide range of microstructural parameters.

  13. Pipelines subject to slow landslide movements: Structural modeling vs field measurement

    SciTech Connect (OSTI)

    Bruschi, R.; Glavina, S.; Spinazze, M.; Tomassini, D.; Bonanni, S.; Cuscuna, S.

    1996-12-01

    In recent years finite element techniques have been increasingly used to investigate the behavior of buried pipelines subject to soil movements. The use of these tools provides a rational basis for the definition of minimum wall thickness requirements in landslide crossings. Furthermore, the design of mitigation measures or monitoring systems which control the development of undesirable strains in the pipe wall over time requires detailed structural modeling. The scope of this paper is to discuss the use of dedicated structural modeling with relevant calibration to field measurements. The strain measurements used were regularly gathered from pipe sections at two different sites over a period of time long enough to record changes of axial strain due to soil movement. Detailed structural modeling of the pipeline layout at both sites, and for operating conditions, is applied. Numerical simulations show the influence of the distribution of soil movement acting on the pipeline with regard to the state of strain that can develop in certain locations. The role of soil nature and the direction of relative movements in the definition of loads transferred to the pipeline is also discussed.

  14. Modeling laser-induced periodic surface structures: Finite-difference time-domain feedback simulations

    SciTech Connect (OSTI)

    Skolski, J. Z. P.; Vincenc Obona, J.; Römer, G. R. B. E.; Huis in 't Veld, A. J.

    2014-03-14

    A model predicting the formation of laser-induced periodic surface structures (LIPSSs) is presented. Specifically, the finite-difference time-domain method is used to study the interaction of electromagnetic fields with rough surfaces. In this approach, the rough surface is modified by “ablation after each laser pulse,” according to the absorbed energy profile, in order to account for inter-pulse feedback mechanisms. LIPSSs with a periodicity significantly smaller than the laser wavelength are found to “grow” either parallel or orthogonal to the laser polarization. The change in orientation and periodicity follows from the model. LIPSSs with a periodicity larger than the wavelength of the laser radiation and complex superimposed LIPSS patterns are also predicted by the model.

  15. Nonlinear waves and coherent structures in the quantum single-wave model

    SciTech Connect (OSTI)

    Tzenov, Stephan I. [Department of Physics, Lancaster University, Lancaster LA1 4YB (United Kingdom); Marinov, Kiril B. [ASTeC, STFC Daresbury Laboratory, Keckwick Lane, Daresbury WA4 4AD (United Kingdom)

    2011-10-15

    Starting from the von Neumann-Maxwell equations for the Wigner quasi-probability distribution and for the self-consistent electric field, the quantum analog of the classical single-wave model has been derived. The linear stability of the quantum single-wave model has been studied, and periodic in time patterns have been found both analytically and numerically. In addition, some features of quantum chaos have been detected in the unstable region in parameter space. Further, a class of standing-wave solutions of the quantum single-wave model has also been found; these solutions behave as stable solitary-wave structures. The analytical results have been finally compared to the exact system dynamics obtained by solving the corresponding equations in the Schrödinger representation numerically.

  16. Validation of New Process Models for Large Injection-Molded Long-Fiber Thermoplastic Composite Structures

    SciTech Connect (OSTI)

    Nguyen, Ba Nghiep; Jin, Xiaoshi; Wang, Jin; Kunc, Vlastimil; Tucker III, Charles L.

    2012-02-23

    This report describes the work conducted under the CRADA Nr. PNNL/304 between Battelle PNNL and Autodesk, whose objective is to validate the new process models developed under the previous CRADA for large injection-molded LFT composite structures. To this end, the ARD-RSC and fiber length attrition models implemented in the 2013 research version of Moldflow were used to simulate the injection molding of 600-mm x 600-mm x 3-mm plaques from 40% glass/polypropylene (Dow Chemical DLGF9411.00) and 40% glass/polyamide 6,6 (DuPont Zytel 75LG40HSL BK031) materials. The injection molding was performed by Injection Technologies, Inc. at Windsor, Ontario (under a subcontract by Oak Ridge National Laboratory, ORNL) using the mold offered by the Automotive Composite Consortium (ACC). Two fill speeds under the same back pressure were used to produce plaques under slow-fill and fast-fill conditions. Also, two gating options were used to achieve the following desired flow patterns: flows in edge-gated plaques and in center-gated plaques. After molding, ORNL performed measurements of fiber orientation and length distributions for process model validations. The structure of this report is as follows. After the Introduction (Section 1), Section 2 provides a summary of the ARD-RSC and fiber length attrition models. A summary of model implementations in the latest research version of Moldflow is given in Section 3. Section 4 provides the key processing conditions and parameters for molding of the ACC plaques. The validations of the ARD-RSC and fiber length attrition models are presented and discussed in Section 5. Conclusions are drawn in Section 6.

  17. 8th Annual Glycoscience Symposium: Integrating Models of Plant Cell Wall Structure, Biosynthesis and Assembly

    SciTech Connect (OSTI)

    Azadi, Paratoo

    2015-09-24

    The Complex Carbohydrate Research Center (CCRC) of the University of Georgia holds a yearly symposium that highlights a broad range of carbohydrate research topics. The 8th Annual Georgia Glycoscience Symposium, entitled “Integrating Models of Plant Cell Wall Structure, Biosynthesis and Assembly,” was held on April 7, 2014 at the CCRC. The focus of the symposium was the role of glycans in plant cell wall structure and synthesis. The goal was to bring together world leaders with graduate students, postdoctoral fellows, and research scientists to propose the newest plant cell wall models. The symposium program closely followed the DOE’s mission and was specifically designed to highlight chemical and biochemical structures and processes important for the formation and modification of renewable plant cell walls, which serve as the basis for biomaterials and biofuels. The symposium was attended by senior investigators in the field as well as students, with a total attendance of 103: 80 faculty/research scientists, 11 graduate students, and 12 postdoctoral researchers.

  18. Influence of the plasma environment on atomic structure using an ion-sphere model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Belkhiri, Madeny Jean; Fontes, Christopher John; Poirier, Michel

    2015-09-03

    Plasma environment effects on atomic structure are analyzed using various atomic structure codes. To monitor the effect of high free-electron density or low temperatures, Fermi-Dirac and Maxwell-Boltzmann statistics are compared. After a discussion of the implementation of the Fermi-Dirac approach within the ion-sphere model, several applications are considered. In order to check the consistency of the modifications brought here to extant codes, calculations have been performed using the Los Alamos Cowan Atomic Structure (cats) code in its Hartree-Fock or Hartree-Fock-Slater form and the parametric potential Flexible Atomic Code (fac). The ground-state energy shifts due to the plasma effects for the six most ionized aluminum ions have been calculated using the fac and cats codes and agree fairly well. For the intercombination resonance line in Fe22+, the plasma effect within the uniform electron gas model results in a positive shift that agrees with the MCDF value of B. Saha et al.
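
    For reference, the uniform electron gas ion-sphere picture invoked above is conventionally summarized as follows (a textbook sketch in atomic units, not necessarily the exact implementation in cats or fac): the Z_f free electrons are smeared uniformly over a neutralizing sphere of radius R_0, so a bound electron at radius r ≤ R_0 acquires the extra (repulsive) potential energy

    ```latex
    \Delta V(r) = \frac{Z_f}{2R_0}\left(3 - \frac{r^2}{R_0^2}\right),
    \qquad
    \Delta E \;\approx\; \langle \Delta V \rangle \;\approx\; \frac{3Z_f}{2R_0}
    \quad (r \ll R_0),
    ```

    so each level shifts toward the continuum; a line shift is the difference between the upper- and lower-level shifts and may therefore be of either sign.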

  20. Seismic acquisition and processing methodologies in overthrust areas: Some examples from Latin America

    SciTech Connect (OSTI)

    Tilander, N.G.; Mitchel, R.

    1996-08-01

    Overthrust areas represent some of the last frontiers in petroleum exploration today. Billion-barrel discoveries in the Eastern Cordillera of Colombia and the Monagas fold-thrust belt of Venezuela during the past decade have highlighted the potential rewards of overthrust exploration. However, the seismic data recorded in many overthrust areas are disappointingly poor. Challenges such as rough topography, complex subsurface structure, the presence of high-velocity rocks at the surface, back-scattered energy, and severe migration wavefronting continue to lower data quality and reduce interpretability. Lack of well/velocity control also reduces the reliability of depth estimations and migrated images. Failure to obtain satisfactory pre-drill structural images can easily result in costly wildcat failures. Advances in the methodologies used by Chevron for data acquisition, processing and interpretation have produced significant improvements in seismic data quality in Bolivia, Colombia and Trinidad. In this paper, seismic test results showing various swath geometries will be presented. We will also show recent examples of processing methods which have led to improved structural imaging. Rather than focusing on "black box" methodology, we will emphasize the cumulative effect of step-by-step improvements. Finally, the critical significance and interrelation of velocity measurements, modeling and depth migration will be explored. Pre-drill interpretations must ultimately encompass a variety of model solutions, and error bars should be established which realistically reflect the uncertainties in the data.

  1. Energy Efficiency Indicators Methodology Booklet

    SciTech Connect (OSTI)

    Sathaye, Jayant; Price, Lynn; McNeil, Michael; de la Rue du Can, Stephane

    2010-05-01

    This Methodology Booklet provides a comprehensive review of, and guiding principles for, constructing energy efficiency indicators, with illustrative examples of application to individual countries. It reviews work done by international agencies and national governments in constructing meaningful energy efficiency indicators that help policy makers assess changes in energy efficiency over time. Building on past OECD experience and best practices, and the knowledge of these countries' institutions, relevant sources of information to construct an energy indicator database are identified. A framework based on levels of a hierarchy of indicators -- spanning from aggregate, macro-level to disaggregated end-use-level metrics -- is presented to help shape the understanding of assessing energy efficiency. In each sector of activity -- industry, commercial, residential, agriculture and transport -- indicators are presented, and recommendations to distinguish the different factors affecting energy use are highlighted. The methodology booklet specifically addresses issues that are relevant to developing indicators where activity is a major factor driving energy demand. A companion spreadsheet tool is available upon request.
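
    The hierarchy-of-indicators idea above can be shown in miniature: the aggregate energy intensity is an activity-share-weighted sum of sectoral intensities, so a change in the aggregate decomposes into structure effects (shifting activity shares) and efficiency effects (changing sectoral intensities). Sector names and numbers below are invented for illustration; this is a sketch, not the booklet's tool.

    ```python
    def aggregate_intensity(activity, energy):
        """Aggregate energy intensity as the activity-share-weighted sum of
        sectoral intensities (energy per unit of activity)."""
        total_activity = sum(activity.values())
        shares = {s: a / total_activity for s, a in activity.items()}
        intensity = {s: energy[s] / activity[s] for s in activity}
        return sum(shares[s] * intensity[s] for s in activity)

    # Hypothetical activity (e.g., value added) and energy use by sector.
    act = {"industry": 40.0, "services": 60.0}
    en = {"industry": 80.0, "services": 30.0}
    agg = aggregate_intensity(act, en)  # = (80 + 30) / 100 = 1.1
    ```

    The weighted sum reproduces total energy over total activity, which is why the decomposition is exact at this level of the hierarchy.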

  2. Fluid-Structure Interaction Modeling of High-Aspect Ratio Nuclear Fuel Plates Using COMSOL

    SciTech Connect (OSTI)

    Curtis, Franklin G. [ORNL]; Ekici, Kivanc [ORNL]; Freels, James D. [ORNL]

    2013-01-01

    The High Flux Isotope Reactor at the Oak Ridge National Lab is in the research stage of converting its fuel from high-enriched uranium to low-enriched uranium. Due to different physical properties of the new fuel and changes to the internal fuel plate design, the current safety basis must be re-evaluated through rigorous computational analyses. One of the areas being explored is the fluid-structure interaction phenomenon due to the interaction of thin fuel plates (50 mils thickness) and the cooling fluid (water). Detailed computational fluid dynamics and fluid-structure interaction simulations have only recently become feasible due to improved numerical algorithms and advancements in computing technology. For many reasons including the already built-in fluid-structure interaction module, COMSOL has been chosen for this complex problem. COMSOL's ability to solve multiphysics problems using a fully-coupled and implicit solution algorithm is crucial in obtaining a stable and accurate solution. Our initial findings show that COMSOL can accurately model such problems due to its ability to closely couple the fluid dynamics and the structural dynamics problems.

  3. CPUF - a chemical-structure-based polyurethane foam decomposition and foam response model.

    SciTech Connect (OSTI)

    Fletcher, Thomas H. (Brigham Young University, Provo, UT); Thompson, Kyle Richard; Erickson, Kenneth L.; Dowding, Kevin J.; Clayton, Daniel (Brigham Young University, Provo, UT); Chu, Tze Yao; Hobbs, Michael L.; Borek, Theodore Thaddeus III

    2003-07-01

    A Chemical-structure-based PolyUrethane Foam (CPUF) decomposition model has been developed to predict the fire-induced response of rigid, closed-cell polyurethane foam-filled systems. The model, developed for the B-61 and W-80 fireset foam, is based on a cascade of bond-breaking reactions that produce CO2. Percolation theory is used to dynamically quantify polymer fragment populations of the thermally degrading foam. The partition between condensed-phase polymer fragments and gas-phase polymer fragments (i.e., the vapor-liquid split) was determined using a vapor-liquid equilibrium model. The CPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE and CALORE, which support chemical kinetics and enclosure radiation. Elements were removed from the computational domain when the calculated solid mass fraction within an individual finite element decreased below a set criterion. Element removal, referred to as "element death," creates a radiation enclosure (assumed to be non-participating) as well as a decomposition front, which separates the condensed-phase encapsulant from the gas-filled enclosure. All of the chemistry parameters as well as thermophysical properties for the CPUF model were obtained from small-scale laboratory experiments. The CPUF model was evaluated by comparing predictions to measurements. The validation experiments included several thermogravimetric experiments at pressures ranging from ambient pressure to 30 bars. Larger, component-scale experiments were also used to validate the foam response model. The effects of heat flux, bulk density, orientation, embedded components, confinement and pressure were measured and compared to model predictions. Uncertainties in the model results were evaluated using a mean value approach. The measured mass loss in the TGA experiments and the measured location of the decomposition front were within the 95% prediction limit determined using the CPUF model for all of the

  4. Spent fuel management fee methodology and computer code user's manual.

    SciTech Connect (OSTI)

    Engel, R.L.; White, M.K.

    1982-01-01

    The methodology and computer model described here were developed to analyze the cash flows for the federal government taking title to and managing spent nuclear fuel. The methodology has been used by the US Department of Energy (DOE) to estimate the spent fuel disposal fee that will provide full cost recovery. Although the methodology was designed to analyze interim storage followed by spent fuel disposal, it could be used to calculate a fee for reprocessing spent fuel and disposing of the waste. The methodology consists of two phases. The first phase estimates government expenditures for spent fuel management. The second phase determines the fees that will result in revenues such that the government attains full cost recovery under various revenue collection philosophies. These two phases are discussed in detail in subsequent sections of this report. Each of the two phases constitutes a computer module, called SPADE (SPent fuel Analysis and Disposal Economics) and FEAN (FEe ANalysis), respectively.
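
    The full-cost-recovery logic of the second phase can be sketched in a few lines. This is a hedged illustration, not the SPADE/FEAN implementation: it assumes a single constant fee per unit delivered (one possible collection philosophy) and made-up cash flows, and simply sets discounted revenues equal to discounted expenditures.

    ```python
    def levelized_fee(costs, quantities, discount_rate):
        """Constant per-unit fee whose discounted revenues equal the
        discounted expenditures: PV(fee * quantities) = PV(costs).
        costs[t] and quantities[t] are per-year values, year 0 first."""
        pv_costs = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
        pv_units = sum(q / (1 + discount_rate) ** t for t, q in enumerate(quantities))
        return pv_costs / pv_units

    # Illustrative numbers only: $M of expenditures vs tonnes delivered.
    fee = levelized_fee(costs=[100.0, 120.0, 80.0],
                        quantities=[1000.0, 1000.0, 1000.0],
                        discount_rate=0.05)   # ~0.1003 $M per tonne
    ```

    Other collection philosophies (e.g., fees varying by year) would replace the single ratio with a different revenue schedule satisfying the same present-value balance.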

  5. Monte Carlo analysis of critical phenomenon of the Ising model on memory stabilizer structures

    SciTech Connect (OSTI)

    Viteri, C. Ricardo; Tomita, Yu; Brown, Kenneth R.

    2009-10-15

    We calculate the critical temperature of the Ising model on a set of graphs representing a concatenated three-bit error-correction code. The graphs are derived from the stabilizer formalism used in quantum error correction. The stabilizer for a subspace is defined as the group of Pauli operators whose eigenvalues are +1 on the subspace. The group can be generated by a subset of operators in the stabilizer, and the choice of generators determines the structure of the graph. The Wolff algorithm, together with the histogram method and finite-size scaling, is used to calculate both the critical temperature and the critical exponents of each structure. The simulations show that the choice of stabilizer generators, both the number and the geometry, has a large effect on the critical temperature.
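
    The Wolff cluster update named above can be sketched as follows. This minimal version uses a plain square lattice with periodic boundaries rather than a stabilizer-derived graph, so it illustrates only the algorithm, not the paper's structures; the histogram and finite-size-scaling analysis are omitted.

    ```python
    import math
    import random

    def wolff_step(spins, L, T, J=1.0):
        """One Wolff cluster update on an L x L Ising lattice with periodic
        boundaries. spins maps (x, y) -> +1/-1. Returns the cluster size."""
        p_add = 1.0 - math.exp(-2.0 * J / T)  # bond-activation probability
        seed = (random.randrange(L), random.randrange(L))
        s0 = spins[seed]
        cluster = {seed}
        frontier = [seed]
        while frontier:
            x, y = frontier.pop()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                n = (nx % L, ny % L)
                if n not in cluster and spins[n] == s0 and random.random() < p_add:
                    cluster.add(n)
                    frontier.append(n)
        for site in cluster:  # flip the whole cluster at once
            spins[site] = -s0
        return len(cluster)

    # Short demonstration near the 2D critical point T_c = 2/ln(1 + sqrt(2)).
    L, T = 8, 2.269
    spins = {(x, y): random.choice((-1, 1)) for x in range(L) for y in range(L)}
    for _ in range(100):
        wolff_step(spins, L, T)
    m = abs(sum(spins.values())) / L**2  # magnetization per site, in [0, 1]
    ```

    Flipping entire clusters rather than single spins is what suppresses critical slowing down near T_c, which is why the paper pairs Wolff updates with finite-size scaling.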

  6. Durability-Based Design Guide for an Automotive Structural Composite: Part 2. Background Data and Models

    SciTech Connect (OSTI)

    Corum, J.M.; Battiste, R.L.; Brinkman, C.R.; Ren, W.; Ruggles, M.B.; Weitsman, Y.J.; Yahr, G.T.

    1998-02-01

    This background report is a companion to the document entitled "Durability-Based Design Criteria for an Automotive Structural Composite: Part 1. Design Rules" (ORNL-6930). The rules and the supporting material characterization and modeling efforts described here are the result of a U.S. Department of Energy Advanced Automotive Materials project entitled "Durability of Lightweight Composite Structures." The overall goal of the project is to develop experimentally based, durability-driven design guidelines for automotive structural composites. The project is closely coordinated with the Automotive Composites Consortium (ACC). The initial reference material addressed by the rules and this background report was chosen and supplied by ACC. The material is a structural reaction injection-molded isocyanurate (urethane), reinforced with continuous-strand, swirl-mat, E-glass fibers. This report consists of 16 position papers, each summarizing the observations and results of a key area of investigation carried out to provide the basis for the durability-based design guide. The durability issues addressed include the effects of cyclic and sustained loadings, temperature, automotive fluids, vibrations, and low-energy impacts (e.g., tool drops and roadway kickups) on deformation, strength, and stiffness. The position papers cover these durability issues. Topics include (1) tensile, compressive, shear, and flexural properties; (2) creep and creep rupture; (3) cyclic fatigue; (4) the effects of temperature, environment, and prior loadings; (5) a multiaxial strength criterion; (6) impact damage and damage tolerance design; (7) stress concentrations; (8) a damage-based predictive model for time-dependent deformations; (9) confirmatory subscale component tests; and (10) damage development and growth observations.

  7. Dynamic materials testing and constitutive modeling of structural sheet steel for automotive applications. Final progress report

    SciTech Connect (OSTI)

    Cady, C.M.; Chen, S.R.; Gray, G.T. III

    1996-08-23

    The objective of this study was to characterize the dynamic mechanical properties of four different structural sheet steels used in automobile manufacture. The analysis of a drawing quality, special killed (DQSK) mild steel; high strength, low alloy (HSLA) steel; interstitial free (IF); and a high strength steel (M-190) have been completed. In addition to the true stress-true strain data, coefficients for the Johnson-Cook, Zerilli-Armstrong, and Mechanical Threshold Stress constitutive models have been determined from the mechanical test results at various strain rates and temperatures and are summarized. Compression, tensile, and biaxial bulge tests and low (below 0.1/s) strain rate tests were completed for all four steels. From these test results it was determined to proceed with the material modeling optimization using the through thickness compression results. Compression tests at higher strain rates and temperatures were also conducted and analyzed for all the steels. Constitutive model fits were generated from the experimental data. This report provides a compilation of information generated from mechanical tests, the fitting parameters for each of the constitutive models, and an index and description of data files.
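
    For reference, the Johnson-Cook model named above is conventionally written as the product of strain-hardening, strain-rate, and thermal-softening factors (standard form; the fitted values of the coefficients are what the report tabulates):

    ```latex
    \sigma = \left(A + B\,\varepsilon_p^{\,n}\right)
    \left(1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)
    \left(1 - T^{*m}\right),
    \qquad
    T^{*} = \frac{T - T_{\text{room}}}{T_{\text{melt}} - T_{\text{room}}},
    ```

    where \(\varepsilon_p\) is the equivalent plastic strain, \(\dot{\varepsilon}_0\) a reference strain rate, and A, B, n, C, m the material coefficients determined from the compression and tensile tests at various strain rates and temperatures.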

  8. Structure of intermediate shocks in collisionless anisotropic Hall-magnetohydrodynamics plasma models

    SciTech Connect (OSTI)

    Sánchez-Arriaga, G.

    2013-10-15

    The existence of discontinuities within the double-adiabatic Hall-magnetohydrodynamics (MHD) model is discussed. These solutions are transitional layers where some of the plasma properties change from one equilibrium state to another. Under the assumption of traveling wave solutions with velocity C and propagation angle θ with respect to the ambient magnetic field, the Hall-MHD model reduces to a dynamical system and the waves are heteroclinic orbits joining two different fixed points. The analysis of the fixed points rules out the existence of rotational discontinuities. Simple considerations about the Hamiltonian nature of the system show that, unlike dissipative models, the intermediate shock waves are organized in branches in parameter space, i.e., they occur if a given relationship between θ and C is satisfied. Electron-polarized (ion-polarized) shock waves exhibit, in addition to a reversal of the magnetic field component tangential to the shock front, a maximum (minimum) of the magnetic field amplitude. The jumps of the magnetic field and the relative specific volume between the downstream and the upstream states as a function of the plasma properties are presented. The organization in parameter space of localized structures, including the influence of finite Larmor radius in the model, is discussed.

  9. STORM: A STatistical Object Representation Model

    SciTech Connect (OSTI)

    Rafanelli, M.; Shoshani, A.

    1989-11-01

    In this paper we explore the structure and semantic properties of the entities stored in statistical databases. We call such entities "statistical objects" (SOs) and propose a new "statistical object representation model" based on a graph representation. We identify a number of SO representational problems in current models and propose a methodology for their solution. 11 refs.

  10. eGallon Methodology | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    eGallon Methodology eGallon Methodology The average American measures the day-to-day cost of driving by the price of a gallon of gasoline. In other words, as the price of gasoline ...

  11. FLUID-STRUCTURE INTERACTION MODELS OF THE MITRAL VALVE: FUNCTION IN NORMAL AND PATHOLOGIC STATES

    SciTech Connect (OSTI)

    Kunzelman, K. S.; Einstein, Daniel R.; Cochran, R. P.

    2007-08-29

    Successful mitral valve repair is dependent upon a full understanding of normal and abnormal mitral valve anatomy and function. Computational analysis is one such method that can be applied to simulate mitral valve function in order to analyze the roles of individual components and evaluate proposed surgical repair. We developed the first three-dimensional, finite element (FE) computer model of the mitral valve including leaflets and chordae tendineae; however, one critical aspect that had been missing until the last few years was the evaluation of fluid flow as coupled to the function of the mitral valve structure. We present here our latest results for normal function and specific pathologic changes using a fluid-structure interaction (FSI) model. Normal valve function was first assessed, followed by pathologic material changes in collagen fiber volume fraction, fiber stiffness, fiber splay, and isotropic stiffness. Leaflet and chordal stress and strain, and papillary muscle force were determined. In addition, transmitral flow, time to leaflet closure, and heart valve sound were assessed. Model predictions in the normal state agreed well with a wide range of available in-vivo and in-vitro data. Further, pathologic material changes that preserved the anisotropy of the valve leaflets were found to preserve valve function. By contrast, material changes that altered the anisotropy of the valve were found to profoundly alter valve function. The addition of blood flow and an experimentally driven microstructural description of mitral tissue represent significant advances in computational studies of the mitral valve, which allow further insight to be gained. This work is another building block in the foundation of a computational framework to aid in the refinement and development of a truly noninvasive diagnostic evaluation of the mitral valve. Ultimately, it represents the basis for simulation of surgical repair of pathologic valves in a clinical and educational

  12. Siting Methodologies for Hydrokinetics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Siting Methodologies for Hydrokinetics Siting Methodologies for Hydrokinetics Report that provides an overview of the federal and state regulatory framework for hydrokinetic projects. siting_handbook_2009.pdf (2.43 MB) More Documents & Publications Siting Methodologies for Hydrokinetics EIS-0488: Final Environmental Impact Statement EIS-0493: Draft Environmental Impact Statement

  13. Structural health and prognostics management for offshore wind turbines : case studies of rotor fault and blade damage with initial O&M cost modeling.

    SciTech Connect (OSTI)

    Myrent, Noah J.; Kusnick, Joshua F.; Barrett, Natalie C.; Adams, Douglas E.; Griffith, Daniel Todd

    2013-04-01

    Operations and maintenance costs for offshore wind plants are significantly higher than the current costs for land-based (onshore) wind plants. One way to reduce these costs would be to implement a structural health and prognostics management (SHPM) system as part of a condition-based maintenance paradigm with smart load management, and to utilize a state-based cost model to assess the economics associated with use of the SHPM system. To facilitate the development of such a system, a multi-scale modeling approach developed in prior work is used to identify how the underlying physics of the system are affected by the presence of damage and faults, and how these changes manifest themselves in the operational response of a full turbine. In the present report, this methodology was used to investigate two case studies on a 5-MW offshore wind turbine: (1) the effects of rotor imbalance due to pitch error (aerodynamic imbalance) and mass imbalance, and (2) disbond of the shear web. Based on simulations of damage in the turbine model, the operational measurements that demonstrated the highest sensitivity to the damage/faults were the blade tip accelerations and local pitching moments for both imbalance and shear web disbond. The initial cost model provided a great deal of insight into the estimated savings in operations and maintenance costs due to the implementation of an effective SHPM system. The integration of the health monitoring information and O&M cost versus damage/fault severity information provides the initial steps to identify processes to reduce operations and maintenance costs for an offshore wind farm while increasing turbine availability, revenue, and overall profit.

  14. A coarse-grained model with implicit salt for RNAs: Predicting 3D structure, stability and salt effect

    SciTech Connect (OSTI)

    Shi, Ya-Zhou; Wang, Feng-Hua; Wu, Yuan-Yan; Tan, Zhi-Jie

    2014-09-14

    To bridge the gap between the sequences and 3-dimensional (3D) structures of RNAs, some computational models have been proposed for predicting RNA 3D structures. However, existing models seldom consider conditions departing from room/body temperature and high salt (1 M NaCl), and thus generally cannot predict thermodynamics or salt effects. In this study, we propose a coarse-grained model with implicit salt for RNAs to predict 3D structures, stability, and salt effects. Combined with a Monte Carlo simulated annealing algorithm and a coarse-grained force field, the model folds 46 tested RNAs (≤45 nt), including pseudoknots, into their native-like structures from their sequences, with an overall mean RMSD of 3.5 Å and an overall minimum RMSD of 1.9 Å from the experimental structures. For 30 RNA hairpins, the present model also gives reliable predictions for stability and salt effects, with a mean deviation of ≤1.0 °C in melting temperatures as compared with the extensive experimental data. In addition, the model can provide an ensemble of possible 3D structures for a short RNA at a given temperature/salt condition.
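
    The Monte Carlo simulated annealing loop at the heart of such folding schemes can be sketched generically. Everything here is illustrative: a toy 1D quadratic energy stands in for the coarse-grained force field, and the move set, schedule, and parameter values are assumptions, not the authors' model.

    ```python
    import math
    import random

    def simulated_annealing(energy, propose, x0, t_start=10.0, t_end=0.01,
                            steps=5000, seed=0):
        """Metropolis simulated annealing: always accept downhill moves,
        accept uphill moves with probability exp(-dE/T), and lower T
        geometrically from t_start to t_end over the run."""
        rng = random.Random(seed)
        x, e = x0, energy(x0)
        best_x, best_e = x, e
        alpha = (t_end / t_start) ** (1.0 / steps)  # geometric cooling factor
        t = t_start
        for _ in range(steps):
            x_new = propose(x, rng)
            e_new = energy(x_new)
            if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
                x, e = x_new, e_new
                if e < best_e:
                    best_x, best_e = x, e
            t *= alpha
        return best_x, best_e

    # Toy stand-in for a force field: a quadratic well with minimum at x = 3.
    best, e_best = simulated_annealing(
        energy=lambda x: (x - 3.0) ** 2,
        propose=lambda x, rng: x + rng.uniform(-0.5, 0.5),
        x0=0.0)
    ```

    In the real model the state would be the 3D bead coordinates of the RNA, the proposal a local conformational move, and the energy the coarse-grained force field evaluated at the given temperature/salt condition.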

  15. Modeling the Structural Response from a Propagating High Explosive Using Smooth Particle Hydrodynamics

    SciTech Connect (OSTI)

    Margraf, J

    2012-06-12

    material flows through a still mesh. This is not typically done in an ALE3D analysis, especially if Lagrange elements exist. Deforming Lagrange elements would eventually tangle with an Eulerian mesh. The best method in this case is to have an advecting mesh positioned as some relaxed version of the pre- and post-Lagrange steps; this gives the best opportunity of modeling a high-energy event with a combination of Lagrange and ALE elements. DYNA3D is another explicit dynamic analysis code, ParaDyn being the parallel version. ParaDyn is used for predicting the transient response of three-dimensional structures using Lagrangian solid mechanics. Large deformation and mesh tangling are often resolved through the use of an element deletion scheme. This is useful to accommodate component failure, but if it is done purely as a means to preserve a useful mesh it can lead to problems, because it does not maintain continuity of the material bulk response. Whatever medium exists between structural components is typically not modeled in ParaDyn. Instead, a structure either has a known loading profile applied or is given initial conditions. The many included contact algorithms can calculate the loading response of materials if and when they collide. A recent implementation of an SPH module, in which failed or deleted material nodes are converted to independent particles, is currently being utilized for a variety of spall-related problems and high-velocity impact scenarios. Figure 4 shows an example of a projectile, given an initial velocity, that fails the first plate, generating SPH particles which then interact with and damage the second plate.

  16. Simulation Enabled Safeguards Assessment Methodology

    SciTech Connect (OSTI)

    Robert Bean; Trond Bjornard; Thomas Larson

    2007-09-01

    It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag and drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

  17. Methodology for flammable gas evaluations

    SciTech Connect (OSTI)

    Hopkins, J.D., Westinghouse Hanford

    1996-06-12

    There are 177 radioactive waste storage tanks at the Hanford Site. The waste generates flammable gases. The waste releases gas continuously, but in some tanks the waste has shown a tendency to trap these flammable gases. When enough gas is trapped in a tank's waste matrix, it may be released in a way that renders part or all of the tank atmosphere flammable for a period of time. Tanks must be evaluated against previously defined criteria to determine whether they can present a flammable gas hazard. This document presents the methodology for evaluating tanks in two areas of concern in the tank headspace: steady-state flammable-gas concentration resulting from continuous release, and concentration resulting from an episodic gas release.
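
    The steady-state part of such an evaluation can be illustrated with a generic well-mixed-headspace balance. This is a textbook sketch under assumed conditions, not necessarily the report's criteria or numbers: at steady state the flammable-gas volume fraction equals the release rate divided by the total outflow (release plus ventilation, in the same units).

    ```python
    def steady_state_fraction(release_rate, vent_rate):
        """Well-mixed headspace at steady state: the flammable-gas volume
        fraction is release / (release + ventilation). Both rates must be
        volumetric flows in the same units (e.g., cfm)."""
        return release_rate / (release_rate + vent_rate)

    # Hypothetical example: 0.5 cfm of gas released into a headspace
    # ventilated at 100 cfm gives roughly a 0.5% volume fraction.
    frac = steady_state_fraction(0.5, 100.0)
    ```

    The computed fraction would then be compared against a flammability criterion such as a stated fraction of the lower flammability limit.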

  18. Simulation enabled safeguards assessment methodology

    SciTech Connect (OSTI)

    Bean, Robert; Bjornard, Trond; Larson, Tom

    2007-07-01

    It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag and drop wire-frame construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed. (authors)

  19. Modeling molecule-plasmon interactions using quantized radiation fields within time-dependent electronic structure theory

    SciTech Connect (OSTI)

    Nascimento, Daniel R.; DePrince, A. Eugene

    2015-12-07

    We present a combined cavity quantum electrodynamics/ab initio electronic structure approach for simulating plasmon-molecule interactions in the time domain. The simple Jaynes-Cummings-type model Hamiltonian typically utilized in such simulations is replaced with one in which the molecular component of the coupled system is treated in a fully ab initio way, resulting in a computationally efficient description of general plasmon-molecule interactions. Mutual polarization effects are easily incorporated within a standard ground-state Hartree-Fock computation, and time-dependent simulations carry the same formal computational scaling as real-time time-dependent Hartree-Fock theory. As a proof of principle, we apply this generalized method to the emergence of a Fano-like resonance in coupled molecule-plasmon systems; this feature is quite sensitive to the nanoparticle-molecule separation and the orientation of the molecule relative to the polarization of the external electric field.
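
    For context, the Jaynes-Cummings-type Hamiltonian that the ab initio molecular treatment replaces is conventionally written (standard form, for one cavity/plasmon mode of frequency ω_c coupled with strength g to a two-level emitter of transition frequency ω_e):

    ```latex
    \hat{H}_{\mathrm{JC}}
    = \hbar\omega_c\,\hat{a}^{\dagger}\hat{a}
    + \frac{\hbar\omega_e}{2}\,\hat{\sigma}_z
    + \hbar g\left(\hat{a}^{\dagger}\hat{\sigma}_{-} + \hat{a}\,\hat{\sigma}_{+}\right),
    ```

    where the two-level σ operators are what the combined approach replaces with a fully ab initio molecular electronic-structure description.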

  20. Finite element modeling of magnetic compression using coupled electromagnetic-structural codes

    SciTech Connect (OSTI)

    Hainsworth, G.; Leonard, P.J.; Rodger, D.; Leyden, C.

    1996-05-01

    A link between the electromagnetic code, MEGA, and the structural code, DYNA3D has been developed. Although the primary use of this is for modelling of Railgun components, it has recently been applied to a small experimental Coilgun at Bath. The performance of Coilguns is very dependent on projectile material conductivity, and so high purity aluminium was investigated. However, due to its low strength, it is crushed significantly by magnetic compression in the gun. Although impractical as a real projectile material, this provides useful benchmark experimental data on high strain rate plastic deformation caused by magnetic forces. This setup is equivalent to a large scale version of the classic jumping ring experiment, where the ring jumps with an acceleration of 40 kG.
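
    The scale of the magnetic compression described above can be estimated from the usual magnetic-pressure relation (a back-of-the-envelope bound under an assumed field strength, not the coupled MEGA/DYNA3D result):

    ```latex
    P = \frac{B^2}{2\mu_0},
    \qquad
    B = 10\ \text{T} \;\Rightarrow\;
    P \approx 4\times 10^{7}\ \text{Pa} \approx 40\ \text{MPa},
    ```

    well above the yield strength of high-purity aluminium, which is consistent with the crushing observed in the experiment.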

  1. Probabilistic Modeling of Landfill Subsidence Introduced by Buried Structure Collapse - 13229

    SciTech Connect (OSTI)

    Foye, Kevin; Soong, Te-Yang

    2013-07-01

    The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass and buried structure placement. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties, especially discontinuous inclusions, which control differential settlement. An alternative is to use a probabilistic model to capture the non-uniform collapse of cover soils and buried structures and the subsequent effect of that collapse on the final cover system. Both techniques are applied to the problem of two side-by-side waste trenches with collapsible voids. The results show how this analytical technique can be used to connect a metric of final cover performance (inundation area) to the susceptibility of the sub-grade to collapse and the effective thickness of the cover soils. This approach allows designers to specify cover thickness, reinforcement, and slope to meet the demands imposed by the settlement of the underlying waste trenches. (authors)
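
    A probabilistic treatment like the one described can be illustrated with a Monte Carlo sketch that samples cell-by-cell void collapse and counts the fraction of the cover that ponds. The collapse probability, attenuation law, and failure criterion below are hypothetical stand-ins, not the authors' model:

```python
import numpy as np

def inundation_fraction(p_collapse, void_depth, cover_thickness, slope_grade,
                        n_cells=10_000, seed=1):
    """Toy probabilistic sketch: each cell of the final cover overlies a void
    that collapses with probability p_collapse; a cell counts as inundated
    (ponding) when local subsidence exceeds the relief provided by the
    design slope. All parameters and the criterion are hypothetical."""
    rng = np.random.default_rng(seed)
    collapsed = rng.random(n_cells) < p_collapse
    # Surface subsidence, crudely attenuated by cover thickness:
    subsidence = np.where(collapsed, void_depth * np.exp(-cover_thickness / 2.0), 0.0)
    relief = slope_grade * 1.0              # available relief per unit cell length
    return float(np.mean(subsidence > relief))

frac = inundation_fraction(p_collapse=0.05, void_depth=0.5,
                           cover_thickness=1.0, slope_grade=0.02)
```

    Sweeping collapse susceptibility and cover thickness in such a model is how a designer would connect the inundation-area metric to cover thickness, reinforcement, and slope choices.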

  2. Assessing the toxic effects of ethylene glycol ethers using Quantitative Structure Toxicity Relationship models

    SciTech Connect (OSTI)

    Ruiz, Patricia; Mumtaz, Moiz; Gombar, Vijay

    2011-07-15

    Experimental determination of toxicity profiles consumes a great deal of time, money, and other resources. Consequently, businesses, societies, and regulators strive for reliable alternatives such as Quantitative Structure Toxicity Relationship (QSTR) models to fill gaps in toxicity profiles of compounds of concern to human health. The use of glycol ethers and their health effects have recently attracted the attention of international organizations such as the World Health Organization (WHO). The board members of Concise International Chemical Assessment Documents (CICAD) recently identified inadequate testing as well as gaps in toxicity profiles of ethylene glycol mono-n-alkyl ethers (EGEs). The CICAD board requested the ATSDR Computational Toxicology and Methods Development Laboratory to conduct QSTR assessments of certain specific toxicity endpoints for these chemicals. In order to evaluate the potential health effects of EGEs, CICAD proposed a critical QSTR analysis of the mutagenicity, carcinogenicity, and developmental effects of EGEs and other selected chemicals. We report here results of the application of QSTRs to assess rodent carcinogenicity, mutagenicity, and developmental toxicity of four EGEs: 2-methoxyethanol, 2-ethoxyethanol, 2-propoxyethanol, and 2-butoxyethanol and their metabolites. Neither mutagenicity nor carcinogenicity is indicated for the parent compounds, but these compounds are predicted to be developmental toxicants. The predicted toxicity effects were subjected to reverse QSTR (rQSTR) analysis to identify structural attributes that may be the main drivers of the developmental toxicity potential of these compounds.

  3. An Integrated Approach Linking Process to Structural Modeling With Microstructural Characterization for Injection-Molded Long-Fiber Thermoplastics

    SciTech Connect (OSTI)

    Nguyen, Ba Nghiep; Bapanapalli, Satish K.; Smith, Mark T.; Kunc, Vlastimil; Frame, Barbara; Norris, Robert E.; Phelps, Jay; Tucker III, Charles L.; Jin, Xiaoshi; Wang, Jin

    2008-09-01

    The objective of our work is to enable the optimum design of lightweight automotive structural components using injection-molded long fiber thermoplastics (LFTs). To this end, an integrated approach that links process modeling to structural analysis with experimental microstructural characterization and validation is developed. First, process models for LFTs are developed and implemented into processing codes (e.g. ORIENT, Moldflow) to predict the microstructure of the as-formed composite (i.e. fiber length and orientation distributions). In parallel, characterization and testing methods are developed to obtain necessary microstructural data to validate process modeling predictions. Second, the predicted LFT composite microstructure is imported into a structural finite element analysis by ABAQUS to determine the response of the as-formed composite to given boundary conditions. At this stage, constitutive models accounting for the composite microstructure are developed to predict various types of behaviors (i.e. thermoelastic, viscoelastic, elastic-plastic, damage, fatigue, and impact) of LFTs. Experimental methods are also developed to determine material parameters and to validate constitutive models. Such a process-linked-structural modeling approach allows an LFT composite structure to be designed with confidence through numerical simulations. Some recent results of our collaborative research will be illustrated to show the usefulness and applications of this integrated approach.

  4. Seismic Soil-Structure Interaction Analyses of a Deeply Embedded Model Reactor – SASSI Analyses

    SciTech Connect (OSTI)

    Nie, J.; Braverman, J.; Costantino, M.

    2013-10-31

    This report summarizes the SASSI analyses of a deeply embedded reactor model performed by BNL and CJC and Associates, as part of the seismic soil-structure interaction (SSI) simulation capability project for the NEAMS (Nuclear Energy Advanced Modeling and Simulation) Program of the Department of Energy. The SASSI analyses included three cases: 0.2 g, 0.5 g, and 0.9 g, all of which refer to nominal peak accelerations at the top of the bedrock. The analyses utilized the modified subtraction method (MSM) for performing the seismic SSI evaluations. Each case consisted of two analyses: input motion in one horizontal direction (X) and input motion in the vertical direction (Z), both of which utilized the same in-column input motion. Besides providing SASSI results for use in comparison with the time domain SSI results obtained using the DIABLO computer code, this study also leads to the recognition that the frequency-domain method should be modernized so that it can better serve its mission-critical role for analysis and design of nuclear power plants.

  5. Modeling precursor diffusion and reaction of atomic layer deposition in porous structures

    SciTech Connect (OSTI)

    Keuter, Thomas, E-mail: t.keuter@fz-juelich.de; Menzler, Norbert Heribert; Mauer, Georg; Vondahlen, Frank; Vaßen, Robert; Buchkremer, Hans Peter [Forschungszentrum Jülich, Institute of Energy and Climate Research (IEK-1), 52425 Jülich (Germany)]

    2015-01-01

    Atomic layer deposition (ALD) is a technique for depositing thin films of materials with a precise thickness control and uniformity using the self-limitation of the underlying reactions. Usually, it is difficult to predict the result of the ALD process for given external parameters, e.g., the precursor exposure time or the size of the precursor molecules. Therefore, a deeper insight into ALD by modeling the process is needed to improve process control and to achieve more economical coatings. In this paper, a detailed, microscopic approach based on the model developed by Yanguas-Gil and Elam is presented and additionally compared with the experiment. Precursor diffusion and second-order reaction kinetics are combined to identify the influence of the porous substrate's microstructural parameters and the influence of precursor properties on the coating. The thickness of the deposited film is calculated for different depths inside the porous structure in relation to the precursor exposure time, the precursor vapor pressure, and other parameters. Good agreement with experimental results was obtained for ALD zirconium dioxide (ZrO2) films using the precursors tetrakis(ethylmethylamido)zirconium and O2. The derivation can be adjusted to describe other features of ALD processes, e.g., precursor and reactive site losses, different growth modes, pore size reduction, and surface diffusion.
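
    The coupled diffusion-reaction picture above can be sketched with a simple explicit finite-difference model. This is an illustrative toy in the spirit of the Yanguas-Gil and Elam approach, with dimensionless, hypothetical parameters rather than the paper's actual formulation:

```python
import numpy as np

def ald_coverage(n_x=50, n_t=5_000, D=1.0, k=10.0, c_in=1.0, length=1.0):
    """Minimal 1-D sketch of ALD in a pore: precursor diffusion coupled to
    consumption proportional to concentration and remaining open surface
    sites (second-order kinetics). All parameters are dimensionless and
    hypothetical."""
    dx = length / n_x
    dt = 0.2 * dx**2 / D                      # explicit-scheme stability limit
    c = np.zeros(n_x)                         # precursor concentration along the pore
    theta = np.zeros(n_x)                     # fractional surface coverage
    for _ in range(n_t):
        c[0] = c_in                           # pore mouth held at the feed concentration
        lap = np.zeros(n_x)
        lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        lap[-1] = (c[-2] - c[-1]) / dx**2     # closed pore end: no-flux boundary
        react = k * c * (1.0 - theta)         # self-limiting surface reaction
        c = np.maximum(c + dt * (D * lap - react), 0.0)  # keep concentration physical
        theta = np.minimum(theta + dt * react, 1.0)
    return theta

theta = ald_coverage()                        # coverage profile vs. depth
```

    With longer exposure (more time steps) the coverage front advances toward the closed pore end, which is the qualitative behavior the full model quantifies against exposure time and vapor pressure.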

  6. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    SciTech Connect (OSTI)

    Tarifeño-Saldivia, Ariel, E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo; Center for Research and Applications in Plasma Physics and Pulsed Power, P4, Santiago; Departamento de Ciencias Físicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Republica 220, Santiago; Mayer, Roberto E.

    2014-01-15

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
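
    The charge-to-counts step of the methodology can be illustrated as follows. The gamma-distributed calibration charges and the simple uncertainty formula are assumptions for the sketch, not the paper's full statistical model:

```python
import numpy as np

def estimate_burst_events(total_charge, single_event_charges):
    """Estimate the number of detected neutrons in a burst from the
    accumulated charge, using single-event charges recorded during a
    pulse-mode calibration (a simplified sketch of the statistical model)."""
    q_mean = np.mean(single_event_charges)
    q_var = np.var(single_event_charges)
    n_hat = total_charge / q_mean
    # Approximate relative uncertainty: spread of the single-event charge
    # distribution plus Poisson counting statistics.
    rel_unc = np.sqrt((q_var / q_mean**2) / n_hat + 1.0 / n_hat)
    return n_hat, n_hat * rel_unc

# Hypothetical calibration: 10,000 single-neutron pulses, mean charge 2.0 (arb. units)
rng = np.random.default_rng(0)
cal = rng.gamma(shape=4.0, scale=0.5, size=10_000)
n, sigma = estimate_burst_events(total_charge=1.0e4, single_event_charges=cal)
```

    The key idea carried over from the paper is that the counter need only be calibrated event-by-event in pulse mode; the burst itself is measured as integrated charge.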
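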

  7. Effect of Divalent Cation Removal on the Structure of Gram-Negative Bacterial Outer Membrane Models

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Clifton, Luke A.; Skoda, Maximilian W. A.; Le Brun, Anton P.; Ciesielski, Filip; Kuzmenko, Ivan; Holt, Stephen A.; Lakey, Jeremy H.

    2014-12-09

    The Gram-negative bacterial outer membrane (GNB-OM) is asymmetric in its lipid composition with a phospholipid-rich inner leaflet and an outer leaflet predominantly composed of lipopolysaccharides (LPS). LPS are polyanionic molecules, with numerous phosphate groups present in the lipid A and core oligosaccharide regions. The repulsive forces due to accumulation of the negative charges are screened and bridged by the divalent cations (Mg2+ and Ca2+) that are known to be crucial for the integrity of the bacterial OM. Indeed, chelation of divalent cations is a well-established method to permeabilize Gram-negative bacteria such as Escherichia coli. Here, we use X-ray and neutron reflectivity (XRR and NR, respectively) techniques to examine the role of calcium ions in the stability of a model GNB-OM. Using XRR we show that Ca2+ binds to the core region of the rough mutant LPS (RaLPS) films, producing more ordered structures in comparison to divalent cation free monolayers. Using recently developed solid-supported models of the GNB-OM, we study the effect of calcium removal on the asymmetry of DPPC:RaLPS bilayers. We show that without the charge screening effect of divalent cations, the LPS is forced to overcome the thermodynamically unfavorable energy barrier and flip across the hydrophobic bilayer to minimize the repulsive electrostatic forces, resulting in about 20% mixing of LPS and DPPC between the inner and outer bilayer leaflets. These results reveal for the first time the molecular details behind the well-known mechanism of outer membrane stabilization by divalent cations. This confirms the relevance of the asymmetric models for future studies of outer membrane stability and antibiotic penetration.

  9. Implications of Model Structure and Detail for Utility Planning. Scenario Case Studies using the Resource Planning Model

    SciTech Connect (OSTI)

    Mai, Trieu; Barrows, Clayton; Lopez, Anthony; Hale, Elaine; Dyson, Mark; Eurek, Kelly

    2015-04-23

    We examine how model investment decisions change under different model configurations and assumptions related to renewable capacity credit, the inclusion or exclusion of operating reserves, dispatch period sampling, transmission power flow modeling, renewable spur line costs, and the ability of a planning region to import and export power. For all modeled scenarios, we find that under market conditions where new renewable deployment is predominantly driven by renewable portfolio standards, model representations of wind and solar capacity credit and interactions between balancing areas are most influential in avoiding model investments in excess thermal capacity. We also compare computation time between configurations to evaluate tradeoffs between computational burden and model accuracy. From this analysis, we find that certain advanced dispatch representations (e.g., DC optimal power flow) can have dramatic adverse effects on computation time but can be largely inconsequential to model investment outcomes, at least at the renewable penetration levels modeled. Finally, we find that certain underappreciated aspects of new capacity investment decisions and model representations thereof, such as spur lines for new renewable capacity, can influence model outcomes particularly in the renewable technology and location chosen by the model. Though this analysis is not comprehensive and results are specific to the model region, input assumptions, and optimization-modeling framework employed, the findings are intended to provide a guide for model improvement opportunities.

  10. New Methodology for Estimating Fuel Economy by Vehicle Class

    SciTech Connect (OSTI)

    Chin, Shih-Miao; Dabbs, Kathryn; Hwang, Ho-Ling

    2011-01-01

    This effort supported the Office of Highway Policy Information in developing a new methodology to generate annual estimates of average fuel efficiency and the number of motor vehicles registered by vehicle class for Table VM-1 of the Highway Statistics annual publication. This paper describes the new methodology developed under this effort and compares the results of the existing manual method and the new systematic approach. The methodology takes a two-step approach. First, preliminary fuel efficiency rates are estimated based on vehicle stock models for different classes of vehicles. Then, a reconciliation model is used to adjust the initial fuel consumption rates from the vehicle stock models and match the VMT information for each vehicle class and the reported total fuel consumption. This reconciliation model utilizes a systematic approach that produces documentable and reproducible results. The basic framework utilizes a mathematical programming formulation to minimize the deviations between the fuel economy estimates published in the previous year's Highway Statistics and the results from the vehicle stock models, subject to the constraint that fuel consumption for the different vehicle classes must sum to the total fuel consumption estimate published in Table MF-21 of the current year's Highway Statistics. The results generated from this new approach provide a smoother time series for fuel economies by vehicle class. The approach also utilizes the most up-to-date and best available data with sound econometric models to generate MPG estimates by vehicle class.
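
    The reconciliation step can be sketched in miniature. With equal weights and a single linear (total-fuel) constraint, the least-squares program has a closed-form solution; the vehicle classes and numbers below are hypothetical, and the published method may weight classes differently:

```python
import numpy as np

def reconcile_mpg(vmt, mpg_initial, total_fuel):
    """Adjust per-class fuel consumption so it sums to the reported total,
    staying as close as possible (equal-weight least squares) to the
    vehicle-stock-model estimates -- a simplified sketch of the
    reconciliation model."""
    fuel0 = vmt / mpg_initial                 # initial fuel estimates by class
    shift = (total_fuel - fuel0.sum()) / len(fuel0)
    fuel = fuel0 + shift                      # closed-form least-squares adjustment
    return vmt / fuel                         # reconciled MPG by class

vmt = np.array([1.0e12, 3.0e11, 2.0e11])      # annual miles by class (hypothetical)
mpg0 = np.array([24.0, 17.0, 6.5])            # stock-model MPG estimates (hypothetical)
mpg = reconcile_mpg(vmt, mpg0, total_fuel=9.0e10)
```

    After reconciliation, per-class fuel consumption sums exactly to the Table MF-21 total while each class's MPG moves as little as possible from its stock-model estimate.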

  11. Seismic Fracture Characterization Methodologies for Enhanced Geothermal Systems

    Office of Scientific and Technical Information (OSTI)

    The overall objective of this work was the development of surface and borehole seismic methodologies, using both compressional and shear waves, for characterizing faults and fractures in Enhanced Geothermal Systems.

  12. Methodology for Validating Building Energy Analysis Simulations

    SciTech Connect (OSTI)

    Judkoff, R.; Wortman, D.; O'Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  13. Siting Methodologies for Hydrokinetics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Report that provides an overview of the federal and state regulatory framework for hydrokinetic projects (sitinghandbook2009.pdf).

  14. Development of Nonlinear SSI Time Domain Methodology

    Broader source: Energy.gov [DOE]

    Development of Nonlinear SSI Time Domain Methodology Justin Coleman, P.E. Nuclear Science and Technology Idaho National Laboratory October 22, 2014

  15. Solutia: Massachusetts Chemical Manufacturer Uses SECURE Methodology...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Solutia: Massachusetts Chemical Manufacturer Uses SECURE Methodology to Identify Potential Reductions in Utility and Process Energy Consumption. This case ...

  16. Model-independent X-ray standing wave analysis of periodic multilayer structures

    SciTech Connect (OSTI)

    Yakunin, S. N.; Pashaev, E. M.; Subbotin, I. A.; Makhotkin, I. A.; Kruijs, R. W. E. van de; Zoethout, E.; Chuev, M. A.; Louis, E.; Seregin, S. Yu.; Novikov, D. V.; Bijkerk, F.; Kovalchuk, M. V.

    2014-04-07

    We present a model-independent approach for the analysis of X-ray fluorescence yield modulated by an X-ray standing wave (XSW) that allows fast reconstruction of the atomic distribution function inside a sample without a fitting procedure. The approach is based on the direct regularized solution of the system of linear equations that characterizes the fluorescence yield. The suggested technique was optimized for, but is not limited to, the analysis of periodic layered structures where the XSW is formed under Bragg conditions. The developed approach was applied to the reconstruction of the atomic distribution function for LaN/BN multilayers with 50 periods of 43 Å thick layers. The object is especially difficult to analyze with traditional methods, as the estimated thickness of the interface region between the constituent materials is comparable to the individual layer thicknesses. Nevertheless, using the suggested technique it was possible to reconstruct the width of the La atomic distribution, showing that the La atoms stay localized within the LaN layers and interfaces and do not diffuse into the BN layer. The analysis of the reconstructed profiles showed that the position of the center of the atomic distribution function can be estimated with an accuracy of 1 Å.
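
    A direct regularized solution of a linear yield system can be sketched as a Tikhonov solve. The yield matrix below is purely illustrative; in a real XSW analysis its rows would hold the standing-wave intensity at each depth for each measurement point:

```python
import numpy as np

def solve_regularized(A, y, lam=1e-2):
    """Direct Tikhonov-regularized solution of A @ x = y, linking the
    measured fluorescence yield y to a discretized atomic distribution x.
    Uses an identity regularizer; a finite-difference smoothing operator
    is a common alternative."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
A = rng.random((40, 10))                  # hypothetical yield kernel
x_true = rng.random(10)                   # "true" atomic distribution
x_rec = solve_regularized(A, A @ x_true, lam=1e-6)
```

    Because the solution is a single linear solve, no iterative model fitting is required, which is the practical advantage the abstract highlights.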

  17. Magnetic and Structural Design of a 15 T Nb3Sn Accelerator Dipole Model

    SciTech Connect (OSTI)

    Kashikhin, V. V.; Andreev, N.; Barzi, E.; Novitski, I.; Zlobin, A. V.

    2015-01-01

    Hadron Colliders (HC) are the most powerful discovery tools in modern high energy physics. A 100 TeV scale HC with a nominal operation field of at least 15 T is being considered for the post-LHC era. The choice of a 15 T nominal field requires using the Nb3Sn technology. Practical demonstration of this field level in an accelerator-quality magnet and substantial reduction of the magnet costs are the key conditions for realization of such a machine. FNAL has started the development of a 15 T Nb3Sn dipole demonstrator for a 100 TeV scale HC. The magnet design is based on 4-layer shell type coils, graded between the inner and outer layers to maximize the performance. The experience gained during the 11-T dipole R&D campaign is applied to different aspects of the magnet design. This paper describes the magnetic and structural designs and parameters of the 15 T Nb3Sn dipole and the steps towards the demonstration model.

  18. Methodology for extracting local constants from petroleum cracking flows

    DOE Patents [OSTI]

    Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.

    2000-01-01

    A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer code are used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
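
    Step (4), adjusting kinetic constants until calculated yields match measurements, can be sketched with a toy first-order yield model standing in for the coupled CFD/kinetics code; the model, data, and grid search below are hypothetical:

```python
import numpy as np

def extract_rate_constant(times, yields, k_grid):
    """Adjust a single kinetic constant k so the calculated product yield
    y(t) = 1 - exp(-k t) best matches measured data (least squares over a
    grid). A real application would evaluate the coupled CFD/kinetics code
    instead of this toy first-order model."""
    errors = [np.sum((1.0 - np.exp(-k * times) - yields) ** 2) for k in k_grid]
    return k_grid[int(np.argmin(errors))]

times = np.array([0.5, 1.0, 2.0, 4.0])        # hypothetical residence times
data = 1.0 - np.exp(-0.8 * times)             # synthetic "measurements" (k_true = 0.8)
k_best = extract_rate_constant(times, data, np.linspace(0.1, 2.0, 191))
```

    Because the extracted constant absorbs local fluid-dynamic effects, it is valid for the reactor configuration it was fitted to, which mirrors the "local" qualifier in the patent.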

  19. Solving coiled-coil protein structures

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Dauter, Zbigniew

    2015-02-26

    With the availability of more than 100,000 entries stored in the Protein Data Bank (PDB) that can be used as search models, molecular replacement (MR) is currently the most popular method of solving crystal structures of macromolecules. Significant methodological efforts have been directed in recent years towards making this approach more powerful and practical. This resulted in the creation of several computer programs, highly automated and user friendly, that are able to successfully solve many structures even by researchers who, although interested in structures of biomolecules, are not very experienced in crystallography.

  20. Yield Line Evaluation Methodology for Reinforced Concrete Structures

    Energy Science and Technology Software Center (OSTI)

    1998-12-30

    Yield line theory is an analytical technique that can be used to determine the ultimate bending capacity of flat reinforced concrete plates subject to distributed and concentrated loadings. Alternately, yield line theory combined with rotation limits can be used to determine the energy absorption capacity of plates subject to impulsive and impact loadings. Typical components analyzed by yield line theory are basemats, floor and roof slabs subject to vertical loads, along with walls subject to out-of-plane loadings. One limitation of yield line theory is that it is computationally difficult to evaluate some mechanisms. This problem is aggravated by the complex geometry and reinforcing layouts commonly found in practice. The program has the capability to either evaluate a single user-defined mechanism or to iterate over a range of mechanisms to determine the minimum ultimate capacity. The program is verified by comparison to a series of yield line mechanisms with known solutions.
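
    The iterate-over-mechanisms-and-take-the-minimum logic can be sketched as follows. The two candidate mechanism formulas are hypothetical placeholders, not the program's actual mechanism library:

```python
def minimum_capacity(mechanisms, moment_capacity):
    """Iterate over candidate yield-line mechanisms and keep the one giving
    the lowest ultimate load -- the governing mechanism, since yield line
    solutions are upper bounds on capacity."""
    return min(m(moment_capacity) for m in mechanisms)

L = 4.0      # slab side, m (hypothetical)
m_p = 20.0   # plastic moment capacity, kN*m/m (hypothetical)

# Two hypothetical candidate mechanisms, each mapping moment capacity to an
# ultimate distributed load w_u for this slab:
mechanisms = [lambda m: 24.0 * m / L**2, lambda m: 8.0 * m / L**2]
w_u = minimum_capacity(mechanisms, m_p)
```

    Sweeping a parameterized family of mechanisms in the same way is what makes the exhaustive search tractable in software where hand evaluation is not.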

  1. A High-Performance Embedded Hybrid Methodology for Uncertainty Quantification With Applications

    SciTech Connect (OSTI)

    Iaccarino, Gianluca

    2014-04-01

    Multiphysics processes modeled by a system of unsteady differential equations are naturally suited for partitioned (modular) solution strategies. We consider such a model where probabilistic uncertainties are present in each module of the system and represented as a set of random input parameters. A straightforward approach to quantifying uncertainties in the predicted solution would be to sample all the input parameters as a single set and treat the full system as a black box. Although this method is easily parallelizable and requires minimal modifications to the deterministic solver, it is blind to the modular structure of the underlying multiphysical model. On the other hand, spectral representations such as polynomial chaos expansions (PCE) can provide richer structural information regarding the dynamics of these uncertainties as they propagate from the inputs to the predicted output, but can be prohibitively expensive to implement in the high-dimensional global space of uncertain parameters. Therefore, we investigated hybrid methodologies wherein each module has the flexibility of using sampling- or PCE-based methods to capture local uncertainties while maintaining accuracy in the global uncertainty analysis. For the latter case, we use a conditional PCE model which mitigates the curse of dimensionality associated with intrusive Galerkin or semi-intrusive pseudospectral methods. After formalizing the theoretical framework, we demonstrate our proposed method using a numerical viscous flow simulation and benchmark the performance against a solely Monte Carlo method and a solely spectral method.
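
    The black-box baseline described above (sampling all uncertain inputs jointly and pushing them through the coupled system as one unit) can be sketched as follows; the two-module toy problem is purely illustrative:

```python
import numpy as np

def monte_carlo_uq(model, sample_inputs, n=10_000, seed=0):
    """Black-box uncertainty propagation: draw all uncertain inputs jointly
    and evaluate the coupled model, ignoring its modular structure -- the
    baseline the hybrid sampling/PCE approach improves upon."""
    rng = np.random.default_rng(seed)
    outputs = np.array([model(sample_inputs(rng)) for _ in range(n)])
    return outputs.mean(), outputs.std()

# Toy two-module system: module A computes u = a + b, module B computes y = u**2,
# with a, b ~ N(0, 1) as the uncertain inputs.
mean, std = monte_carlo_uq(
    model=lambda p: (p[0] + p[1]) ** 2,
    sample_inputs=lambda rng: rng.normal(0.0, 1.0, size=2),
)
```

    The hybrid idea is to replace this single global sampler with per-module representations (sampling or PCE), so each module's local uncertainty is captured with the cheapest adequate method.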

  2. Molecular simulation of structure and diffusion at smectite-water interfaces: Using expanded clay interlayers as model nanopores

    SciTech Connect (OSTI)

    Greathouse, Jeffery A.; Hart, David; Bowers, Geoffrey M.; Kirkpatrick, R. James; Cygan, Randall Timothy

    2015-07-20

    In geologic settings relevant to a number of extraction and potential sequestration processes, nanopores bounded by clay mineral surfaces play a critical role in the transport of aqueous species. Solution structure and dynamics at clay–water interfaces are quite different from their bulk values, and the spatial extent of this disruption remains a topic of current interest. We have used molecular dynamics simulations to investigate the structure and diffusion of aqueous solutions in clay nanopores approximately 6 nm thick, comparing the effect of clay composition with model Na-hectorite and Na-montmorillonite surfaces. In addition to structural properties at the interface, water and ion diffusion coefficients were calculated within each aqueous layer at the interface, as well as in the central bulk-like region of the nanopore. The results show similar solution structure and diffusion properties at each surface, with subtle differences in sodium adsorption complexes and water structure in the first adsorbed layer due to different arrangements of layer hydroxyl groups in the two clay models. Interestingly, the extent of surface disruption on bulk-like solution structure and diffusion extends to only a few water layers. Additionally, a comparison of sodium ion residence times confirms similar behavior of inner-sphere and outer-sphere surface complexes at each clay surface, but ~1% of sodium ions adsorb in ditrigonal cavities on the hectorite surface. Thus, the presence of these anhydrous ions is consistent with highly immobile anhydrous ions seen in previous nuclear magnetic resonance spectroscopic measurements of hectorite pastes.
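
    The per-layer water and ion diffusion coefficients mentioned above are typically extracted from mean-squared displacements. Below is a generic sketch via the Einstein relation, validated on a synthetic Brownian trajectory; it is illustrative post-processing, not the authors' analysis code:

```python
import numpy as np

def diffusion_from_msd(positions, dt):
    """Estimate a diffusion coefficient from an unwrapped trajectory via the
    Einstein relation MSD(t) = 6 D t in three dimensions."""
    disp = positions - positions[0]                    # displacement from frame 0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)         # average over atoms
    t = np.arange(len(msd)) * dt
    slope = np.polyfit(t[1:], msd[1:], 1)[0]           # linear fit, skip t = 0
    return slope / 6.0

# Synthetic Brownian trajectory with known D = 1 (arbitrary units)
rng = np.random.default_rng(2)
dt, d_true = 0.01, 1.0
steps = rng.normal(0.0, np.sqrt(2.0 * d_true * dt), size=(1999, 500, 3))
traj = np.concatenate([np.zeros((1, 500, 3)), steps.cumsum(axis=0)])
d_est = diffusion_from_msd(traj, dt)
```

    In a layered analysis like the one above, the same estimator is applied separately to atoms binned by their distance from the clay surface.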

  4. Structural Model of the Basement in the Central Savannah River Area, South Carolina and Georgia

    SciTech Connect (OSTI)

    Stephenson, D. [Westinghouse Savannah River Company, AIKEN, SC (United States); Stieve, A.

    1992-03-01

    Interpretation of several generations of seismic reflection data and potential field data suggests the presence of several crustal blocks within the basement beneath the Coastal Plain in the Central Savannah River Area (CSRA). The seismic reflection and refraction data include a grid of profiles that capture shallow and deep reflection events and traverse the Savannah River Site and vicinity. Potential field data include aeromagnetic and ground magnetic surveys, and reconnaissance and detailed gravity surveys. Subsurface data from recovered core are used to constrain the model. Interpretation of these data characteristically indicate a southeast-dipping basement surface with some minor highs and lows suggesting an erosional pre-Cretaceous unconformity. This surface is interrupted by several basement faults, most of which offset only early Cretaceous sedimentary horizons overlying the erosional surface. The oldest fault is perhaps late Paleozoic because it is truncated at the basement/Coastal Plain interface. This fault is related in timing and mechanism to the underlying Augusta fault. The youngest faults deform Coastal Plain sediments of at least Priabonian age (40-36.6 Ma). One of these young faults is the Pen Branch fault, identified as the southeast-dipping master fault for the Triassic Dunbarton basin. All the Cenozoic faults are probably related in time and mechanism to the nearby, well-studied Belair fault. The study area thus contains a set of structures evolved from the Alleghanian orogeny through Mesozoic extension to Cenozoic readjustment of the crust. There is a metamorphosed crystalline terrane with several reflector/fault packages, a reactivated Triassic basin, a mafic terrane separating the Dunbarton basin from the large South Georgia basin to the southeast, and an overprint of reverse faults, some reactivated, and some newly formed.

  5. Aeroelastic Modeling of Offshore Turbines and Support Structures in Hurricane-Prone Regions (Poster)

    SciTech Connect (OSTI)

    Damiani, R.

    2014-03-01

    US offshore wind turbines (OWTs) will likely have to contend with hurricanes and the associated loading conditions. Current industry standards do not account for these design load cases (DLCs); thus, a new approach is required to guarantee that the OWTs achieve an appropriate level of reliability. In this study, a sequentially coupled aero-hydro-servo-elastic modeling technique was used to address two design approaches: 1.) The ABS (American Bureau of Shipping) approach; and 2.) The Hazard Curve or API (American Petroleum Institute) approach. The former employs IEC partial load factors (PSFs) and 100-yr return-period (RP) metocean events. The latter allows setting PSFs and RP to a prescribed level of system reliability. The 500-yr RP robustness check (appearing in [2] and [3] upcoming editions) is a good indicator of the target reliability for L2 structures. CAE tools such as NREL's FAST and Bentley's SACS (offshore analysis and design software) can be efficiently coupled to simulate system loads under hurricane DLCs. For this task, we augmented the latest FAST version (v. 8) to include tower aerodynamic drag that cannot be ignored in hurricane DLCs. In this project, a 6 MW turbine was simulated on a typical 4-legged jacket for a mid-Atlantic site. FAST-calculated tower base loads were fed to SACS at the interface level (transition piece); SACS added hydrodynamic and wind loads on the exposed substructure, and calculated mudline overturning moments, and member and joint utilization. Results show that CAE tools can be effectively used to compare design approaches for the design of OWTs in hurricane regions and to achieve a well-balanced design, where reliability levels and costs are optimized.

  6. SEP Request for Approval Form 4 - Alternative Adjustment Model...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Adjustment Model Application Methodology SEP Request for Approval Form 4 - Alternative Adjustment Model Application Methodology SEP-Request-for-Approval-Form-4Alternati...

  7. Considerations for realistic ECCS evaluation methodology for LWRs

    SciTech Connect (OSTI)

    Rohatgi, U.S.; Saha, P.; Chexal, V.K.

    1985-01-01

    This paper identifies the various phenomena which govern the course of large and small break LOCAs in LWRs, and affect the key parameters such as Peak Clad Temperature (PCT) and timing of the end of blowdown, beginning of reflood, PCT, and complete quench. A review of the best-estimate models and correlations for these phenomena in the current literature has been presented. Finally, a set of models has been recommended which may be incorporated in a present best-estimate code such as TRAC or RELAP5 in order to develop a realistic ECCS evaluation methodology for future LWRs; these models have also been compared with the requirements of the current ECCS evaluation methodology as outlined in Appendix K of 10CFR50. 58 refs.

  8. Risk-Based Sensor Placement Methodology - Energy Innovation Portal

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Energy Analysis Energy Analysis Advanced Materials Advanced Materials Find More Like This Return to Search Risk-Based Sensor Placement Methodology Providing Optimal Monitoring of Hazardous Releases Oak Ridge National Laboratory Contact ORNL About This Technology Technology Marketing Summary Current methods for sensor placement are based on qualitative approaches ranging from "best guess" to expensive, customized studies. Description Scientists at ORNL have developed a model for

  9. Photovoltaic module energy rating methodology development

    SciTech Connect (OSTI)

    Kroposki, B.; Myers, D.; Emery, K.; Mrig, L.; Whitaker, C.; Newmiller, J.

    1996-05-01

    A consensus-based methodology to calculate the energy output of a PV module will be described in this paper. The methodology develops a simple measure of PV module performance that provides for a realistic estimate of how a module will perform in specific applications. The approach makes use of the weather data profiles that describe conditions throughout the United States and emphasizes performance differences between various module types. An industry-representative Technical Review Committee has been assembled to provide feedback and guidance on the strawman and final approach used in developing the methodology.

  10. Covariance Evaluation Methodology for Neutron Cross Sections

    SciTech Connect (OSTI)

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are Atlas of Neutron Resonances, nuclear reaction code EMPIRE, and the Bayesian code implementing Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.

  11. Relative Hazard and Risk Measure Calculation Methodology

    SciTech Connect (OSTI)

    Stenner, Robert D.; Strenge, Dennis L.; Elder, Matthew S.

    2004-03-20

    The relative hazard (RH) and risk measure (RM) methodology and computer code is a health risk-based tool designed to allow managers and environmental decision makers the opportunity to readily consider human health risks (i.e., public and worker risks) in their screening-level analysis of alternative cleanup strategies. Environmental management decisions involve consideration of costs, schedules, regulatory requirements, health hazards, and risks. The RH-RM tool is a risk-based environmental management decision tool that allows managers to predict and track health hazards and risks over time as they change in relation to mitigation and cleanup actions. Analysis of the hazards and risks associated with planned mitigation and cleanup actions provides a baseline against which alternative strategies can be compared. This new tool allows managers to explore “what if scenarios,” to better understand the impact of alternative mitigation and cleanup actions (i.e., alternatives to the planned actions) on health hazards and risks. It also allows managers to screen alternatives on the basis of human health risk and compare the results with cost and other factors pertinent to the decision. Once an alternative or a narrow set of alternatives is selected, it will then be more cost-effective to perform the detailed risk analysis necessary for programmatic and regulatory acceptance of the selected alternative. The RH-RM code has been integrated into the PNNL-developed Framework for Risk Analysis In Multimedia Environmental Systems (FRAMES) to allow the input and output data of the RH-RM code to be readily shared with more comprehensive risk analysis models, such as the PNNL-developed Multimedia Environmental Pollutant Assessment System (MEPAS) model.

  12. Developing regionalized models of lithospheric thickness and velocity structure across Eurasia and the Middle East from jointly inverting P-wave and S-wave receiver functions with Rayleigh wave group and phase velocities

    SciTech Connect (OSTI)

    Julia, J; Nyblade, A; Hansen, S; Rodgers, A; Matzel, E

    2009-07-06

    In this project, we are developing models of lithospheric structure for a wide variety of tectonic regions throughout Eurasia and the Middle East by regionalizing 1D velocity models obtained by jointly inverting P-wave and S-wave receiver functions with Rayleigh wave group and phase velocities. We expect the regionalized velocity models will improve our ability to predict travel-times for local and regional phases, such as Pg, Pn, Sn and Lg, as well as travel-times for body-waves at upper mantle triplication distances in both seismic and aseismic regions of Eurasia and the Middle East. We anticipate the models will help inform and strengthen ongoing and future efforts within the NNSA labs to develop 3D velocity models for Eurasia and the Middle East, and will assist in obtaining model-based predictions where no empirical data are available and in improving locations from sparse networks using kriging. The codes needed to conduct the joint inversion of P-wave receiver functions (PRFs), S-wave receiver functions (SRFs), and dispersion velocities have already been assembled as part of ongoing research on lithospheric structure in Africa. The methodology has been tested with synthetic 'data', and case studies have been investigated with data collected at open broadband stations in South Africa. PRFs constrain the size and S-P travel-time of seismic discontinuities in the crust and uppermost mantle, SRFs constrain the size and P-S travel-time of the lithosphere-asthenosphere boundary, and dispersion velocities constrain average S-wave velocity within frequency-dependent depth-ranges. Preliminary results show that the combination yields integrated 1D velocity models local to the recording station, where the discontinuities constrained by the receiver functions are superimposed on a background velocity model constrained by the dispersion velocities. In our first year of this project we will (i) generate 1D velocity models for open broadband seismic stations in the

  13. Culture, and a Metrics Methodology for Biological Countermeasure Scenarios

    SciTech Connect (OSTI)

    Simpson, Mary J.

    2007-03-15

    With uncertain data and limited common units, the aggregation of results is not inherently obvious. Candidate methodologies discussed include statistical, analytical, and expert-based numerical approaches. Most statistical methods require large amounts of data with a random distribution of values for validity. Analytical methods predominate wherein structured data or patterns are evident and randomness is low. The analytic hierarchy process is shown to satisfy all requirements and provide a detailed method for measurement that depends on expert judgment by decision makers.

  14. NSD Methodology Report | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    NSD Methodology Report NSDMethodologyReport.pdf (4.46 MB) More Documents & Publications New Stream-reach Development (NSD) Final Report and Fact Sheet An Assessment of Energy ...

  15. Methodology for Monthly Crude Oil Production Estimates

    U.S. Energy Information Administration (EIA) Indexed Site

    Executive summary: The U.S. Energy Information Administration (EIA) relies on data from state and other federal agencies and does not currently collect survey data directly from crude oil producers. Summarizing the estimation process in terms of percent of U.S. production: * 20% is based on state agency data, including North Dakota and

  16. A design methodology for unattended monitoring systems

    SciTech Connect (OSTI)

    SMITH,JAMES D.; DELAND,SHARON M.

    2000-03-01

    The authors presented a high-level methodology for the design of unattended monitoring systems, focusing on a system to detect diversion of nuclear materials from a storage facility. The methodology is composed of seven interrelated analyses: Facility Analysis, Vulnerability Analysis, Threat Assessment, Scenario Assessment, Design Analysis, Conceptual Design, and Performance Assessment. The design of the monitoring system is iteratively improved until it meets a set of pre-established performance criteria. The methodology presented here is based on other, well-established system analysis methodologies and hence they believe it can be adapted to other verification or compliance applications. In order to make this approach more generic, however, there needs to be more work on techniques for establishing evaluation criteria and associated performance metrics. They found that defining general-purpose evaluation criteria for verifying compliance with international agreements was a significant undertaking in itself. They finally focused on diversion of nuclear material in order to simplify the problem so that they could work out an overall approach for the design methodology. However, general guidelines for the development of evaluation criteria are critical for a general-purpose methodology. A poor choice in evaluation criteria could result in a monitoring system design that solves the wrong problem.

  17. Review and evaluation of paleohydrologic methodologies

    SciTech Connect (OSTI)

    Foley, M.G.; Zimmerman, D.A.; Doesburg, J.M.; Thorne, P.D.

    1982-12-01

    A literature review was conducted to identify methodologies that could be used to interpret paleohydrologic environments. Paleohydrology is the study of past hydrologic systems or of the past behavior of an existing hydrologic system. The purpose of the review was to evaluate how well these methodologies could be applied to the siting of low-level radioactive waste facilities. The computer literature search queried five bibliographical data bases containing over five million citations of technical journals, books, conference papers, and reports. Two data-base searches (United States Geological Survey - USGS) and a manual search were also conducted. The methodologies were examined for data requirements and sensitivity limits. Paleohydrologic interpretations are uncertain because of the effects of time on hydrologic and geologic systems and because of the complexity of fluvial systems. Paleoflow determinations appear in many cases to be order-of-magnitude estimates. However, the methodologies identified in this report mitigate this uncertainty when used collectively as well as independently. That is, the data from individual methodologies can be compared or combined to corroborate hydrologic predictions. In this manner, paleohydrologic methodologies are viable tools to assist in evaluating the likely future hydrology of low-level radioactive waste sites.

  18. Modifications to toxic CUG RNAs induce structural stability, rescue mis-splicing in a myotonic dystrophy cell model and reduce toxicity in a myotonic dystrophy zebrafish model

    SciTech Connect (OSTI)

    deLorimier, Elaine; Coonrod, Leslie A.; Copperman, Jeremy; Taber, Alex; Reister, Emily E.; Sharma, Kush; Todd, Peter K.; Guenza, Marina G.; Berglund, J. Andrew

    2014-10-10

    In this study, CUG repeat expansions in the 3' UTR of dystrophia myotonica protein kinase (DMPK) cause myotonic dystrophy type 1 (DM1). As RNA, these repeats elicit toxicity by sequestering splicing proteins, such as MBNL1, into protein–RNA aggregates. Structural studies demonstrate that CUG repeats can form A-form helices, suggesting that repeat secondary structure could be important in pathogenicity. To evaluate this hypothesis, we utilized structure-stabilizing RNA modifications pseudouridine (Ψ) and 2'-O-methylation to determine if stabilization of CUG helical conformations affected toxicity. CUG repeats modified with Ψ or 2'-O-methyl groups exhibited enhanced structural stability and reduced affinity for MBNL1. Molecular dynamics and X-ray crystallography suggest a potential water-bridging mechanism for Ψ-mediated CUG repeat stabilization. Ψ modification of CUG repeats rescued mis-splicing in a DM1 cell model and prevented CUG repeat toxicity in zebrafish embryos. This study indicates that the structure of toxic RNAs has a significant role in controlling the onset of neuromuscular diseases.

  19. Modifications to toxic CUG RNAs induce structural stability, rescue mis-splicing in a myotonic dystrophy cell model and reduce toxicity in a myotonic dystrophy zebrafish model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    deLorimier, Elaine; Coonrod, Leslie A.; Copperman, Jeremy; Taber, Alex; Reister, Emily E.; Sharma, Kush; Todd, Peter K.; Guenza, Marina G.; Berglund, J. Andrew

    2014-10-10

    In this study, CUG repeat expansions in the 3' UTR of dystrophia myotonica protein kinase (DMPK) cause myotonic dystrophy type 1 (DM1). As RNA, these repeats elicit toxicity by sequestering splicing proteins, such as MBNL1, into protein–RNA aggregates. Structural studies demonstrate that CUG repeats can form A-form helices, suggesting that repeat secondary structure could be important in pathogenicity. To evaluate this hypothesis, we utilized structure-stabilizing RNA modifications pseudouridine (Ψ) and 2'-O-methylation to determine if stabilization of CUG helical conformations affected toxicity. CUG repeats modified with Ψ or 2'-O-methyl groups exhibited enhanced structural stability and reduced affinity for MBNL1. Molecular dynamics and X-ray crystallography suggest a potential water-bridging mechanism for Ψ-mediated CUG repeat stabilization. Ψ modification of CUG repeats rescued mis-splicing in a DM1 cell model and prevented CUG repeat toxicity in zebrafish embryos. This study indicates that the structure of toxic RNAs has a significant role in controlling the onset of neuromuscular diseases.

  20. Risk Assessment of Cascading Outages: Methodologies and Challenges

    SciTech Connect (OSTI)

    Vaiman, Marianna; Bell, Keith; Chen, Yousu; Chowdhury, Badrul; Dobson, Ian; Hines, Paul; Papic, Milorad; Miller, Stephen; Zhang, Pei

    2012-05-31

    This paper is a result of ongoing activity carried out by the Understanding, Prediction, Mitigation and Restoration of Cascading Failures Task Force under the IEEE Computer Analytical Methods Subcommittee (CAMS). The task force's previous papers focused on general aspects of cascading outages such as understanding, prediction, prevention and restoration from cascading failures. This is the first of two new papers, which extend this previous work to summarize the state of the art in cascading failure risk analysis methodologies and modeling tools. This paper is intended to be a reference document summarizing the state of the art in the methodologies for performing risk assessment of cascading outages caused by some initiating event(s). A risk assessment should cover the entire potential chain of cascades starting with the initiating event(s) and ending with some final condition(s). However, this is a difficult task, and heuristic approaches and approximations have been suggested. This paper discusses different approaches and suggests directions for future development of methodologies. The second paper summarizes the state of the art in modeling tools for risk assessment of cascading outages.

  1. Molecular simulation of structure and diffusion at smectite-water interfaces: Using expanded clay interlayers as model nanopores

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Greathouse, Jeffery A.; Hart, David; Bowers, Geoffrey M.; Kirkpatrick, R. James; Cygan, Randall Timothy

    2015-07-20

    In geologic settings relevant to a number of extraction and potential sequestration processes, nanopores bounded by clay mineral surfaces play a critical role in the transport of aqueous species. Solution structure and dynamics at clay–water interfaces are quite different from their bulk values, and the spatial extent of this disruption remains a topic of current interest. We have used molecular dynamics simulations to investigate the structure and diffusion of aqueous solutions in clay nanopores approximately 6 nm thick, comparing the effect of clay composition with model Na-hectorite and Na-montmorillonite surfaces. In addition to structural properties at the interface, water and ion diffusion coefficients were calculated within each aqueous layer at the interface, as well as in the central bulk-like region of the nanopore. The results show similar solution structure and diffusion properties at each surface, with subtle differences in sodium adsorption complexes and water structure in the first adsorbed layer due to different arrangements of layer hydroxyl groups in the two clay models. Interestingly, the extent of surface disruption on bulk-like solution structure and diffusion extends to only a few water layers. Additionally, a comparison of sodium ion residence times confirms similar behavior of inner-sphere and outer-sphere surface complexes at each clay surface, but ~1% of sodium ions adsorb in ditrigonal cavities on the hectorite surface. Thus, the presence of these anhydrous ions is consistent with highly immobile anhydrous ions seen in previous nuclear magnetic resonance spectroscopic measurements of hectorite pastes.

  2. Structural analysis of three global land models on carbon cycle simulations using a traceability framework

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Rafique, R.; Xia, J.; Hararuk, O.; Luo, Y.

    2014-06-27

    Modeled carbon (C) storage capacity is largely determined by the C residence time and net primary productivity (NPP). Extensive research has been done on NPP dynamics, but the residence time and its relationship with C storage are much less studied. In this study, we implemented a traceability analysis to understand the modeled C storage and residence time in three land surface models: CSIRO's Atmosphere Biosphere Land Exchange (CABLE) with 9 C pools, Community Land Model (version 3.5) combined with Carnegie-Ames-Stanford Approach (CLM3.5-CASA) with 12 C pools, and Community Land Model (version 4) (CLM4) with 26 C pools. The globally averaged C storage and residence time were computed at both individual pool and total ecosystem levels. The spatial distribution of total ecosystem C storage and residence time differs greatly among the three models. The CABLE model showed a closer agreement with measured C storage and residence time in plant and soil pools than CLM3.5-CASA and CLM4. However, CLM3.5-CASA and CLM4 were close to each other in modeled C storage but not with measured data. CABLE stores more C in roots whereas CLM3.5-CASA and CLM4 store more C in woody pools, partly due to differential NPP allocation in respective pools. The C residence time in individual C pools is greatly different among models, largely because of different transfer coefficient values among pools. CABLE had higher bulk residence time for soil C pools than the other two models. Overall, the traceability analysis used in this study can help fully characterize the behavior of complex land models.

  3. Electronic Structure And Spectroscopy of 'Superoxidized' Iron Centers in Model Systems: Theoretical And Experimental Trends

    SciTech Connect (OSTI)

    Berry, J.F.; George, S.DeBeer; Neese, F.

    2009-05-12

    Recent advances in synthetic chemistry have led to the discovery of superoxidized iron centers with valencies Fe(V) and Fe(VI) [K. Meyer et al., J. Am. Chem. Soc., 1999, 121, 4859-4876; J. F. Berry et al., Science, 2006, 312, 1937-1941; F. T. de Oliveira et al., Science, 2007, 315, 835-838.]. Furthermore, in recent years a number of high-valent Fe(IV) species have been found as reaction intermediates in metalloenzymes and have also been characterized in model systems [C. Krebs et al., Acc. Chem. Res., 2007, 40, 484-492; L. Que, Jr, Acc. Chem. Res., 2007, 40, 493-500.]. These species are almost invariably stabilized by a highly basic ligand X{sup n-} which is either O{sup 2-} or N{sup 3-}. The differences in structure and bonding between oxo- and nitrido species as a function of oxidation state and their consequences on the observable spectroscopic properties have never been carefully assessed. Hence, fundamental differences between high-valent iron complexes having either Fe=O or Fe=N multiple bonds have been probed computationally in this work in a series of hypothetical trans-[FeO(NH{sub 3}){sub 4}OH]{sup +/2+/3+} (1-3) and trans-[FeN(NH{sub 3}){sub 4}OH]{sup 0/+/2+} (4-6) complexes. All computational properties are permeated by the intrinsically more covalent character of the Fe=N multiple bond as compared to the Fe=O bond. This difference is likely due to differences in Z* between N and O that allow for better orbital overlap to occur in the case of the Fe=N multiple bond. Spin-state energetics were addressed using elaborate multireference ab initio computations that show that all species 1-6 have an intrinsic preference for the low-spin state, except in the case of 1 in which S = 1 and S = 2 states are very close in energy. In addition to Moessbauer parameters, g-tensors, zero-field splitting and iron hyperfine couplings, X-ray absorption Fe K pre-edge spectra have been simulated using time-dependent DFT methods for the first time for a series of compounds

  4. Methodology for Estimating Solar Potential on Multiple Building Rooftops for Photovoltaic Systems

    SciTech Connect (OSTI)

    Kodysh, Jeffrey B; Omitaomu, Olufemi A; Bhaduri, Budhendra L; Neish, Bradley S

    2013-01-01

    In this paper, a methodology for estimating solar potential on multiple building rooftops is presented. The objective of this methodology is to estimate the daily or monthly solar radiation potential on individual buildings in a city/region using Light Detection and Ranging (LiDAR) data and a geographic information system (GIS) approach. Conceptually, the methodology is based on the upward-looking hemispherical viewshed algorithm, but applied using an area-based modeling approach. The methodology considers input parameters, such as surface orientation, shadowing effect, elevation, and atmospheric conditions, that influence solar intensity on the earth's surface. The methodology has been implemented for some 212,000 buildings in Knox County, Tennessee, USA. Based on the results obtained, the methodology seems to be adequate for estimating solar radiation on multiple building rooftops. The use of LiDAR data improves the radiation potential estimates in terms of the model predictive error and the spatial pattern of the model outputs. This methodology could help cities/regions interested in sustainable projects to quickly identify buildings with higher potentials for roof-mounted photovoltaic systems.

  5. Structure of AgI-doped Ge-In-S glasses: Experiment, reverse Monte Carlo modelling, and density functional calculations

    SciTech Connect (OSTI)

    Chrissanthopoulos, A.; Jovari, P.; Kaban, I.; Gruner, S.; Kavetskyy, T.; Borc, J.; Wang, W.; Ren, J.; Chen, G.; Yannopoulos, S.N.

    2012-08-15

    We report an investigation of the structure and vibrational modes of Ge-In-S-AgI bulk glasses using X-ray diffraction, EXAFS spectroscopy, Reverse Monte-Carlo (RMC) modelling, Raman spectroscopy, and density functional theoretical (DFT) calculations. The combination of these techniques made it possible to elucidate the short- and medium-range structural order of these glasses. Data interpretation revealed that the AgI-free glass structure is composed of a network where GeS{sub 4/2} tetrahedra are linked with trigonal InS{sub 3/2} units; S{sub 3/2}Ge-GeS{sub 3/2} ethane-like species linked with InS{sub 4/2}{sup -} tetrahedra form sub-structures which are dispersed in the network structure. The addition of AgI into the Ge-In-S glassy matrix causes appreciable structural changes, enriching the indium species with iodine terminal atoms. The existence of trigonal species InS{sub 2/2}I and tetrahedral units InS{sub 3/2}I{sup -} and InS{sub 2/2}I{sub 2}{sup -} is compatible with the EXAFS and RMC analysis. Their vibrational properties (harmonic frequencies and Raman activities) calculated by DFT are in very good agreement with the experimental values determined by Raman spectroscopy. Graphical abstract: Experiment (XRD, EXAFS, RMC, Raman scattering) and density functional calculations are employed to study the structure of AgI-doped Ge-In-S glasses. The role of mixed structural units as illustrated in the figure is elucidated. Highlights: • Doping Ge-In-S glasses with AgI causes significant changes in glass structure. • Experiment and DFT are combined to elucidate short- and medium-range structural order. • Indium atoms form both (InS{sub 4/2}){sup -} tetrahedra and InS{sub 3/2} planar triangles. • (InS{sub 4/2}){sup -} tetrahedra bond to (S{sub 3/2}Ge-GeS{sub 3/2}){sup 2+} ethane-like units forming neutral sub-structures. • Mixed

  6. Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry

    SciTech Connect (OSTI)

    Dilley, Lorie M.

    2015-04-13

    The purpose of this project was to: 1) evaluate the relationship between geothermal fluid processes and the compositions of the fluid inclusion gases trapped in the reservoir rocks; and 2) develop methodologies for interpreting fluid inclusion gas data in terms of the chemical, thermal and hydrological properties of geothermal reservoirs. Phase 1 of this project was designed to conduct the following: 1) model the effects of boiling, condensation, conductive cooling and mixing on selected gaseous species, using fluid compositions obtained from geothermal wells; 2) evaluate, using quantitative analyses provided by New Mexico Tech (NMT), how these processes are recorded by fluid inclusions trapped in individual crystals; and 3) determine if the results obtained on individual crystals can be applied to the bulk fluid inclusion analyses determined by Fluid Inclusion Technology (FIT). Our initial studies, however, suggested that numerical modeling of the data would be premature. We observed that the gas compositions, determined on bulk and individual samples, were not the same as those discharged by the geothermal wells. Gases discharged from geothermal wells are CO2-rich and contain low concentrations of light gases (i.e. H2, He, N, Ar, CH4). In contrast, many of our samples displayed enrichments in these light gases. Efforts were initiated to evaluate the reasons for the observed gas distributions. As a first step, we examined the potential importance of different reservoir processes using a variety of commonly employed gas ratios (e.g. Giggenbach plots). The second technical target was the development of interpretational methodologies. We have developed methodologies for the interpretation of fluid inclusion gas data, based on the results of Phase 1, geologic interpretation of fluid inclusion data, and integration of the data. These methodologies can be used in conjunction with the relevant geological and hydrological information on the system to

  7. 2010 Diffraction Methods in Structural Biology

    SciTech Connect (OSTI)

    Dr. Ana Gonzalez

    2011-03-10

    Advances in basic methodologies have played a major role in the dramatic progress in macromolecular crystallography over the past decade, both in terms of overall productivity and in the increasing complexity of the systems being successfully tackled. The 2010 Gordon Research Conference on Diffraction Methods in Structural Biology will, as in the past, focus on the most recent developments in methodology, covering all aspects of the process from crystallization to model building and refinement, complemented by examples of structural highlights and complementary methods. Extensive discussion will be encouraged and it is hoped that all attendees will participate by giving oral or poster presentations, the latter using the excellent poster display area available at Bates College. The relatively small size and informal atmosphere of the meeting provides an excellent opportunity for all participants, especially younger scientists, to meet and exchange ideas with leading methods developers.

  8. Observed and modeled patterns of covariability between low-level cloudiness and the structure of the trade-wind layer

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Nuijens, Louise; Medeiros, Brian; Sandu, Irina; Ahlgrimm, Maike

    2015-11-06

    We present patterns of covariability between low-level cloudiness and the trade-wind boundary layer structure using long-term measurements at a site representative of dynamical regimes with moderate subsidence or weak ascent. We compare these with ECMWF’s Integrated Forecast System and 10 CMIP5 models. By using single-time-step output at a single location, we find that models can produce a fairly realistic trade-wind layer structure in long-term means, but with unrealistic variability at shorter time scales. The unrealistic variability in modeled cloudiness near the lifting condensation level (LCL) is due to stronger than observed relationships with mixed-layer relative humidity (RH) and temperature stratification at the mixed-layer top. Those relationships are weak in observations, or even of opposite sign, which can be explained by a negative feedback of convection on cloudiness. Cloudiness near cumulus tops at the trade-wind inversion instead varies more pronouncedly in observations on monthly time scales, whereby larger cloudiness relates to larger surface winds and stronger trade-wind inversions. However, these parameters appear to be a prerequisite, rather than strong controlling factors on cloudiness, because they do not explain submonthly variations in cloudiness. Models underestimate the strength of these relationships and diverge in particular in their responses to large-scale vertical motion. No model stands out by reproducing the observed behavior in all respects. As a result, these findings suggest that climate models do not realistically represent the physical processes that underlie the coupling between trade-wind clouds and their environments in present-day climate, which is relevant for how we interpret modeled cloud feedbacks.

  10. Probabilistic Based Design Methodology for Solid Oxide Fuel Cell Stacks

    SciTech Connect (OSTI)

    Sun, Xin; Tartakovsky, Alexandre M.; Khaleel, Mohammad A.

    2009-05-01

    A probabilistic-based component design methodology is developed for solid oxide fuel cell (SOFC) stacks. This method takes into account the randomness in SOFC material properties as well as the stresses arising from different manufacturing and operating conditions. The purpose of this work is to provide SOFC designers with a design methodology such that a desired level of component reliability can be achieved with deterministic design functions, using an equivalent safety factor to account for the uncertainties in material properties and structural stresses. Multi-physics-based finite element analyses were used to predict the electrochemical and thermal-mechanical responses of SOFC stacks with different geometric variations and under different operating conditions. Failures in the anode and the seal were used as design examples. The predicted maximum principal stresses in the anode and the seal were compared with the experimentally determined strength characteristics for the anode and the seal, respectively. Component failure probabilities for the current design were then calculated under different operating conditions. It was found that the anode failure probability is very low under all conditions examined. The seal failure probability is relatively high, particularly for a high fuel utilization rate under low average cell temperature. Next, the procedures for calculating the equivalent safety factors for the anode and seal were demonstrated such that a uniform failure probability of the anode and seal can be achieved. Analysis procedures were also included for non-normally distributed random variables such that more realistic distributions of strength and stress can be analyzed using the proposed design methodology.
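    The stress-versus-strength comparison described above can be sketched as a classic stress–strength interference calculation. This is a minimal illustration assuming independent normal distributions; the function name and the numeric stress/strength values are made up for demonstration and are not values from the report:

    ```python
    import math

    def failure_probability(stress_mean, stress_sd, strength_mean, strength_sd):
        """P(stress > strength) for independent, normally distributed
        stress and strength.

        The margin D = strength - stress is normal with mean
        (strength_mean - stress_mean) and variance
        (stress_sd**2 + strength_sd**2); failure occurs when D < 0.
        """
        mu = strength_mean - stress_mean
        sigma = math.sqrt(stress_sd**2 + strength_sd**2)
        # Standard normal CDF evaluated at -mu/sigma, via erfc
        return 0.5 * math.erfc((mu / sigma) / math.sqrt(2))

    # Hypothetical seal example: stress 80 +/- 10 MPa vs. strength 120 +/- 15 MPa
    p_fail = failure_probability(80, 10, 120, 15)
    ```

    An equivalent safety factor can then be chosen so that a deterministic check (stress times factor below mean strength) yields the same target failure probability.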

  11. Critical infrastructure systems of systems assessment methodology.

    SciTech Connect (OSTI)

    Sholander, Peter E.; Darby, John L.; Phelan, James M.; Smith, Bryan; Wyss, Gregory Dane; Walter, Andrew; Varnado, G. Bruce; Depoy, Jennifer Mae

    2006-10-01

    Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies that separately consider physical security and cyber security. This research has developed a risk assessment methodology that explicitly accounts for both physical and cyber security, while preserving the traditional security paradigm of detect, delay, and respond. This methodology also accounts for the condition that a facility may be able to recover from or mitigate the impact of a successful attack before serious consequences occur. The methodology uses evidence-based techniques (which are a generalization of probability theory) to evaluate the security posture of the cyber protection systems. Cyber threats are compared against cyber security posture using a category-based approach nested within a path-based analysis to determine the most vulnerable cyber attack path. The methodology summarizes the impact of a blended cyber/physical adversary attack in a conditional risk estimate where the consequence term is scaled by a "willingness to pay" avoidance approach.

  12. Vehicle Technologies Office Merit Review 2014: Validation of Material Models for Automotive Carbon Fiber Composite Structures

    Broader source: Energy.gov [DOE]

    Presentation given by General Motors at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about validation of material models...

  13. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    SciTech Connect (OSTI)

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  14. Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids

    SciTech Connect (OSTI)

    Forest, Mark Gregory

    2014-05-06

    The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the longtime behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.

  15. Particle Measurement Methodology: Comparison of On-road and Lab...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Measurement Methodology: Comparison of On-road and Lab Diesel Particle Size Distributions Particle Measurement Methodology: Comparison of On-road and Lab Diesel Particle Size ...

  16. Evaluation of the European PMP Methodologies Using Chassis Dynamometer...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    the European PMP Methodologies Using Chassis Dynamometer and On-road Testing of Heavy-duty Vehicles Evaluation of the European PMP Methodologies Using Chassis Dynamometer and ...

  17. Biopower Report Presents Methodology for Assessing the Value...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Report Presents Methodology for Assessing the Value of Co-Firing Biomass in Pulverized Coal Plants Biopower Report Presents Methodology for Assessing the Value of Co-Firing...

  18. Validation of Hydrogen Exchange Methodology on Molecular Sieves...

    Office of Environmental Management (EM)

    Validation of Hydrogen Exchange Methodology on Molecular Sieves for Tritium Removal from Contaminated Water Validation of Hydrogen Exchange Methodology on Molecular Sieves for ...

  19. Seismic hazard methodology for the Central and Eastern United...

    Office of Scientific and Technical Information (OSTI)

    Central and Eastern United States: Volume 1: Part 2, Methodology (Revision 1): Final report Citation Details In-Document Search Title: Seismic hazard methodology for the Central ...

  20. Seismic hazard methodology for the central and Eastern United...

    Office of Scientific and Technical Information (OSTI)

    Title: Seismic hazard methodology for the central and Eastern United States: Volume 1, Part 1: Theory: Final report The NRC staff concludes that SOGEPRI Seismic Hazard Methodology...

  1. A Proposed Methodology to Determine the Leverage Impacts of Technology...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    A Proposed Methodology to Determine the Leverage Impacts of Technology Deployment Programs 2008 A Proposed Methodology to Determine the Leverage Impacts of Technology Deployment ...

  2. Science-based MEMS reliability methodology. (Conference) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Science-based MEMS reliability methodology. Citation Details In-Document Search Title: Science-based MEMS reliability methodology. No abstract prepared. Authors: Walraven, Jeremy ...

  3. VERA Core Simulator Methodology for PWR Cycle Depletion (Conference...

    Office of Scientific and Technical Information (OSTI)

    VERA Core Simulator Methodology for PWR Cycle Depletion Citation Details In-Document Search Title: VERA Core Simulator Methodology for PWR Cycle Depletion Authors: Kochunas, ...

  4. On the UQ methodology development for storage applications. ...

    Office of Scientific and Technical Information (OSTI)

    On the UQ methodology development for storage applications. Citation Details In-Document Search Title: On the UQ methodology development for storage applications. Abstract not ...

  5. Barr Engineering Statement of Methodology Rosemount Wind Turbine...

    Energy Savers [EERE]

    Barr Engineering Statement of Methodology Rosemount Wind Turbine Simulations by Truescape Visual Reality, DOEEA-1791 (May 2010) Barr Engineering Statement of Methodology Rosemount...

  6. Systematic Comparison of Operating Reserve Methodologies: Preprint

    SciTech Connect (OSTI)

    Ibanez, E.; Krad, I.; Ela, E.

    2014-04-01

    Operating reserve requirements are a key component of modern power systems, and they contribute to maintaining reliable operations with minimum economic impact. No universal method exists for determining reserve requirements, thus there is a need for a thorough study and performance comparison of the different existing methodologies. Increasing penetrations of variable generation (VG) on electric power systems are poised to increase system uncertainty and variability, thus the need for additional reserve also increases. This paper presents background information on operating reserve and its relationship to VG. A consistent comparison of three methodologies to calculate regulating and flexibility reserve in systems with VG is performed.
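    As a rough illustration of one class of reserve-sizing methods of the kind the paper compares, the sketch below sizes flexibility reserve to cover a chosen percentile of historical net-load ramps. The function name, the synthetic net-load series, and the 95% coverage target are illustrative assumptions, not the specific methodologies studied:

    ```python
    import random

    def flexibility_reserve(net_load, horizon, coverage=0.95):
        """Reserve requirement covering a target fraction of net-load ramps.

        `net_load` is a time series of load minus VG output; ramps are
        changes over `horizon` steps. Only upward ramps require upward
        reserve, so the result is floored at zero.
        """
        ramps = [net_load[i + horizon] - net_load[i]
                 for i in range(len(net_load) - horizon)]
        ramps.sort()
        idx = min(int(coverage * len(ramps)), len(ramps) - 1)
        return max(ramps[idx], 0.0)

    random.seed(1)
    # Synthetic hourly net load; the noise stands in for VG variability
    series = [1000 + 200 * random.random() for _ in range(500)]
    req = flexibility_reserve(series, horizon=1)
    ```

    Percentile-based sizing like this is static; the methodologies compared in the paper also differ in how they condition the requirement on system state.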

  7. A surface structural approach to ion adsorption: The charge distribution (CD) model

    SciTech Connect (OSTI)

    Hiemstra, T.; Van Riemsdijk, W.H.

    1996-05-10

    Cation and anion adsorption at the solid/solution interface of metal hydroxides plays an important role in several fields of chemistry, including colloid and interface chemistry, soil chemistry and geochemistry, aquatic chemistry, environmental chemistry, catalysis, and chemical engineering. An ion adsorption model for metal hydroxides has been developed which deals with the observation that in the case of inner sphere complex formation only part of the surface complex is incorporated into the surface by a ligand exchange reaction while the other part is located in the Stern layer. The charge distribution (CD) concept of Pauling, used previously in the multi site complexation (MUSIC) model approach, is extended to account for adsorbed surface complexes. In the new model, surface complexes are not treated as point charges, but are considered as having a spatial distribution of charge in the interfacial region. The new CD model can describe within a single conceptual framework all important experimental adsorption phenomena, taking into account the chemical composition of the crystal surface. The CD model has been applied to one of the most difficult and challenging ion adsorption phenomena, i.e., PO4 adsorption on goethite, and successfully describes simultaneously the basic charging behavior of goethite, the concentration, pH, and salt dependency of adsorption, the shifts in the zeta potentials and isoelectric point (IEP), and the OH/P exchange ratio. This is all achieved within the constraint that the experimental surface speciation found from in situ IR spectroscopy is also described satisfactorily.

  8. Development of a statistically based access delay timeline methodology.

    SciTech Connect (OSTI)

    Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt

    2013-02-01

    The charter for adversarial delay is to hinder access to critical resources through the use of physical systems increasing an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating times required to complete each task with little regard to uncertainty, complexity, or decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.
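    The idea of replacing a single worst-case delay number with a probability distribution can be sketched with a toy Monte Carlo model. The lognormal task-time parameters below are hypothetical stand-ins for expert-judgment inputs, not data from the study:

    ```python
    import random

    def delay_timeline(task_params, n_samples=10000, seed=0):
        """Monte Carlo distribution of total adversary delay time.

        Each task's delay is drawn from a lognormal whose (mu, sigma)
        encode uncertainty (e.g. elicited from expert judgment); the
        total delay for one trial is the sum over sequential tasks.
        Returns sorted samples, from which any percentile can be read
        off instead of a single conservative point estimate.
        """
        rng = random.Random(seed)
        totals = []
        for _ in range(n_samples):
            totals.append(sum(rng.lognormvariate(mu, sigma)
                              for mu, sigma in task_params))
        totals.sort()
        return totals

    # Hypothetical three-barrier path; median task times are roughly
    # exp(mu) seconds (about 20 s, 55 s, and 33 s here)
    samples = delay_timeline([(3.0, 0.4), (4.0, 0.6), (3.5, 0.5)])
    median = samples[len(samples) // 2]
    ```

    The study's Bayesian treatment goes further (small samples, human factors, threat uncertainty), but the output has the same shape: a distribution of delay times that can be weighed directly against system risk.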

  9. Risk Assessment of Cascading Outages: Part I - Overview of Methodologies

    SciTech Connect (OSTI)

    Vaiman, Marianna; Bell, Keith; Chen, Yousu; Chowdhury, Badrul; Dobson, Ian; Hines, Paul; Papic, Milorad; Miller, Stephen; Zhang, Pei

    2011-07-31

    This paper is a result of ongoing activity carried out by the Understanding, Prediction, Mitigation and Restoration of Cascading Failures Task Force under the IEEE Computer Analytical Methods Subcommittee (CAMS). The task force's previous papers focused on general aspects of cascading outages such as understanding, prediction, prevention and restoration from cascading failures. This is the first of two new papers, which will extend this previous work to summarize the state of the art in cascading failure risk analysis methodologies and modeling tools. This paper is intended to be a reference document summarizing the state of the art in the methodologies for performing risk assessment of cascading outages caused by some initiating event(s). A risk assessment should cover the entire potential chain of cascades starting with the initiating event(s) and ending with some final condition(s). However, this is a difficult task, and heuristic approaches and approximations have been suggested. This paper discusses different approaches to this and suggests directions for future development of methodologies.

  10. Dynamic modeling of injection-induced fault reactivation and ground motion and impact on surface structures and human perception

    SciTech Connect (OSTI)

    Rutqvist, Jonny; Cappa, Frederic; Rinaldi, Antonio P.; Godano, Maxime

    2014-12-31

    We summarize recent modeling studies of injection-induced fault reactivation, seismicity, and its potential impact on surface structures and nuisance to the local human population. We used coupled multiphase fluid flow and geomechanical numerical modeling, dynamic wave propagation modeling, seismology theories, and empirical vibration criteria from mining and construction industries. We first simulated injection-induced fault reactivation, including dynamic fault slip, seismic source, wave propagation, and ground vibrations. From co-seismic average shear displacement and rupture area, we determined the moment magnitude to about Mw = 3 for an injection-induced fault reactivation at a depth of about 1000 m. We then analyzed the ground vibration results in terms of peak ground acceleration (PGA), peak ground velocity (PGV), and frequency content, with comparison to the U.S. Bureau of Mines vibration criteria for cosmetic damage to buildings, as well as human-perception vibration limits. For the considered synthetic Mw = 3 event, our analysis showed that the short duration, high frequency ground motion may not cause any significant damage to surface structures, and would not cause, in this particular case, upward CO2 leakage, but would certainly be felt by the local population.
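    The peak ground motion measures used in the analysis above can be illustrated with a minimal sketch: PGA is the largest absolute acceleration sample, and PGV the largest absolute velocity after integrating the record. The synthetic decaying-sinusoid record is an assumption for demonstration, not the simulated Mw = 3 event:

    ```python
    import math

    def pga_pgv(accel, dt):
        """Peak ground acceleration and velocity from an acceleration record.

        PGA is the largest absolute sample of the acceleration series;
        PGV is the largest absolute velocity obtained by trapezoidal
        integration of acceleration over time step dt.
        """
        pga = max(abs(a) for a in accel)
        vel, v = [0.0], 0.0
        for i in range(1, len(accel)):
            v += 0.5 * (accel[i - 1] + accel[i]) * dt
            vel.append(v)
        pgv = max(abs(x) for x in vel)
        return pga, pgv

    # Synthetic short, high-frequency pulse sampled at 200 Hz
    dt = 0.005
    accel = [2.0 * math.sin(2 * math.pi * 10 * i * dt) *
             math.exp(-3 * i * dt) for i in range(400)]
    pga, pgv = pga_pgv(accel, dt)
    ```

    Damage and perception criteria such as the Bureau of Mines limits are then stated as thresholds on PGV (and sometimes PGA) as a function of dominant frequency.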

  11. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    McClanahan, Richard; De Leon, Phillip L.

    2014-08-20

    The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we also reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
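    The shortlisting idea behind a tree-structured UBM can be sketched roughly as follows. This toy two-layer version, with scalar features and a hand-built tree, only illustrates the compute trade-off (score a few coarse Gaussians, then evaluate only the full components they summarize); it is not the authors' actual system:

    ```python
    import math

    def log_gauss(x, mean, var):
        """Log density of a diagonal-covariance Gaussian."""
        return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
                   for xi, m, v in zip(x, mean, var))

    def top_components(x, tree, beam=1):
        """Shortlist UBM components via a two-layer tree.

        `tree` maps each coarse-layer Gaussian (mean, var) to the indices
        of the full-UBM components it summarizes (e.g. obtained by mixture
        reduction). Only the children of the `beam` best coarse nodes are
        kept, so most full-layer density evaluations are skipped.
        """
        scored = sorted(tree, key=lambda node: -log_gauss(x, node[0], node[1]))
        shortlist = []
        for node in scored[:beam]:
            shortlist.extend(tree[node])
        return shortlist

    # Toy example: two coarse nodes, each covering two full-UBM components
    tree = {((0.0,), (1.0,)): [0, 1],
            ((5.0,), (1.0,)): [2, 3]}
    ids = top_components((4.8,), tree)  # → [2, 3]
    ```

    Widening `beam` recovers accuracy at the cost of more evaluations, which is the computation/EER trade-off the abstract describes.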

  13. Description and assessment of structural and temperature models in the FRAP-T6 code. [PWR; BWR]

    SciTech Connect (OSTI)

    Siefken, L.J.

    1983-01-01

    The FRAP-T6 code was developed at the Idaho National Engineering Laboratory (INEL) for the purpose of calculating the transient performance of light water reactor fuel rods during reactor transients ranging from mild operational transients to severe hypothetical loss-of-coolant accidents. An important application of the FRAP-T6 code is to calculate the structural performance of fuel rod cladding. The capabilities of the FRAP-T6 code are assessed by comparisons of code calculations with the measurements of several hundred in-pile experiments on fuel rods. The results of the assessments show that the code accurately and efficiently models the structural and thermal response of fuel rods.

  14. New Methodology for Natural Gas Production Estimates

    Reports and Publications (EIA)

    2010-01-01

    A new methodology is implemented with the monthly natural gas production estimates from the EIA-914 survey this month. The estimates, to be released April 29, 2010, include revisions for all of 2009. The fundamental changes in the new process include the timeliness of the historical data used for estimation and the frequency of sample updates, both of which are improved.

  15. Model Based Structural Evaluation & Design of Overpack Container for Bag-Buster Processing of TRU Waste Drums

    SciTech Connect (OSTI)

    D. T. Clark; A. S. Siahpush; G. L. Anderson

    2004-07-01

    This paper describes a materials and computational model based analysis utilized to design an engineered overpack container capable of maintaining structural integrity for confinement of transuranic wastes undergoing the cryo-vacuum stress-based Bag-Buster process and satisfying DOT 7A waste package requirements. The engineered overpack is a key component of the Ultra-BagBuster process/system being commercially developed by UltraTech International for potential DOE applications to non-intrusively breach inner confinement layers (poly bags/packaging) within transuranic (TRU) waste drums. This system provides a lower cost/risk approach to mitigate hydrogen gas concentration buildup limitations on transport of high alpha activity organic transuranic wastes. Four evolving overpack design configurations and two materials (low carbon steel and 300 series stainless) were considered and evaluated using non-linear finite element model analyses of structural response. Property comparisons show that 300-series stainless is required to provide assurance of ductility and structural integrity at both room and cryogenic temperatures. The overpack designs were analyzed for five accidental drop impact orientations onto an unyielding surface (dropped flat on bottom, bottom corner, side, top corner, and top). The first three design configurations failed the bottom and top corner drop orientations (flat bottom, top, and side plates breached or underwent material failure). The fourth design utilized a protruding rim-ring (skirt) below the overpack's bottom plate and above the overpack's lid plate to absorb much of the impact energy, and maintained structural integrity under all accidental drop loads at both room and cryogenic temperature conditions. Selected drop testing of the final design will be required to confirm design performance.

  16. Exploration and Modeling of Structural changes in Waste Glass Under Corrosion

    SciTech Connect (OSTI)

    Pantano, Carlos; Ryan, Joseph; Strachan, Denis

    2013-11-10

    Vitrification is currently the world-wide treatment of choice for the disposition of high-level nuclear wastes. In glasses, radionuclides are atomistically bonded into the solid, resulting in a highly durable product, with borosilicate glasses exhibiting particularly excellent durability in water. Considering that waste glass is designed to retain the radionuclides within the waste form for long periods, it is important to understand the long-term stability of these materials when they react in the environment, especially in the presence of water. Based on a number of previous studies, there is general consensus regarding the mechanisms controlling the initial rate of nuclear waste glass dissolution. Agreement regarding the cause of the observed decrease in dissolution rate at extended times, however, has been elusive. Two general models have been proposed to explain this behavior, and it has been concluded that both concepts are valid and must be taken into account when considering the decrease in dissolution rate. Furthermore, other processes such as water diffusion, ion exchange, and precipitation of mineral phases onto the glass surface may occur in parallel with dissolution of the glass and can influence long-term performance. Our proposed research will address these issues through a combination of aqueous-phase dissolution/reaction experiments and probing of the resulting surface layers with state-of-the-art analytical methods. These methods include solid-state nuclear magnetic resonance (SSNMR) and time-of-flight secondary ion mass spectrometry (TOF-SIMS). The resulting datasets will then be coupled with computational chemistry and reaction-rate modeling to address the most persistent uncertainties in the understanding of glass corrosion, which indeed have limited the performance of the best corrosion models to date. 
With an improved understanding of corrosion mechanisms, models can be developed and improved that, while still conservative, take advantage of

  17. Hadron structure in a simple model of quark/nuclear matter

    SciTech Connect (OSTI)

    Horowitz, C. J.; Moniz, Ernest J.; Negele, J. W.

    1985-04-01

    We study a simple model for one-dimensional hadron matter with many of the essential features needed for examining the transition from nuclear to quark matter and the limitations of models based upon hadron rather than quark degrees of freedom. The dynamics are generated entirely by the quark confining force and exchange symmetry. Using Monte Carlo techniques, the ground-state energy, single-quark momentum distribution, and quark correlation function are calculated for uniform matter as a function of density. The quark confinement scale in the medium increases substantially with increasing density. This change is evident in the correlation function and momentum distribution, in qualitative agreement with the changes observed in deep-inelastic lepton scattering. Nevertheless, the ground-state energy is smooth throughout the transition to quark matter and is described remarkably well by an effective hadron theory based on a phenomenological hadron-hadron potential.

  18. Climate change and agriculture: Current methodologies and future directions

    SciTech Connect (OSTI)

    Rosenzweig, C.; Hillel, D.

    1996-12-31

    In the last fifteen years, a major methodology has been developed for the assessment of the potential impacts of climate change on agricultural production around the world. This methodology consists of coupling dynamic crop growth models, designed to predict plant development and yield as a function of weather, soil, and management input variables, to predictors of climate change for sites within a given region. Such impact studies consist of (1) definition of the area of study and analysis of current climate and agricultural practices; (2) crop model calibration and evaluation; (3) development of climate change scenarios from GCMs or historical weather data; (4) analysis of yield changes under changed climatic conditions; and (5) development and analysis of adaptation strategies. Crop productivity results of such studies are often used in economic analyses. The Intergovernmental Panel on Climate Change and the US Country Studies Program endorse this modeling approach for the assessment of climate change effects on agriculture. It is useful for assessment studies to continue in the framework of the approved guidelines, in order to build a more complete understanding of likely effects on agricultural production throughout the world, and for more comprehensive results to be available for integrated assessment studies.

  19. Modeling investigation of the stability and irradiation-induced evolution of nanoscale precipitates in advanced structural materials

    SciTech Connect (OSTI)

    Wirth, Brian

    2015-04-08

    Materials used in extremely hostile environments such as nuclear reactors are subject to a high flux of neutron irradiation, and thus large concentrations of vacancy and interstitial point defects are produced by collisions of energetic neutrons with host lattice atoms. The fate of these defects depends on a variety of reaction mechanisms that operate immediately following displacement cascade evolution and during the longer-time, kinetically dominated evolution, such as annihilation, recombination, clustering, or trapping at sinks of vacancies, interstitials, and their clusters. The long-range diffusional transport and evolution of point defects and self-defect clusters drive a microstructural and microchemical evolution that is known to degrade mechanical properties, including the creep rate, yield strength, ductility, and fracture toughness, and correspondingly to affect material serviceability and lifetime in nuclear applications. Therefore, a detailed understanding of microstructural evolution in materials at different time and length scales is of significant importance. The primary objective of this work is to utilize a hierarchical computational modeling approach (i) to evaluate the potential for nanoscale precipitates to enhance point defect recombination rates and thereby the self-healing ability of advanced structural materials, and (ii) to evaluate the stability and irradiation-induced evolution of such nanoscale precipitates resulting from enhanced point defect transport to and annihilation at precipitate interfaces. This project will utilize, and as necessary develop, computational materials modeling techniques within a hierarchical computational modeling approach, principally including molecular dynamics, kinetic Monte Carlo, and spatially dependent cluster dynamics modeling, to identify and understand the most important physical processes relevant to promoting the self-healing or radiation resistance of advanced materials containing

  20. Modeling the thermal and structural response of engineered systems to abnormal environments

    SciTech Connect (OSTI)

    Skocypec, R.D.; Thomas, R.K.; Moya, J.L.

    1993-10-01

    Sandia National Laboratories (SNL) is actively engaged in research to improve the ability to accurately predict the response of engineered systems to thermal and structural abnormal environments. The abnormal environments addressed in this paper include fire, impact, and puncture by probes and fragments, as well as combinations of these. Historically, SNL has demonstrated the survivability of engineered systems in abnormal environments using a balanced approach between numerical simulation and testing. It is necessary to determine the response of engineered systems in two cases: (1) to satisfy regulatory specifications, and (2) to enable quantification of a probabilistic risk assessment (PRA). In the regulatory case, numerical simulation of system response is generally used to guide the system design such that the system will respond satisfactorily to the specified regulatory abnormal environment. Testing is then conducted at the regulatory abnormal environment to ensure compliance.

  1. Model for Eukaryotic Tail-anchored Protein Binding Based on the Structure

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Year 2006: Alternative Fuel and Advanced Technology Vehicles Fuel Type EPAct Compliant? Model Vehicle Type Emission Class Powertrain Fuel Capacity Range American Honda Motor Corporation 888-CCHONDA www.honda.com CNG Dedicated EPAct Yes Civic GX Compact Sedan SULEV Tier 2 Bin II 1.7L, 4-cylinder 8 GGE 200 mi HEV (NiMH) EPAct No Accord Hybrid Sedan ULEV 3.0L V6 144 volt NiMH + 17.1 Gal Gasoline TBD HEV (NiMH) EPAct No Civic Hybrid Sedan CA ULEV 1.3L, 4-cylinder 144 volt NiMH + 13.2 Gal Gasoline

  2. Water versus DNA: New insights into proton track-structure modeling in radiobiology and radiotherapy

    SciTech Connect (OSTI)

    Champion, Christophe; Galassi, Mariel E.; Weck, Philippe F.; Fojon, Omar A.; Hanssen, Jocelyn; Rivarola, Roberto D.

    2015-09-25

    Water is a common surrogate of DNA for modelling charged-particle-induced ionizing processes in living tissue exposed to radiation. The present study aims at scrutinizing the validity of this approximation and revealing new insights into proton-induced energy transfers through a comparative analysis between water and a realistic biological medium. In this context, a self-consistent quantum mechanical modelling of the ionization and electron capture processes is reported within the continuum distorted wave-eikonal initial state framework for both isolated water molecules and DNA components impacted by proton beams. Their respective probabilities of occurrence (expressed in terms of total cross sections) as well as their energetic signatures (potential and kinetic) are assessed in order to clearly emphasize the differences between realistic building blocks of living matter and the controversial water-medium surrogate. The consequences for radiobiology and radiotherapy are discussed, in particular in view of treatment-planning refinement aiming at better radiotherapy strategies.

  3. Water versus DNA: New insights into proton track-structure modeling in radiobiology and radiotherapy

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Champion, Christophe; Quinto, Michele A.; Monti, Juan M.; Galassi, Mariel E.; Weck, Philippe F.; Fojon, Omar A.; Hanssen, Jocelyn; Rivarola, Roberto D.

    2015-09-25

    Water is a common surrogate of DNA for modelling charged-particle-induced ionizing processes in living tissue exposed to radiation. The present study aims at scrutinizing the validity of this approximation and revealing new insights into proton-induced energy transfers through a comparative analysis between water and a realistic biological medium. In this context, a self-consistent quantum mechanical modelling of the ionization and electron capture processes is reported within the continuum distorted wave-eikonal initial state framework for both isolated water molecules and DNA components impacted by proton beams. Their respective probabilities of occurrence (expressed in terms of total cross sections) as well as their energetic signatures (potential and kinetic) are assessed in order to clearly emphasize the differences between realistic building blocks of living matter and the controversial water-medium surrogate. The consequences for radiobiology and radiotherapy are discussed, in particular in view of treatment-planning refinement aiming at better radiotherapy strategies.

  4. Performance Modeling for 3D Visualization in a Heterogeneous...

    Office of Scientific and Technical Information (OSTI)

    We explore a methodology for building a model of overall application performance using a ... The prediction methodology will form the foundation of a more robust resource management ...

  5. The National Energy Modeling System: An overview 1998

    SciTech Connect (OSTI)

    1998-02-01

    The National Energy Modeling System (NEMS) is a computer-based, energy-economy modeling system of US energy markets for the midterm period through 2020. NEMS projects the production, imports, conversion, consumption, and prices of energy, subject to assumptions on macroeconomic and financial factors, world energy markets, resource availability and costs, behavior and technological choice criteria, cost and performance characteristics of energy technologies, and demographics. This report presents an overview of the structure and methodology of NEMS and each of its components. The first chapter provides a description of the design and objectives of the system, followed by a chapter on the overall modeling structure and solution algorithm. The remainder of the report summarizes the methodology and scope of the component modules of NEMS. The model descriptions are intended for readers familiar with terminology from economics, operations research, and energy modeling. 21 figs.

  6. Biosafety Risk Assessment Model

    Energy Science and Technology Software Center (OSTI)

    2011-05-27

    Software tool based on a structured methodology for conducting laboratory biosafety risk assessments by biosafety experts. The software is based on an MCDA scheme, uses peer-reviewed criteria and weights, and was developed on Microsoft's .NET Framework. The methodology defines the likelihood and consequence of a laboratory exposure for thirteen unique scenarios and provides numerical relative risks for each of the thirteen. The software produces 2-D graphs reflecting the relative risk and a sensitivity analysis that highlights the overall importance of each factor. The software works as a set of questions with absolute scales and uses a weighted additive model to calculate the likelihood and consequence.
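
The weighted additive model described in this record can be sketched in a few lines; the criterion names, scores, and weights below are hypothetical placeholders for illustration, not the tool's peer-reviewed values.

```python
# Hypothetical weighted additive MCDA scoring, in the spirit of the record
# above. All criteria, scores, and weights are illustrative assumptions.

def weighted_additive_score(scores, weights):
    """Combine per-criterion scores (each 0-1) with weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[c] * weights[c] for c in weights)

# Likelihood and consequence of a laboratory exposure, scored separately.
likelihood = weighted_additive_score(
    scores={"agent_stability": 0.6, "procedure_hazard": 0.4, "containment": 0.2},
    weights={"agent_stability": 0.3, "procedure_hazard": 0.5, "containment": 0.2},
)
consequence = weighted_additive_score(
    scores={"infectious_dose": 0.7, "treatability": 0.3},
    weights={"infectious_dose": 0.6, "treatability": 0.4},
)
relative_risk = likelihood * consequence  # one point on a 2-D risk plot
```

A sensitivity analysis like the one the tool reports could then be approximated by perturbing one weight at a time and observing the change in `relative_risk`.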

  7. Symmetric structure of field algebra of G-spin models determined by a normal subgroup

    SciTech Connect (OSTI)

    Xin, Qiaoling; Jiang, Lining

    2014-09-15

    Let G be a finite group and H a normal subgroup. D(H; G) is the crossed product of C(H) and CG, which is only a subalgebra of D(G), the double algebra of G. One can construct a C*-subalgebra F{sub H} of the field algebra F of G-spin models, so that F{sub H} is a D(H; G)-module algebra, whereas F is not. The observable algebra A{sub (H,G)} is then obtained as the D(H; G)-invariant subalgebra of F{sub H}, and there exists a unique C*-representation of D(H; G) such that D(H; G) and A{sub (H,G)} are mutual commutants.

  8. Investigating surety methodologies for cognitive systems.

    SciTech Connect (OSTI)

    Caudell, Thomas P. (University of New Mexico, Albuquerque, NM); Peercy, David Eugene; Mills, Kristy; Caldera, Eva

    2006-11-01

    Advances in cognitive science provide a foundation for new tools that promise to advance human capabilities with significant positive impacts. As with any new technology breakthrough, associated technical and non-technical risks are involved. Sandia has mitigated both technical and non-technical risks by applying advanced surety methodologies in such areas as nuclear weapons, nuclear reactor safety, nuclear materials transport, and energy systems. In order to apply surety to the development of cognitive systems, we must understand the concepts and principles that characterize the certainty of a system's operation as well as the risk areas of cognitive sciences. This SAND report documents a preliminary spectrum of risks involved with cognitive sciences, and identifies some surety methodologies that can be applied to potentially mitigate such risks. Some potential areas for further study are recommended. In particular, a recommendation is made to develop a cognitive systems epistemology framework for more detailed study of these risk areas and applications of surety methods and techniques.

  9. The Laboratory Microfusion Facility standardized costing methodology

    SciTech Connect (OSTI)

    Harris, D.B.; Dudziak, D.J.

    1988-01-01

    The DOE-organized Laboratory Microfusion Facility (LMF) has the goal of generating 1000 MJ of fusion yield in order to perform weapons physics experiments, simulate weapons effects, and develop high-gain inertial confinement fusion (ICF) targets for military and civil applications. Three options are currently being seriously considered for the driver of this facility: KrF lasers, Nd:glass lasers, and light-ion accelerators. In order to provide a basis for comparing the costs estimated for the different driver technologies, a standardized costing methodology has been devised. This methodology defines the driver-independent costs and indirect cost multipliers for the LMF to aid in the comparison of the LMF proposal cost estimates. 10 refs., 4 tabs.

  10. Cosmic ray transport in heliospheric magnetic structures. I. Modeling background solar wind using the CRONOS magnetohydrodynamic code

    SciTech Connect (OSTI)

    Wiengarten, T.; Kleimann, J.; Fichtner, H.; Kühl, P.; Kopp, A.; Heber, B.; Kissmann, R.

    2014-06-10

    The transport of energetic particles such as cosmic rays is governed by the properties of the plasma being traversed. While these properties are rather poorly known for galactic and interstellar plasmas due to the lack of in situ measurements, the heliospheric plasma environment has been probed by spacecraft for decades and provides a unique opportunity for testing transport theories. Of particular interest for the three-dimensional (3D) heliospheric transport of energetic particles are structures such as corotating interaction regions, which, due to strongly enhanced magnetic field strengths, turbulence, and associated shocks, can act as diffusion barriers on the one hand and as accelerators of low-energy CRs on the other. In a two-part series of papers, we investigate these effects by modeling inner-heliospheric solar wind conditions with a numerical magnetohydrodynamic (MHD) setup (this paper), which will serve as input to a transport code employing a stochastic differential equation approach (second paper). In this first paper, we present results from 3D MHD simulations with our code CRONOS: for validation purposes we use analytic boundary conditions and compare with similar work by Pizzo. For a more realistic modeling of solar wind conditions, boundary conditions derived from synoptic magnetograms via the Wang-Sheeley-Arge (WSA) model are utilized, where the potential field modeling is performed with a finite-difference approach in contrast to the spherical harmonics expansion traditionally utilized in the WSA model. Our results are validated by comparison with multi-spacecraft data for ecliptic (STEREO-A/B) and out-of-ecliptic (Ulysses) regions.

  11. Evaluation of Cloud-Resolving Model Intercomparison Simulations Using TWP-ICE Observations: Precipitation and Cloud Structure

    SciTech Connect (OSTI)

    Varble, Adam C.; Fridlind, Ann; Zipser, Ed; Ackerman, Andrew; Chaboureau, Jean-Pierre; Fan, Jiwen; Hill, Adrian; McFarlane, Sally A.; Pinty, Jean-Pierre; Shipway, Ben

    2011-06-24

    The Tropical Warm Pool - International Cloud Experiment (TWP-ICE) provided high quality model forcing and observational datasets through which detailed model and observational intercomparisons could be performed. In this first of a two-part study, precipitation and cloud structures within nine cloud-resolving model simulations are compared with scanning radar reflectivity and satellite infrared brightness temperature observations during an active monsoon period from 19 to 25 January 2006. Most simulations slightly overestimate volumetric convective rainfall. Overestimation of simulated convective area by 50% or more in several simulations is somewhat offset by underestimation of mean convective rain rates. Stratiform volumetric rainfall is underestimated by 13% to 53% despite overestimation of stratiform area by up to 65%, because stratiform rain rates in every simulation are much lower than observed. Although simulations match the peaked convective radar reflectivity distribution at low levels, they do not reproduce the peaked distributions observed above the melting level. Simulated radar reflectivity aloft in convective regions is too high in most simulations. In stratiform regions, there is a large spread in model results, with none resembling observed distributions. Above the melting level, observed radar reflectivity decreases more gradually with height than simulated radar reflectivity. A few simulations produce unrealistically uniform and cold 10.8-μm infrared brightness temperatures, but several simulations produce distributions close to observed. Assumed ice particle size distributions appear to play a larger role than ice water contents in producing incorrect simulated radar reflectivity distributions aloft, despite substantial differences in mean graupel and snow water contents across models.

  12. NREL: Jobs and Economic Development Impact (JEDI) Models - Methodology

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    is not designed to provide a precise forecast, but rather an estimate of overall ... That is, dollars spent on a power generation project in a state, county or region are ...

  13. An object-oriented approach to risk and reliability analysis : methodology and aviation safety applications.

    SciTech Connect (OSTI)

    Dandini, Vincent John; Duran, Felicia Angelica; Wyss, Gregory Dane

    2003-09-01

    This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology, with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.
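
Probabilistic branching of the kind OBEST builds into its object model can be illustrated with a toy scenario tree sampled by Monte Carlo; the structure, branch names, and probabilities below are invented for illustration and are not from the OBEST aviation model.

```python
# Toy scenario tree with probabilistic branching, sampled by Monte Carlo.
# Branch names and probabilities are invented; this is not OBEST code.
import random

class Branch:
    def __init__(self, name, prob, children=None):
        self.name, self.prob, self.children = name, prob, children or []

def sample_scenario(node, rng):
    """Walk from the root, choosing one child per level by branch probability."""
    path = [node.name]
    while node.children:
        r, acc = rng.random(), 0.0
        for child in node.children:
            acc += child.prob
            if r <= acc:
                break
        node = child  # falls through to the last child on float round-off
        path.append(node.name)
    return path

tree = Branch("aircraft_on_taxiway", 1.0, [
    Branch("holds_short", 0.95),
    Branch("crosses_hold_line", 0.05, [
        Branch("tower_detects", 0.8),
        Branch("runway_incursion", 0.2),
    ]),
])

rng = random.Random(0)
paths = [tuple(sample_scenario(tree, rng)) for _ in range(10_000)]
incursion_freq = sum(p[-1] == "runway_incursion" for p in paths) / len(paths)
# incursion_freq estimates 0.05 * 0.2 = 0.01
```

Because every sampled path carries the product of its branch probabilities, each scenario is automatically associated with a likelihood estimate, which is the property the OBEST abstract emphasizes.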

  14. EVALUATION OF ACTIVATION PRODUCTS REMAINING IN K-, L- AND C-REACTOR STRUCTURES

    SciTech Connect (OSTI)

    Vinson, D.; Webb, R.

    2010-09-30

    An analytic model and calculational methodology were previously developed for P-reactor and R-reactor to quantify the radioisotopes present in Savannah River Site (SRS) reactor tanks and the surrounding structural materials as a result of neutron activation of the materials during reactor operation. That methodology has been extended to K-reactor, L-reactor, and C-reactor. The analysis was performed to provide a best-estimate source-term input to the Performance Assessment for an in-situ disposition strategy by Site Decommissioning and Demolition (SDD). The reactor structure model developed earlier for the P-reactor and R-reactor analyses was also used for the K-reactor and L-reactor, and was suitably modified to handle the larger C-reactor tank and associated structures. For all reactors, the structure model consisted of 3 annular zones, homogenized by the amount of structural material in each zone, and 5 horizontal layers. The curie content of each region was determined on an individual-radioisotope and total basis, and a summary of these results is provided herein. The efficacy of this methodology in accurately predicting the radioisotopic content of the reactor systems in question has been demonstrated and is documented in Reference 1. As noted in that report, results for one reactor facility cannot be directly extrapolated to other SRS reactors.

  15. A General Methodology for Evaluation of Carbon Sequestration Activities and Carbon Credits

    SciTech Connect (OSTI)

    Klasson, KT

    2002-12-23

    A general methodology was developed for the evaluation of carbon sequestration technologies. In this document, we provide a method that is quantitative but structured to give qualitative comparisons despite changes in detailed method parameters; i.e., it does not matter what "grade" a sequestration technology receives, but a "better" technology should receive a better grade. To meet these objectives, we developed and elaborate on the following concepts: (1) All resources used in a sequestration activity should be reviewed by estimating the amount of greenhouse gas emissions for which they historically are responsible. We have done this by introducing a quantifier we term Full-Cycle Carbon Emissions, which is tied to the resource. (2) The future fate of sequestered carbon should be included in technology evaluations. We have addressed this by introducing a variable called the Time-Adjusted Value of Carbon Sequestration to weigh potential future releases of carbon escaping the sequestered form. (3) The Figure of Merit of a sequestration technology should address the entire life cycle of an activity. The figures of merit we have developed relate the investment made (carbon release during the construction phase) to the lifetime sequestration capacity of the activity. To account for carbon flows that occur during different times of an activity, we incorporate the Time Value of Carbon Flows. The methodology we have developed can be expanded to include financial, social, and long-term environmental aspects of a sequestration technology implementation. It does not rely on global atmospheric modeling efforts but is consistent with these efforts and could be combined with them.
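
The Time Value of Carbon Flows concept lends itself to a short numerical sketch: discount each carbon flow back to a common reference year, exactly as money is discounted. The discount rate and the flow schedule below are illustrative assumptions, not values from the methodology.

```python
# Time-weighted carbon accounting sketch; rate and flows are assumed values.

def present_carbon_value(flows, rate=0.03):
    """Discount (year, tonnes_C) flows to year zero.

    Positive flows are sequestration; negative flows are releases.
    """
    return sum(tonnes / (1.0 + rate) ** year for year, tonnes in flows)

# Construction releases 10 tC up front, the activity sequesters 5 tC/yr for
# three years, and 2 tC escape the sequestered form in year 10.
flows = [(0, -10.0), (1, 5.0), (2, 5.0), (3, 5.0), (10, -2.0)]
net_value = present_carbon_value(flows)  # positive => net time-adjusted benefit
```

Under this toy schedule the activity remains a net benefit even after the year-10 escape, because the escaped carbon is discounted relative to the earlier sequestration.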

  16. Identification and design of novel polymer-based mechanical transducers: A nano-structural model for thin film indentation

    SciTech Connect (OSTI)

    Villanueva, Joshua; Huang, Qian; Sirbuly, Donald J.

    2014-09-14

    Mechanical characterization is important for understanding small-scale systems and developing devices, particularly at the interface of biology, medicine, and nanotechnology. Yet, monitoring sub-surface forces is challenging with current technologies like atomic force microscopes (AFMs) or optical tweezers due to their probe sizes and sophisticated feedback mechanisms. An alternative transducer design relying on the indentation mechanics of a compressible thin polymer would be an ideal system for more compact and versatile probes, facilitating measurements in situ or in vivo. However, application-specific tuning of a polymer's mechanical properties can be burdensome via experimental optimization. Therefore, efficient transducer design requires a fundamental understanding of how synthetic parameters such as the molecular weight and grafting density influence the bulk material properties that determine the force response. In this work, we apply molecular-level polymer scaling laws to a first order elastic foundation model, relating the conformational state of individual polymer chains to the macroscopic compression of thin film systems. A parameter sweep analysis was conducted to observe predicted model trends under various system conditions and to understand how nano-structural elements influence the material stiffness. We validate the model by comparing predicted force profiles to experimental AFM curves for a real polymer system and show that it has reasonable predictive power for initial estimates of the force response, displaying excellent agreement with experimental force curves. We also present an analysis of the force sensitivity of an example transducer system to demonstrate identification of synthetic protocols based on desired mechanical properties. These results highlight the usefulness of this simple model as an aid for the design of a new class of compact and tunable nanomechanical force transducers.
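
As a rough companion to the abstract's elastic foundation picture, the sketch below treats the film as a bed of independent polymer "springs" under a flat probe; the linear per-chain force law and every parameter value are placeholders, not the paper's polymer-scaling-law model.

```python
# First-order elastic-foundation sketch: force = (chains under the probe) x
# (per-chain restoring force). Linear springs and all numbers are assumptions.
import math

def indentation_force(depth_nm, probe_radius_nm, graft_density_per_nm2,
                      chain_stiffness_pN_per_nm):
    """Flat-punch approximation with independent linear polymer springs."""
    contact_area_nm2 = math.pi * probe_radius_nm ** 2
    n_chains = graft_density_per_nm2 * contact_area_nm2
    return n_chains * chain_stiffness_pN_per_nm * depth_nm  # force in pN

force_pN = indentation_force(depth_nm=5.0, probe_radius_nm=20.0,
                             graft_density_per_nm2=0.1,
                             chain_stiffness_pN_per_nm=0.02)
```

In the paper's model the per-chain force is nonlinear and set by molecular weight and grafting density; swapping such a law into `indentation_force` is the natural extension of this sketch.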

  17. Advanced Fuel Cycle Economic Tools, Algorithms, and Methodologies

    SciTech Connect (OSTI)

    David E. Shropshire

    2009-05-01

    The Advanced Fuel Cycle Initiative (AFCI) Systems Analysis supports engineering economic analyses and trade studies and requires a reference cost basis to support adequate analysis rigor. In this regard, the AFCI program has created a reference set of economic documentation. The documentation consists of the “Advanced Fuel Cycle (AFC) Cost Basis” report (Shropshire, et al. 2007), the “AFCI Economic Analysis” report, and the “AFCI Economic Tools, Algorithms, and Methodologies Report.” Together, these documents provide the reference cost basis, cost modeling basis, and methodologies needed to support AFCI economic analysis. The application of the reference cost data in the cost and econometric systems analysis models will be supported by this report. These methodologies include: the energy/environment/economic evaluation of nuclear technology penetration in the energy market, both domestically and internationally, and impacts on AFCI facility deployment; uranium resource modeling to inform front-end fuel cycle costs; facility first-of-a-kind to nth-of-a-kind learning with application to deployment of AFCI facilities; cost tradeoffs to meet nuclear non-proliferation requirements; and international nuclear facility supply/demand analysis. The economic analysis will be performed using two cost models. VISION.ECON will be used to evaluate and compare costs under dynamic conditions, consistent with the cases and analysis performed by the AFCI Systems Analysis team. Generation IV Excel Calculations of Nuclear Systems (G4-ECONS) will provide static (snapshot-in-time) cost analysis and a check on the dynamic results. In future analysis, additional AFCI measures may be developed to show the value of AFCI in closing the fuel cycle. Comparisons can show AFCI in terms of reduced global proliferation (e.g., reduction in enrichment), greater sustainability through preservation of a natural resource (e.g., reduction in uranium ore depletion), value from

  18. Waste Package Component Design Methodology Report

    SciTech Connect (OSTI)

    D.C. Mecham

    2004-07-12

    This Executive Summary provides an overview of the methodology being used by the Yucca Mountain Project (YMP) to design waste packages and ancillary components. This summary information is intended for readers with a general interest, but it also gives technical readers a general framework for the variety of technical details provided in the main body of the report. The purpose of this report is to document and ensure that appropriate design methods are used in the design of waste packages and ancillary components (the drip shields and emplacement pallets). The methodology includes identification of necessary design inputs, justification of design assumptions, and use of appropriate analysis methods and computational tools. This design work is subject to "Quality Assurance Requirements and Description". The document is primarily intended for internal use and technical guidance for a variety of design activities. It is recognized that a wide audience, including project management, the U.S. Department of Energy (DOE), the U.S. Nuclear Regulatory Commission, and others, is interested to various levels of detail in the design methods; the report therefore covers a wide range of topics at varying levels of detail. Due to the preliminary nature of the design, readers can expect to encounter varied levels of detail in the body of the report. It is expected that technical information used as input to design documents will be verified and taken from the latest versions of the reference sources given herein. This revision of the methodology report has evolved with changes in the waste package, drip shield, and emplacement pallet designs over many years and may be further revised as the design is finalized. Different components and analyses are at different stages of development. Some parts of the report are detailed, while other, less detailed parts are likely to undergo further refinement. The design methodology is intended to provide designs that satisfy the safety and operational

  19. A Generic Semi-Implicit Coupling Methodology for Use in RELAP5-3D

    SciTech Connect (OSTI)

    Weaver, Walter Leslie; Tomlinson, E. T.; Aumiller, D. L.

    2002-01-01

    A generic semi-implicit coupling methodology has been developed and implemented in the RELAP5-3D© computer program. This methodology allows RELAP5-3D© to be used with other computer programs to perform integrated analyses of nuclear power reactor systems and related experimental facilities. The coupling methodology potentially allows different programs to be used to model different portions of the system. The programs are chosen based on their capability to model the phenomena that are important in the various portions of the system being simulated, and may use different numbers of conservation equations to model fluid flow in their respective solution domains. The methodology was demonstrated using a test case in which the test geometry was divided into two parts, each of which was solved as a RELAP5-3D© simulation. This test problem exercised all of the semi-implicit coupling features that were implemented in RELAP5-3D©. The results of this verification test case show that the semi-implicit coupling methodology produces the same answer as the simulation of the test system as a single process.

  20. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    SciTech Connect (OSTI)

    Wilke, Jeremiah J; Kenny, Joseph P.

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, a design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e., to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the Structural Simulation Toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics, such as call graphs, to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
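
The discrete-event core this abstract refers to can be reduced to its essence: a priority queue of (virtual time, event) pairs popped in time order. The generic sketch below is not SST code; class and function names are invented.

```python
# Minimal discrete-event core: events execute in virtual-time order,
# regardless of the order in which they were scheduled. Not SST code.
import heapq

class Simulator:
    def __init__(self):
        self.now, self._queue, self._seq = 0.0, [], 0

    def schedule(self, delay, fn, *args):
        self._seq += 1  # tie-breaker keeps equal-time events deterministic
        heapq.heappush(self._queue, (self.now + delay, self._seq, fn, args))

    def run(self):
        while self._queue:
            self.now, _, fn, args = heapq.heappop(self._queue)
            fn(*args)

log = []
sim = Simulator()

def send(msg, latency):
    # A "network send" is just a receive event scheduled latency later.
    sim.schedule(latency, recv, msg)

def recv(msg):
    log.append((sim.now, msg))

sim.schedule(0.0, send, "hello", 2.5)
sim.schedule(1.0, send, "world", 0.5)
sim.run()
# Arrival order follows virtual time: "world" at t=1.5, then "hello" at t=2.5.
```

Real code on a simulated stack, as in SST's macroscale components, amounts to running application functions (here `send`/`recv`) inside such an event loop while charging them modeled latencies.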

  1. Update of Part 61 Impacts Analysis Methodology. Methodology report. Volume 1

    SciTech Connect (OSTI)

    Oztunali, O.I.; Roles, G.W.

    1986-01-01

    Under contract to the US Nuclear Regulatory Commission, the Envirosphere Company has expanded and updated the impacts analysis methodology used during the development of the 10 CFR Part 61 rule to allow improved consideration of the costs and impacts of treatment and disposal of low-level waste that is close to or exceeds Class C concentrations. The modifications described in this report principally include: (1) an update of the low-level radioactive waste source term, (2) consideration of additional alternative disposal technologies, (3) expansion of the methodology used to calculate disposal costs, (4) consideration of an additional exposure pathway involving direct human contact with disposed waste due to a hypothetical drilling scenario, and (5) use of updated health physics analysis procedures (ICRP-30). Volume 1 of this report describes the calculational algorithms of the updated analysis methodology.

  2. Methodology for Clustering High-Resolution Spatiotemporal Solar Resource Data

    SciTech Connect (OSTI)

    Getman, Dan; Lopez, Anthony; Mai, Trieu; Dyson, Mark

    2015-09-01

    In this report, we introduce a methodology to achieve multiple levels of spatial resolution reduction of solar resource data, with minimal impact on data variability, for use in energy systems modeling. The selection of an appropriate clustering algorithm, parameter selection including cluster size, methods of temporal data segmentation, and methods of cluster evaluation are explored in the context of a repeatable process. In describing this process, we illustrate the steps in creating a reduced resolution, but still viable, dataset to support energy systems modeling, e.g. capacity expansion or production cost modeling. This process is demonstrated through the use of a solar resource dataset; however, the methods are applicable to other resource data represented through spatiotemporal grids, including wind data. In addition to energy modeling, the techniques demonstrated in this paper can be used in a novel top-down approach to assess renewable resources within many other contexts that leverage variability in resource data but require reduction in spatial resolution to accommodate modeling or computing constraints.
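The report evaluates several clustering algorithms and parameters; as a generic illustration only (plain k-means in Python on synthetic data, not the report's actual dataset or algorithm choice), grid cells with similar resource profiles can be collapsed into weighted representative profiles:

```python
import numpy as np

def kmeans(X, k, iters=100):
    # plain Lloyd's algorithm with a deterministic spread-out initialization
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# synthetic "grid cells", each a 24-hour capacity-factor profile
rng = np.random.default_rng(0)
hours = np.arange(24)
sunny = np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None)
cloudy = 0.4 * sunny
cells = np.vstack([sunny + 0.01 * rng.standard_normal(24) for _ in range(10)]
                  + [cloudy + 0.01 * rng.standard_normal(24) for _ in range(10)])

labels, centers = kmeans(cells, k=2)
# reduced dataset: one representative profile per cluster, weighted by cell count
weights = np.bincount(labels, minlength=2)
```

The reduced dataset (cluster-mean profiles plus weights) preserves much of the variability an energy systems model needs while cutting the spatial dimension from 20 cells to 2 representatives.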

  3. Cost Model and Cost Estimating Software

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    1997-03-28

This chapter discusses a formalized methodology, basically a cost model, which forms the basis for software cost estimating.

  4. Methodologies for Reservoir Characterization Using Fluid Inclusion Gas

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry: presentation at the April 2013 peer review meeting held in Denver, Colorado. dilley_methodologies_peer2013.pdf (2.79 MB)

  5. A Proposed Methodology to Determine the Leverage Impacts of Technology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

A Proposed Methodology to Determine the Leverage Impacts of Technology Deployment Programs, 2008: this report contains a proposed methodology to determine the leverage impacts of technology deployment programs for the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy. Proposed Methodology Report (1.17 MB)

  6. DIGITAL TECHNOLOGY BUSINESS CASE METHODOLOGY GUIDE & WORKBOOK

    SciTech Connect (OSTI)

    Thomas, Ken; Lawrie, Sean; Hart, Adam; Vlahoplus, Chris

    2014-09-01

    Performance advantages of the new digital technologies are widely acknowledged, but it has proven difficult for utilities to derive business cases for justifying investment in these new capabilities. Lack of a business case is often cited by utilities as a barrier to pursuing wide-scale application of digital technologies to nuclear plant work activities. The decision to move forward with funding usually hinges on demonstrating actual cost reductions that can be credited to budgets and thereby truly reduce O&M or capital costs. Technology enhancements, while enhancing work methods and making work more efficient, often fail to eliminate workload such that it changes overall staffing and material cost requirements. It is critical to demonstrate cost reductions or impacts on non-cost performance objectives in order for the business case to justify investment by nuclear operators. This Business Case Methodology approaches building a business case for a particular technology or suite of technologies by detailing how they impact an operator in one or more of the three following areas: Labor Costs, Non-Labor Costs, and Key Performance Indicators (KPIs). Key to those impacts will be identifying where the savings are harvestable, meaning they result in an actual reduction in headcount and/or cost. The report consists of a Digital Technology Business Case Methodology Guide and an accompanying spreadsheet workbook that will enable the user to develop a business case.
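One common way such a business case is quantified (a hypothetical sketch with invented numbers; the report's own workbook is more detailed) is to discount the harvestable labor and non-labor savings against the investment:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[t] occurs at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# hypothetical figures: a $500k digital-technology investment harvesting
# $150k/year in labor and non-labor savings over five years, 8% discount rate
flows = [-500_000.0] + [150_000.0] * 5
value = npv(0.08, flows)

# simple (undiscounted) payback: first year the cumulative cashflow turns positive
payback_years = next(t for t in range(len(flows))
                     if sum(flows[: t + 1]) >= 0)
```

The key caveat from the abstract carries over: only *harvestable* savings (actual headcount or cost reductions) belong in `flows`; efficiency gains that do not change staffing or material requirements should be tracked separately as KPI impacts.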

  7. Methodology for the Assessment of the Macroeconomic Impacts of Stricter CAFE Standards - Addendum

    Reports and Publications (EIA)

    2002-01-01

This assessment of the economic impacts of Corporate Average Fuel Economy (CAFE) standards marks the first time the Energy Information Administration has used the new direct linkage of the DRI-WEFA Macroeconomic Model to the National Energy Modeling System (NEMS) in a policy setting. This methodology assures an internally consistent solution between the energy market concepts forecast by NEMS and the aggregate economy as forecast by the DRI-WEFA Macroeconomic Model of the U.S. Economy.

  8. DOE Challenge Home Label Methodology | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

DOE Challenge Home Label Methodology: a document of the U.S. Department of Energy's Zero Energy Ready Home (formerly Challenge Home) program. ch_label_methodology_1012.pdf (222.71 KB)

  9. Criticism of generally accepted fundamentals and methodologies of traffic and transportation theory

    SciTech Connect (OSTI)

    Kerner, Boris S.

    2015-03-10

It is explained why the set of the fundamental empirical features of traffic breakdown (a transition from free flow to congested traffic) should be the empirical basis for any traffic and transportation theory that can be reliably used for control and optimization in traffic networks. It is shown that generally accepted fundamentals and methodologies of traffic and transportation theory are not consistent with the set of the fundamental empirical features of traffic breakdown at a highway bottleneck. These fundamentals and methodologies of traffic and transportation theory include (i) Lighthill-Whitham-Richards (LWR) theory, (ii) the General Motors (GM) model class (for example, Herman, Gazis et al. GM model, Gipps’s model, Payne’s model, Newell’s optimal velocity (OV) model, Wiedemann’s model, Bando et al. OV model, Treiber’s IDM, Krauß’s model), (iii) the understanding of highway capacity as a particular stochastic value, and (iv) principles for traffic and transportation network optimization and control (for example, Wardrop’s user equilibrium (UE) and system optimum (SO) principles). As an alternative to these generally accepted fundamentals and methodologies of traffic and transportation theory, we discuss three-phase traffic theory as the basis for traffic flow modeling, and briefly consider the network breakdown minimization (BM) principle for the optimization of traffic and transportation networks with road bottlenecks.

  10. Surface Protonation at the Rutile (110) Interface: Explicit Incorporation of Solvation Structure within the Refined MUSIC Model Framework

    SciTech Connect (OSTI)

    Machesky, Michael L.; Predota, M.; Wesolowski, David J

    2008-11-01

    The detailed solvation structure at the (110) surface of rutile ({alpha}-TiO{sub 2}) in contact with bulk liquid water has been obtained primarily from experimentally verified classical molecular dynamics (CMD) simulations of the ab initio-optimized surface in contact with SPC/E water. The results are used to explicitly quantify H-bonding interactions, which are then used within the refined MUSIC model framework to predict surface oxygen protonation constants. Quantum mechanical molecular dynamics (QMD) simulations in the presence of freely dissociable water molecules produced H-bond distributions around deprotonated surface oxygens very similar to those obtained by CMD with nondissociable SPC/E water, thereby confirming that the less computationally intensive CMD simulations provide accurate H-bond information. Utilizing this H-bond information within the refined MUSIC model, along with manually adjusted Ti-O surface bond lengths that are nonetheless within 0.05 {angstrom} of those obtained from static density functional theory (DFT) calculations and measured in X-ray reflectivity experiments (as well as bulk crystal values), give surface protonation constants that result in a calculated zero net proton charge pH value (pHznpc) at 25 C that agrees quantitatively with the experimentally determined value (5.4 {+-} 0.2) for a specific rutile powder dominated by the (110) crystal face. Moreover, the predicted pH{sub znpc} values agree to within 0.1 pH unit with those measured at all temperatures between 10 and 250 C. A slightly smaller manual adjustment of the DFT-derived Ti-O surface bond lengths was sufficient to bring the predicted pH{sub znpc} value of the rutile (110) surface at 25 C into quantitative agreement with the experimental value (4.8 {+-} 0.3) obtained from a polished and annealed rutile (110) single crystal surface in contact with dilute sodium nitrate solutions using second harmonic generation (SHG) intensity measurements as a function of ionic

11. Assessment of methodologies for analysis of the Dungeness B accidental aircraft crash risk.

    SciTech Connect (OSTI)

    LaChance, Jeffrey L.; Hansen, Clifford W.

    2010-09-01

The Health and Safety Executive (HSE) has requested Sandia National Laboratories (SNL) to review the aircraft crash methodology that is being used for nuclear facilities in the United Kingdom (UK). The scope of the work included a review of one method utilized in the UK for assessing the potential for accidental airplane crashes into nuclear facilities (Task 1) and a comparison of the UK methodology against similar International Atomic Energy Agency (IAEA), United States (US) Department of Energy (DOE), and US Nuclear Regulatory Commission (NRC) methods (Task 2). Based on the conclusions from Tasks 1 and 2, an additional Task 3 would provide an assessment of a site-specific crash frequency for the Dungeness B facility using one of the other methodologies. This report documents the results of Task 2. The comparison of the different methods was performed for the three primary contributors to aircraft crash risk at the Dungeness B site: airfield related crashes, crashes below airways, and background crashes. The methods and data specified in each methodology were compared for each of these risk contributors, differences in the methodologies were identified, and the importance of these differences was qualitatively and quantitatively assessed. The bases for each of the methods and the data used were considered in this assessment process. A comparison of the treatment of the consequences of the aircraft crashes was not included in this assessment because the frequency of crashes into critical structures is currently low based on the existing Dungeness B assessment. Although the comparison found substantial differences between the UK and the three alternative methodologies (IAEA, NRC, and DOE), this assessment concludes that use of any of these alternative methodologies would not change the conclusions reached for the Dungeness B site. Performance of Task 3 is thus not recommended.

  12. Chemical structures of low-pressure premixed methylcyclohexane flames as benchmarks for the development of a predictive combustion chemistry model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Skeen, Scott A.; Yang, Bin; Jasper, Ahren W.; Pitz, William J.; Hansen, Nils

    2011-11-14

The chemical compositions of three low-pressure premixed flames of methylcyclohexane (MCH) are investigated with the emphasis on the chemistry of MCH decomposition and the formation of aromatic species, including benzene and toluene. The flames are stabilized on a flat-flame (McKenna type) burner at equivalence ratios of φ = 1.0, 1.75, and 1.9 and at low pressures between 15 Torr (= 20 mbar) and 30 Torr (= 40 mbar). The complex chemistry of MCH consumption is illustrated in the experimental identification of several C7H12, C7H10, C6H12, and C6H10 isomers sampled from the flames as a function of distance from the burner. Three initiation steps for MCH consumption are discussed: ring-opening to heptenes and methyl-hexenes (isomerization), methyl radical loss yielding the cyclohexyl radical (dissociation), and H abstraction from MCH. Mole fraction profiles as a function of distance from the burner for the C7 species supplemented by theoretical calculations are presented, indicating that flame structures resulting in steeper temperature gradients and/or greater peak temperatures can lead to a relative increase in MCH consumption through the dissociation and isomerization channels. Trends observed among the stable C6 species as well as 1,3-pentadiene and isoprene also support this conclusion. Relatively large amounts of toluene and benzene are observed in the experiments, illustrating the importance of sequential H-abstraction steps from MCH to toluene and from cyclohexyl to benzene. Furthermore, modeled results using the detailed chemical model of Pitz et al. (Proc. Combust. Inst. 2007, 31, 267–275) are also provided to illustrate the use of these data as a benchmark for the improvement or future development of a MCH mechanism.

  13. Continuous mutual improvement of macromolecular structure models in the PDB and of X-ray crystallographic software: The dual role of deposited experimental data

    SciTech Connect (OSTI)

    Terwilliger, Thomas C.; Bricogne, Gerard

    2014-09-30

    Accurate crystal structures of macromolecules are of high importance in the biological and biomedical fields. Models of crystal structures in the Protein Data Bank (PDB) are in general of very high quality as deposited. However, methods for obtaining the best model of a macromolecular structure from a given set of experimental X-ray data continue to progress at a rapid pace, making it possible to improve most PDB entries after their deposition by re-analyzing the original deposited data with more recent software. This possibility represents a very significant departure from the situation that prevailed when the PDB was created, when it was envisioned as a cumulative repository of static contents. A radical paradigm shift for the PDB is therefore proposed, away from the static archive model towards a much more dynamic body of continuously improving results in symbiosis with continuously improving methods and software. These simultaneous improvements in methods and final results are made possible by the current deposition of processed crystallographic data (structure-factor amplitudes) and will be supported further by the deposition of raw data (diffraction images). It is argued that it is both desirable and feasible to carry out small-scale and large-scale efforts to make this paradigm shift a reality. Small-scale efforts would focus on optimizing structures that are of interest to specific investigators. Large-scale efforts would undertake a systematic re-optimization of all of the structures in the PDB, or alternatively the redetermination of groups of structures that are either related to or focused on specific questions. All of the resulting structures should be made generally available, along with the precursor entries, with various views of the structures being made available depending on the types of questions that users are interested in answering.

  14. Continuous mutual improvement of macromolecular structure models in the PDB and of X-ray crystallographic software: The dual role of deposited experimental data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Terwilliger, Thomas C.; Bricogne, Gerard

    2014-09-30

Accurate crystal structures of macromolecules are of high importance in the biological and biomedical fields. Models of crystal structures in the Protein Data Bank (PDB) are in general of very high quality as deposited. However, methods for obtaining the best model of a macromolecular structure from a given set of experimental X-ray data continue to progress at a rapid pace, making it possible to improve most PDB entries after their deposition by re-analyzing the original deposited data with more recent software. This possibility represents a very significant departure from the situation that prevailed when the PDB was created, when it was envisioned as a cumulative repository of static contents. A radical paradigm shift for the PDB is therefore proposed, away from the static archive model towards a much more dynamic body of continuously improving results in symbiosis with continuously improving methods and software. These simultaneous improvements in methods and final results are made possible by the current deposition of processed crystallographic data (structure-factor amplitudes) and will be supported further by the deposition of raw data (diffraction images). It is argued that it is both desirable and feasible to carry out small-scale and large-scale efforts to make this paradigm shift a reality. Small-scale efforts would focus on optimizing structures that are of interest to specific investigators. Large-scale efforts would undertake a systematic re-optimization of all of the structures in the PDB, or alternatively the redetermination of groups of structures that are either related to or focused on specific questions. All of the resulting structures should be made generally available, along with the precursor entries, with various views of the structures being made available depending on the types of questions that users are interested in answering.

  15. Feature Detection, Characterization and Confirmation Methodology: Final Report

    SciTech Connect (OSTI)

    Karasaki, Kenzi; Apps, John; Doughty, Christine; Gwatney, Hope; Onishi, Celia Tiemi; Trautz, Robert; Tsang, Chin-Fu

    2007-03-01

    This is the final report of the NUMO-LBNL collaborative project: Feature Detection, Characterization and Confirmation Methodology under NUMO-DOE/LBNL collaboration agreement, the task description of which can be found in the Appendix. We examine site characterization projects from several sites in the world. The list includes Yucca Mountain in the USA, Tono and Horonobe in Japan, AECL in Canada, sites in Sweden, and Olkiluoto in Finland. We identify important geologic features and parameters common to most (or all) sites to provide useful information for future repository siting activity. At first glance, one could question whether there was any commonality among the sites, which are in different rock types at different locations. For example, the planned Yucca Mountain site is a dry repository in unsaturated tuff, whereas the Swedish sites are situated in saturated granite. However, the study concludes that indeed there are a number of important common features and parameters among all the sites--namely, (1) fault properties, (2) fracture-matrix interaction (3) groundwater flux, (4) boundary conditions, and (5) the permeability and porosity of the materials. We list the lessons learned from the Yucca Mountain Project and other site characterization programs. Most programs have by and large been quite successful. Nonetheless, there are definitely 'should-haves' and 'could-haves', or lessons to be learned, in all these programs. Although each site characterization program has some unique aspects, we believe that these crosscutting lessons can be very useful for future site investigations to be conducted in Japan. One of the most common lessons learned is that a repository program should allow for flexibility, in both schedule and approach. We examine field investigation technologies used to collect site characterization data in the field. An extensive list of existing field technologies is presented, with some discussion on usage and limitations. Many of the

  16. Handbook on dynamics of jointed structures.

    SciTech Connect (OSTI)

    Ames, Nicoli M.; Lauffer, James P.; Jew, Michael D.; Segalman, Daniel Joseph; Gregory, Danny Lynn; Starr, Michael James; Resor, Brian Ray

    2009-07-01

The problem of understanding and modeling the complicated physics underlying the action and response of the interfaces in typical structures under dynamic loading conditions has occupied researchers for many decades. This handbook presents an integrated approach to the goal of dynamic modeling of typical jointed structures, beginning with a mathematical assessment of experimental or simulation data, development of constitutive models relating load histories to deformation, establishment of kinematic models coupling to the continuum models, and application of finite element analysis leading to dynamic structural simulation. In addition, formulations are discussed to mitigate the very short simulation time steps that appear to be required in numerical simulation for problems such as this. This handbook satisfies the commitment to DOE that Sandia will develop the technical content and write a Joints Handbook. The content will include: (1) methods for characterizing the nonlinear stiffness and energy dissipation for typical joints used in mechanical systems and components; (2) practical guidance on experiments, and reduced order models that can be used to characterize joint behavior; and (3) examples for typical bolted and screw joints.

  17. Methods for developing seismic and extreme wind-hazard models for evaluating critical structures and equipment at US Department of Energy facilities and commercial plutonium facilities in the United States

    SciTech Connect (OSTI)

    Coats, D.W.; Murray, R.C.; Bernreuter, D.L.

    1981-02-04

    Lawrence Livermore National Laboratory (LLNL) is developing seismic and wind hazard models for the US Department of Energy (DOE). The work is part of a three-phase effort to establish building design criteria developed with a uniform methodology for seismic and wind hazards at the various DOE sites throughout the United States. In Phase 1, LLNL gathered information on the sites and their critical facilities, including nuclear reactors, fuel-reprocessing plants, high-level waste storage and treatment facilities, and special nuclear material facilities. Phase 2 - development of seismic and wind hazard models - is discussed in this paper, which summarizes the methodologies used by seismic and extreme-wind experts and gives sample hazard curves for the first sites to be modeled. These hazard models express the annual probability that the site will experience an earthquake (or windspeed) greater than some specified magnitude. In the final phase, the DOE will use the hazards models and LLNL-recommended uniform design criteria to evaluate critical facilities. The methodology presented in this paper also was used for a related LLNL study - involving the seismic assessment of six commercial plutonium fabrication plants licensed by the US Nuclear Regulatory Commission (NRC). Details and results of this reassessment are documented in reference.
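A hazard curve of the kind described (annual probability of exceeding a given magnitude or windspeed) is typically queried by interpolation; the sketch below is a hypothetical Python illustration with an invented wind-hazard table, not LLNL's actual models:

```python
import math

def annual_exceedance(curve, level):
    """Log-linear interpolation of a hazard curve.

    curve: list of (hazard level, annual probability of exceedance),
    sorted by increasing level; levels outside the table are clamped."""
    xs = [x for x, _ in curve]
    lps = [math.log10(p) for _, p in curve]
    if level <= xs[0]:
        return 10.0 ** lps[0]
    for i in range(1, len(xs)):
        if level <= xs[i]:
            f = (level - xs[i - 1]) / (xs[i] - xs[i - 1])
            return 10.0 ** (lps[i - 1] + f * (lps[i] - lps[i - 1]))
    return 10.0 ** lps[-1]

# hypothetical wind-hazard curve: 10^-2/yr at 50 m/s, 10^-4/yr at 100 m/s
p75 = annual_exceedance([(50.0, 1e-2), (100.0, 1e-4)], 75.0)
return_period = 1.0 / p75    # mean years between exceedances
```

Log-linear interpolation is used because exceedance probabilities typically span several orders of magnitude across the levels of engineering interest.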

  18. 3-d-interactive-scouring-methodology

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sixty percent of bridge failures are the result of riverbed erosion (scour) at bridge support structures, and about 1 in 20 bridges are classified as scour critical, meaning that ...

  19. CONTAMINATED SOIL VOLUME ESTIMATE TRACKING METHODOLOGY

    SciTech Connect (OSTI)

    Durham, L.A.; Johnson, R.L.; Rieman, C.; Kenna, T.; Pilon, R.

    2003-02-27

    The U.S. Army Corps of Engineers (USACE) is conducting a cleanup of radiologically contaminated properties under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The largest cost element for most of the FUSRAP sites is the transportation and disposal of contaminated soil. Project managers and engineers need an estimate of the volume of contaminated soil to determine project costs and schedule. Once excavation activities begin and additional remedial action data are collected, the actual quantity of contaminated soil often deviates from the original estimate, resulting in cost and schedule impacts to the project. The project costs and schedule need to be frequently updated by tracking the actual quantities of excavated soil and contaminated soil remaining during the life of a remedial action project. A soil volume estimate tracking methodology was developed to provide a mechanism for project managers and engineers to create better project controls of costs and schedule. For the FUSRAP Linde site, an estimate of the initial volume of in situ soil above the specified cleanup guidelines was calculated on the basis of discrete soil sample data and other relevant data using indicator geostatistical techniques combined with Bayesian analysis. During the remedial action, updated volume estimates of remaining in situ soils requiring excavation were calculated on a periodic basis. In addition to taking into account the volume of soil that had been excavated, the updated volume estimates incorporated both new gamma walkover surveys and discrete sample data collected as part of the remedial action. A civil survey company provided periodic estimates of actual in situ excavated soil volumes. By using the results from the civil survey of actual in situ volumes excavated and the updated estimate of the remaining volume of contaminated soil requiring excavation, the USACE Buffalo District was able to forecast and update project costs and schedule. The soil volume
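The periodic tracking update described above amounts to straightforward bookkeeping; the following is a hypothetical sketch with invented volumes and unit cost, not Linde site data:

```python
def update_remaining(prev_estimate, excavated, revision):
    """One tracking period: subtract the surveyed excavated volume, then
    apply the +/- revision implied by new walkover and sample data."""
    return max(prev_estimate - excavated, 0.0) + revision

estimate = 10_000.0            # initial in-situ estimate, cubic yards
unit_cost = 350.0              # transport-and-disposal cost per cubic yard

# (surveyed excavated volume, characterization revision) for two periods
for excavated, revision in [(2_500.0, +300.0), (3_000.0, -150.0)]:
    estimate = update_remaining(estimate, excavated, revision)

forecast_cost = estimate * unit_cost
```

The point of the methodology is that `revision` comes from re-running the geostatistical estimate with the new survey and sample data each period, so the cost and schedule forecast tracks reality instead of the original estimate.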

  20. How Anion Chaotrope Changes the Local Structure of Water. Insights from Photoelectron Spectroscopy and Theoretical Modeling of SCN- Water Clusters

    SciTech Connect (OSTI)

    Valiev, Marat; Deng, Shihu; Wang, Xue B.

    2015-09-09

The behavior of charged solute molecules in aqueous solutions is often classified using the concept of kosmotropes (“structure makers”) and chaotropes (“structure breakers”). There is a growing consensus that the key to kosmotropic/chaotropic behaviors lies in the local solvent region, but the exact microscopic basis for such differentiation is not well understood. This issue is examined in this work by analyzing size-selective solvation of a well-known chaotrope, the negatively charged SCN- molecule. Combining experimental photoelectron spectroscopy measurements with theoretical modeling, we examine the evolution of the solvation structure with up to eight waters. We observe that SCN- indeed fits the description of a weakly hydrated ion and that its solvation is heavily driven by stabilization of the water-water interaction network. However, the impact on water structure is more subtle than that associated with a “structure breaker”. In particular, we observe that the solvation structure of SCN- preserves the “packing” structure of the water network but changes the local directionality of hydrogen bonds in the local solvent region. The resulting effect is closer to that of a “structure weakener”, where the solute can be readily accommodated into the native water network, at the cost of compromising its stability due to constraints on hydrogen bonding.

  1. Recovery act. Characterizing structural controls of EGS-candidate and conventional geothermal reservoirs in the Great Basin. Developing successful exploration strategies in extended terranes

    SciTech Connect (OSTI)

    Faulds, James

    2015-06-25

    We conducted a comprehensive analysis of the structural controls of geothermal systems within the Great Basin and adjacent regions. Our main objectives were to: 1) Produce a catalogue of favorable structural environments and models for geothermal systems. 2) Improve site-specific targeting of geothermal resources through detailed studies of representative sites, which included innovative techniques of slip tendency analysis of faults and 3D modeling. 3) Compare and contrast the structural controls and models in different tectonic settings. 4) Synthesize data and develop methodologies for enhancement of exploration strategies for conventional and EGS systems, reduction in the risk of drilling non-productive wells, and selecting the best EGS sites.
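Slip tendency analysis, one of the targeting techniques mentioned, ranks fault planes by the ratio of resolved shear to normal stress, Ts = τ/σn; the sketch below is a generic Python illustration with an assumed stress state, not the project's data:

```python
import numpy as np

def slip_tendency(stress, normal):
    """Ts = tau / sigma_n for a plane with normal `normal` under the
    3x3 stress tensor `stress` (compression positive)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    t = stress @ n                        # traction vector on the plane
    sigma_n = float(t @ n)                # resolved normal stress
    tau = float(np.sqrt(max(float(t @ t) - sigma_n ** 2, 0.0)))
    return tau / sigma_n

stress = np.diag([60.0, 40.0, 30.0])      # hypothetical principal stresses, MPa
n45 = [np.sqrt(0.5), 0.0, np.sqrt(0.5)]   # plane midway between sigma1 and sigma3
ts = slip_tendency(stress, n45)
```

Faults with high Ts relative to the frictional strength are the ones most likely to slip, and hence the most promising conduits for geothermal fluid flow.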

  2. Quantitative Cyber Risk Reduction Estimation Methodology for a Small Scada Control System

    SciTech Connect (OSTI)

    Miles A. McQueen; Wayne F. Boyer; Mark A. Flynn; George A. Beitel

    2006-01-01

    We propose a new methodology for obtaining a quick quantitative measurement of the risk reduction achieved when a control system is modified with the intent to improve cyber security defense against external attackers. The proposed methodology employs a directed graph called a compromise graph, where the nodes represent stages of a potential attack and the edges represent the expected time-to-compromise for differing attacker skill levels. Time-to-compromise is modeled as a function of known vulnerabilities and attacker skill level. The methodology was used to calculate risk reduction estimates for a specific SCADA system and for a specific set of control system security remedial actions. Despite an 86% reduction in the total number of vulnerabilities, the estimated time-to-compromise was increased only by about 3 to 30% depending on target and attacker skill level.
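The fastest expected route through such a compromise graph can be found with a standard shortest-path search; the following is a hypothetical sketch with invented attack stages and edge times, not the paper's SCADA data:

```python
import heapq

def min_time_to_compromise(graph, start, target):
    # Dijkstra over a compromise graph: nodes are attack stages,
    # edge weights are expected times-to-compromise (e.g., days)
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# hypothetical stages: perimeter -> DMZ -> (historian | HMI) -> PLC
graph = {
    "internet":  [("dmz", 5.0)],
    "dmz":       [("historian", 10.0), ("hmi", 20.0)],
    "historian": [("plc", 7.0)],
    "hmi":       [("plc", 2.0)],
}
fastest = min_time_to_compromise(graph, "internet", "plc")
```

Risk reduction can then be estimated as the change in this minimum (or in a weighted combination over paths and attacker skill levels) before and after remedial actions, which is why removing vulnerabilities that leave the fastest path intact can yield only a small improvement.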

  3. Methodology for Predicting Flammable Gas Mixtures in Double Contained Receiver Tanks [SEC 1 THRU SEC 3

    SciTech Connect (OSTI)

    HEDENGREN, D.C.

    2000-01-31

This methodology document provides an estimate of the maximum concentrations of flammable gases (ammonia, hydrogen, and methane) which could exist in the vapor space of a double-contained receiver tank (DCRT) from the simultaneous saltwell pumping of one or more single-shell tanks (SSTs). This document expands Calculation Note 118 (Hedengren et al. 1997) and removes some of the conservatism from it, especially in vapor phase ammonia predictions. The methodologies of Calculation Note 118 (Hedengren et al. 1997) are essentially identical for predicting flammable gas mixtures in DCRTs from saltwell pumping for low DCRT ventilation rates, i.e., < 1 cfm. The hydrogen generation model has also been updated in the methodology of this document.
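As a loose illustration only (a generic well-mixed headspace balance with invented rates; the document's methodology is far more detailed), ventilation bounds the steady-state flammable gas fraction in a tank vapor space:

```python
def steady_state_fraction(generation_cfm, ventilation_cfm):
    """Well-mixed vapor space: gas generated at generation_cfm is diluted by
    ventilation_cfm of fresh air; the outflow carries the mixture away, so the
    steady-state volume fraction is generation / (generation + ventilation)."""
    return generation_cfm / (generation_cfm + ventilation_cfm)

# hypothetical rates: 0.05 cfm hydrogen generation, 1 cfm ventilation
frac = steady_state_fraction(0.05, 1.0)

lfl_h2 = 0.04                      # hydrogen lower flammability limit (4 vol%)
fraction_of_lfl = frac / lfl_h2    # > 1 means the headspace could exceed LFL
```

At low ventilation rates like the < 1 cfm case flagged in the abstract, even modest generation rates can drive the steady-state concentration past the flammability limit, which is why such tanks require careful analysis.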

  4. Electronic structure of the SiN{sub x}/TiN interface: A model system for superhard nanocomposites

    SciTech Connect (OSTI)

    Patscheider, Joerg; Hellgren, Niklas; Haasch, Richard T.; Petrov, Ivan; Greene, J. E.

    2011-03-15

    Nanostructured materials such as nanocomposites and nanolaminates--subjects of intense interest in modern materials research--are defined by internal interfaces, the nature of which is generally unknown. Nevertheless, the interfaces often determine the bulk properties. An example of this is superhard nanocomposites with hardness approaching that of diamond. TiN/Si{sub 3}N{sub 4} nanocomposites (TiN nanocrystals encapsulated in a fully percolated SiN{sub x} tissue phase) and nanolaminates, in particular, have attracted much attention as model systems for the synthesis of such superhard materials. Here, we use in situ angle-resolved x-ray photoelectron spectroscopy to probe the electronic structure of Si{sub 3}N{sub 4}/TiN(001), Si/TiN(001), and Ti/TiN(001) bilayer interfaces, in which 4-ML-thick overlayers are grown in an ultrahigh vacuum system by reactive magnetron sputter deposition onto epitaxial TiN layers on MgO(001). The thickness of the Si{sub 3}N{sub 4}, Si, and Ti overlayers is chosen to be thin enough to insure sufficient electron transparency to probe the interfaces, while being close to values reported in typical nanocomposites and nanolaminates. The results show that these overlayer/TiN(001) interfaces have distinctly different bonding characteristics. Si{sub 3}N{sub 4} exhibits interface polarization through the formation of an interlayer, in which the N concentration is enhanced at higher substrate bias values during Si{sub 3}N{sub 4} deposition. The increased number of Ti-N bonds at the interface, together with the resulting polarization, strengthens interfacial bonding. In contrast, overlayers of Si and, even more so, metallic Ti weaken the interface by minimizing the valence band energy difference between the two phases. A model is proposed that provides a semiquantitative explanation of the interfacial bond strength in nitrogen-saturated and nitrogen-deficient Ti-Si-N nanocomposites.

  5. Energy Department Hosts FORGE Webinar and Resource Reporting Methodology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Energy Department Hosts FORGE Webinar and Resource Reporting Methodology Workshop at the Upcoming National Geothermal Summit, August 4-5 | July 29, 2014 - 1:34pm

  6. The Development and Application of NMR Methodologies for the...

    Office of Scientific and Technical Information (OSTI)

    in Complex Silicones Citation Details In-Document Search Title: The Development and Application of NMR Methodologies for the Study of Degradation in Complex Silicones ...

  7. Hydrogen Program Goal-Setting Methodologies Report to Congress

    Broader source: Energy.gov [DOE]

    This Report to Congress, published in August 2006, focuses on the methodologies used by the DOE Hydrogen Program for goal-setting.

  8. Quality Guideline for Cost Estimation Methodology for NETL Assessments...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Benefits 2 Power Plant Cost Estimation Methodology Quality Guidelines for Energy System Studies April 2011 Disclaimer This report was prepared as an account of work...

  9. PROLIFERATION RESISTANCE AND PHYSICAL PROTECTION WORKING GROUP: METHODOLOGY AND APPLICATIONS

    SciTech Connect (OSTI)

    Bari R. A.; Whitlock, J.; Therios, I.U.; Peterson, P.F.

    2012-11-14

    We summarize the technical progress and accomplishments on the evaluation methodology for proliferation resistance and physical protection (PR and PP) of Generation IV nuclear energy systems. We intend the results of the evaluations performed with the methodology for three types of users: system designers, program policy makers, and external stakeholders. The PR and PP Working Group developed the methodology through a series of demonstration and case studies. Over the past few years various national and international groups have applied the methodology to nuclear energy system designs as well as to developing approaches to advanced safeguards.

  10. Session #1: Cutting Edge Methodologies--Beyond Current DFT

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Session 1: Cutting Edge Methodologies (beyond Current DFT) Moderator: Shengbai Zhang (RPI REL) Topics to be addressed: Benchmarking state-of-the-art approaches, accurate energy ...

  11. National Academies Criticality Methodology and Assessment Video (Text Version)

    Office of Energy Efficiency and Renewable Energy (EERE)

    This is a text version of the "National Academies Criticality Methodology and Assessment" video presented at the Critical Materials Workshop, held on April 3, 2012 in Arlington, Virginia.

  12. The Development and Application of NMR Methodologies for the...

    Office of Scientific and Technical Information (OSTI)

    Development and Application of NMR Methodologies for the Study of Degradation in Complex Silicones Citation Details In-Document Search Title: The Development and Application of NMR...

  13. Proliferation Resistance and Physical Protection Evaluation Methodology Development and Applications

    SciTech Connect (OSTI)

    Bari, R.A.; Peterson, P.; Therios, I.; Whitlock, J.

    2009-07-08

    An overview of the technical progress and accomplishments on the evaluation methodology for proliferation resistance and physical protection of Generation IV nuclear energy systems.

  14. Security Risk Assessment Methodologies (RAM) for Critical Infrastructu...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Building Energy Efficiency Find More Like This Return to Search Security Risk Assessment Methodologies (RAM) for Critical Infrastructures Sandia National Laboratories...

  15. Towards Developing a Calibrated EGS Exploration Methodology Using...

    Open Energy Info (EERE)

    Towards Developing a Calibrated EGS Exploration Methodology Using the Dixie Valley Geothermal System, Nevada Jump to: navigation, search OpenEI Reference LibraryAdd to library...

  16. Egs Exploration Methodology Project Using the Dixie Valley Geothermal...

    Open Energy Info (EERE)

    Egs Exploration Methodology Project Using the Dixie Valley Geothermal System, Nevada, Status Update Jump to: navigation, search OpenEI Reference LibraryAdd to library Conference...

  17. UNFCCC-Consolidated baseline and monitoring methodology for landfill...

    Open Energy Info (EERE)

    Consolidated baseline and monitoring methodology for landfill gas project activities Jump to: navigation, search Tool Summary LAUNCH TOOL Name: UNFCCC-Consolidated baseline and...

  18. Energy Efficiency Standards for Refrigerators in Brazil: A Methodology...

    Open Energy Info (EERE)

    Standards for Refrigerators in Brazil: A Methodology for Impact Evaluation Jump to: navigation, search Tool Summary LAUNCH TOOL Name: Energy Efficiency Standards for Refrigerators...

  19. A Review of Geothermal Resource Estimation Methodology | Open...

    Open Energy Info (EERE)

    Geothermal Resource Estimation Methodology Jump to: navigation, search OpenEI Reference LibraryAdd to library Conference Paper: A Review of Geothermal Resource Estimation...

  20. Methodology for Carbon Accounting of Grouped Mosaic and Landscape...

    Open Energy Info (EERE)

    REDD Projects Jump to: navigation, search Tool Summary LAUNCH TOOL Name: Methodology for Carbon Accounting of Grouped Mosaic and Landscape-scale REDD Projects Agency...

  1. Survey of Transmission Cost Allocation Methodologies for Regional Transmission Organizations

    SciTech Connect (OSTI)

    Fink, S.; Porter, K.; Mudd, C.; Rogers, J.

    2011-02-01

    The report presents transmission cost allocation methodologies for reliability transmission projects, generation interconnection, and economic transmission projects for all Regional Transmission Organizations.

  2. Criticality Model

    SciTech Connect (OSTI)

    A. Alsaed

    2004-09-14

    The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations, to calculate lower-bound tolerance limit (LBTL) values, and to determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method.
The criticality
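The lower-bound tolerance limit mentioned in the abstract is, in its standard one-sided normal form, the benchmark mean minus a tolerance factor times the sample standard deviation. A minimal sketch under that standard form; the function name, the sample data, and the fixed tolerance factor are illustrative assumptions, not the report's implementation:

```python
import statistics

def lower_bound_tolerance_limit(keff_values, k_factor):
    """One-sided lower tolerance limit on benchmark k_eff results:
    LBTL = mean - k * s.  The tolerance factor k depends on sample size,
    confidence, and coverage (e.g. ~2.91 for n=10 at 95%/95%) and would
    normally be taken from a statistical table."""
    mean = statistics.mean(keff_values)
    s = statistics.stdev(keff_values)  # sample standard deviation
    return mean - k_factor * s

# Benchmark critical experiments cluster near k_eff = 1.0.
benchmarks = [0.998, 1.001, 0.999, 1.002, 1.000]
lbtl = lower_bound_tolerance_limit(benchmarks, k_factor=3.0)
```

A configuration whose calculated k_eff (including bias and uncertainty) stays below the LBTL would be judged subcritical within the range of applicability of the benchmarks.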

  3. Methodology to design a municipal solid waste pre-collection system. A case study

    SciTech Connect (OSTI)

    Gallardo, A.; Carlos, M.; Peris, M.; Colomer, F.J.

    2015-02-15

    Highlights: • MSW recovery starts at homes; therefore it is important to facilitate it for people. • Additionally, to optimize MSW collection, pre-collection must be planned in advance. • A methodology to organize pre-collection considering several factors is presented. • The methodology has been verified by applying it to a mid-sized Spanish town. - Abstract: Municipal solid waste (MSW) management is an important task that local governments as well as private companies must take into account to protect human health and the environment and to preserve natural resources. To design an adequate MSW management plan, the first step consists of defining the waste generation and composition patterns of the town. As these patterns depend on several socio-economic factors, it is advisable to characterize them beforehand. Moreover, the waste generation and composition patterns may vary across the town and over time. Generally, the data are not homogeneous across the city, as neither the number of inhabitants nor the economic activity is constant. Therefore, if all the information is shown in thematic maps, the final waste management decisions can be made more efficiently. The main aim of this paper is to present a structured methodology that allows local authorities or private companies who deal with MSW to design their own MSW management plan depending on the available data. According to these data, this paper proposes two ways of action: a direct way when detailed data are available and an indirect way when there is a lack of data and it is necessary to rely on bibliographic data. In any case, the amount of information needed is considerable. This paper combines the planning methodology with Geographic Information Systems to present the final results in thematic maps that make them easier to interpret. The proposed methodology is a useful preliminary tool for organizing MSW collection routes, including selective collection. To verify the methodology it has

  4. Methodology for Identification of the Coolant Thermalhydraulic Regimes in the Core of Nuclear Reactors

    SciTech Connect (OSTI)

    Sharaevsky, L.G.; Sharaevskaya, E.I.; Domashev, E.D.; Arkhypov, A.P.; Kolochko, V.N.

    2002-07-01

    The paper deals with a problem of acute importance for nuclear energy: recognition of accident regimes at NPPs using noise-signal diagnostics. The methodology transforms the random noise signals of the main technological parameters at the exit of a nuclear facility (neutron flux, dynamic pressure, etc.), which contain important information about the technical status of the equipment. Effective algorithms for the identification of random processes were developed. After proper transformation, the signals are treated as multidimensional random vectors. Automatic classification of these vectors in the developed algorithms is realized on the basis of probability functions, in particular a Bayes classifier and decision functions. Until now there have been no mathematical models for recognizing the thermalhydraulic regimes of fuel assemblies from acoustic and neutron noise parameters in the core of nuclear facilities. Two mathematical models for the analysis of the random processes subjected to automatic classification are proposed: statistical (using a Bayes classifier on the acoustic spectral density of diagnostic signals) and geometrical (based on the formation of a dividing hyperplane in the feature space). The theoretical basis for identifying bubble-boiling regimes in the fuel assemblies is formulated as identification of these regimes from the random parameters of the auto-spectral density (ASD) of acoustic noise measured in the fuel assemblies (dynamic pressure in the upper plenum in this paper). The elaborated algorithms allow the realistic status of the fuel assemblies to be recognized. To verify the proposed mathematical models, an analysis of experimental measurements was carried out. Studies of boiling onset and determination of the local flow parameters in a seven-rod fuel assembly (length 1.3 m, diameter 6 mm) showed correct identification of the bubble-boiling regimes. The experimental measurements on
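The Bayes-classifier branch described in this record can be sketched as a minimal Gaussian classifier over spectral-density feature vectors. The class labels, feature layout, and per-feature independence assumption below are illustrative, not taken from the paper:

```python
import math

def fit_class(feature_vectors):
    """Per-feature mean and variance for one flow-regime class."""
    n = len(feature_vectors)
    dims = len(feature_vectors[0])
    means = [sum(v[d] for v in feature_vectors) / n for d in range(dims)]
    variances = [sum((v[d] - means[d]) ** 2 for v in feature_vectors) / n
                 for d in range(dims)]
    return means, variances

def log_likelihood(x, means, variances):
    """Gaussian log-likelihood, features treated as independent."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (xi - m) ** 2 / (2 * var)
               for xi, m, var in zip(x, means, variances))

def classify(x, class_models, priors):
    """Bayes decision rule: pick the class maximizing log prior + log likelihood."""
    scores = {label: math.log(priors[label]) + log_likelihood(x, *model)
              for label, model in class_models.items()}
    return max(scores, key=scores.get)

# Toy spectral-density features: (low-band power, high-band power).
single_phase = [(0.9, 1.1), (1.1, 0.9), (1.0, 1.0)]
bubble_boiling = [(4.9, 5.1), (5.1, 4.9), (5.0, 5.0)]
models = {"single-phase": fit_class(single_phase),
          "bubble boiling": fit_class(bubble_boiling)}
priors = {"single-phase": 0.5, "bubble boiling": 0.5}
```

A new ASD feature vector is then assigned to the regime whose posterior score is largest, which is the decision rule the statistical model in the paper relies on.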

  5. Methodology for Scaling Fusion Power Plant Availability

    SciTech Connect (OSTI)

    Lester M. Waganer

    2011-01-04

    Normally in U.S. fusion power plant conceptual design studies, the development of the plant availability and the plant capital and operating costs makes the implicit assumption that the plant is a 10th-of-a-kind fusion power plant. This is in keeping with the DOE guidelines published in the 1970s in the PNL report, "Fusion Reactor Design Studies - Standard Accounts for Cost Estimates." This assumption specifically defines the level of industry and technology maturity and eliminates the need to define the research and development efforts and costs necessary to construct a one-of-a-kind or first-of-a-kind power plant. It also assumes all the "teething" problems have been solved and the plant can operate in the manner intended. The plant availability analysis assumes all maintenance actions have been refined and optimized by the operation of the prior nine or so plants. The actions are defined to be as quick and efficient as possible. This study presents a methodology to enable estimation of the availability of the one-of-a-kind (one OAK) plant or first-of-a-kind (1st OAK) plant. To clarify, one of the OAK facilities might be the pilot plant or the demo plant that is prototypical of the next-generation power plant, but it is not a full-scale fusion power plant with all fully validated "mature" subsystems. The first OAK facility is truly the first commercial plant of a common design that represents the next-generation plant design. However, its subsystems, maintenance equipment, and procedures will continue to be refined to achieve the goals for the 10th OAK power plant.

  6. Methodology for Preliminary Design of Electrical Microgrids

    SciTech Connect (OSTI)

    Jensen, Richard P.; Stamp, Jason E.; Eddy, John P.; Henry, Jordan M; Munoz-Ramos, Karina; Abdallah, Tarek

    2015-09-30

    Many critical loads rely on simple backup generation to provide electricity in the event of a power outage. An Energy Surety Microgrid TM can protect against outages caused by single generator failures to improve reliability. An ESM will also provide a host of other benefits, including integration of renewable energy, fuel optimization, and maximizing the value of energy storage. The ESM concept includes a categorization for microgrid value propositions, and quantifies how the investment can be justified during either grid-connected or utility outage conditions. In contrast with many approaches, the ESM approach explicitly sets requirements based on unlikely extreme conditions, including the need to protect against determined cyber adversaries. During the United States (US) Department of Defense (DOD)/Department of Energy (DOE) Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) effort, the ESM methodology was successfully used to develop the preliminary designs, which directly supported the contracting, construction, and testing for three military bases. Acknowledgements Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: * Mike Hightower, who has been the key driving force for Energy Surety Microgrids * Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations * Merrill Smith, U.S. Department of Energy SPIDERS Program Manager * Ross Roley and Rich Trundy from U.S. Pacific Command * Bill Waugaman and Bill Beary from U.S. Northern Command * Melanie Johnson and Harold Sanborn of the U.S. Army Corps of Engineers Construction Engineering Research Laboratory * Experts from the National Renewable Energy Laboratory, Idaho National Laboratory, Oak Ridge National Laboratory, and Pacific Northwest National Laboratory

  7. The National Energy Modeling System: An overview

    SciTech Connect (OSTI)

    Not Available

    1994-05-01

    The National Energy Modeling System (NEMS) is a computer-based, energy-economy modeling system of US energy markets for the midterm period of 1990 to 2010. NEMS projects the production, imports, conversion, consumption, and prices of energy, subject to assumptions on macroeconomic and financial factors, world energy markets, resource availability and costs, behavioral and technological choice criteria, cost and performance characteristics of energy technologies, and demographics. This report presents an overview of the structure and methodology of NEMS and each of its components. The first chapter provides a description of the design and objectives of the system. The second chapter describes the modeling structure. The remainder of the report summarizes the methodology and scope of the component modules of NEMS. The model descriptions are intended for readers familiar with terminology from economics, operations research, and energy modeling. Additional background on the development of the system is provided in Appendix A of this report, which describes the EIA modeling systems that preceded NEMS. More detailed model documentation reports for all the NEMS modules are also available from EIA.

  8. Technical progress report for application of numerical simulation methodology to automotive combustion

    SciTech Connect (OSTI)

    1980-12-01

    The second quarterly technical progress report is presented for a program entitled, Application of Numerical Simulation Methodology to Automotive Combustion. The goal of the program is to develop, validate, demonstrate and apply a numerical simulation methodology for in-cylinder reactive flows in internal combustion engines. Previous work on this contract involved the initial development and validation of a finite difference based simulation model for time dependent axisymmetric flows which includes: a generalized coordinate system for arbitrary mesh design and treatment of complex and time dependent boundaries; multiple and interacting chemical species; coupled swirl flow velocity component; and two-equation turbulence closure. In its various stages of development, the model has been used to simulate numerous engine-related problems for validation and demonstration purposes. The technical effort during the current reporting period has concentrated on: reactive flow model development, test and data comparison studies; swirl flow simulations; and in-cylinder compression cycle flow simulations. Results of these studies are discussed.

  9. Tularosa Basin Play Fairway Analysis: Methodology Flow Charts

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Adam Brandt

    2015-11-15

    These images show the comprehensive methodology used for creation of a Play Fairway Analysis to explore the geothermal resource potential of the Tularosa Basin, New Mexico. The deterministic methodology was originated by the petroleum industry, but was custom-modified to function as a knowledge-based geothermal exploration tool. The stochastic PFA flow chart uses weights of evidence, and is data-driven.

  10. Model documentation report: Commercial Sector Demand Module of the National Energy Modeling System

    SciTech Connect (OSTI)

    1998-01-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components. The NEMS Commercial Sector Demand Module is a simulation tool based upon economic and engineering relationships that models commercial sector energy demands at the nine Census Division level of detail for eleven distinct categories of commercial buildings. Commercial equipment selections are performed for the major fuels of electricity, natural gas, and distillate fuel, for the major services of space heating, space cooling, water heating, ventilation, cooking, refrigeration, and lighting. The algorithm also models demand for the minor fuels of residual oil, liquefied petroleum gas, steam coal, motor gasoline, and kerosene, the renewable fuel sources of wood and municipal solid waste, and the minor services of office equipment. Section 2 of this report discusses the purpose of the model, detailing its objectives, primary input and output quantities, and the relationship of the Commercial Module to the other modules of the NEMS system. Section 3 of the report describes the rationale behind the model design, providing insights into further assumptions utilized in the model development process to this point. Section 3 also reviews alternative commercial sector modeling methodologies drawn from existing literature, providing a comparison to the chosen approach. Section 4 details the model structure, using graphics and text to illustrate model flows and key computations.

  11. A Methodology for Calculating Radiation Signatures

    SciTech Connect (OSTI)

    Klasky, Marc Louis; Wilcox, Trevor; Bathke, Charles G.; James, Michael R.

    2015-05-01

    A rigorous formalism is presented for calculating radiation signatures from both Special Nuclear Material (SNM) and radiological sources. The use of MCNP6 in conjunction with CINDER/ORIGEN is described to allow for the determination of both neutron and photon leakages from objects of interest. In addition, a description of the use of MCNP6 to properly model the background neutron and photon sources is also presented. The physics issues encountered in the modeling are examined to guide the user in discerning the relevant physics to incorporate into general radiation signature calculations. Furthermore, examples are provided to assist in delineating the pertinent physics that must be accounted for. Finally, examples of detector modeling utilizing MCNP are provided, along with a discussion of the generation of Receiver Operating Curves, the suggested means of determining the detectability of radiation signatures emanating from objects.
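A Receiver Operating Curve of the kind this record suggests can be generated by sweeping a detection threshold across detector scores from signal-present and background-only trials. A minimal sketch; the toy score values are illustrative and the report's MCNP-based detector models are not reproduced here:

```python
def roc_curve(signal_scores, background_scores):
    """Sweep a detection threshold over all observed scores; return a
    list of (false_positive_rate, true_positive_rate) points."""
    thresholds = sorted(set(signal_scores) | set(background_scores),
                        reverse=True)
    points = [(0.0, 0.0)]  # threshold above every score: nothing alarms
    for t in thresholds:
        tpr = sum(s >= t for s in signal_scores) / len(signal_scores)
        fpr = sum(b >= t for b in background_scores) / len(background_scores)
        points.append((fpr, tpr))
    return points

# Toy detector counts: source-present trials vs. background-only trials.
points = roc_curve(signal_scores=[5, 6, 7, 8],
                   background_scores=[1, 2, 3, 4])
```

With perfectly separated scores, as in this toy data, the curve passes through the ideal corner (false-positive rate 0, true-positive rate 1); overlapping score distributions pull the curve toward the diagonal.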

  12. Effect of Network Structure on Characterization and Flow Modeling Using X-ray Micro-Tomography Images of Granular and Fibrous Porous Media

    SciTech Connect (OSTI)

    Bhattad, Pradeep; Willson, Clinton S.; Thompson, Karsten E.

    2012-07-31

    Image-based network modeling has become a powerful tool for modeling transport in real materials that have been imaged using X-ray computed micro-tomography (XCT) or other three-dimensional imaging techniques. Network generation is an essential part of image-based network modeling, but little quantitative work has been done to understand the influence of different network structures on modeling. We use XCT images of three different porous materials (disordered packings of spheres, sand, and cylinders) to create a series of four networks for each material. Despite originating from the same data, the networks can be made to vary over two orders of magnitude in pore density, which in turn affects network properties such as pore-size distribution and pore connectivity. Despite the orders-of-magnitude difference in pore density, single-phase permeability predictions remain remarkably consistent for a given material, even for the simplest throat conductance formulas. Detailed explanations for this beneficial attribute are given in the article; in general, it is a consequence of using physically representative network models. The capillary pressure curve generated from quasi-static drainage is more sensitive to network structure than permeability. However, using the capillary pressure curve to extract pore-size distributions gives reasonably consistent results even though the networks vary significantly. These results provide encouraging evidence that robust network modeling algorithms are not overly sensitive to the specific structure of the underlying physically representative network, which is important given the variety of image-based network-generation strategies that have been developed in recent years.

  13. Risk assessment methodology applied to counter IED research & development portfolio prioritization

    SciTech Connect (OSTI)

    Shevitz, Daniel W.; O'Brien, David A.; Zerkle, David K.; Key, Brian P.; Chavez, Gregory M.

    2009-01-01

    In an effort to protect the United States from the ever increasing threat of domestic terrorism, the Department of Homeland Security, Science and Technology Directorate (DHS S&T), has significantly increased research activities to counter the terrorist use of explosives. Moreover, DHS S&T has established a robust Counter-Improvised Explosive Device (C-IED) Program to Deter, Predict, Detect, Defeat, and Mitigate this imminent threat to the Homeland. The DHS S&T portfolio is complicated and changing. In order to provide the ''best answer'' for the available resources, DHS S&T would like some ''risk based'' process for making funding decisions. There is a definite need for a methodology to compare very different types of technologies on a common basis. A methodology was developed that allows users to evaluate a new ''quad chart'' and rank it, compared to all other quad charts across S&T divisions. It couples a logic model with an evidential reasoning model using an Excel spreadsheet containing weights of the subjective merits of different technologies. The methodology produces an Excel spreadsheet containing the aggregate rankings of the different technologies. It uses Extensible Logic Modeling (ELM) for logic models combined with LANL software called INFTree for evidential reasoning.
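The aggregation step described in this record, subjective merit scores combined with criterion weights to rank quad charts, can be sketched as a simple weighted sum. The criterion names, scores, and linear aggregation below are illustrative assumptions; the ELM/INFTree evidential-reasoning details are not reproduced here:

```python
def rank_technologies(quad_charts, weights):
    """Rank candidate technologies by a weighted sum of merit scores.

    quad_charts: {tech name: {criterion: score}}
    weights:     {criterion: weight}, one weight per criterion
    Returns tech names ordered best-first."""
    totals = {name: sum(weights[c] * scores[c] for c in weights)
              for name, scores in quad_charts.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical quad charts scored on three criteria.
charts = {"standoff detector": {"detection": 9, "maturity": 3, "cost": 4},
          "trace sampler": {"detection": 6, "maturity": 8, "cost": 7}}
weights = {"detection": 0.5, "maturity": 0.3, "cost": 0.2}
ranking = rank_technologies(charts, weights)
```

The weights play the role of the spreadsheet's subjective-merit weights: changing them reorders the portfolio, which is exactly why a common weighting scheme is needed to compare dissimilar technologies.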

  14. Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium

    SciTech Connect (OSTI)

    M Weimar

    1998-12-10

    This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for and estimate of the level of savings that can be obtained from a fixed-price contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-price contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides the summary of the approach for the DOE-Richland Operations Office (RL) estimate of the M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal that indicates that the current approach is still better than the alternative.

  15. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling & Analysis, News, News & Events, Photovoltaic, Renewable Energy, Research & Capabilities, Solar, Solar Newsletter, SunShot, Systems Analysis Sandia Develops Stochastic ...

  16. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Monte Carlo modeling it was found that for noisy signals with a significant background component, accuracy is improved by fitting the total emission data which includes the...

  17. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Solar Sandia Labs Releases New Version of PVLib Toolbox Sandia has released version 1.3 of PVLib, its widely used Matlab toolbox for modeling photovoltaic (PV) power ...

  18. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Sandia Will Host PV Bankability Workshop at Solar Power International (SPI) 2013 Computational Modeling & Simulation, Distribution Grid Integration, Energy, Facilities, Grid ...

  19. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Science and Actuarial Practice" Read More Permalink New Project Is the ACME of Computer Science to Address Climate Change Analysis, Climate, Global Climate & Energy, Modeling, ...

  20. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Though adequate for modeling mean transport, this approach does not address ... Microphysics such as diffusive transport and chemical kinetics are represented by ...

  1. Soils Activity Mobility Study: Methodology and Application

    SciTech Connect (OSTI)

    Silvas, Alissa; Yucel, Vefa

    2014-09-29

    This report presents a three-level approach for estimation of sediment transport to provide an assessment of potential erosion risk for sites at the Nevada National Security Site (NNSS) that are posted for radiological purposes and where migration is suspected or known to occur due to storm runoff. Based on the assessed risk, the appropriate level of effort can be determined for analysis of radiological surveys, field experiments to quantify erosion and transport rates, and long-term monitoring. The method is demonstrated at contaminated sites, including Plutonium Valley, Shasta, Smoky, and T-1. The Pacific Southwest Interagency Committee (PSIAC) procedure is selected as the Level 1 analysis tool. The PSIAC method provides an estimation of the total annual sediment yield based on factors derived from the climatic and physical characteristics of a watershed. If the results indicate low risk, then further analysis is not warranted. If the Level 1 analysis indicates high risk or is deemed uncertain, a Level 2 analysis using the Modified Universal Soil Loss Equation (MUSLE) is proposed. In addition, if a sediment yield for a storm event rather than an annual sediment yield is needed, then the proposed Level 2 analysis should be performed. MUSLE only provides sheet and rill erosion estimates. The U.S. Army Corps of Engineers Hydrologic Engineering Center-Hydrologic Modeling System (HEC-HMS) provides storm peak runoff rate and storm volumes, the inputs necessary for MUSLE. Channel Sediment Transport (CHAN-SED) I and II models are proposed for estimating sediment deposition or erosion in a channel reach from a storm event. These models require storm hydrograph associated sediment concentration and bed load particle size distribution data. 
When the Level 2 analysis indicates high risk for sediment yield and associated contaminant migration or when there is high uncertainty in the Level 2 results, the sites can be further evaluated with a Level 3 analysis using more complex
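The Level 2 tool named in this record, MUSLE, is commonly written in the metric form of Williams (1975): Y = 11.8 (Q · qp)^0.56 · K · LS · C · P, where Q is event runoff volume (m³), qp is peak flow rate (m³/s), and K, LS, C, and P are the usual USLE soil-erodibility, slope, cover, and practice factors. A sketch under that standard form, not the report's site-specific calibration; the sample input values are illustrative:

```python
def musle_sediment_yield(runoff_m3, peak_flow_m3s, K, LS, C, P):
    """MUSLE, metric form (Williams, 1975):
    Y = 11.8 * (Q * q_p)**0.56 * K * LS * C * P
    Y in tonnes per storm event; Q in m3; q_p in m3/s."""
    return 11.8 * (runoff_m3 * peak_flow_m3s) ** 0.56 * K * LS * C * P

# Hypothetical storm event, with peak flow and runoff volume as
# supplied by a rainfall-runoff model such as HEC-HMS.
y1 = musle_sediment_yield(1000.0, 5.0, K=0.3, LS=1.2, C=0.1, P=1.0)
# Doubling the runoff volume scales the yield by 2**0.56.
y2 = musle_sediment_yield(2000.0, 5.0, K=0.3, LS=1.2, C=0.1, P=1.0)
```

Because the runoff energy term (Q · qp)^0.56 replaces the rainfall-erosivity factor of the original USLE, MUSLE gives per-event rather than annual yields, which matches the report's use of HEC-HMS storm hydrographs as input.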

  2. Risk Estimation Methodology for Launch Accidents.

    SciTech Connect (OSTI)

    Clayton, Daniel James; Lipinski, Ronald J.; Bechtel, Ryan D.

    2014-02-01

    As compact and lightweight power sources with reliable, long lives, Radioisotope Power Systems (RPSs) have made space missions to explore the solar system possible. Because hazardous material can be released during a launch accident, the potential health risk of an accident must be quantified so that appropriate launch approval decisions can be made. One part of the risk estimation involves modeling the response of the RPS to potential accident environments. Because modeling the full RPS response deterministically over all dynamic variables is impractical, the evaluation is performed stochastically with a Monte Carlo simulation. The potential consequences can be determined by modeling the transport of the hazardous material in the environment and through human biological pathways. The consequence analysis results are summed and weighted by appropriate likelihood values to give a collection of probabilistic results for estimating the potential health risk. This information is used to guide RPS designs, spacecraft designs, mission architecture, or launch procedures to potentially reduce the risk, as well as to inform decision makers of the potential health risks of using RPSs for space missions.
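    The likelihood-weighted summation of Monte Carlo consequences described above can be sketched as follows (every environment name, probability, and release figure here is a hypothetical placeholder, not a value from the report):

    ```python
    import random

    random.seed(0)

    # Hypothetical accident environments with assumed likelihoods and assumed
    # mean released source fractions (illustrative only).
    ENVIRONMENTS = [
        {"name": "blast",    "likelihood": 0.02, "release_mean": 1e-4},
        {"name": "fragment", "likelihood": 0.01, "release_mean": 5e-4},
        {"name": "fire",     "likelihood": 0.03, "release_mean": 2e-4},
    ]

    def sample_release(mean, trials=10_000):
        """Monte Carlo samples of released source fraction for one environment.

        An exponential spread around the assumed mean stands in for the
        variability of the RPS response to the accident environment."""
        return [random.expovariate(1.0 / mean) for _ in range(trials)]

    def expected_risk(dose_per_release=50.0):
        """Sum mean consequences weighted by environment likelihoods."""
        total = 0.0
        for env in ENVIRONMENTS:
            releases = sample_release(env["release_mean"])
            mean_consequence = dose_per_release * sum(releases) / len(releases)
            total += env["likelihood"] * mean_consequence
        return total

    print(f"likelihood-weighted risk estimate: {expected_risk():.3e}")
    ```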

  3. Integrating rock mechanics issues with repository design through design process principles and methodology

    SciTech Connect (OSTI)

    Bieniawski, Z.T.

    1996-04-01

    A good designer needs not only knowledge for designing (technical know-how used to generate alternative design solutions) but also knowledge about designing (appropriate principles and a systematic methodology to follow). Concepts such as "design for manufacture" or "concurrent engineering" are widely used in industry. In the field of rock engineering, only limited attention has been paid to the design process, because the design of structures in rock masses presents unique challenges to the designer as a result of the uncertainties inherent in characterizing geologic media. However, a stage has now been reached where we are able to sufficiently characterize rock masses for engineering purposes and identify the rock mechanics issues involved, but we still lack the engineering design principles and methodology needed to maximize design performance. This paper discusses the principles and methodology of the engineering design process, directed at integrating site characterization activities with the design, construction, and performance of an underground repository. Using the latest information from the Yucca Mountain Project on geology, rock mechanics, and starter tunnel design, the current lack of integration is pointed out, and it is shown how rock mechanics issues can be effectively interwoven with repository design through a systematic design process methodology leading to improved repository performance. In essence, the design process is seen as the use of design principles within an integrating design methodology, leading to innovative problem solving. In particular, a new concept of "Design for Constructibility and Performance" is introduced. This is discussed with respect to ten rock mechanics issues identified for repository design and performance.

  4. Hydrogen Program Goal-Setting Methodologies Report to Congress

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Hydrogen Program Goal-Setting Methodologies Report to Congress (ESECS EE-4015), 8/7/2006. Introduction: This report addresses section 1819 of Public Law 109-58, also referred to as the Energy Policy Act of 2005. Section 1819 states: "Not later than 1 year after the date of enactment of this Act, the Secretary shall submit to Congress a report evaluating

  5. Development of an Automated Security Risk Assessment Methodology Tool for Critical Infrastructures.

    SciTech Connect (OSTI)

    Jaeger, Calvin D.; Roehrig, Nathaniel S.; Torres, Teresa M.

    2008-12-01

    This document presents the security automated Risk Assessment Methodology (RAM) prototype tool developed by Sandia National Laboratories (SNL). This work leverages SNL's capabilities and skills in security risk analysis and the development of vulnerability assessment/risk assessment methodologies to develop an automated prototype security RAM tool for critical infrastructures (RAM-CITM). The prototype automated RAM tool provides a user-friendly, systematic, and comprehensive risk-based tool to assist CI sector and security professionals in assessing and managing security risk from malevolent threats. The current tool is structured on the basic RAM framework developed by SNL. It is envisioned that this prototype tool will be adapted to meet the requirements of different CI sectors and thereby provide additional capabilities.

  6. Methodology to design a municipal solid waste generation and composition map: A case study

    SciTech Connect (OSTI)

    Gallardo, A.; Carlos, M.; Peris, M.; Colomer, F.J.

    2014-11-15

    Highlights: • To draw a waste generation and composition map of a town, many factors must be taken into account. • The proposed methodology offers two different approaches depending on the available data, combined with geographical information systems. • The methodology has been applied to a Spanish city with success. • The methodology will be a useful tool to organize municipal solid waste management. - Abstract: Municipal solid waste (MSW) management is an important task that local governments as well as private companies must take into account to protect human health and the environment and to preserve natural resources. To design an adequate MSW management plan, the first step consists in defining the waste generation and composition patterns of the town. As these patterns depend on several socio-economic factors, it is advisable to organize them beforehand. Moreover, the waste generation and composition patterns may vary around the town and over time. Generally, the data are not homogeneous across the city, as neither the number of inhabitants nor the economic activity is constant. Therefore, if all the information is shown in thematic maps, the final waste management decisions can be made more efficiently. The main aim of this paper is to present a structured methodology that allows local authorities or private companies who deal with MSW to design their own MSW management plan depending on the available data. According to these data, this paper proposes two ways of action: a direct way when detailed data are available and an indirect way when there is a lack of data and it is necessary to rely on bibliographic data. In any case, the amount of information needed is considerable. This paper combines the planning methodology with Geographic Information Systems to present the final results in thematic maps that make them easier to interpret. The proposed methodology is a useful preliminary tool to organize the MSW collection routes including the
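    A minimal sketch of the "direct way" the abstract describes: computing per-district generation totals from detailed local data, which would then be joined onto GIS district polygons as thematic-map attributes (the district names, populations, and per-capita rates below are invented for illustration):

    ```python
    # Illustrative per-district MSW generation totals from population counts and
    # measured per-capita rates (all figures hypothetical). In a real workflow
    # the resulting totals would be joined to district polygons in a GIS to
    # produce the thematic map.
    districts = {
        # name: (inhabitants, kg per inhabitant per day)
        "centre":      (12_000, 1.35),
        "residential": (30_000, 1.10),
        "industrial":  (4_000,  2.40),
    }

    def daily_generation(data):
        """Return total MSW generation (kg/day) for each district."""
        return {name: pop * rate for name, (pop, rate) in data.items()}

    totals = daily_generation(districts)
    for name, kg in sorted(totals.items()):
        print(f"{name}: {kg:,.0f} kg/day")
    ```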

  7. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    with application in modeling NDCX-II experiments Wangyi Liu 1 , John Barnard 2 , Alex Friedman 2 , Nathan Masters 2 , Aaron Fisher 2 , Alice Koniges 2 , David Eder 2 1 LBNL, USA, 2...

  8. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NASA Earth at Night Video EC, Energy, Energy Efficiency, Global, Modeling, News & Events, Solid-State Lighting, Videos NASA Earth at Night Video Have you ever wondered what the ...

  9. Biopower Report Presents Methodology for Assessing the Value...

    Energy Savers [EERE]

    Biomass in Pulverized Coal Plants Biopower Report Presents Methodology for Assessing the Value of Co-Firing Biomass in Pulverized Coal Plants November 20, 2014 - 12:22pm ...

  10. DOE 2009 Geothermal Risk Analysis: Methodology and Results (Presentation)

    SciTech Connect (OSTI)

    Young, K. R.; Augustine, C.; Anderson, A.

    2010-02-01

    This presentation summarizes the methodology and results for a probabilistic risk analysis of research, development, and demonstration work-primarily for enhanced geothermal systems (EGS)-sponsored by the U.S. Department of Energy Geothermal Technologies Program.

  11. Hanford Site baseline risk assessment methodology. Revision 2

    SciTech Connect (OSTI)

    Not Available

    1993-03-01

    This methodology has been developed to prepare human health and environmental evaluations of risk as part of the Comprehensive Environmental Response, Compensation, and Liability Act remedial investigations (RIs) and the Resource Conservation and Recovery Act facility investigations (FIs) performed at the Hanford Site pursuant to the Hanford Federal Facility Agreement and Consent Order referred to as the Tri-Party Agreement. Development of the methodology has been undertaken so that Hanford Site risk assessments are consistent with current regulations and guidance, while providing direction on flexible, ambiguous, or undefined aspects of the guidance. The methodology identifies Site-specific risk assessment considerations and integrates them with approaches for evaluating human and environmental risk that can be factored into the risk assessment program supporting the Hanford Site cleanup mission. Consequently, the methodology will enhance the preparation and review of individual risk assessments at the Hanford Site.

  12. Hydrogen Goal-Setting Methodologies Report to Congress

    Fuel Cell Technologies Publication and Product Library (EERE)

    DOE's Hydrogen Goal-Setting Methodologies Report to Congress summarizes the processes used to set Hydrogen Program goals and milestones. Published in August 2006, it fulfills the requirement under se

  13. Average System Cost Methodology : Administrator's Record of Decision.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    1984-06-01

    Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of procedures for separating subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)
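    The inclusion/exclusion rules listed above lend themselves to a simple illustration; the sketch below computes an ASC-style unit cost with construction work in progress and income taxes excluded (the account breakdown and all cost figures are hypothetical, not from the Record of Decision):

    ```python
    # Toy average-system-cost calculation following the inclusions/exclusions
    # listed above. Account names and dollar amounts are invented for
    # illustration only.
    def average_system_cost(costs, sales_mwh):
        """ASC ($/MWh) = included costs / total sales.

        Included: generation, transmission, weighted cost of debt securities.
        Excluded per the methodology: construction work in progress (cwip)
        and income taxes."""
        included = (costs["generation"]
                    + costs["transmission"]
                    + costs["debt_interest"])
        return included / sales_mwh

    costs = {"generation": 400e6, "transmission": 120e6, "debt_interest": 60e6,
             "cwip": 75e6, "income_taxes": 40e6}   # excluded entries unused
    print(f"ASC: {average_system_cost(costs, 20e6):.2f} $/MWh")
    ```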

  14. Adaptive LES Methodology for Turbulent Flow Simulations

    SciTech Connect (OSTI)

    Oleg V. Vasilyev

    2008-06-12

    have recently been completed at the Japanese Earth Simulator (Yokokawa et al. 2002, Kaneda et al. 2003) using a resolution of 4096{sup 3} (approximately 10{sup 11}) grid points with a Taylor-scale Reynolds number of 1217 (Re {approx} 10{sup 6}). Impressive as these calculations are, performed on one of the world's fastest supercomputers, more brute computational power would be needed to simulate the flow over the fuselage of a commercial aircraft at cruising speed. Such a calculation would require on the order of 10{sup 16} grid points and would have a Reynolds number in the range of 10{sup 8}. It would take several thousand years to simulate one minute of flight time on today's fastest supercomputers (Moin & Kim 1997). Even using state-of-the-art zonal approaches, which allow DNS calculations that resolve the necessary range of scales within predefined 'zones' in the flow domain, this calculation would take far too long for the result to be of engineering interest when it is finally obtained. Since computing power, memory, and time are all scarce resources, the problem of simulating turbulent flows has become one of how to abstract or simplify the complexity of the physics represented in the full Navier-Stokes (NS) equations in such a way that the 'important' physics of the problem is captured at a lower cost. To do this, a portion of the modes of the turbulent flow field needs to be approximated by a low-order model that is cheaper than the full NS calculation. This model can then be used along with a numerical simulation of the 'important' modes of the problem that cannot be well represented by the model. The decision of what part of the physics to model and what kind of model to use has to be based on what physical properties are considered 'important' for the problem. It should be noted that 'nothing is free', so any use of a low-order model will by definition lose some information about the original flow.
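    The grid-point estimates quoted above can be roughly reproduced with the standard DNS resolution scaling N ~ Re^(9/4) (the exponent is an assumption of this sketch; the abstract itself does not state it):

    ```python
    # Rough arithmetic check of the quoted grid-point estimates, assuming the
    # standard DNS scaling N ~ Re**(9/4).
    earth_sim_points = 4096 ** 3          # ~7e10, i.e. roughly 10**11
    re_earth_sim = 1e6                    # Re ~ 10**6 quoted for the Earth Simulator run
    re_aircraft = 1e8                     # Re ~ 10**8 quoted for a commercial aircraft

    scale = (re_aircraft / re_earth_sim) ** (9 / 4)   # ~3e4
    aircraft_points = earth_sim_points * scale        # ~2e15, order 10**16

    print(f"Earth Simulator grid points:        {earth_sim_points:.1e}")
    print(f"estimated aircraft DNS grid points: {aircraft_points:.1e}")
    ```

    The result lands within a factor of a few of the 10{sup 16} figure quoted in the abstract, which is as close as an order-of-magnitude scaling argument can be expected to get.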

  15. A preliminary investigation of the structure of southern Yucca Flat, Massachusetts Mountain, and CP basin, Nevada Test Site, Nevada, based on geophysical modeling.

    SciTech Connect (OSTI)

    Geoffrey A. Phelps; Leigh Justet; Barry C. Moring, and Carter W. Roberts

    2006-03-17

    New gravity and magnetic data collected in the vicinity of Massachusetts Mountain and CP basin (Nevada Test Site, NV) provide a more complex view of the structural relationships in the vicinity of CP basin than previous geologic models, help define the position and extent of structures in southern Yucca Flat and CP basin, and better constrain the configuration of the basement structure separating CP basin and Frenchman Flat. The density and gravity modeling indicates that CP basin is a shallow, oval-shaped basin that trends north-northeast and contains ~800 m of basin-filling rocks and sediment at its deepest point in the northeast. CP basin is separated from the deeper Frenchman Flat basin by a subsurface ridge that may represent a Tertiary erosion surface at the top of the Paleozoic strata. The magnetic modeling indicates that the Cane Spring fault appears to merge with faults in northwest Massachusetts Mountain, rather than cut through to Yucca Flat basin, and that the basin is down-dropped relative to Massachusetts Mountain. The magnetic modeling indicates that volcanic units within Yucca Flat basin are down-dropped on the west, supporting the interpretations of Phelps and McKee (1999). The magnetic data indicate that the only faults that appear to be through-going from Yucca Flat into either Frenchman Flat or CP basin are the faults that bound the CP hogback. In general, the north-trending faults present along the length of Yucca Flat bend, merge, and disappear before reaching CP hogback and Massachusetts Mountain or French Peak.

  16. New Methodologies for Analysis of Premixed Charge Compression Ignition

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Engines | Department of Energy New Methodologies for Analysis of Premixed Charge Compression Ignition Engines New Methodologies for Analysis of Premixed Charge Compression Ignition Engines Presentation given at the 2007 Diesel Engine-Efficiency & Emissions Research Conference (DEER 2007). 13-16 August, 2007, Detroit, Michigan. Sponsored by the U.S. Department of Energy's (DOE) Office of FreedomCAR and Vehicle Technologies (OFCVT). deer07_aceves.pdf (1012.81 KB) More Documents &

  17. Synthesizing Membrane Proteins Using In Vitro Methodology | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    National Laboratory Membrane Proteins Using In Vitro Methodology Technology available for licensing: in vitro, cell-free expression system that caters to the production of protein types that are challenging to study: membrane proteins, membrane-associated proteins, and soluble proteins that require complex redox cofactors. A cell-free, in vitro protein synthesis method for targeting difficult-to-study proteins Quicker and easier than conventional methodologies, this system does not require

  18. Technical report on LWR design decision methodology. Phase I

    SciTech Connect (OSTI)

    None

    1980-03-01

    Energy Incorporated (EI) was selected by Sandia Laboratories to develop and test an LWR design decision methodology. Contract Number 42-4229 provided funding for Phase I of this work. This technical report on LWR design decision methodology documents the activities performed under that contract. Phase I was a short-term effort to thoroughly review the current LWR design decision process, to assure complete understanding of current practices and to establish a well-defined interface for development of initial quantitative design guidelines.

  19. Federal and State Structures to Support Financing Utility-Scale Solar Projects and the Business Models Designed to Utilize Them

    SciTech Connect (OSTI)

    Mendelsohn, M.; Kreycik, C.

    2012-04-01

    Utility-scale solar projects have grown rapidly in number and size over the last few years, driven in part by strong renewable portfolio standards (RPS) and federal incentives designed to stimulate investment in renewable energy technologies. This report provides an overview of such policies, as well as the project financial structures they enable, based on industry literature, publicly available data, and questionnaires conducted by the National Renewable Energy Laboratory (NREL).

  20. Application Of The Iberdrola Licensing Methodology To The Cofrentes BWR-6 110% Extended Power Up-rate

    SciTech Connect (OSTI)

    Mata, Pedro; Fuente, Rafael de la; Iglesias, Javier; Sedano, Pablo G.

    2002-07-01

    Iberdrola (Spanish utility) and Iberdrola Ingenieria (its engineering branch) have spent the last two years developing the 110% Extended Power Up-rate Project (EPU 110%) for Cofrentes BWR-6. Iberdrola has an in-house design and licensing reload methodology that has been approved by the Spanish Nuclear Regulatory Authority. This methodology has already been used to perform the nuclear design and the reload licensing analysis for Cofrentes cycles 12 to 14. The methodology has also been applied to develop a significant number of safety analyses for the Cofrentes Extended Power Up-rate, including: reactor heat balance, core and fuel performance, thermal-hydraulic stability, ECCS LOCA evaluation, transient analysis, Anticipated Transient Without Scram (ATWS), and Station Blackout (SBO). Since the scope of the licensing process of the Cofrentes Extended Power Up-rate exceeds the range of analysis included in the Cofrentes generic reload licensing process, it has been necessary to extend the applicability of the Cofrentes licensing methodology to the analysis of new transients. This is the case for the TLFW transient. This paper shows the benefits of having an in-house design and licensing methodology and describes the process of extending the applicability of the methodology to the analysis of new transients. The analysis of Total Loss of Feedwater with the Cofrentes RETRAN model is included as an example of this process. (authors)

  1. Inner-sphere complexation of cations at the rutile-water interface: A concise surface structural interpretation with the CD and MUSIC model

    SciTech Connect (OSTI)

    Ridley, Mora K.; Hiemstra, T; Van Riemsdijk, Willem H.; Machesky, Michael L.

    2009-01-01

    Acid-base reactivity and ion interaction between mineral surfaces and aqueous solutions are most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based, which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multicomponent mineral-aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca2+ and Sr2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile 110 surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. (2004) Ion adsorption at the rutile water interface: linking molecular and macroscopic

  2. Influence of myelin proteins on the structure and dynamics of a model membrane with emphasis on the low temperature regime

    SciTech Connect (OSTI)

    Knoll, W.; Peters, J.; Kursula, P.; Gerelli, Y.; Natali, F.

    2014-11-28

    Myelin is an insulating, multi-lamellar membrane structure wrapped around selected nerve axons. Increasing the speed of nerve impulses, it is crucial for the proper functioning of the vertebrate nervous system. Human neurodegenerative diseases, such as multiple sclerosis, are linked to damage to the myelin sheath through demyelination. Myelin exhibits a well defined subset of myelin-specific proteins, whose influence on membrane dynamics, i.e., myelin flexibility and stability, has not yet been explored in detail. In a first paper [W. Knoll, J. Peters, P. Kursula, Y. Gerelli, J. Ollivier, B. Demé, M. Telling, E. Kemner, and F. Natali, Soft Matter 10, 519 (2014)] we were able to spotlight, through neutron scattering experiments, the role of peripheral nervous system myelin proteins on membrane stability at room temperature. In particular, the myelin basic protein and peripheral myelin protein 2 were found to synergistically influence the membrane structure while keeping almost unchanged the membrane mobility. Further insight is provided by this work, in which we particularly address the investigation of the membrane flexibility in the low temperature regime. We evidence a different behavior suggesting that the proton dynamics is reduced by the addition of the myelin basic protein accompanied by negligible membrane structural changes. Moreover, we address the importance of correct sample preparation and characterization for the success of the experiment and for the reliability of the obtained results.

  3. Survey and Evaluate Uncertainty Quantification Methodologies

    SciTech Connect (OSTI)

    Lin, Guang; Engel, David W.; Eslinger, Paul W.

    2012-02-01

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). 
The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon

  4. Direct-contact condensers for open-cycle OTEC applications: Model validation with fresh water experiments for structured packings

    SciTech Connect (OSTI)

    Bharathan, D.; Parsons, B.K.; Althof, J.A.

    1988-10-01

    The objective of the reported work was to develop analytical methods for evaluating the design and performance of advanced high-performance heat exchangers for use in open-cycle ocean thermal energy conversion (OC-OTEC) systems. This report describes the progress made in validating a one-dimensional, steady-state analytical computer model against fresh-water experiments. The condenser model represents the state of the art in direct-contact heat exchange for condensation in OC-OTEC applications. This is expected to provide a basis for optimizing OC-OTEC plant configurations. Using the model, we examined two condenser geometries, a cocurrent and a countercurrent configuration. This report provides detailed validation results for important condenser parameters for cocurrent and countercurrent flows. Based on the comparisons and the uncertainty overlap between the experimental data and predictions, the model is shown to predict critical condenser performance parameters with an uncertainty acceptable for general engineering design and performance evaluations. 33 refs., 69 figs., 38 tabs.

  5. CPR methodology with new steady-state criterion and more accurate statistical treatment of channel bow

    SciTech Connect (OSTI)

    Baumgartner, S.; Bieli, R.; Bergmann, U. C.

    2012-07-01

    An overview is given of existing CPR design criteria and the methods used in BWR reload analysis to evaluate the impact of channel bow on CPR margins. Potential weaknesses in today's methodologies are discussed. Westinghouse in collaboration with KKL and Axpo - operator and owner of the Leibstadt NPP - has developed an optimized CPR methodology based on a new criterion to protect against dryout during normal operation and with a more rigorous treatment of channel bow. The new steady-state criterion is expressed in terms of an upper limit of 0.01 for the dryout failure probability per year. This is considered a meaningful and appropriate criterion that can be directly related to the probabilistic criteria set-up for the analyses of Anticipated Operation Occurrences (AOOs) and accidents. In the Monte Carlo approach a statistical modeling of channel bow and an accurate evaluation of CPR response functions allow the associated CPR penalties to be included directly in the plant SLMCPR and OLMCPR in a best-estimate manner. In this way, the treatment of channel bow is equivalent to all other uncertainties affecting CPR. Emphasis is put on quantifying the statistical distribution of channel bow throughout the core using measurement data. The optimized CPR methodology has been implemented in the Westinghouse Monte Carlo code, McSLAP. The methodology improves the quality of dryout safety assessments by supplying more valuable information and better control of conservatisms in establishing operational limits for CPR. The methodology is demonstrated with application examples from the introduction at KKL. (authors)
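    The dryout-probability criterion described above can be illustrated with a toy Monte Carlo estimate of P(CPR < 1.0) that folds a channel-bow term into the sampled uncertainties (the distributions and every parameter value are invented for illustration; the actual McSLAP models are far more detailed):

    ```python
    import random

    random.seed(1)

    def dryout_probability(cpr_margin_mean=1.10, sigma_uncert=0.04,
                           sigma_bow=0.02, trials=100_000):
        """Estimate P(CPR < 1.0) by Monte Carlo sampling.

        Treats the overall CPR uncertainty and an assumed channel-bow
        contribution as independent Gaussian terms, so the bow penalty is
        handled statistically like every other uncertainty (illustrative
        sketch; all parameter values are hypothetical)."""
        failures = 0
        for _ in range(trials):
            cpr = (random.gauss(cpr_margin_mean, sigma_uncert)
                   + random.gauss(0.0, sigma_bow))
            if cpr < 1.0:
                failures += 1
        return failures / trials

    p_fail = dryout_probability()
    print(f"estimated dryout probability: {p_fail:.4f}")
    ```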

  6. Development of a Kelp-type Structure Module in a Coastal Ocean Model to Assess the Hydrodynamic Impact of Seawater Uranium Extraction Technology

    SciTech Connect (OSTI)

    Wang, Taiping; Khangaonkar, Tarang; Long, Wen; Gill, Gary A.

    2014-02-07

    In recent years, with the rapid growth of global energy demand, the interest in extracting uranium from seawater for nuclear energy has been renewed. While extracting seawater uranium is not yet commercially viable, it serves as a “backstop” to the conventional uranium resources and provides an essentially unlimited supply of uranium resource. With recent advances in seawater uranium extraction technology, extracting uranium from seawater could be economically feasible when the extraction devices are deployed at a large scale (e.g., several hundred km2). There is concern however that the large scale deployment of adsorbent farms could result in potential impacts to the hydrodynamic flow field in an oceanic setting. In this study, a kelp-type structure module was incorporated into a coastal ocean model to simulate the blockage effect of uranium extraction devices on the flow field. The module was quantitatively validated against laboratory flume experiments for both velocity and turbulence profiles. The model-data comparison showed an overall good agreement and validated the approach of applying the model to assess the potential hydrodynamic impact of uranium extraction devices or other underwater structures in coastal oceans.
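    Modules of this kind typically represent submerged structures as a quadratic drag sink in the momentum equation; the sketch below shows such a term (the functional form and all coefficient values are generic assumptions, not taken from the paper):

    ```python
    # Sketch of a quadratic vegetation/structure drag sink of the kind a
    # kelp-type module typically adds to the momentum equation:
    #     F = -0.5 * rho * Cd * a * |u| * u
    # Coefficient values below are generic assumptions, not from the paper.
    def structure_drag(u, rho=1025.0, cd=1.0, frontal_area_density=0.05):
        """Drag force per unit volume (N/m^3) opposing the flow.

        u                    -- local flow velocity (m/s)
        rho                  -- seawater density (kg/m^3)
        cd                   -- drag coefficient of the structure elements
        frontal_area_density -- projected frontal area per unit volume (1/m)
        """
        return -0.5 * rho * cd * frontal_area_density * abs(u) * u

    print(f"drag at 0.5 m/s: {structure_drag(0.5):.2f} N/m^3")
    ```

    Because the term scales with |u|·u, it always opposes the flow, which is what produces the velocity deficit and enhanced turbulence measured in the flume validation.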

  7. Advanced Power Plant Development and Analysis Methodologies

    SciTech Connect (OSTI)

    A.D. Rao; G.S. Samuelsen; F.L. Robson; B. Washom; S.G. Berenyi

    2006-06-30

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into advanced power plant systems with goals of achieving high efficiency and minimized environmental impact while using fossil fuels. These power plant concepts include 'Zero Emission' power plants and the 'FutureGen' H2 co-production facilities. The study is broken down into three phases. Phase 1 of this study consisted of utilizing advanced technologies that are expected to be available in the 'Vision 21' time frame such as mega scale fuel cell based hybrids. Phase 2 includes current state-of-the-art technologies and those expected to be deployed in the nearer term such as advanced gas turbines and high temperature membranes for separating gas species and advanced gasifier concepts. Phase 3 includes identification of gas turbine based cycles and engine configurations suitable to coal-based gasification applications and the conceptualization of the balance of plant technology, heat integration, and the bottoming cycle for analysis in a future study. Also included in Phase 3 is the task of acquiring/providing turbo-machinery in order to gather turbo-charger performance data that may be used to verify simulation models as well as establishing system design constraints. The results of these various investigations will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  8. Advanced Power Plant Development and Analyses Methodologies

    SciTech Connect (OSTI)

    G.S. Samuelsen; A.D. Rao

    2006-02-06

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into advanced power plant systems with goals of achieving high efficiency and minimized environmental impact while using fossil fuels. These power plant concepts include "Zero Emission" power plants and the "FutureGen" H₂ co-production facilities. The study is broken down into three phases. Phase 1 of this study consisted of utilizing advanced technologies that are expected to be available in the "Vision 21" time frame, such as mega-scale fuel cell based hybrids. Phase 2 includes current state-of-the-art technologies and those expected to be deployed in the nearer term, such as advanced gas turbines, high temperature membranes for separating gas species, and advanced gasifier concepts. Phase 3 includes identification of gas turbine based cycles and engine configurations suitable to coal-based gasification applications and the conceptualization of the balance of plant technology, heat integration, and the bottoming cycle for analysis in a future study. Also included in Phase 3 is the task of acquiring/providing turbo-machinery in order to gather turbo-charger performance data that may be used to verify simulation models as well as establishing system design constraints. The results of these various investigations will serve as a guide for the U.S. Department of Energy in identifying the research areas and technologies that warrant further support.

  9. EIA model documentation: Petroleum Market Model of the National Energy Modeling System

    SciTech Connect (OSTI)

    1994-02-24

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models (Public Law 94-385, section 57.b.2). The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs, including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level, and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. The report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; and Appendix F, Estimation Methodologies.

  10. The effect of large-scale model time step and multiscale coupling frequency on cloud climatology, vertical structure, and rainfall extremes in a superparameterized GCM

    SciTech Connect (OSTI)

    Yu, Sungduk; Pritchard, Michael S.

    2015-12-17

    The effect of the global climate model (GCM) time step, which also controls how frequently the global and embedded cloud-resolving scales are coupled, is examined in the Superparameterized Community Atmosphere Model version 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m²) and longwave cloud forcing (~5 W/m²) occur as the scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune such models.

  11. The effect of large-scale model time step and multiscale coupling frequency on cloud climatology, vertical structure, and rainfall extremes in a superparameterized GCM

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yu, Sungduk; Pritchard, Michael S.

    2015-12-17

    The effect of the global climate model (GCM) time step, which also controls how frequently the global and embedded cloud-resolving scales are coupled, is examined in the Superparameterized Community Atmosphere Model version 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m²) and longwave cloud forcing (~5 W/m²) occur as the scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune such models.

  12. Developing custom fire behavior fuel models from ecologically complex fuel structures for upper Atlantic Coastal Plain forests.

    SciTech Connect (OSTI)

    Parresol, Bernard R.; Scott, Joe H.; Andreu, Anne; Prichard, Susan; Kurth, Laurie

    2012-01-01

    Currently geospatial fire behavior analyses are performed with an array of fire behavior modeling systems such as FARSITE, FlamMap, and the Large Fire Simulation System. These systems currently require standard or customized surface fire behavior fuel models as inputs that are often assigned through remote sensing information. The ability to handle hundreds or thousands of measured surface fuelbeds representing the fine scale variation in fire behavior on the landscape is constrained in terms of creating compatible custom fire behavior fuel models. In this study, we demonstrate an objective method for taking ecologically complex fuelbeds from inventory observations and converting those into a set of custom fuel models that can be mapped to the original landscape. We use an original set of 629 fuel inventory plots measured on an 80,000 ha contiguous landscape in the upper Atlantic Coastal Plain of the southeastern United States. From models linking stand conditions to component fuel loads, we impute fuelbeds for over 6000 stands. These imputed fuelbeds were then converted to fire behavior parameters under extreme fuel moisture and wind conditions (97th percentile) using the fuel characteristic classification system (FCCS) to estimate surface fire rate of spread, surface fire flame length, shrub layer reaction intensity (heat load), non-woody layer reaction intensity, woody layer reaction intensity, and litter-lichen-moss layer reaction intensity. We performed hierarchical cluster analysis of the stands based on the values of the fire behavior parameters. The resulting 7 clusters were the basis for the development of 7 custom fire behavior fuel models from the cluster centroids that were calibrated against the FCCS point data for wind and fuel moisture. The latter process resulted in calibration against flame length as it was difficult to obtain a simultaneous calibration against both rate of spread and flame length. The clusters based on FCCS fire behavior
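The clustering-to-centroids step described in this record can be illustrated with a toy agglomerative procedure: repeatedly merge the two clusters whose centroids are closest until the desired number of fuel-model clusters remains, then take each centroid as a custom fuel-model parameter vector. A hedged sketch (centroid linkage on synthetic 2-D fire-behavior vectors; the study clustered stands on six FCCS-derived parameters and cut the hierarchy at 7 clusters):

```python
def centroid(cluster):
    """Mean vector of a list of equal-length tuples."""
    n = len(cluster)
    return tuple(sum(p[i] for p in cluster) / n for i in range(len(cluster[0])))

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def agglomerate(points, k):
    """Merge the two nearest clusters (by centroid distance) until k remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = dist2(centroid(clusters[i]), centroid(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

# Synthetic "rate of spread vs flame length" vectors forming three groups.
pts = [(0, 0), (0.5, 0), (10, 0), (10.5, 0), (0, 10), (0, 10.5)]
fuel_models = [centroid(c) for c in agglomerate(pts, 3)]
```

Each centroid in fuel_models plays the role of one custom fuel model; the paper then calibrated these against FCCS point predictions for wind and fuel moisture.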

  13. Structure Discovery in Large Semantic Graphs Using Extant Ontological Scaling and Descriptive Statistics

    SciTech Connect (OSTI)

    al-Saffar, Sinan; Joslyn, Cliff A.; Chappell, Alan R.

    2011-07-18

    As semantic datasets grow to be very large and divergent, there is a need to identify and exploit their inherent semantic structure for discovery and optimization. Towards that end, we present here a novel methodology to identify the semantic structures inherent in an arbitrary semantic graph dataset. We first present the concept of an extant ontology as a statistical description of the semantic relations present amongst the typed entities modeled in the graph. This serves as a model of the underlying semantic structure to aid in discovery and visualization. We then describe a method of ontological scaling in which the ontology is employed as a hierarchical scaling filter to infer different resolution levels at which the graph structures are to be viewed or analyzed. We illustrate these methods on three large and publicly available semantic datasets containing more than one billion edges each. Keywords: Semantic Web; Visualization; Ontology; Multi-resolution Data Mining.
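The "extant ontology" idea, as described, amounts to summarizing the instance graph by how often each (subject type, predicate, object type) pattern actually occurs. A small sketch with a made-up typed edge list (entity and predicate names are illustrative, not drawn from the datasets in the paper):

```python
from collections import Counter

def extant_ontology(triples, type_of):
    """Count each (subject type, predicate, object type) pattern in the graph."""
    return Counter((type_of[s], p, type_of[o]) for s, p, o in triples)

# Hypothetical typed instance graph.
type_of = {"alice": "Person", "bob": "Person",
           "paper1": "Paper", "paper2": "Paper"}
triples = [("alice", "authored", "paper1"),
           ("bob", "authored", "paper1"),
           ("bob", "authored", "paper2"),
           ("paper2", "cites", "paper1")]
summary = extant_ontology(triples, type_of)
```

Here summary maps ('Person', 'authored', 'Paper') to 3 and ('Paper', 'cites', 'Paper') to 1; ontological scaling would then collapse or filter the instance graph at chosen levels of the type hierarchy using exactly this kind of statistic.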

  14. Application of the IBERDROLA RETRAN Licensing Methodology to the Cofrentes BWR-6 110% Extended Power Uprate

    SciTech Connect (OSTI)

    Fuente, Rafael de la; Iglesias, Javier; Sedano, Pablo G.; Mata, Pedro

    2003-04-15

    IBERDROLA (Spanish utility) and IBERDROLA INGENIERIA (its engineering branch) have spent the last two years developing the 110% Extended Power Uprate Project for Cofrentes BWR-6. IBERDROLA has an in-house design and licensing reload methodology that has been approved in advance by the Spanish Nuclear Regulatory Authority. This methodology has been applied to perform the nuclear design and reload licensing analysis for Cofrentes cycles 12 and 13 and to develop a significant number of safety analyses for the Cofrentes Extended Power Uprate. Because the scope of the licensing process for the Cofrentes Extended Power Uprate exceeds the range of analysis included in the Cofrentes generic reload licensing process, it was necessary to extend the applicability of the Cofrentes RETRAN model to the analysis of new transients. This is the case of the total loss of feedwater (TLFW) transient. This paper shows the benefits of having an in-house design and licensing methodology and describes the process of extending the applicability of the Cofrentes RETRAN model to the analysis of new transients, in particular the TLFW transient.

  15. World Energy Projection System Plus Model Documentation: Refinery Model

    Reports and Publications (EIA)

    2016-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  16. World Energy Projection System Plus Model Documentation: District Heat Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  17. World Energy Projection System Plus Model Documentation: Coal Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  18. World Energy Projection System Plus Model Documentation: Commercial Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) Commercial Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  19. World Energy Projection System Plus Model Documentation: Natural Gas Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  20. World Energy Projection System Plus Model Documentation: Main Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  1. World Energy Projection System Plus Model Documentation: Industrial Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  2. World Energy Projection System Plus Model Documentation: Refinery Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  3. International Natural Gas Model 2011, Model Documentation Report

    Reports and Publications (EIA)

    2013-01-01

    This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  4. World Energy Projection System Plus Model Documentation: World Electricity Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  5. World Energy Projection System Plus Model Documentation: Transportation Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) International Transportation Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  6. World Energy Projection System Plus Model Documentation: Greenhouse Gases Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  7. World Energy Projection System Plus Model Documentation: Residential Model

    Reports and Publications (EIA)

    2011-01-01

    This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  8. Final Report for Award #0006731. Modeling, Patterning and Evolving Syntrophic Communities that Link Fermentation to Metal Reduction

    SciTech Connect (OSTI)

    Marx, Christopher J.

    2015-07-17

    This project developed and combined mathematical models, multi-species consortia, and spatially structured environments as an approach for studying metabolic exchange in communities such as those linking fermenters to metal reducers. We have developed novel, broadly applicable tools for following community dynamics; reached a better understanding of both sugar and lactate utilization in S. oneidensis and of the interactions between carbon and mineral availability; and established a cell-printing methodology to match with spatiotemporal models of consortia metabolism.

  9. A generic semi-implicit coupling methodology for use in RELAP5-3D©

    SciTech Connect (OSTI)

    Aumiller, D.L.; Tomlinson, E.T.; Weaver, W.L.

    2000-09-01

    A generic semi-implicit coupling methodology has been developed and implemented in the RELAP5-3D© computer program. This methodology allows RELAP5-3D© to be used with other computer programs to perform integrated analyses of nuclear power reactor systems and related experimental facilities. The coupling methodology potentially allows different programs to be used to model different portions of the system. The programs are chosen based on their capability to model the phenomena that are important in the simulation in the various portions of the system being considered. The methodology was demonstrated using a test case in which the test geometry was divided into two parts, each of which was solved as a RELAP5-3D© simulation. This test problem exercised all of the semi-implicit coupling features which were installed in RELAP5-3D©. The results of this verification test case show that the semi-implicit coupling methodology produces the same answer as the simulation of the test system as a single process.
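The verification idea above, splitting one system into two coupled simulations and checking against a single-process run, can be mimicked with a toy explicit 1-D diffusion solver. This is only an illustration of the domain-splitting test, not RELAP5-3D's semi-implicit two-phase numerics, and all numbers are hypothetical:

```python
def step(u, left_ghost, right_ghost, alpha=0.25):
    """One explicit diffusion update of interior cells, given boundary ghost values."""
    ext = [left_ghost] + u + [right_ghost]
    return [ext[i] + alpha * (ext[i - 1] - 2 * ext[i] + ext[i + 1])
            for i in range(1, len(ext) - 1)]

def run_monolithic(u, steps):
    """Advance the whole domain as one simulation (zero temperature at both walls)."""
    for _ in range(steps):
        u = step(u, 0.0, 0.0)
    return u

def run_coupled(u, split, steps):
    """Advance two subdomain solvers that exchange interface values each step."""
    left, right = u[:split], u[split:]
    for _ in range(steps):
        new_left = step(left, 0.0, right[0])   # right neighbor's old value as ghost
        new_right = step(right, left[-1], 0.0)  # left neighbor's old value as ghost
        left, right = new_left, new_right
    return left + right

u0 = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
whole = run_monolithic(u0, 50)
coupled = run_coupled(u0, 3, 50)
```

Because the explicit update needs only previous-step neighbor values, exchanging interface data once per step reproduces the single-domain answer exactly, the same "same answer as a single process" check reported for the RELAP5-3D test case.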

  10. Petroleum Market Model of the National Energy Modeling System

    SciTech Connect (OSTI)

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs, including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level, and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation Methodologies; Appendix G, Matrix Generator Documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.

  11. Scales in the fine structure of the magnetic dipole resonance: A wavelet approach to the shell model

    SciTech Connect (OSTI)

    Petermann, I.; Langanke, K.; Martinez-Pinedo, G.; Neumann-Cosel, P. von; Nowacki, F.; Richter, A.

    2010-01-15

    Wavelet analysis is applied as a tool for the examination of magnetic dipole (M1) strength distributions in pf-shell nuclei by the extraction of wavelet scales. Results from the analysis of theoretical M1 strength distributions calculated with the KB3G interaction are compared to experimental data from (e,e′) experiments, and good agreement of the deduced wavelet scales is observed. This provides further insight into the nature of the scales from the model results. The influence of the number of Lanczos iterations on the development and stability of scales and the role of the model space in terms of the truncation level are studied. Moreover, differences in the scales of spin and orbital parts of the M1 strength are investigated, as is the use of different effective interactions (KB3G, GXPF1, and FPD6).
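The scale-extraction step can be sketched generically: take a continuous wavelet transform of the strength distribution and find the scale carrying the most power. A minimal pure-Python version with a Mexican-hat wavelet and a synthetic Gaussian "strength" bump (illustrative only; the paper analyzes measured and shell-model M1 distributions, not this toy signal):

```python
import math

def ricker(t):
    """Mexican-hat (Ricker) wavelet: the second derivative of a Gaussian."""
    return (1.0 - t * t) * math.exp(-0.5 * t * t)

def scale_power(signal, xs, scales):
    """Summed squared magnitude of an L1-normalized CWT, one value per scale."""
    dx = xs[1] - xs[0]
    powers = []
    for a in scales:
        total = 0.0
        for b in xs:
            c = sum(f * ricker((x - b) / a) / a for f, x in zip(signal, xs)) * dx
            total += c * c
        powers.append(total * dx)
    return powers

xs = [-8.0 + 0.25 * i for i in range(65)]
scales = [1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def dominant_scale(width):
    """Scale with the most wavelet power for a Gaussian bump of a given width."""
    bump = [math.exp(-0.5 * (x / width) ** 2) for x in xs]
    p = scale_power(bump, xs, scales)
    return scales[p.index(max(p))]
```

Broader structures light up at larger wavelet scales, which is the sense in which the deduced scales characterize fine structure in the M1 response.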

  12. A state-impact-state methodology for assessing environmental impact in land use planning

    SciTech Connect (OSTI)

    Chen, Longgao; Yang, Xiaoyan; Chen, Longqian; Potter, Rebecca; Li, Yingkui

    2014-04-01

    The implementation of land use planning (LUP) has a large impact on environmental quality, yet a widely accepted, consolidated approach to assessing the environmental impact of LUP through Strategic Environmental Assessment (SEA) has been lacking. In this paper, we developed a state-impact-state (SIS) model for LUP environmental impact assessment (LUPEA). Using matter-element (ME) and Extenics methods, a methodology based on the SIS model was established and applied to the LUPEA of Zoucheng County, China. The results show that: (1) this methodology provides an intuitive and easily understood logical model for both the theoretical analysis and application of LUPEA; (2) the spatial multi-temporal assessment from base year and near-future year to planning target year suggests a positive impact on environmental quality in the whole county, despite some environmental degradation in certain towns; (3) besides the spatial assessment, further products were obtained, including the environmental elements influenced by land use and their weights, the identification of key indicators in LUPEA, and appropriate environmental mitigation measures; and (4) this methodology can be used for multi-temporal assessment of LUP environmental impact at the county or town level in other areas. Highlights: a state-impact-state model for LUP environmental assessment (LUPEA); matter-element (ME) and Extenics methods embedded in the LUPEA; application of the model to the LUPEA of Zoucheng County; an assessment showing improving environmental quality in Zoucheng County since 2000; a useful tool for county-level LUPEA.
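The matter-element/Extenics machinery referenced here rests on the elementary dependent function, which scores how strongly an indicator value x belongs to a standard interval relative to a wider admissible interval. A hedged sketch of the classic one-dimensional textbook form (not the paper's exact multi-indicator formulation; the interval values are hypothetical):

```python
def extension_distance(x, interval):
    """Extension distance rho(x, <a, b>); negative when x lies inside the interval."""
    a, b = interval
    return abs(x - (a + b) / 2.0) - (b - a) / 2.0

def dependent_degree(x, x0, xw):
    """Elementary dependent function K(x) for a standard interval x0 nested
    strictly inside a wider admissible interval xw (so the denominator is nonzero).

      K > 0      : x satisfies the standard interval
      -1 < K < 0 : x misses x0 but is still inside xw (extensible)
      K < -1     : x falls outside the admissible interval entirely
    """
    rho0 = extension_distance(x, x0)
    rhow = extension_distance(x, xw)
    return rho0 / (rhow - rho0)

# Hypothetical indicator: standard range 4-6, admissible range 0-10.
k_good = dependent_degree(5.0, (4, 6), (0, 10))    # 0.25
k_near = dependent_degree(8.0, (4, 6), (0, 10))    # -0.5
k_bad = dependent_degree(12.0, (4, 6), (0, 10))    # -1.5
```

In an assessment like the one above, such degrees are computed per indicator and combined with weights to classify each spatial unit's environmental quality grade.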

  13. Modeling Long-term Creep Performance for Welded Nickel-base Superalloy Structures for Power Generation Systems

    SciTech Connect (OSTI)

    Shen, Chen

    2015-01-01

    We report here a constitutive model for predicting long-term creep strain evolution in γ′-strengthened Ni-base superalloys. Dislocation climb-bypassing of γ′ precipitates, typical in intermediate γ′ volume fraction (~20%) alloys, is considered the primary deformation mechanism. Dislocation shearing of γ′ with anti-phase boundary (APB) faulting and diffusional creep are also considered, for high-stress and for high-temperature, low-stress conditions, respectively. An additional damage mechanism is taken into account for the rapid increase in tertiary creep strain. The model has been applied to Alloy 282 and calibrated over a temperature range of 1375-1450 °F and a stress range of 15-45 ksi. The model parameters and a MATLAB code are provided. This report was prepared by Monica Soare and Chen Shen at GE Global Research. Technical discussions with Dr. Vito Cedro are greatly appreciated. This work was supported by DOE program DE-FE0005859.

  14. The ¹³C-pocket structure in AGB models: constraints from zirconium isotope abundances in single mainstream SiC grains

    SciTech Connect (OSTI)

    Liu, Nan; Davis, Andrew M.; Pellin, Michael J.; Gallino, Roberto; Bisterzo, Sara; Savina, Michael R.

    2014-06-20

    We present postprocess asymptotic giant branch (AGB) nucleosynthesis models with different ¹³C-pocket internal structures to better explain zirconium isotope measurements in mainstream presolar SiC grains by Nicolussi et al. and Barzyk et al. We show that higher-than-solar ⁹²Zr/⁹⁴Zr ratios can be predicted by adopting a ¹³C-pocket with a flat ¹³C profile, instead of the previous decreasing-with-depth ¹³C profile. The improved agreement between grain data for zirconium isotopes and AGB models provides additional support for a recent proposal of a flat ¹³C profile based on barium isotopes in mainstream SiC grains by Liu et al.

  15. Life prediction methodology for ceramic components of advanced vehicular heat engines: Volume 1. Final report

    SciTech Connect (OSTI)

    Khandelwal, P.K.; Provenzano, N.J.; Schneider, W.E.

    1996-02-01

    One of the major challenges involved in the use of ceramic materials is ensuring adequate strength and durability. This activity has developed methodology which can be used during the design phase to predict the structural behavior of ceramic components. The effort involved the characterization of injection molded and hot isostatic pressed (HIPed) PY-6 silicon nitride, the development of nondestructive evaluation (NDE) technology, and the development of analytical life prediction methodology. Four failure modes are addressed: fast fracture, slow crack growth, creep, and oxidation. The techniques deal with failures initiating at the surface as well as internal to the component. The life prediction methodology for fast fracture and slow crack growth has been verified using a variety of confirmatory tests. The verification tests were conducted at room and elevated temperatures up to a maximum of 1371 °C. The tests involved (1) flat circular disks subjected to bending stresses and (2) high speed rotating spin disks. Reasonable correlation was achieved for a variety of test conditions and failure mechanisms. The predictions associated with surface failures proved to be optimistic, requiring re-evaluation of the components' initial fast fracture strengths. Correlation was achieved for the spin disks which failed in fast fracture from internal flaws. Time dependent elevated temperature slow crack growth spin disk failures were also successfully predicted.
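Fast-fracture prediction for ceramics of this kind conventionally rests on Weibull statistics, in which survival probability falls off sharply once stress approaches the characteristic strength. A generic two-parameter sketch (the report's methodology additionally integrates over component volume and surface and treats the other three failure modes; the numbers below are illustrative, not PY-6 data):

```python
import math

def failure_probability(stress, sigma_c, m):
    """Two-parameter Weibull probability of fast fracture at a uniform stress.

    sigma_c : characteristic strength (63.2% failure probability)
    m       : Weibull modulus (higher m = less scatter in strength)
    """
    return 1.0 - math.exp(-((stress / sigma_c) ** m))

# Hypothetical silicon-nitride-like numbers (MPa), for illustration only.
p_low = failure_probability(300.0, 600.0, 12.0)
p_char = failure_probability(600.0, 600.0, 12.0)
```

With a steep modulus, halving the stress drops the failure probability by orders of magnitude, which is why accurate strength characterization and NDE detection of critical flaws dominate the life-prediction problem.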

  16. Efficient Computation of Info-Gap Robustness for Finite Element Models

    SciTech Connect (OSTI)

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
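To make the robustness-function idea concrete: under a fractional-error info-gap model, robustness is the largest uncertainty horizon h for which the worst-case response still meets the performance requirement. A scalar k·x = b toy version solved by bisection (the report's contribution is an adjoint method that avoids exactly this kind of repeated optimization for large finite element models; this sketch only defines the quantity being computed, with hypothetical numbers):

```python
def worst_case_response(b, k_nom, h):
    """Worst-case displacement x = b / k when k may drop as low as (1 - h) * k_nom."""
    return b / (k_nom * (1.0 - h))

def robustness(b, k_nom, r_crit, tol=1e-10):
    """Largest horizon h in [0, 1) whose worst case still satisfies x <= r_crit."""
    if worst_case_response(b, k_nom, 0.0) > r_crit:
        return 0.0                  # even the nominal model fails the requirement
    lo, hi = 0.0, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case_response(b, k_nom, mid) <= r_crit:
            lo = mid                # worst case still acceptable: horizon can grow
        else:
            hi = mid
    return lo

h_hat = robustness(b=1.0, k_nom=2.0, r_crit=1.0)   # analytically 1 - b/(k_nom*r_crit)
```

Demanding more accuracy (a smaller r_crit) buys less robustness, which is the accuracy-robustness trade-off described in the abstract.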

  17. Modeling the Effects of λ_gun on SSPX Operation: Mode Spectra, Internal Magnetic Field Structure, and Energy Confinement

    SciTech Connect (OSTI)

    Hooper, E

    2005-08-23

    The Sustained Spheromak Physics Experiment (SSPX) shows considerable sensitivity to the value of the injected ("gun") current, I_gun, parameterized by the relative values of λ_gun = μ₀I_gun/Ψ_gun (with Ψ_gun the bias poloidal magnetic flux) to the lowest eigenvalue of ∇ × B = λ_FC B in the flux conserver geometry. This report discusses modeling calculations using the NIMROD resistive-MHD code in the SSPX geometry. The behavior is found to be very sensitive to the profile of the safety factor, q, with the excitation of interior MHD modes at low-order resonant surfaces significantly affecting the evolution. Their evolution affects the field-line topology (closed flux, islands, stochastic field lines confined by KAM surfaces, and open field lines), and thus the electron temperature and other parameters. Because of this sensitivity, a major effect is the modification of the q-profile by the current on the open field lines in the flux core along the geometric axis. The time history of a discharge can thus vary considerably for relatively small changes in I_gun. The possibility of using this sensitivity for feedback control of the discharge evolution is discussed, but modeling of the process is left for future work.

  18. A high-entropy-wind r-process study based on nuclear-structure quantities from the new finite-range droplet model FRDM(2012)

    SciTech Connect (OSTI)

    Kratz, Karl-Ludwig; Farouqi, Khalil; Möller, Peter

    2014-09-01

    Attempts to explain the source of r-process elements in our solar system (S.S.) by particular astrophysical sites still face entwined uncertainties, stemming from the extrapolation of nuclear properties far from stability, inconsistent sources of different properties (e.g., nuclear masses and β-decay properties), and the poor understanding of astrophysical conditions, which are hard to disentangle. In this paper we present results from the investigation of the r-process in the high-entropy wind (HEW) of core-collapse supernovae (here chosen as one of the possible scenarios for this nucleosynthesis process), using new nuclear-data input calculated in a consistent approach, for masses and β-decay properties from the new finite-range droplet model FRDM(2012). The accuracy of the new mass model is 0.56 MeV with respect to AME2003, to which it was adjusted. We compare the new HEW r-process abundance pattern to the latest S.S. r-process residuals and to our earlier calculations with the nuclear-structure quantities based on FRDM(1992). Substantial overall and specific local improvements in the calculated pattern of the r-process between A ≈ 110 and ²⁰⁹Bi, as well as remaining deficiencies, are discussed in terms of the underlying spherical and deformed shell structure far from stability.

  19. RTI International Develops SSL Luminaire Reliability Model |...

    Broader source: Energy.gov (indexed) [DOE]

    life testing (ALT) methodologies and a reliability model for predicting the lifetime of ... is not a proxy for luminaire reliability, and that a systems-level approach ...

  20. An Integrated Safety Assessment Methodology for Generation IV Nuclear Systems

    SciTech Connect (OSTI)

    Timothy J. Leahy

    2010-06-01

    The Generation IV International Forum (GIF) Risk and Safety Working Group (RSWG) was created to develop an effective approach for the safety of Generation IV advanced nuclear energy systems. Early work of the RSWG focused on defining a safety philosophy founded on lessons learned from current and prior generations of nuclear technologies, and on identifying technology characteristics that may help achieve Generation IV safety goals. More recent RSWG work has focused on the definition of an integrated safety assessment methodology for evaluating the safety of Generation IV systems. The methodology, tentatively called ISAM, is an integrated toolkit consisting of analytical techniques that are available and matched to appropriate stages of Generation IV system concept development. The integrated methodology is intended to yield safety-related insights that help actively drive the evolving design throughout the technology development cycle, potentially resulting in enhanced safety, reduced costs, and shortened development time.

  1. NSTP 2002-2 Methodology for Final Hazard Categorization for Nuclear...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    NSTP 2002-2 Methodology for Final Hazard Categorization for Nuclear Facilities from Category 3 to Radiological (111302). NSTP 2002-2 Methodology for Final Hazard Categorization ...

  2. Modeling

    SciTech Connect (OSTI)

    Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.

    2005-09-01

    Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.

  3. Lipopolysaccharide density and structure govern the extent and distance of nanoparticle interaction with actual and model bacterial outer membranes

    SciTech Connect (OSTI)

    Jacobson, Kurt H.; Gunsolus, Ian L.; Kuech, Thomas R.; Troiano, Julianne M.; Melby, Eric S.; Lohse, Samuel E.; Hu, Dehong; Chrisler, William B.; Murphy, Catherine; Orr, Galya; Geiger, Franz M.; Haynes, Christy L.; Pedersen, Joel A.

    2015-07-24

    Design of nanomedicines and nanoparticle-based antimicrobial and antifouling formulations, and assessment of the potential implications of nanoparticle release into the environment require understanding nanoparticle interaction with bacterial surfaces. Here we demonstrate electrostatically driven association of functionalized nanoparticles with lipopolysaccharides of Gram-negative bacterial outer membranes and find that lipopolysaccharide structure influences the extent and location of binding relative to the lipid-solution interface. By manipulating the lipopolysaccharide content in Shewanella oneidensis outer membranes, we observed electrostatically driven interaction of cationic gold nanoparticles with the lipopolysaccharide-containing leaflet. We probed this interaction by quartz crystal microbalance with dissipation monitoring (QCM-D) and second harmonic generation (SHG) using solid-supported lipopolysaccharide-containing bilayers. Association of cationic nanoparticles increased with lipopolysaccharide content, while no association of anionic nanoparticles was observed. The harmonic-dependence of QCM-D measurements suggested that a population of the cationic nanoparticles was held at a distance from the outer leaflet-solution interface of bilayers containing smooth lipopolysaccharides (those bearing a long O-polysaccharide). Additionally, smooth lipopolysaccharides held the bulk of the associated cationic particles outside of the interfacial zone probed by SHG. Our results demonstrate that positively charged nanoparticles are more likely to interact with Gram-negative bacteria than are negatively charged particles, and this interaction occurs primarily through lipopolysaccharides.

  4. Lipopolysaccharide Density and Structure Govern the Extent and Distance of Nanoparticle Interaction with Actual and Model Bacterial Outer Membranes

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jacobson, Kurt H.; Gunsolus, Ian L.; Kuech, Thomas R.; Troiano, Julianne M.; Melby, Eric S.; Lohse, Samuel E.; Hu, Dehong; Chrisler, William B.; Murphy, Catherine J.; Orr, Galya; et al

    2015-07-24

    We report that design of nanomedicines and nanoparticle-based antimicrobial and antifouling formulations, and assessment of the potential implications of nanoparticle release into the environment require understanding nanoparticle interaction with bacterial surfaces. Here we demonstrate electrostatically driven association of functionalized nanoparticles with lipopolysaccharides of Gram-negative bacterial outer membranes and find that lipopolysaccharide structure influences the extent and location of binding relative to the lipid-solution interface. By manipulating the lipopolysaccharide content in Shewanella oneidensis outer membranes, we observed electrostatically driven interaction of cationic gold nanoparticles with the lipopolysaccharide-containing leaflet. We probed this interaction by quartz crystal microbalance with dissipation monitoring (QCM-D) and second harmonic generation (SHG) using solid-supported lipopolysaccharide-containing bilayers. Association of cationic nanoparticles increased with lipopolysaccharide content, while no association of anionic nanoparticles was observed. The harmonic-dependence of QCM-D measurements suggested that a population of the cationic nanoparticles was held at a distance from the outer leaflet-solution interface of bilayers containing smooth lipopolysaccharides (those bearing a long O-polysaccharide). Additionally, smooth lipopolysaccharides held the bulk of the associated cationic particles outside of the interfacial zone probed by SHG. Lastly, our results demonstrate that positively charged nanoparticles are more likely to interact with Gram-negative bacteria than are negatively charged particles, and this interaction occurs primarily through lipopolysaccharides.

  5. Compartmentalization analysis using discrete fracture network models

    SciTech Connect (OSTI)

    La Pointe, P.R.; Eiben, T.; Dershowitz, W.; Wadleigh, E.

    1997-08-01

    This paper illustrates how Discrete Fracture Network (DFN) technology can serve as a basis for the calculation of reservoir engineering parameters for the development of fractured reservoirs. It describes the development of quantitative techniques for defining the geometry and volume of structurally controlled compartments. These techniques are based on a combination of stochastic geometry, computational geometry, and graph theory. The parameters addressed are compartment size, matrix block size, and tributary drainage volume. The concept of DFN models is explained and methodologies to compute these parameters are demonstrated.
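
    The graph-theoretic step can be sketched in miniature: treating fractures as nodes and intersections as edges, compartments fall out as connected components. This is a generic union-find sketch under that interpretation, not the authors' code:

    ```python
    def compartments(n_fractures, intersections):
        """Group fractures into hydraulically connected compartments.

        n_fractures   -- number of fracture nodes
        intersections -- iterable of (i, j) pairs of intersecting fractures
        Returns a list of frozensets, one per compartment.
        """
        parent = list(range(n_fractures))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path halving
                i = parent[i]
            return i

        for i, j in intersections:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj  # union the two components

        groups = {}
        for i in range(n_fractures):
            groups.setdefault(find(i), set()).add(i)
        return [frozenset(g) for g in groups.values()]

    # Five fractures, two intersections: fractures 0-1-2 form one compartment.
    print(compartments(5, [(0, 1), (1, 2)]))
    ```

    Compartment size and tributary drainage volume then reduce to sums over each component.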

  6. Methodology for evaluating military systems in a counterproliferation role. Master's thesis

    SciTech Connect (OSTI)

    Stafira, S.

    1995-03-01

    This thesis develops a methodology to evaluate how dissimilar military systems support the accomplishment of the United States' counterproliferation objectives. The overall scope is to develop a model of the counterproliferation decision process that enables systems to be evaluated against common criteria. Using decision analysis, an influence diagram model is developed that represents military activities in the counterproliferation process. The key questions which must be asked in evaluating counterproliferation systems are highlighted. An analysis of perfect intelligence, perfect defensive, and perfect offensive systems reveals that a perfect intelligence system provides the greatest potential to meet the United States' counterproliferation objectives. Sensitivity analysis is conducted to determine which factors in the model are most important. To demonstrate the model, nine systems from the Air Force wargame Vulcan's Forge 1995 are evaluated. The results are used to demonstrate the type of analysis which can be performed to evaluate U.S. counterproliferation systems.

  7. Electric Double-Layer Structure in Primitive Model Electrolytes. Comparing Molecular Dynamics with Local-Density Approximations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Giera, Brian; Lawrence Livermore National Lab.; Henson, Neil; Kober, Edward M.; Shell, M. Scott; Squires, Todd M.

    2015-02-27

    We evaluate the accuracy of local-density approximations (LDAs) using explicit molecular dynamics simulations of binary electrolytes comprised of equisized ions in an implicit solvent. The Bikerman LDA, which considers ions to occupy a lattice, poorly captures excluded volume interactions between primitive model ions. Instead, LDAs based on the Carnahan–Starling (CS) hard-sphere equation of state capture simulated values of ideal and excess chemical potential profiles extremely well, as well as the relationship between surface charge density and electrostatic potential. Excellent agreement between the EDL capacitances predicted by CS-LDAs and computed in molecular simulations is found even in systems where ion correlations drive strong density and free charge oscillations within the EDL, despite the inability of LDAs to capture the oscillations in the detailed EDL profiles.
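
    The Carnahan–Starling ingredient of such an LDA is a closed-form excess chemical potential for hard spheres at packing fraction eta; a minimal sketch of that standard expression (in units of kT), not of the authors' full LDA solver:

    ```python
    def cs_excess_chem_potential(eta):
        """Carnahan-Starling excess chemical potential (in units of kT)
        for hard spheres at packing fraction eta:
            beta * mu_ex = (8*eta - 9*eta^2 + 3*eta^3) / (1 - eta)^3
        """
        if not 0.0 <= eta < 1.0:
            raise ValueError("packing fraction must lie in [0, 1)")
        return (8.0 * eta - 9.0 * eta**2 + 3.0 * eta**3) / (1.0 - eta) ** 3

    # Crowding penalty grows steeply with packing fraction:
    for eta in (0.1, 0.2, 0.4):
        print(eta, cs_excess_chem_potential(eta))
    ```

    In a CS-LDA this term is added to the ideal (log-density) chemical potential at each point in the double layer.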

  8. U.S. Energy-by-Rail Data Methodology

    U.S. Energy Information Administration (EIA) Indexed Site

    by-Rail Data Methodology June 2016 Independent Statistics & Analysis www.eia.gov U.S. Department of Energy Washington, DC 20585 U.S. Energy Information Administration | U.S. Energy-by-Rail Data Methodology i This report was prepared by the U.S. Energy Information Administration (EIA), the statistical and analytical agency within the U.S. Department of Energy. By law, EIA's data, analyses, and forecasts are independent of approval by any other officer or employee of the United States

  9. Surface Signature Characterization at SPE through Ground-Proximal Methods: Methodology Change and Technical Justification

    SciTech Connect (OSTI)

    Schultz-Fellenz, Emily S.

    2015-09-09

    A portion of LANL’s FY15 SPE objectives includes initial ground-based or ground-proximal investigations at the SPE Phase 2 site. The area of interest is the U2ez location in Yucca Flat. This collection serves as a baseline for discrimination of surface features and acquisition of topographic signatures prior to any development or pre-shot activities associated with SPE Phase 2. Our team originally intended to perform our field investigations using previously vetted ground-based (GB) LIDAR methodologies. However, the extended proposed time frame of the GB LIDAR data collection, and associated data processing time and delivery date, were unacceptable. After technical consultation and careful literature research, LANL identified an alternative methodology to achieve our technical objectives and fully support critical model parameterization. Very-low-altitude unmanned aerial systems (UAS) photogrammetry appeared to satisfy our objectives in lieu of GB LIDAR. The SPE Phase 2 baseline collection was used as a test of this UAS photogrammetric methodology.

  10. Methodology and emission scenarios employed in the development of the National Energy Strategy

    SciTech Connect (OSTI)

    Fisher, R.E.

    1992-09-01

    This paper describes the steps taken to model the National Energy Strategy (NES). It provides an overview of the NES process including the models used for the project. The National Energy Strategy Environmental Analysis Model (NESEAM), which was used in analyzing environmental impacts, is discussed. The structure of NESEAM, as well as results and analyses are presented.

  11. Methodology and emission scenarios employed in the development of the National Energy Strategy

    SciTech Connect (OSTI)

    Fisher, R.E.

    1992-01-01

    This paper describes the steps taken to model the National Energy Strategy (NES). It provides an overview of the NES process including the models used for the project. The National Energy Strategy Environmental Analysis Model (NESEAM), which was used in analyzing environmental impacts, is discussed. The structure of NESEAM, as well as results and analyses are presented.

  12. Adaptive resolution simulation of a biomolecule and its hydration shell: Structural and dynamical properties

    SciTech Connect (OSTI)

    Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt

    2015-05-21

    A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.

  13. Sandia software guidelines: Volume 5, Tools, techniques, and methodologies

    SciTech Connect (OSTI)

    Not Available

    1989-07-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. This volume describes software tools and methodologies available to Sandia personnel for the development of software, and outlines techniques that have proven useful within the Laboratories and elsewhere. References and evaluations by Sandia personnel are included. 6 figs.

  14. Methodology for testing metal detectors using variables test data

    SciTech Connect (OSTI)

    Spencer, D.D.; Murray, D.W.

    1993-08-01

    By extracting and analyzing measurement (variables) data from portal metal detectors whenever possible, instead of the more typical "alarm"/"no-alarm" (attributes or binomial) data, we can be better informed about metal detector health with fewer tests. The testing methodology discussed in this report is an alternative to typical binomial testing and in many ways is far superior.
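
    The gain from variables data can be illustrated with a toy estimator: assuming roughly normal detector responses (an assumption for this sketch, not the report's model), a miss probability can be read off a handful of measurements instead of requiring many alarm/no-alarm trials:

    ```python
    from math import erf, sqrt
    from statistics import mean, stdev

    def miss_probability(signals, alarm_threshold):
        """Estimate P(response < alarm threshold) from variables data,
        assuming approximately normal responses (illustrative only)."""
        mu, sd = mean(signals), stdev(signals)
        z = (alarm_threshold - mu) / sd
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # normal CDF at z

    # Toy responses (arbitrary units) from repeated passes with a test object:
    print(miss_probability([12.1, 11.8, 12.4, 12.0, 11.7, 12.3], alarm_threshold=10.0))
    ```

    Six measurements yield a tail-probability estimate; a binomial test would need far more passes to resolve a small miss rate.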

  15. Prometheus Reactor I&C Software Development Methodology, for Action

    SciTech Connect (OSTI)

    T. Hamilton

    2005-07-30

    The purpose of this letter is to submit the Reactor Instrumentation and Control (I&C) software life cycle, development methodology, and programming language selections and rationale for project Prometheus to NR for approval. This letter also provides the draft Reactor I&C Software Development Process Manual and Reactor Module Software Development Plan to NR for information.

  16. UNDERSTANDING FLOW OF ENERGY IN BUILDINGS USING MODAL ANALYSIS METHODOLOGY

    SciTech Connect (OSTI)

    John Gardner; Kevin Heglund; Kevin Van Den Wymelenberg; Craig Rieger

    2013-07-01

    It is widely understood that energy storage is the key to integrating variable generators into the grid. It has been proposed that the thermal mass of buildings could be used as a distributed energy storage solution and several researchers are making headway in this problem. However, the inability to easily determine the magnitude of the building’s effective thermal mass, and how the heating ventilation and air conditioning (HVAC) system exchanges thermal energy with it, is a significant challenge to designing systems which utilize this storage mechanism. In this paper we adapt modal analysis methods used in mechanical structures to identify the primary modes of energy transfer among thermal masses in a building. The paper describes the technique using data from an idealized building model. The approach is successfully applied to actual temperature data from a commercial building in downtown Boise, Idaho.
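
    For a lumped two-mass thermal network, the "modes" the paper identifies are just eigenvalues of the state matrix of the linear thermal model; a stdlib-only sketch under assumed (invented) capacitances and resistances, not the building data from the study:

    ```python
    from math import sqrt

    def thermal_modes_2x2(c1, c2, r12, r2amb):
        """Eigenvalues (1/s) of a two-mass lumped thermal network:
        mass 1 <-> mass 2 via resistance r12, mass 2 <-> ambient via r2amb.
        State matrix A for dT/dt = A @ T (ambient temperature taken as 0):
          A = [[-1/(r12*c1),            1/(r12*c1)],
               [ 1/(r12*c2), -(1/r12 + 1/r2amb)/c2]]
        """
        a = -1.0 / (r12 * c1)
        b = 1.0 / (r12 * c1)
        c = 1.0 / (r12 * c2)
        d = -(1.0 / r12 + 1.0 / r2amb) / c2
        tr, det = a + d, a * d - b * c
        disc = sqrt(tr * tr - 4.0 * det)      # 2x2 eigenvalues via quadratic formula
        return ((tr - disc) / 2.0, (tr + disc) / 2.0)

    # Illustrative values: heavy structural mass, lighter air/contents mass:
    fast, slow = thermal_modes_2x2(c1=1e6, c2=5e5, r12=0.01, r2amb=0.05)
    print(fast, slow)  # both negative: decaying thermal modes
    ```

    The slow mode's time constant (-1/slow) is the effective storage horizon a grid-coupled HVAC controller could exploit.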

  17. Methodology for the characterization and management of nonpoint source water pollution. Master's thesis

    SciTech Connect (OSTI)

    Praner, D.M.; Sprewell, G.M.

    1992-09-01

    The purpose of this research was development of a methodology for characterization and management of Nonpoint Source (NPS) water pollution. Section 319 of the 1987 Water Quality Act requires states to develop management programs for reduction of NPS pollution via Best Management Practices (BMPs). Air Force installations are expected to abide by federal, state, and local environmental regulations. Currently, the Air Force does not have a methodology to identify and quantify NPS pollution, or a succinct catalog of BMPs. Air Force installation managers need a package to assist them in meeting legislative and regulatory requirements associated with NPS pollution. Ten constituents characteristic of urban runoff were identified in the Nationwide Urban Runoff Program (NURP) and selected as those constituents of concern for modeling and sampling. Two models were used and compared with the results of a sampling and analysis program. Additionally, a compendium of BMPs was developed.... Keywords: Nonpoint Source Pollution (NPS); Best Management Practices (BMPs); water pollution; water sampling and analysis; stormwater runoff modeling; NPDES.

  18. Methods for simulation-based analysis of fluid-structure interaction.

    SciTech Connect (OSTI)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
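
    The Galerkin-projection step of a POD ROM compresses a full operator A into Phi^T A Phi on the span of a few basis vectors; a pure-Python sketch with toy matrices (not an ALE solver):

    ```python
    def transpose(M):
        """Transpose a matrix stored as a list of rows."""
        return [list(col) for col in zip(*M)]

    def matmul(A, B):
        """Naive dense matrix product."""
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    def galerkin_project(A, Phi):
        """Reduced operator A_r = Phi^T A Phi, where the columns of Phi
        are (assumed orthonormal) POD basis vectors."""
        return matmul(matmul(transpose(Phi), A), Phi)

    # Toy full operator and a two-vector basis spanning the first two coordinates:
    A = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]
    Phi = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
    print(galerkin_project(A, Phi))  # 2x2 reduced operator
    ```

    In a real ROM, Phi would come from an SVD of solution snapshots and A from the discretized flow or structural operator.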

  19. A Probabilistic-Micro-mechanical Methodology for Assessing Zirconium Alloy Cladding Failure

    SciTech Connect (OSTI)

    Pan, Y.M.; Chan, K.S.; Riha, D.S.

    2007-07-01

    Cladding failure of fuel rods caused by hydride-induced embrittlement is a reliability concern for spent nuclear fuel after extended burnup. Uncertainties in the cladding temperature, cladding stress, oxide layer thickness, and the critical stress value for hydride reorientation preclude an assessment of the cladding failure risk. A set of micro-mechanical models for treating oxide cracking, blister cracking, delayed hydride cracking, and cladding fracture was developed and incorporated in a computer model. Results obtained from the preliminary model calculations indicate that at temperatures below a critical temperature of 318.5 deg. C [605.3 deg. F], the time to failure by delayed hydride cracking in Zr-2.5%Nb decreased with increasing cladding temperature. The overall goal of this project is to develop a probabilistic-micro-mechanical methodology for assessing the probability of hydride-induced failure in Zircaloy cladding and thereby establish performance criteria. (authors)

  20. A structural model of anti-anti-{sigma} inhibition by a two-component receiver domain: the PhyR stress response regulator

    SciTech Connect (OSTI)

    Herrou, Julien; Foreman, Robert; Fiebig, Aretha; Crosson, Sean

    2012-03-30

    PhyR is a hybrid stress regulator conserved in {alpha}-proteobacteria that contains an N-terminal {sigma}-like (SL) domain and a C-terminal receiver domain. Phosphorylation of the receiver domain is known to promote binding of the SL domain to an anti-{sigma} factor. PhyR thus functions as an anti-anti-{sigma} factor in its phosphorylated state. We present genetic evidence that Caulobacter crescentus PhyR is a phosphorylation-dependent stress regulator that functions in the same pathway as {sigma}{sup T} and its anti-{sigma} factor, NepR. Additionally, we report the X-ray crystal structure of PhyR at 1.25 {angstrom} resolution, which provides insight into the mechanism of anti-anti-{sigma} regulation. Direct intramolecular contact between the PhyR receiver and SL domains spans regions {sigma}{sub 2} and {sigma}{sub 4}, likely serving to stabilize the SL domain in a closed conformation. The molecular surface of the receiver domain contacting the SL domain is the structural equivalent of {alpha}4-{beta}5-{alpha}5, which is known to undergo dynamic conformational change upon phosphorylation in a diverse range of receiver proteins. We propose a structural model of PhyR regulation in which receiver phosphorylation destabilizes the intramolecular interaction between SL and receiver domains, thereby permitting regions {sigma}{sub 2} and {sigma}{sub 4} in the SL domain to open about a flexible connector loop and bind anti-{sigma} factor.

  1. A structural model of anti-anti-[sigma] inhibition by a two-component receiver domain: the PhyR stress response regulator

    SciTech Connect (OSTI)

    Herrou, Julien; Foreman, Robert; Fiebig, Aretha; Crosson, Sean

    2012-05-09

    PhyR is a hybrid stress regulator conserved in {alpha}-proteobacteria that contains an N-terminal {sigma}-like (SL) domain and a C-terminal receiver domain. Phosphorylation of the receiver domain is known to promote binding of the SL domain to an anti-{sigma} factor. PhyR thus functions as an anti-anti-{sigma} factor in its phosphorylated state. We present genetic evidence that Caulobacter crescentus PhyR is a phosphorylation-dependent stress regulator that functions in the same pathway as {sigma}{sup T} and its anti-{sigma} factor, NepR. Additionally, we report the X-ray crystal structure of PhyR at 1.25 {angstrom} resolution, which provides insight into the mechanism of anti-anti-{sigma} regulation. Direct intramolecular contact between the PhyR receiver and SL domains spans regions {sigma}{sub 2} and {sigma}{sub 4}, likely serving to stabilize the SL domain in a closed conformation. The molecular surface of the receiver domain contacting the SL domain is the structural equivalent of {alpha}4-{beta}5-{alpha}5, which is known to undergo dynamic conformational change upon phosphorylation in a diverse range of receiver proteins. We propose a structural model of PhyR regulation in which receiver phosphorylation destabilizes the intramolecular interaction between SL and receiver domains, thereby permitting regions {sigma}{sub 2} and {sigma}{sub 4} in the SL domain to open about a flexible connector loop and bind anti-{sigma} factor.

  2. Enhancing the Benefit of the Chemical Mixture Methodology: A Report on Methodology Testing and Potential Approaches for Improving Performance

    SciTech Connect (OSTI)

    Yu, Xiao-Ying; Yao, Juan; He, Hua; Glantz, Clifford S.; Booth, Alexander E.

    2012-01-01

    Extensive testing shows that the current version of the Chemical Mixture Methodology (CMM) is meeting its intended mission to provide conservative estimates of the health effects from exposure to airborne chemical mixtures. However, the current version of the CMM could benefit from several enhancements that are designed to improve its application of Health Code Numbers (HCNs) and employ weighting factors to reduce over-conservatism.

  3. A Coupling Methodology for Mesoscale-informed Nuclear Fuel Performance Codes

    SciTech Connect (OSTI)

    Michael Tonks; Derek Gaston; Cody Permann; Paul Millett; Glen Hansen; Dieter Wolf

    2010-10-01

    This study proposes an approach for capturing the effect of microstructural evolution on reactor fuel performance by coupling a mesoscale irradiated microstructure model with a finite element fuel performance code. To achieve this, the macroscale system is solved in a parallel, fully coupled, fully-implicit manner using the preconditioned Jacobian-free Newton Krylov (JFNK) method. Within the JFNK solution algorithm, microstructure-influenced material parameters are calculated by the mesoscale model and passed back to the macroscale calculation. Due to the stochastic nature of the mesoscale model, a dynamic fitting technique is implemented to smooth roughness in the calculated material parameters. The proposed methodology is demonstrated on a simple model of a reactor fuel pellet. In the model, INL's BISON fuel performance code calculates the steady-state temperature profile in a fuel pellet and the microstructure-influenced thermal conductivity is determined with a phase field model of irradiated microstructures. This simple multiscale model demonstrates good nonlinear convergence and near ideal parallel scalability. By capturing the formation of large mesoscale voids in the pellet interior, the multiscale model predicted the irradiation-induced reduction in the thermal conductivity commonly observed in reactors.
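
    The heart of JFNK is that the Krylov solver never needs the Jacobian matrix itself, only its action on a vector, which one extra residual evaluation approximates; a minimal stdlib sketch of that kernel (toy residual, not the BISON coupling):

    ```python
    def jacobian_vector_product(F, u, v, eps=1e-7):
        """Matrix-free approximation J(u) @ v ~ (F(u + eps*v) - F(u)) / eps,
        the finite-difference kernel inside Jacobian-free Newton-Krylov."""
        u_pert = [ui + eps * vi for ui, vi in zip(u, v)]
        Fu, Fp = F(u), F(u_pert)
        return [(fp - f) / eps for fp, f in zip(Fp, Fu)]

    # Toy residual F(u) = (u0^2 - 4, u0*u1 - 6); exact J @ [1, 0] at u = (2, 3) is (4, 3):
    F = lambda u: [u[0] ** 2 - 4.0, u[0] * u[1] - 6.0]
    print(jacobian_vector_product(F, [2.0, 3.0], [1.0, 0.0]))
    ```

    In the paper's setting, evaluating F at the perturbed state is what triggers the mesoscale model for updated material parameters.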

  4. On the Use of the Polynomial Annihilation Edge Detection for Locating Cracks in Beam-Like Structures

    SciTech Connect (OSTI)

    Saxena, Rishu; Surace, Cecilia; Archibald, Richard K

    2013-01-01

    A crack in a structure causes a discontinuity in the first derivative of the mode shapes. On this basis, a numerical method for detecting discontinuities in smooth piecewise functions and their derivatives, based on a polynomial annihilation technique, has been applied to the problem of crack detection and localisation in beam-like structures for which only post-damage mode shapes are available. Using a finite-element model of a cracked beam, the performance of this methodology has been analysed for different crack depths and increasing amounts of noise. Given the crack position, a procedure to estimate its depth is also proposed and corresponding results shown.
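
    On a uniform grid, a crude stand-in for the polynomial-annihilation operator is a scaled second difference, which spikes where the first derivative of a mode shape jumps, i.e., at the crack; a hedged sketch on synthetic data (not the paper's detector):

    ```python
    def derivative_jump_indicator(y, h):
        """|y[i-1] - 2*y[i] + y[i+1]| / h: a second-difference proxy for the
        polynomial-annihilation edge detector; it peaks where y' is discontinuous."""
        return [abs(y[i - 1] - 2.0 * y[i] + y[i + 1]) / h for i in range(1, len(y) - 1)]

    # Synthetic "mode shape": slope 1 up to x = 5, slope 2 beyond (kink at x = 5):
    y = [float(x) if x <= 5 else 5.0 + 2.0 * (x - 5) for x in range(11)]
    ind = derivative_jump_indicator(y, h=1.0)
    print(ind.index(max(ind)))  # index of detected kink, offset by one grid point
    ```

    The true polynomial-annihilation method generalizes this to nonuniform grids and higher orders, and controls noise amplification, which this toy version does not.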

  5. A METHODOLOGY FOR INTEGRATING IMAGES AND TEXT FOR OBJECT IDENTIFICATION

    SciTech Connect (OSTI)

    Paulson, Patrick R.; Hohimer, Ryan E.; Doucette, Peter J.; Harvey, William J.; Seedahmed, Gamal H.; Petrie, Gregg M.; Martucci, Louis M.

    2006-02-13

    Often text and imagery contain information that must be combined to solve a problem. One approach begins with transforming the raw text and imagery into a common structure that contains the critical information in a usable form. This paper presents an application in which the imagery of vehicles and the text from police reports were combined to demonstrate the power of data fusion to correctly identify the target vehicle--e.g., a red 2002 Ford truck identified in a police report--from a collection of diverse vehicle images. The imagery was abstracted into a common signature by first capturing the conceptual models of the imagery experts in software. Our system then (1) extracted fundamental features (e.g., wheel base, color), (2) made inferences about the information (e.g., it's a red Ford), and then (3) translated the raw information into an abstract knowledge signature that was designed to both capture the important features and account for uncertainty. Likewise, the conceptual models of text analysis experts were instantiated into software that was used to generate an abstract knowledge signature that could be readily compared to the imagery knowledge signature. While this experiment's primary focus was to demonstrate the power of text and imagery fusion for a specific example, it also suggested several ways that text and geo-registered imagery could be combined to help solve other types of problems.

  6. Structurally controlled and aligned tight gas reservoir compartmentalization in the San Juan and Piceance Basins

    SciTech Connect (OSTI)

    Decker, A.D.; Kuuskraa, V.A.; Klawitter, A.L.

    1995-10-01

    Recurrent basement faulting is the primary controlling mechanism for aligning and compartmentalizing upper Cretaceous aged tight gas reservoirs of the San Juan and Piceance Basins. Northwest trending structural lineaments that formed in conjunction with the Uncompahgre Highlands have profoundly influenced sedimentation trends and created boundaries for gas migration, sealing and compartmentalizing sedimentary packages in both basins. Fractures which formed over the structural lineaments provide permeability pathways, allowing gas recovery from otherwise tight gas reservoirs. Structural alignments and associated reservoir compartments have been accurately targeted by integrating advanced remote sensing imagery, high resolution aeromagnetics, seismic interpretation, stratigraphic mapping and dynamic structural modelling. This unifying methodology is a powerful tool for exploration geologists and is also a systematic approach to tight gas resource assessment in frontier basins.

  7. Structure/Function Studies of Proteins Using Linear Scaling Quantum Mechanical Methodologies

    SciTech Connect (OSTI)

    Merz, K. M.

    2004-07-19

    We developed a linear-scaling semiempirical quantum mechanical (QM) program (DivCon). Using DivCon we can now routinely carry out calculations at the fully QM level on systems containing up to about 15 thousand atoms. We also implemented a Poisson-Boltzmann (PB) method into DivCon in order to compute solvation free energies and electrostatic properties of macromolecules in solution. This new suite of programs has allowed us to bring the power of quantum mechanics to bear on important biological problems associated with protein folding, drug design and enzyme catalysis. Hence, we have garnered insights into biological systems that have been heretofore impossible to obtain using classical simulation techniques.

  8. Forest-atmosphere BVOC exchange in diverse and structurally complex canopies: 1-D modeling of a mid-successional forest in northern Michigan

    SciTech Connect (OSTI)

    Bryan, Alexander M.; Cheng, Susan J.; Ashworth, Kirsti; Guenther, Alex B.; Hardiman, Brady; Bohrer, Gil; Steiner, A. L.

    2015-11-01

    Foliar emissions of biogenic volatile organic compounds (BVOC), important precursors of tropospheric ozone and secondary organic aerosols, vary widely by vegetation type. Modeling studies to date typically represent the canopy as a single dominant tree type or a blend of tree types, yet many forests are diverse, with trees of varying height. To assess the sensitivity of biogenic emissions to tree height variation, we compare two 1-D canopy model simulations in which BVOC emission potentials are homogeneous or heterogeneous with canopy depth. The heterogeneous canopy emulates the mid-successional forest at the University of Michigan Biological Station (UMBS). In this case, high-isoprene-emitting foliage (e.g., aspen and oak) is constrained to the upper canopy, where higher sunlight availability increases the light-dependent isoprene emission, leading to 34% more isoprene and its oxidation products as compared to the homogeneous simulation. Isoprene declines from aspen mortality are 10% larger when heterogeneity is considered. Overall, our results highlight the importance of adequately representing complexities of forest canopy structure when simulating light-dependent BVOC emissions and chemistry.

  9. Coupled modeling of a directly heated tubular solar receiver for supercritical carbon dioxide Brayton cycle: Structural and creep-fatigue evaluation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Ortega, Jesus; Khivsara, Sagar; Christian, Joshua; Ho, Clifford; Dutta, Pradip

    2016-06-06

    A supercritical carbon dioxide (sCO2) Brayton cycle is an emerging high energy-density cycle undergoing extensive research due to the appealing thermo-physical properties of sCO2 and single-phase operation. Development of a solar receiver capable of delivering sCO2 at 20 MPa and 700 °C is required for implementation of the high-efficiency (~50%) solar-powered sCO2 Brayton cycle. In this work, candidate materials are reviewed along with tube size optimization using the ASME Boiler and Pressure Vessel Code. Moreover, the temperature and pressure distributions obtained from the thermal-fluid modeling (presented in a complementary publication) are used to evaluate the thermal and mechanical stresses, along with a detailed creep-fatigue analysis of the tubes. The resulting body stresses were used to approximate the lifetime performance of the receiver tubes. A cyclic loading analysis is performed by coupling the strain-life approach and the Larson-Miller creep model. The structural integrity of the receiver was examined, and it was found that the stresses can be withstood by specific tubes, determined by a parametric geometric analysis. The creep-fatigue analysis displays the damage accumulation due to cycling, and the permanent deformation of the tubes showed that the tubes can operate for the full lifetime of the receiver.
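
The creep-fatigue bookkeeping described above can be sketched with linear (Miner's rule) damage summation and a Larson-Miller rupture-time estimate. The material constants, stress level, and duty cycle below are placeholders, not properties of the actual receiver alloy.

```python
# Creep-fatigue damage bookkeeping: Miner's rule plus Larson-Miller creep.
C_LM = 20.0  # a common Larson-Miller constant for many alloys

def rupture_hours(temp_k, lmp_at_stress):
    """Invert LMP = T * (C + log10 t_r) for the rupture time at temperature T."""
    return 10.0 ** (lmp_at_stress / temp_k - C_LM)

def creep_damage(hold_hours, temp_k, lmp_at_stress):
    return hold_hours / rupture_hours(temp_k, lmp_at_stress)

def fatigue_damage(cycles, cycles_to_failure):
    return cycles / cycles_to_failure

# One year of daily solar cycling (hypothetical numbers):
d_creep = creep_damage(hold_hours=10 * 365, temp_k=973.0, lmp_at_stress=24000.0)
d_fatigue = fatigue_damage(cycles=365, cycles_to_failure=20000)
print(d_creep + d_fatigue)  # linear damage index; below 1.0 means life remains
```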

  10. Methodological Framework for Analysis of Buildings-Related Programs with BEAMS, 2008

    SciTech Connect (OSTI)

    Elliott, Douglas B.; Dirks, James A.; Hostick, Donna J.

    2008-09-30

    The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official “benefits estimates” for each of its major programs using its Planning, Analysis, and Evaluation (PAE) Team. PAE conducts an annual integrated modeling and analysis effort to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. These estimates are part of EERE’s budget request and are also used in the formulation of EERE’s performance measures. Two of EERE’s major programs are the Building Technologies Program (BT) and the Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports PAE by developing the program characterizations and other market information necessary to provide input to the EERE integrated modeling analysis as part of PAE’s Portfolio Decision Support (PDS) effort. PNNL also supports BT by providing line-item estimates for the Program’s internal use. PNNL uses three modeling approaches to perform these analyses. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits using one of those methods: the Building Energy Analysis and Modeling System (BEAMS). BEAMS is a PC-based accounting model that was built in Visual Basic by PNNL specifically for estimating the benefits of buildings-related projects. It allows various types of projects to be characterized, including whole-building, envelope, lighting, and equipment projects. This document contains an overview section that describes the estimation process and the models used to estimate energy savings. The body of the document describes the algorithms used within the BEAMS software. This document serves both as stand-alone documentation for BEAMS and as a supplemental update of a previous document, Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort.
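
The essence of an accounting model like BEAMS is multiplying a deployment trajectory by per-unit savings. A stripped-down sketch of that calculation follows; the unit consumption, savings fraction, and deployment ramp are invented, far simpler than the program characterizations BEAMS actually uses.

```python
# Accounting-model sketch: annual energy savings = (efficient units in place)
# x (baseline unit energy consumption) x (fractional savings), per year.

def program_savings_twh(baseline_kwh_per_unit, frac_savings, units_by_year):
    """Annual savings in TWh for each year of a deployment ramp."""
    return [n * baseline_kwh_per_unit * frac_savings / 1e9 for n in units_by_year]

# Cumulative stock of efficient units in place, years 1-5 of a hypothetical program:
stock = [100_000, 350_000, 800_000, 1_500_000, 2_400_000]
savings = program_savings_twh(baseline_kwh_per_unit=1200.0, frac_savings=0.25,
                              units_by_year=stock)
print([round(s, 3) for s in savings])  # TWh/yr, growing with the deployed stock
```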

  11. Cost Methodology for Biomass Feedstocks: Herbaceous Crops and Agricultural Residues

    SciTech Connect (OSTI)

    Turhollow Jr, Anthony F; Webb, Erin; Sokhansanj, Shahabaddine

    2009-12-01

    This report describes a set of procedures and assumptions used to estimate production and logistics costs of bioenergy feedstocks from herbaceous crops and agricultural residues. The engineering-economic analysis discussed here is based on methodologies developed by the American Society of Agricultural and Biological Engineers (ASABE) and the American Agricultural Economics Association (AAEA). An engineering-economic analysis approach was chosen due to lack of historical cost data for bioenergy feedstocks. Instead, costs are calculated using assumptions for equipment performance, input prices, and yield data derived from equipment manufacturers, research literature, and/or standards. Cost estimates account for fixed and variable costs. Several examples of this costing methodology used to estimate feedstock logistics costs are included at the end of this report.
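
An engineering-economic machine cost estimate of the kind described above combines annualized fixed costs (capital recovery, taxes/housing) with hourly variable costs (fuel, labor, repair), divided by throughput. The structure follows ASABE-style machinery costing; every price and performance number below is illustrative, not from the report.

```python
# Engineering-economic cost per Mg for one harvest/logistics machine.

def annual_fixed_cost(purchase, salvage_frac, life_yr, rate):
    """Capital recovery (depreciation + interest) plus a flat 1% tax/housing charge."""
    salvage = purchase * salvage_frac
    crf = rate * (1 + rate) ** life_yr / ((1 + rate) ** life_yr - 1)
    return (purchase - salvage) * crf + salvage * rate + 0.01 * purchase

def hourly_variable_cost(fuel_l_per_h, fuel_price, labor_rate, repair_per_h):
    return fuel_l_per_h * fuel_price + labor_rate + repair_per_h

def cost_per_mg(purchase, hours_per_yr, throughput_mg_per_h, **kw):
    fixed = annual_fixed_cost(purchase, kw['salvage_frac'], kw['life_yr'], kw['rate'])
    var = hourly_variable_cost(kw['fuel'], kw['fuel_price'], kw['labor'], kw['repair'])
    return (fixed / hours_per_yr + var) / throughput_mg_per_h

# Hypothetical large baler: $90k purchase, 400 h/yr use, 10 Mg/h throughput.
baler = cost_per_mg(90000.0, 400.0, 10.0, salvage_frac=0.2, life_yr=10,
                    rate=0.07, fuel=25.0, fuel_price=0.8, labor=18.0, repair=12.0)
print(round(baler, 2))  # $/Mg for this hypothetical machine
```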

  12. Possible Improvements of the ACE Diversity Interchange Methodology

    SciTech Connect (OSTI)

    Etingov, Pavel V.; Zhou, Ning; Makarov, Yuri V.; Ma, Jian; Guttromson, Ross T.; McManus, Bart; Loutan, Clyde

    2010-07-26

    The North American Electric Reliability Corporation (NERC) grid is operated by about 131 balancing authorities (BAs). Within each BA, operators are responsible for managing imbalance (caused by both load and wind). As wind penetration levels increase, the challenge of managing power variation increases. Working independently, a balancing authority with limited regulating/load-following generation and high wind power penetration faces significant challenges. The benefits of BA cooperation and consolidation increase when there is significant wind energy penetration. To explore the benefits of BA cooperation, this paper investigates an ACE sharing approach. A technology called ACE diversity interchange (ADI) is already in use in the western interconnection. A new methodology extending ADI is proposed in this paper. The proposed advanced ADI overcomes some limitations of conventional ADI. Simulations using real statistical data from CAISO and BPA have shown high performance of the proposed advanced ADI methodology.
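
The benefit ADI exploits is that area control errors (ACEs) of opposite sign cancel when pooled. A minimal sketch follows; the pro-rata allocation rule here is an illustration of the netting idea, not the WECC ADI algorithm or the paper's advanced ADI.

```python
# Pool the raw ACEs of several balancing authorities: BAs whose error opposes
# the pool's net are fully relieved, and same-sign BAs carry a pro-rata share.
# The pooled errors always sum to the raw net, and no BA's burden increases.

def adi_adjusted_ace(aces_mw):
    net = sum(aces_mw)
    if net == 0:
        return [0.0] * len(aces_mw)
    same_sign = sum(a for a in aces_mw if a * net > 0)
    return [net * a / same_sign if a * net > 0 else 0.0 for a in aces_mw]

raw = [100.0, -60.0, 30.0, -20.0]   # MW, four hypothetical BAs
pooled = adi_adjusted_ace(raw)
print(pooled, sum(pooled))          # pooled burdens sum to the 50 MW net ACE
```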

  13. Wind Technology Modeling Within the System Advisor Model (SAM) (Poster)

    SciTech Connect (OSTI)

    Blair, N.; Dobos, A.; Ferguson, T.; Freeman, J.; Gilman, P.; Whitmore, J.

    2014-05-01

    This poster provides detail for the implementation and underlying methodology of modeling wind power generation performance in the National Renewable Energy Laboratory's (NREL's) System Advisor Model (SAM). SAM's wind power model allows users to assess projects involving one or more large or small wind turbines with any of the detailed options for residential, commercial, or utility financing. The model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs, and provides analysis to compare the absolute or relative impact of these inputs. SAM is a system performance and economic model designed to facilitate analysis and decision-making for project developers, financiers, policymakers, and energy researchers. The user pairs a generation technology with a financing option (residential, commercial, or utility) to calculate the cost of energy over the multi-year project period. Specifically, SAM calculates the value of projects that buy and sell power at retail rates (residential and commercial systems) as well as larger-scale projects that operate through a power purchase agreement (PPA) with a utility. The financial model captures complex financing and rate structures, taxes, and incentives.
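
The core of any such wind performance calculation is integrating a turbine power curve over the site's wind-speed distribution. The sketch below uses a Rayleigh distribution and a generic ~1.5 MW power curve, not SAM's turbine library or its hourly simulation.

```python
import math

def rayleigh_pdf(v, v_mean):
    """Rayleigh wind-speed density parameterized by the mean speed (m/s)."""
    return (math.pi * v / (2 * v_mean**2)) * math.exp(-math.pi * v**2 / (4 * v_mean**2))

def power_kw(v):
    """Generic power curve: cubic ramp between cut-in and rated, flat to cut-out."""
    cut_in, rated_v, cut_out, rated_kw = 3.0, 12.0, 25.0, 1500.0
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3)

def annual_energy_kwh(v_mean, dv=0.1):
    """Expected power times 8760 hours, by simple quadrature over wind speed."""
    expected_kw = sum(power_kw(i * dv) * rayleigh_pdf(i * dv, v_mean) * dv
                      for i in range(1, 400))
    return expected_kw * 8760.0

aep = annual_energy_kwh(7.5)  # 7.5 m/s mean hub-height wind
print(round(aep / (1500.0 * 8760.0), 3))  # implied capacity factor
```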

  14. Methodology for Clustering High-Resolution Spatiotemporal Solar Resource Data

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Methodology for Clustering High-Resolution Spatiotemporal Solar Resource Data Dan Getman, Anthony Lopez, Trieu Mai, and Mark Dyson National Renewable Energy Laboratory Technical Report NREL/TP-6A20-63148 September 2015 NREL is a national laboratory of the U.S. Department of Energy Office of Energy Efficiency & Renewable Energy Operated by the Alliance for Sustainable Energy, LLC This report is available at no cost from the National Renewable Energy Laboratory (NREL) at
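
A clustering step of this kind typically runs k-means over normalized resource feature vectors to reduce many sites to a few representative classes. The sketch below is a plain k-means on synthetic two-feature data; the report's actual feature engineering and cluster counts are more involved.

```python
import random

def kmeans(points, k, iters=20):
    # Deterministic seeding for the sketch: spread initial centers across the data.
    centers = [points[(len(points) - 1) * j // max(k - 1, 1)] for j in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum((a - b) ** 2
                                                      for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        # Recompute each center as its group's mean; keep old center if empty.
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

# Two synthetic "resource classes": sunny and cloudy profiles with noise.
rng = random.Random(0)
pts = ([[0.8 + rng.gauss(0, 0.02), 0.6 + rng.gauss(0, 0.02)] for _ in range(50)]
       + [[0.3 + rng.gauss(0, 0.02), 0.2 + rng.gauss(0, 0.02)] for _ in range(50)])
centers = kmeans(pts, 2)
print(sorted(round(c[0], 2) for c in centers))  # two classes near 0.3 and 0.8
```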

  15. Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Methodologies for Reservoir Characterization Using Fluid Inclusion Gas Chemistry Lorie M. Dilley Hattenburg Dilley & Linnell Track Name: Geochemistry Project Officer: Ava Coy Total Project Funding: $414,000 April 25, 2013 This presentation does not contain any proprietary, confidential, or otherwise restricted information. Fluid types interpreted from fluid inclusion gas chemistry across Coso geothermal system 2 | US DOE Geothermal Office eere.energy.gov

  16. Enzyme and methodology for the treatment of a biomass

    DOE Patents [OSTI]

    Thompson, Vicki S.; Thompson, David N.; Schaller, Kastli D.; Apel, William A.

    2010-06-01

    An enzyme isolated from an extremophilic microbe, and a method for utilizing same, are described, wherein the enzyme displays optimum enzymatic activity at a temperature of greater than about 80 °C and a pH of less than about 2, and further may be useful in methodology including pretreatment of a biomass so as to facilitate the production of an end product.

  17. Methodology for EIA Weekly Underground Natural Gas Storage Estimates

    Weekly Natural Gas Storage Report (EIA)

    Methodology for EIA Weekly Underground Natural Gas Storage Estimates Latest Update: November 16, 2015 This report consists of the following sections: Survey and Survey Processing - a description of the survey and an overview of the program Sampling - a description of the selection process used to identify companies in the survey Estimation - how the regional estimates are prepared from the collected data Computing the Five-year Averages, Maxima, Minima, and Year-Ago Values for the Weekly Natural
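
The five-year statistics published alongside the weekly estimate are straightforward: for a given report week, collect the same week from each of the previous five years and report the mean, maximum, and minimum. A sketch with made-up working-gas numbers:

```python
def five_year_stats(history, year, week):
    """history: {(year, week): working gas in Bcf}. Stats over the prior 5 years."""
    values = [history[(y, week)] for y in range(year - 5, year)]
    return sum(values) / len(values), max(values), min(values)

# Hypothetical week-12 levels for 2010-2014:
history = {(y, 12): v for y, v in zip(range(2010, 2015),
                                      [1700.0, 1850.0, 2400.0, 1650.0, 900.0])}
mean, hi, lo = five_year_stats(history, 2015, 12)
print(mean, hi, lo)  # 1700.0 2400.0 900.0
```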

  18. Design methodology for rock excavations at the Yucca Mountain project

    SciTech Connect (OSTI)

    Alber, M.; Bieniawski, Z.T.

    1993-12-31

    The problems involved in the design of the proposed underground repository for high-level nuclear waste call for novel design approaches. Guidelines for the design are given by the Mission Plan Amendment, in which licensing and regulatory aspects have to be satisfied. Moreover, systems engineering was proposed, advocating a top-down approach leading to the identification of discrete, implementable system elements. These objectives for the design process can be integrated in an engineering design methodology. While design methodologies for some engineering disciplines are available, they were of limited use for rock engineering because of the inherent uncertainties about the geologic media. Based on the axiomatic design approach of Suh, Bieniawski developed a methodology for design in rock. Design principles and design stages are clearly stated to assist in effective decision making. For overall performance goals, the domain of objectives is defined through functional requirements (FRs); design components (DCs), representing a design solution, satisfy the FRs, resulting in discrete, independent functional relations. Implementation is satisfied by evaluation and optimization of the design with respect to the constructibility of the design components.

  19. Reduced dimension rovibrational variational calculations of the S{sub 1} state of C{sub 2}H{sub 2}. I. Methodology and implementation

    SciTech Connect (OSTI)

    Changala, P. Bryan

    2014-01-14

    The bending and torsional degrees of freedom in S{sub 1} acetylene, C{sub 2}H{sub 2}, are subject to strong vibrational resonances and rovibrational interactions, which create complex vibrational polyad structures even at low energy. As the internal energy approaches that of the barrier to cis-trans isomerization, these energy level patterns undergo further large-scale reorganization that cannot be satisfactorily treated by traditional models tied to local minima of the potential energy surface for nuclear motion. Experimental spectra in the region near the cis-trans transition state have revealed these complicated new patterns. In order to understand near-barrier spectroscopic observations and to predict the detailed effects of cis-trans isomerization on the rovibrational energy level structure, we have performed reduced dimension rovibrational variational calculations of the S{sub 1} state. In this paper, we present the methodological details, several of which require special care. Our calculation uses a high accuracy ab initio potential surface and a fully symmetrized extended complete nuclear permutation inversion group theoretical treatment of a multivalued internal coordinate system that is appropriate for large amplitude bending and torsional motions. We also discuss the details of the rovibrational basis functions and their symmetrization, as well as the use of a constrained reduced dimension rovibrational kinetic energy operator.
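
The variational strategy, expanding in a finite basis and diagonalizing the Hamiltonian matrix, can be seen in a toy 1-D double-well problem, where levels below the isomerization barrier appear as near-degenerate tunneling doublets. This is a generic model potential on a finite-difference grid, not the S1 C2H2 surface or the paper's symmetrized internal-coordinate treatment.

```python
import numpy as np

n, L = 241, 12.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
v = (x**2 - 9.0) ** 2 / 36.0   # double well; barrier height 2.25 at x = 0

# Second-order finite-difference kinetic energy operator (hbar = m = 1).
t = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * dx**2)
h = t + np.diag(v)

energies = np.linalg.eigvalsh(h)[:4]
barrier = v[n // 2]
print(energies.round(3))  # lowest pair is a closely spaced tunneling doublet
```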

  20. Boundary Layer Structure:

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Boundary Layer Structure: a comparison between methods and sites Thiago Biscaro Suzane de Sá Jae-In Song Shaoyue "Emily" Qiu Mentors: Virendra Ghate and Ewan O'Connor July 24 2015 1st ever ARM Summer Training Outline * Introduction * Methodology * Results - SGP - MAO - Comparison between the 2 sites * Conclusions INTRODUCTION Focus: estimates of PBL height Boundary Layer: "The bottom layer of the troposphere that is in contact with the surface of the earth." (AMS, Glossary of

  1. Methodology, status and plans for development and assessment of Cathare code

    SciTech Connect (OSTI)

    Bestion, D.; Barre, F.; Faydide, B.

    1997-07-01

    This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of the code development and assessment is presented, along with the general strategy used for the development and assessment of the code. Analytical experiments with separate effect tests and component tests are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of a Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future development of the code are presented. They concern the optimization of code performance through parallel computing (the code will be used for real-time full-scope plant simulators), the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code to containment thermalhydraulics. Physical improvements are also required in the field of low-pressure transients and in the 3-D model.

  2. A METHODOLOGY TO INTEGRATE MAGNETIC RESONANCE AND ACOUSTIC MEASUREMENTS FOR RESERVOIR CHARACTERIZATION

    SciTech Connect (OSTI)

    Jorge O. Parra; Chris L. Hackert; Lorna L. Wilson

    2002-09-20

    core and borehole scales. Vp, density, porosity, and permeability logs are integrated with crosswell reflection data to produce impedance, permeability, and porosity images. These images capture three flow units that are characterized at the pore and borehole scales. The upper flow units are thin, continuous beds, and the deeper flow unit is thicker and heterogeneous. NMR well log calibration data and thin section analysis demonstrate that interwell region permeability is controlled mainly by micropores and macropores, which represent the flow unit matrices of the confined aquifer. Reflection image-derived impedance provides lateral detail and the depth of the deeper confining unit. The permeable regions identified in both parts of this phase of the study are consistent with the hydrological results of high water production being monitored between two wells in the South Florida aquifer. Finally, we describe the two major methodologies developed to support the aquifer characterization efforts--(1) a method to estimate frequency-dependent scattering attenuation based on the volume fraction and typical size of vugs or karsts, and (2) a method to more accurately interpret NMR well logs by taking into account the diffusion of magnetization between large and small pores. For the first method, we take the exact vug structure from x-ray CT scans of two carbonate cores and use 3-D finite difference modeling to determine the P-wave scattering attenuation in these cores at ultrasonic frequencies. In spite of the sharp contrast in medium properties between cavity and rock and the violation of the small perturbation assumption, the computed scattering attenuation is roughly comparable to that predicted by various random medium scattering theories. 
For the second method, we investigate how the diffusion of magnetization between macropores and micropores influences NMR log interpretation through 2D simulation of magnetization diffusion in realistic macropore geometries derived from

  3. Computational implementation of a systems prioritization methodology for the Waste Isolation Pilot Plant: A preliminary example

    SciTech Connect (OSTI)

    Helton, J.C.; Anderson, D.R.; Baker, B.L.

    1996-04-01

    A systems prioritization methodology (SPM) is under development to provide guidance to the US DOE on experimental programs and design modifications to be supported in the development of a successful licensing application for the Waste Isolation Pilot Plant (WIPP) for the geologic disposal of transuranic (TRU) waste. The purpose of the SPM is to determine the probabilities that the implementation of different combinations of experimental programs and design modifications, referred to as activity sets, will lead to compliance. Appropriate tradeoffs between compliance probability, implementation cost and implementation time can then be made in the selection of the activity set to be supported in the development of a licensing application. Descriptions are given for the conceptual structure of the SPM and the manner in which this structure determines the computational implementation of an example SPM application. Due to the sophisticated structure of the SPM and the computational demands of many of its components, the overall computational structure must be organized carefully to provide the compliance probabilities for the large number of activity sets under consideration at an acceptable computational cost. Conceptually, the determination of each compliance probability is equivalent to a large numerical integration problem. 96 refs., 31 figs., 36 tabs.
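
Since each compliance probability is conceptually a large numerical integral, a Monte Carlo sketch conveys the computational structure: sample uncertain futures, count the fraction that comply, and compare activity sets. The release model below is invented (a lognormal whose spread shrinks with the number of supported activities), not WIPP performance-assessment physics.

```python
import random

def compliance_probability(activity_set, n=20000, limit=1.0, seed=1):
    """Fraction of sampled futures in which a normalized release stays below limit."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Assumption for illustration: each supported activity narrows the
        # uncertainty in the predicted release.
        spread = 1.0 / (1.0 + len(activity_set))
        release = rng.lognormvariate(-0.5, 0.5 + spread)
        hits += release < limit
    return hits / n

p_small = compliance_probability({"seal_tests"})
p_large = compliance_probability({"seal_tests", "gas_generation", "backfill"})
print(p_small, p_large)  # the larger activity set yields a higher probability
```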

  4. Linear Scaling Electronic Structure Methods with Periodic Boundary Conditions

    SciTech Connect (OSTI)

    Gustavo E. Scuseria

    2008-02-08

    The methodological development and computational implementation of linear scaling quantum chemistry methods for the accurate calculation of electronic structure and properties of periodic systems (solids, surfaces, and polymers) and their application to chemical problems of DOE relevance.

  5. A filtered tabulated chemistry model for LES of premixed combustion

    SciTech Connect (OSTI)

    Fiorina, B.; Auzillon, P.; Darabiha, N.; Gicquel, O.; Veynante, D.; Vicquelin, R.

    2010-03-15

    A new modeling strategy called F-TACLES (Filtered Tabulated Chemistry for Large Eddy Simulation) is developed to introduce tabulated chemistry methods in Large Eddy Simulation (LES) of turbulent premixed combustion. The objective is to recover the correct laminar flame propagation speed of the filtered flame front when subgrid scale turbulence vanishes as LES should tend toward Direct Numerical Simulation (DNS). The filtered flame structure is mapped using 1-D filtered laminar premixed flames. Closures of the filtered progress variable and energy balance equations are carefully addressed in a fully compressible formulation. The methodology is first applied to 1-D filtered laminar flames, showing the ability of the model to recover the laminar flame speed and the correct chemical structure when the flame wrinkling is completely resolved. The model is then extended to turbulent combustion regimes by including subgrid scale wrinkling effects in the flame front propagation. Finally, preliminary tests of LES in a 3-D turbulent premixed flame are performed. (author)
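
The key property behind recovering the laminar flame speed is that Gaussian filtering broadens the flame but preserves the integrated reaction rate, which is what sets the propagation speed. A 1-D numerical check follows; the reaction-rate profile is a model shape, not a real chemistry tabulation.

```python
import math

def gaussian(x, delta):
    """Unit-integral Gaussian LES filter kernel of width delta."""
    return math.exp(-6.0 * x**2 / delta**2) * math.sqrt(6.0 / math.pi) / delta

dx = 0.01
xs = [i * dx for i in range(-500, 501)]
flame_thickness = 0.05
omega = [math.exp(-(x / flame_thickness) ** 2) for x in xs]  # reaction-rate shape

delta_filter = 0.4  # LES filter width, much larger than the flame thickness
filtered = [sum(w * gaussian(x - xj, delta_filter) * dx
                for xj, w in zip(xs, omega)) for x in xs]

unfiltered_integral = sum(omega) * dx
filtered_integral = sum(filtered) * dx
# The filtered profile is much broader and lower, but its integral (and hence
# the implied flame speed) is unchanged up to discretization error.
print(unfiltered_integral, filtered_integral, max(omega), max(filtered))
```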

  6. Methodology for comparing a standoff weapon with current conventional munitions in a runway attack scenario. Master's thesis

    SciTech Connect (OSTI)

    Coulter, D.M.; Fry, D.W.

    1986-03-01

    This research developed a SLAM discrete-event simulation model to support a methodology for comparing a standoff weapon with current conventional weapons. This study is limited to the defensive threats within a 20-NM terminal area surrounding a generic Warsaw Pact airfield. The emphasis of the study was the simulation of the standoff weapon interactions with the terminal threats. Previous models have not attempted to model the threat reactions to standoff weapons. The resulting simulation enables the analyst to study the effects of weapon release conditions on weapon attrition, runway damage effectiveness, and aircraft attrition.

  7. Fukushima Daiichi unit 1 uncertainty analysis--Preliminary selection of uncertain parameters and analysis methodology

    SciTech Connect (OSTI)

    Cardoni, Jeffrey N.; Kalinich, Donald A.

    2014-02-01

    Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi unit (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). However, that study only examined a handful of various model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
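
Scoping studies of this kind typically propagate a small set of uncertain inputs with stratified sampling. A Latin hypercube sketch over a handful of melt-progression parameters follows; the parameter names and ranges are hypothetical, not the study's actual selections.

```python
import random

def latin_hypercube(ranges, n, seed=42):
    """One stratified sample per equal-probability bin, for each parameter."""
    rng = random.Random(seed)
    samples = []
    for lo, hi in ranges.values():
        strata = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)   # decorrelate parameters across runs
        samples.append(strata)
    keys = list(ranges)
    return [dict(zip(keys, vals)) for vals in zip(*samples)]

ranges = {
    "zircaloy_failure_temp_k": (2100.0, 2540.0),
    "debris_porosity": (0.3, 0.5),
    "vessel_failure_strain": (0.05, 0.20),
}
runs = latin_hypercube(ranges, n=10)  # 10 MELCOR input decks' worth of samples
print(len(runs), sorted(runs[0]))
```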

  8. Methodology for optimizing the development and operation of gas storage fields

    SciTech Connect (OSTI)

    Mercer, J.C.; Ammer, J.R.; Mroz, T.H.

    1995-04-01

    The Morgantown Energy Technology Center is pursuing the development of a methodology that uses geologic modeling and reservoir simulation for optimizing the development and operation of gas storage fields. Several Cooperative Research and Development Agreements (CRADAs) will serve as the vehicle to implement this product. CRADAs have been signed with National Fuel Gas and Equitrans, Inc. A geologic model is currently being developed for the Equitrans CRADA. Results from the CRADA with National Fuel Gas are discussed here. The first phase of the CRADA, based on original well data, was completed last year and reported at the 1993 Natural Gas RD&D Contractors Review Meeting. Phase 2 analysis was completed based on additional core and geophysical well log data obtained during a deepening/relogging program conducted by the storage operator. Good matches of wellhead pressure, within 10 percent, were obtained using a numerical simulator to history-match 2.5 injection/withdrawal cycles.

  9. QuickSite{sup SM}, the Argonne expedited site characterization methodology,

    SciTech Connect (OSTI)

    Burton, J.C.; Meyer, W.T.

    1997-09-01

    Expedited site characterization (ESC), developed by Argonne National Laboratory, is an interactive, integrated process emphasizing the use of existing data of sufficient quality, multiple complementary characterization methods, and on-site decision making to optimize site investigations. The Argonne ESC is the basis for the provisional ESC standard of the ASTM (American Society for Testing and Materials). QuickSite{sup SM} is the implementation package developed by Argonne to facilitate ESC of sites contaminated with hazardous wastes. At various sites, Argonne has successfully implemented QuickSite{sup SM} and demonstrated the technical superiority of the ESC process over traditional methodologies guided by statistics and random-sampling approaches. For example, in a QuickSite{sup SM} characterization of a perched aquifer at the Pantex Plant in Texas, past data and geochemical analyses of existing wells were used to develop a model for recharge and contaminant movement. With the model as a guide, closure was achieved with minimal field work.

  10. Deployment evaluation methodology for the electrometallurgical treatment of DOE-EM spent nuclear fuel

    SciTech Connect (OSTI)

    Dahl, C.A.; Adams, J.P.; Ramer, R.J.

    1998-07-01

    Part of the Department of Energy (DOE) spent nuclear fuel (SNF) inventory may require some type of treatment to meet acceptance criteria at various disposition sites. The current focus for much of this spent nuclear fuel is the electrometallurgical treatment process under development at Argonne National Laboratory. Potential flowsheets for this treatment process are presented. Deployment of the process for the treatment of the spent nuclear fuel requires evaluation to determine the spent nuclear fuel program need for treatment and compatibility of the spent nuclear fuel with the process. The evaluation of need includes considerations of cost, technical feasibility, process material disposition, and schedule to treat a proposed fuel. A siting evaluation methodology has been developed to account for these variables. A work breakdown structure is proposed to gather life-cycle cost information to allow evaluation of alternative siting strategies on a similar basis. The evaluation methodology, while created specifically for the electrometallurgical evaluation, has been written such that it could be applied to any potential treatment process that is a disposition option for spent nuclear fuel. Future work to complete the evaluation of the process for electrometallurgical treatment is discussed.

  11. Simplified Plant Analysis Risk (SPAR) Human Reliability Analysis (HRA) Methodology: Comparisons with other HRA Methods

    SciTech Connect (OSTI)

    Byers, James Clifford; Gertman, David Ira; Hill, Susan Gardiner; Blackman, Harold Stabler; Gentillon, Cynthia Ann; Hallbert, Bruce Perry; Haney, Lon Nolan

    2000-08-01

    The 1994 Accident Sequence Precursor (ASP) human reliability analysis (HRA) methodology was developed for the U.S. Nuclear Regulatory Commission (USNRC) in 1994 by the Idaho National Engineering and Environmental Laboratory (INEEL). It was decided to revise that methodology for use by the Simplified Plant Analysis Risk (SPAR) program. The 1994 ASP HRA methodology was compared, by a team of analysts, on a point-by-point basis to a variety of other HRA methods and sources. This paper briefly discusses how the comparisons were made and how the 1994 ASP HRA methodology was revised to incorporate desirable aspects of other methods. The revised methodology was renamed the SPAR HRA methodology.
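
Methods in this family quantify a human error probability (HEP) as a nominal rate scaled by performance shaping factor (PSF) multipliers. The sketch below mimics that structure; the nominal rates and multiplier values are illustrative, not the published SPAR-H tables.

```python
# Nominal error rates by task type (illustrative values).
NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}

def spar_like_hep(task_type, psf_multipliers):
    """Nominal HEP scaled by PSF multipliers, capped at a probability of 1."""
    hep = NOMINAL_HEP[task_type]
    for m in psf_multipliers.values():
        hep *= m
    # SPAR-H also applies an adjustment formula when the composite multiplier
    # is large; only the probability cap is kept in this sketch.
    return min(hep, 1.0)

hep = spar_like_hep("diagnosis", {"available_time": 10.0,   # barely adequate time
                                  "stress": 2.0,            # elevated stress
                                  "experience": 0.5})       # well-trained crew
print(hep)  # 0.1
```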

  12. Simplified plant analysis risk (SPAR) human reliability analysis (HRA) methodology: Comparisons with other HRA methods

    SciTech Connect (OSTI)

    J. C. Byers; D. I. Gertman; S. G. Hill; H. S. Blackman; C. D. Gentillon; B. P. Hallbert; L. N. Haney

    2000-07-31

    The 1994 Accident Sequence Precursor (ASP) human reliability analysis (HRA) methodology was developed for the U.S. Nuclear Regulatory Commission (USNRC) in 1994 by the Idaho National Engineering and Environmental Laboratory (INEEL). It was decided to revise that methodology for use by the Simplified Plant Analysis Risk (SPAR) program. The 1994 ASP HRA methodology was compared, by a team of analysts, on a point-by-point basis to a variety of other HRA methods and sources. This paper briefly discusses how the comparisons were made and how the 1994 ASP HRA methodology was revised to incorporate desirable aspects of other methods. The revised methodology was renamed the SPAR HRA methodology.

  13. Three-Dimensional Thermal-Electrochemical Coupled Model for Spirally Wound Large-Format Lithium-Ion Batteries (Presentation)

    SciTech Connect (OSTI)

    Lee, K. J.; Smith, K.; Kim, G. H.

    2011-04-01

    This presentation discusses the behavior of spirally wound large-format Li-ion batteries with respect to their design. The objectives of the study include developing thermal and electrochemical models resolving 3-dimensional spirally wound structures of cylindrical cells, understanding the mechanisms and interactions between local electrochemical reactions and macroscopic heat and electron transfers, and developing a tool and methodology to support macroscopic designs of cylindrical Li-ion battery cells.
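
The macroscopic heat-transfer side of such a coupled model can be caricatured as joule heating feeding a lumped thermal mass with convective cooling. The cell parameters below are generic cylindrical-cell values, vastly simpler than a resolved spiral-wound 3-D model.

```python
def simulate_cell_temp(current_a, resistance_ohm, minutes, t_amb=25.0):
    """Explicit-Euler lumped thermal model of one cell; returns final temp in C."""
    mass_cp = 45.0 * 0.9   # g times J/(g K): thermal mass in J/K
    h_area = 0.035         # W/K convective conductance to ambient
    temp, dt = t_amb, 1.0  # start at ambient; 1 s steps
    for _ in range(int(minutes * 60)):
        q_gen = current_a**2 * resistance_ohm     # joule heating, W
        q_loss = h_area * (temp - t_amb)          # convective loss, W
        temp += (q_gen - q_loss) * dt / mass_cp
    return temp

print(round(simulate_cell_temp(5.0, 0.05, 30), 1))  # deg C after a 30 min discharge
```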

  14. REVIEW OF PROPOSED METHODOLOGY FOR A RISK- INFORMED RELAXATION TO ASME SECTION XI APPENDIX G

    SciTech Connect (OSTI)

    Dickson, Terry L; Kirk, Mark

    2010-01-01

    The current regulations, as set forth by the United States Nuclear Regulatory Commission (NRC), to ensure that light-water nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to planned normal reactor startup (heat-up) and shut-down (cool-down) transients are specified in Appendix G to 10 CFR Part 50, which incorporates by reference Appendix G to Section XI of the American Society of Mechanical Engineers (ASME) Code. The technical basis for these regulations is now recognized by the technical community as conservative, and some plants are finding it increasingly difficult to comply with the current regulations. Consequently, the nuclear industry has developed, and submitted to the ASME Code for approval, an alternative risk-informed methodology that reduces the conservatism and is consistent with the methods previously used to develop a risk-informed revision to the regulations for accidental transients such as pressurized thermal shock (PTS). The objective of the alternative methodology is to provide a relaxation to the current regulations which will provide more operational flexibility, particularly for reactor pressure vessels with relatively high irradiation levels and radiation-sensitive materials, while continuing to provide reasonable assurance of adequate protection to public health and safety. The NRC and its contractor at Oak Ridge National Laboratory (ORNL) have recently performed an independent review of the industry-proposed methodology. The NRC/ORNL review consisted of performing probabilistic fracture mechanics (PFM) analyses for a matrix of cool-down and heat-up rates, permuted over various reactor geometries and characteristics, each at multiple levels of embrittlement, including 60 effective full power years (EFPY) and beyond, for various postulated flaw characterizations. The objective of this review is to quantify the risk of a reactor vessel experiencing non-ductile fracture, and possible
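
A probabilistic fracture mechanics calculation of this type samples flaw sizes and material toughness, then counts the fraction of samples where the applied stress intensity exceeds toughness. The miniature below shows the structure only: the flaw distribution, the surface-flaw K solution, and the embrittlement-versus-toughness relation are simplified stand-ins, not the models used in codes such as FAVOR.

```python
import math, random

def prob_fracture(stress_mpa, rt_ndt_shift_c, n=50000, seed=7):
    """Monte Carlo conditional probability of non-ductile fracture (illustrative)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        a = rng.expovariate(1.0 / 0.005)                        # flaw depth, m
        k_applied = 1.12 * stress_mpa * math.sqrt(math.pi * a)  # MPa*sqrt(m)
        # Toughness degrades with the embrittlement shift; lognormal scatter.
        k_ic = rng.lognormvariate(math.log(max(200.0 - rt_ndt_shift_c, 40.0)), 0.15)
        failures += k_applied > k_ic
    return failures / n

low = prob_fracture(250.0, rt_ndt_shift_c=30.0)
high = prob_fracture(250.0, rt_ndt_shift_c=120.0)
print(low, high)  # higher embrittlement raises the fracture probability
```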

  15. Attack Methodology Analysis: Emerging Trends in Computer-Based Attack Methodologies and Their Applicability to Control System Networks

    SciTech Connect (OSTI)

    Bri Rolston

    2005-06-01

    Threat characterization is a key component in evaluating the threat faced by control systems. Without a thorough understanding of the threat faced by critical infrastructure networks, adequate resources cannot be allocated or directed effectively to the defense of these systems. Traditional methods of threat analysis focus on identifying the capabilities and motivations of a specific attacker, assessing the value the adversary would place on targeted systems, and deploying defenses according to the threat posed by the potential adversary. However, so many effective exploits and tools are readily available to anyone with an Internet connection, minimal technical skill, and modest motivation that the field of potential adversaries cannot be narrowed effectively. Understanding how hackers evaluate new IT security research and incorporate significant new ideas into their own tools provides a means of anticipating how IT systems are most likely to be attacked in the future. This research, Attack Methodology Analysis (AMA), could supply pertinent information on how to detect and stop new types of attacks. Since the exploit methodologies and attack vectors developed in the general Information Technology (IT) arena can be converted for use against control system environments, assessing areas in which cutting-edge exploit development and remediation techniques are occurring can provide significant intelligence for control system network exploitation and defense, and a means of assessing threat without identifying specific capabilities of individual opponents. Attack Methodology Analysis begins with the study of the exploit technology and attack methodologies being developed in the Information Technology (IT) security research community, spanning both the black hat and white hat communities. 
Once a solid understanding of the cutting edge security research is established, emerging trends in attack methodology can be identified and the gap between

  16. Improved methodology to assess modification and completion of landfill gas management in the aftercare period

    SciTech Connect (OSTI)

    Morris, Jeremy W.F.; Crest, Marion; Barlaz, Morton A.; Spokas, Kurt A.; Akerman, Anna; Yuan, Lei

    2012-12-15

    Highlights: • Performance-based evaluation of landfill gas control system. • Analytical framework to evaluate transition from active to passive gas control. • Focus on cover oxidation as an alternative means of passive gas control. • Integrates research on long-term landfill behavior with practical guidance. - Abstract: Municipal solid waste landfills represent the dominant option for waste disposal in many parts of the world. While some countries have greatly reduced their reliance on landfills, there remain thousands of landfills that require aftercare. The development of cost-effective strategies for landfill aftercare is in society's interest to protect human health and the environment and to prevent the emergence of landfills with exhausted aftercare funding. The Evaluation of Post-Closure Care (EPCC) methodology is a performance-based approach in which landfill performance is assessed in four modules including leachate, gas, groundwater, and final cover. In the methodology, the objective is to evaluate landfill performance to determine when aftercare monitoring and maintenance can be reduced or possibly eliminated. This study presents an improved gas module for the methodology. While the original version of the module focused narrowly on regulatory requirements for control of methane migration, the improved gas module also considers best available control technology for landfill gas in terms of greenhouse gas emissions, air quality, and emissions of odoriferous compounds. The improved module emphasizes the reduction or elimination of fugitive methane by considering the methane oxidation capacity of the cover system. The module also allows for the installation of biologically active covers or other features designed to enhance methane oxidation. 
A methane emissions model, CALMIM, was used to assist with an assessment of the methane oxidation capacity of
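    The fugitive-methane accounting that the improved gas module emphasizes can be illustrated with a one-line mass balance; this is a sketch under assumed numbers, not the CALMIM model or the EPCC module itself.

    ```python
    def fugitive_methane(generated, collection_eff, oxidation_frac):
        """Fugitive CH4 after collection and cover oxidation (same mass units/yr).

        generated      : CH4 generated in the waste mass
        collection_eff : fraction captured by the active gas system (0 if passive)
        oxidation_frac : fraction of the *uncollected* CH4 oxidized in the cover
        """
        uncollected = generated * (1.0 - collection_eff)
        return uncollected * (1.0 - oxidation_frac)

    # Illustrative comparison: an active system early in aftercare versus a
    # passive oxidizing cover late in aftercare, when generation has declined.
    active  = fugitive_methane(100.0, collection_eff=0.75, oxidation_frac=0.10)
    passive = fugitive_methane(20.0,  collection_eff=0.00, oxidation_frac=0.40)
    ```

    The comparison shows the module's core idea: once generation declines, a cover with sufficient oxidation capacity can yield lower fugitive emissions than continued active collection would require.
    
    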

  17. Theory of the electronic structure of dilute bismide and bismide-nitride alloys of GaAs: Tight-binding and k.p models

    SciTech Connect (OSTI)

    Usman, Muhammad; Broderick, Christopher A.; O'Reilly, Eoin P.

    2013-12-04

    The addition of dilute concentrations of bismuth (Bi) into GaAs to form GaBi{sub x}As{sub 1-x} alloys results in a large reduction of the band gap energy (E{sub g}) accompanied by a significant increase of the spin-orbit-splitting energy (Δ{sub SO}), leading to an E{sub g} < Δ{sub SO} regime for x ≳ 10%, which is technologically relevant for the design of highly efficient photonic devices. The quaternary alloy GaBi{sub x}N{sub y}As{sub 1-x-y} offers further flexibility for band gap tuning, because both nitrogen and bismuth can independently induce band gap reduction. This work reports sp{sup 3}s* tight-binding and 14-band k·p models for the study of the electronic structure of GaBi{sub x}As{sub 1-x} and GaBi{sub x}N{sub y}As{sub 1-x-y} alloys. Our results are in good agreement with the available experimental data.
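    The band-structure effect described above is often pictured with a two-level band-anticrossing Hamiltonian, in which a localized Bi level couples to the host valence band edge with strength proportional to sqrt(x). The sketch below uses that generic picture with entirely hypothetical parameter values; it is not the paper's tight-binding or 14-band k·p model.

    ```python
    import math

    def anticrossing_levels(e_host, e_bi, coupling, x):
        """Eigenvalues (eV) of the 2x2 band-anticrossing Hamiltonian
        [[e_host, V*sqrt(x)], [V*sqrt(x), e_bi]], with V = coupling."""
        v = coupling * math.sqrt(x)
        mean = 0.5 * (e_host + e_bi)
        half = math.sqrt((0.5 * (e_host - e_bi))**2 + v**2)
        return mean - half, mean + half  # (lower, upper) branches

    # Illustrative numbers only: host valence band edge at 0 eV, Bi level
    # 0.2 eV below it, coupling 1.5 eV. The upper branch is pushed up with
    # increasing Bi fraction x, which is what shrinks the band gap.
    lo, hi = anticrossing_levels(0.0, -0.2, 1.5, 0.05)
    ```

    Raising `x` pushes the upper branch higher, reproducing qualitatively the rapid band-gap reduction the abstract describes.
    
    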

  18. Microsoft Word - eGallon methodology update Jan 2016.docx

    Broader source: Energy.gov (indexed) [DOE]

    ... Trends, October 2014. 4 This includes Tesla Model S, Nissan Leaf, Chevrolet Volt, BMW ... PEV Model kWh100 Miles Combined 1 Chevrolet Volt 35 Nissan Leaf 30 Tesla Model S 34 BMW ...

  19. Technical support document: Energy efficiency standards for consumer products: Room air conditioners, water heaters, direct heating equipment, mobile home furnaces, kitchen ranges and ovens, pool heaters, fluorescent lamp ballasts and television sets. Volume 1, Methodology

    SciTech Connect (OSTI)

    Not Available

    1993-11-01

    The Energy Policy and Conservation Act (P.L. 94-163), as amended, establishes energy conservation standards for 12 of the 13 types of consumer products specifically covered by the Act. The legislation requires the Department of Energy (DOE) to consider new or amended standards for these and other types of products at specified times. DOE is currently considering amending standards for seven types of products: water heaters, direct heating equipment, mobile home furnaces, pool heaters, room air conditioners, kitchen ranges and ovens (including microwave ovens), and fluorescent light ballasts and is considering establishing standards for television sets. This Technical Support Document presents the methodology, data, and results from the analysis of the energy and economic impacts of the proposed standards. This volume presents a general description of the analytic approach, including the structure of the major models.

  20. Work plan for revising the RAMC mine costing methodology. [USA; regional; modifications; based on specific mines reviewed]

    SciTech Connect (OSTI)

    Not Available

    1980-06-16

    Based on discussions with the Technical Project Officer and current budget constraints, the approach chosen for improving the RAMC mine costing methodology is as follows: Develop a set of regional model mines (both surface and underground) which reflect mining conditions and preference for each major producing district; develop regional equations relating capital and operating costs to various system components; and develop the input data necessary for each estimating relationship. To date, engineering work-ups for all model mines have been prepared, a preliminary surface mine cost model has been developed and steps have been taken to reduce EIA-7 data for use in developing an underground cost model. Descriptions of the surface and underground model mines are contained in Appendices A and B, respectively, and the preliminary surface mine cost model is contained in Appendix C.

  1. FTCP-08-001, Methodology for Counting TQP Personnel and Qualifications...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    1, Methodology for Counting TQP Personnel and Qualifications FTCP-08-001, Methodology for Counting TQP Personnel and Qualifications FTCP Issue Paper: FTCP-08-001 Approved by FTCP, ...

  2. Distributed Wind Diffusion Model Overview (Presentation)

    SciTech Connect (OSTI)

    Preus, R.; Drury, E.; Sigrin, B.; Gleason, M.

    2014-07-01

    Distributed wind market demand is driven by current and future wind price and performance, along with several non-price market factors like financing terms, retail electricity rates and rate structures, future wind incentives, and others. We developed a new distributed wind technology diffusion model for the contiguous United States that combines hourly wind speed data at 200m resolution with high resolution electricity load data for various consumer segments (e.g., residential, commercial, industrial), electricity rates and rate structures for utility service territories, incentive data, and high resolution tree cover. The model first calculates the economics of distributed wind at high spatial resolution for each market segment, and then uses a Bass diffusion framework to estimate the evolution of market demand over time. The model provides a fundamental new tool for characterizing how distributed wind market potential could be impacted by a range of future conditions, such as electricity price escalations, improvements in wind generator performance and installed cost, and new financing structures. This paper describes model methodology and presents sample results for distributed wind market potential in the contiguous U.S. through 2050.
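    The Bass diffusion step described above has a standard closed form for the cumulative adoption fraction; the sketch below uses illustrative coefficients, not the study's fitted values.

    ```python
    import math

    def bass_cumulative(t, p, q):
        """Closed-form cumulative adoption fraction F(t) of the Bass model,
        with innovation coefficient p and imitation coefficient q:
        F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p) exp(-(p+q)t))."""
        e = math.exp(-(p + q) * t)
        return (1.0 - e) / (1.0 + (q / p) * e)

    # Hypothetical coefficients for a distributed-wind market segment,
    # evaluated 35 years out (roughly the paper's 2050 horizon).
    adoption_2050 = bass_cumulative(35, p=0.003, q=0.38)
    ```

    In the model as described, the economics calculated at high spatial resolution would feed the ultimate market size and the coefficients, with F(t) then scaling demand in each segment over time.
    
    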

  3. DOE Systems Engineering Methodology (SEM): Stage Exit V3 | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Systems Engineering Methodology (SEM): Stage Exit V3 DOE Systems Engineering Methodology (SEM): Stage Exit V3 The DOE Systems Engineering Methodology (SEM) describes the standard system development lifecycle (SDLC) used for information systems developed and maintained for the Department Of Energy DOE Systems Engineering Methodology (SEM): Stage Exit V3 (96.05 KB) More Documents & Publications Software Quality Assurance Plan Example In-Stage Assessment Process Guide Systems

  4. RTI International Develops SSL Luminaire System Reliability Model

    Broader source: Energy.gov [DOE]

    With the help of DOE funding, RTI International is developing and validating accelerated life testing (ALT) methodologies and reliability models for predicting the lifetime of integrated solid...

  5. Renewable Energy Assessment Methodology for Japanese OCONUS Army Installations

    SciTech Connect (OSTI)

    Solana, Amy E.; Horner, Jacob A.; Russo, Bryan J.; Gorrissen, Willy J.; Kora, Angela R.; Weimar, Mark R.; Hand, James R.; Orrell, Alice C.; Williamson, Jennifer L.

    2010-08-30

    Since 2005, Pacific Northwest National Laboratory (PNNL) has been asked by Installation Management Command (IMCOM) to conduct strategic assessments at selected US Army installations of the potential use of renewable energy resources, including solar, wind, geothermal, biomass, waste, and ground source heat pumps (GSHPs). IMCOM has the same economic, security, and legal drivers to develop alternative, renewable energy resources overseas as it has for installations located in the US. The approach for continental US (CONUS) studies has been to use known, US-based renewable resource characterizations and information sources coupled with local, site-specific sources and interviews. However, the extent to which this sort of data might be available for outside the continental US (OCONUS) sites was unknown. An assessment at Camp Zama, Japan was completed as a trial to test the applicability of the CONUS methodology at OCONUS installations. It was found that, with some help from Camp Zama personnel in translating and locating a few Japanese sources, there was relatively little difficulty in finding sources that should provide a solid basis for conducting an assessment of comparable depth to those conducted for US installations. Project implementation will likely be more of a challenge, but the feasibility analysis will be able to use the same basic steps, with some adjusted inputs, as PNNL’s established renewable resource assessment methodology.

  6. Methodology and Process for Condition Assessment at Existing Hydropower Plants

    SciTech Connect (OSTI)

    Zhang, Qin Fen; Smith, Brennan T; Cones, Marvin; March, Patrick; Dham, Rajesh; Spray, Michael

    2012-01-01

    The Hydropower Advancement Project (HAP) was initiated by the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy to develop and implement a systematic process, with a standard methodology, to identify opportunities for performance improvement at existing hydropower facilities and to predict and trend the overall condition and improvement opportunity within the U.S. hydropower fleet. The concept of performance for the HAP focuses on water use efficiency: how well a plant or individual unit converts potential energy to electrical energy over a long-term averaging period of a year or more. Performance improvement involves not only optimization of plant dispatch and scheduling but also enhancement of efficiency and availability through advanced technology and asset upgrades, and thus requires inspection and condition assessment of equipment, control systems, and other generating assets. This paper discusses the standard methodology and process for condition assessment of approximately 50 nationwide facilities, including sampling techniques to ensure valid expansion of the 50 assessment results to the entire hydropower fleet. The application and refining process and the results from three demonstration assessments are also presented in this paper.

  7. Optimal pulsed pumping schedule using calculus of variation methodology

    SciTech Connect (OSTI)

    Johannes, T.W.

    1999-03-01

    The application of a variational optimization technique has demonstrated the potential strength of pulsed pumping operations for use at existing pump-and-treat aquifer remediation sites. The optimized pulsed pumping technique has exhibited notable improvements in operational effectiveness over continuous pumping, and an advantage over uniform time intervals for pumping and resting cycles. The most important finding supports the potential for managing and improving pumping operations in the absence of complete knowledge of plume characteristics. An objective functional was selected to minimize the mass of water removed and the non-essential mass of contaminant removed. General forms of an essential concentration function were analyzed to determine the appropriate form required for compliance with management preferences. Third-order essential concentration functions provided optimal solutions for the objective functional, and using this form of the essential concentration function in the methodology provided optimal solutions for switching times. The methodology was applied to a hypothetical, two-dimensional aquifer influenced by specified and no-flow boundaries, injection wells, and extraction wells. Flow simulations used MODFLOW, transport simulations used MT3D, and the graphical interface for obtaining concentration time series data and flow/transport links was generated by GMS version 2.1.
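    The pulsed-pumping idea, switching the extraction wells on when well-bore concentration rebounds to a time-varying "essential concentration" threshold, can be caricatured with a toy simulation. Everything below (the exponential rebound/removal dynamics, the cubic threshold coefficients, the half-threshold shutoff rule) is a hypothetical stand-in for the report's variational solution.

    ```python
    def switching_times(c_ess, rebound, removal, c0, t_end, dt=0.01):
        """Simulate on/off pumping: the pump turns on when normalized
        concentration reaches the threshold c_ess(t) and off when it falls
        to half that threshold. Returns the ordered list of switch times.
        Dynamics are illustrative first-order relaxations, not MT3D."""
        t, c, on, switches = 0.0, c0, False, []
        while t < t_end:
            if not on and c >= c_ess(t):
                on, switches = True, switches + [t]
            elif on and c <= 0.5 * c_ess(t):
                on, switches = False, switches + [t]
            dc = -removal * c if on else rebound * (1.0 - c)
            c += dc * dt
            t += dt
        return switches

    # Third-order (cubic) threshold, echoing the study's preferred functional form.
    ess = lambda t: 0.4 + 0.02 * (t / 10.0) ** 3
    sw = switching_times(ess, rebound=0.3, removal=0.8, c0=0.1, t_end=30.0)
    ```

    The alternating on/off times in `sw` are the analog of the optimal switching times the variational methodology solves for.
    
    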

  8. Structural aging program status report

    SciTech Connect (OSTI)

    Naus, D.J.; Oland, C.B.; Ellingwood, B.; Graves, H.L. III

    1994-12-31

    Research is being conducted at the Oak Ridge National Laboratory under U.S. Nuclear Regulatory Commission (USNRC) sponsorship to address aging management of safety-related concrete structures. Documentation is being prepared to provide the USNRC with potential structural safety issues and acceptance criteria for use in continued service evaluations of nuclear power plants. Program accomplishments have included development of the Structural Materials Information Center containing data and information on the time variation of 144 material properties under the influence of pertinent environmental stressors or aging factors, performance assessments of reinforced concrete structures in several United Kingdom nuclear power facilities, evaluation of European and North American repair practices for concrete, an evaluation of factors affecting the corrosion of metals embedded in concrete, and application of the time-dependent reliability methodology to reinforced concrete flexure and shear structural elements to investigate the role of in-service inspection and repair on their probability of failure.
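    The time-dependent reliability idea applied to the flexure and shear elements, a degrading resistance compared against a random load, can be sketched with a small Monte Carlo estimator. The linear degradation law, distributions, and numbers below are illustrative assumptions, not the program's calibrated models.

    ```python
    import math
    import random

    def failure_probability(years, r0, degradation, load_mean, load_cov,
                            n=20_000, seed=7):
        """Monte Carlo estimate of P(load > resistance) for an element whose
        resistance degrades linearly with time; load is lognormal."""
        rng = random.Random(seed)
        r_t = r0 * max(0.0, 1.0 - degradation * years)
        sigma = math.sqrt(math.log(1.0 + load_cov**2))
        mu = math.log(load_mean) - 0.5 * sigma**2   # so E[load] == load_mean
        fails = sum(1 for _ in range(n) if rng.lognormvariate(mu, sigma) > r_t)
        return fails / n

    # Illustrative: normalized resistance 1.0 losing 0.5%/yr against a load
    # with mean 0.5 and 20% coefficient of variation.
    pf_new = failure_probability(0,  r0=1.0, degradation=0.005,
                                 load_mean=0.5, load_cov=0.2)
    pf_40y = failure_probability(40, r0=1.0, degradation=0.005,
                                 load_mean=0.5, load_cov=0.2)
    ```

    In-service inspection and repair enter such a model by resetting or slowing the degradation term, which is how the program studies their effect on failure probability.
    
    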

  9. Structural aging program status report

    SciTech Connect (OSTI)

    Naus, D.J.; Oland, C.B.; Ellingwood, B.

    1995-04-01

    Research is being conducted at the Oak Ridge National Laboratory (ORNL) under U.S. Nuclear Regulatory Commission (USNRC) sponsorship to address aging management of safety-related concrete structures. Documentation is being prepared to provide the USNRC with potential structural safety issues and acceptance criteria for use in continued service evaluations of nuclear power plants. Program accomplishments have included development of the Structural Materials Information Center containing data and information on the time variation of 144 material properties under the influence of pertinent environmental stressors or aging factors, performance assessments of reinforced concrete structures in several United Kingdom nuclear power facilities, evaluation of European and North American repair practices for concrete, an evaluation of factors affecting the corrosion of metals embedded in concrete, and application of the time-dependent reliability methodology to reinforced concrete flexure and shear structural elements to investigate the role of in-service inspection and repair on their probability of failure.

  10. A Thermo-Optic Propagation Modeling Capability.

    SciTech Connect (OSTI)

    Schrader, Karl; Akau, Ron

    2014-10-01

    A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data was collected to validate the structural, thermal, and optical modeling methods.
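    The ray-path integration at the core of the method solves the ray equation d/ds(n dr/ds) = ∇n through a varying index field. A minimal 2D explicit-Euler sketch is below; a simple analytic index stands in for the finite-element interpolation, and the step size and index model are assumptions for illustration.

    ```python
    def trace_ray(n_grad, n_of, r0, dir0, ds=1e-3, steps=2000):
        """Integrate the ray equation d/ds(n dr/ds) = grad n with explicit Euler.
        n_of(x, y) returns the refractive index; n_grad(x, y) returns
        (dn/dx, dn/dy). Returns the final ray position."""
        x, y = r0
        n = n_of(x, y)
        tx, ty = n * dir0[0], n * dir0[1]   # optical direction vector n*dr/ds
        for _ in range(steps):
            n = n_of(x, y)
            x += ds * tx / n                # dr/ds = t / n
            y += ds * ty / n
            gx, gy = n_grad(x, y)
            tx += ds * gx                   # dt/ds = grad n
            ty += ds * gy
        return x, y

    # A linear index gradient in y (a crude thermo-optic stand-in) bends an
    # initially horizontal ray toward the higher-index region.
    n_of = lambda x, y: 1.5 + 0.1 * y
    grad = lambda x, y: (0.0, 0.1)
    x1, y1 = trace_ray(grad, n_of, r0=(0.0, 0.0), dir0=(1.0, 0.0))
    ```

    In the paper's scheme, `n_of` and `n_grad` would come from the per-element finite-element interpolation rather than a closed-form expression.
    
    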

  11. The MARSAME Methodology: Fundamentals, Benefits and Applications - 12135

    SciTech Connect (OSTI)

    Boerner, Alex J.

    2012-07-01

    MARSAME is an acronym for the 'Multi-Agency Radiation Survey and Assessment of Materials and Equipment'. Published in January 2009, MARSAME was a joint effort among the U.S. Environmental Protection Agency (EPA), the U.S. Department of Defense (DOD), the U.S. Department of Energy (DOE), and the U.S. Nuclear Regulatory Commission (NRC) to aid sites in the clearance of materials and equipment (M and E). The MARSAME manual supplements the Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM), published in 1997. As cited in the MARSAME, applicable M and E includes metals, concrete, tools, equipment, piping, conduit, and furniture. Also included are dispersible bulk materials such as trash, rubble, roofing materials, and sludge. Solids stored in containers, as well as liquids and gases, represent additional M and E. The MARSAME methodology covers multiple technical areas, including the initial assessment (IA), Measurement Quality Objectives (MQOs), survey approaches and considerations, survey plans, survey implementation, and Data Quality Assessment (DQA). These topics are generally captured under four phases of the data life cycle, which are Planning, Implementation, Assessment, and Decision-Making. Flexibility and a graded approach are inherent components of the MARSAME methodology pertaining to M and E property clearance programs. Because large quantities of M and E potentially affected by radioactivity are present in the United States, owners of the M and E need to identify acceptable disposition options. Thirteen disposition options, broadly defined under both release and interdiction scenarios, are described in MARSAME. Nine disposition options are listed for release; these options are categorized into two for reuse, two for recycle, four for disposal, and one that is essentially 'status quo' (i.e., maintain current radiological controls). Four interdiction options are also cited. 
To date, applications of the MARSAME approach for M and E property clearance under reuse

  12. A total risk assessment methodology for security assessment.

    SciTech Connect (OSTI)

    Aguilar, Richard; Pless, Daniel J.; Kaplan, Paul Garry; Silva, Consuelo Juanita; Rhea, Ronald Edward; Wyss, Gregory Dane; Conrad, Stephen Hamilton

    2009-06-01

    Sandia National Laboratories performed a two-year Laboratory Directed Research and Development project to develop a new collaborative risk assessment method to enable decision makers to fully consider the interrelationships between threat, vulnerability, and consequence. A five-step Total Risk Assessment Methodology was developed to enable interdisciplinary collaborative risk assessment by experts from these disciplines. The objective of this process is to promote effective risk management by enabling analysts to identify scenarios that are simultaneously achievable by an adversary, desirable to the adversary, and of concern to the system owner or to society. The basic steps are risk identification, collaborative scenario refinement and evaluation, scenario cohort identification and risk ranking, threat chain mitigation analysis, and residual risk assessment. The method is highly iterative, especially with regard to scenario refinement and evaluation. The Total Risk Assessment Methodology includes objective consideration of relative attack likelihood instead of subjective expert judgment. The 'probability of attack' is not computed, but the relative likelihood for each scenario is assessed through identifying and analyzing scenario cohort groups, which are groups of scenarios with comparable qualities to the scenario being analyzed at both this and other targets. Scenarios for the target under consideration and other targets are placed into cohort groups under an established ranking process that reflects the following three factors: known targeting, achievable consequences, and the resources required for an adversary to have a high likelihood of success. The development of these target cohort groups implements, mathematically, the idea that adversaries are actively choosing among possible attack scenarios and avoiding scenarios that would be significantly suboptimal to their objectives. 
An adversary who can choose among only a few comparable targets and scenarios (a
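    The three-factor cohort ranking described above can be sketched as a weighted scoring over scenario attributes. The weights, attribute names, and scores below are hypothetical illustrations, not the method's calibrated ranking process.

    ```python
    def rank_scenarios(scenarios, weights=(0.4, 0.4, 0.2)):
        """Rank attack scenarios by the three cohort factors named above:
        known targeting, achievable consequence, and the adversary resources
        required (a higher resource requirement lowers the score).
        All attribute values are assumed normalized to [0, 1]."""
        w_t, w_c, w_r = weights
        def score(s):
            return w_t * s["targeting"] + w_c * s["consequence"] - w_r * s["resources"]
        return sorted(scenarios, key=score, reverse=True)

    ranked = rank_scenarios([
        {"name": "A", "targeting": 0.9, "consequence": 0.7, "resources": 0.8},
        {"name": "B", "targeting": 0.5, "consequence": 0.9, "resources": 0.2},
        {"name": "C", "targeting": 0.2, "consequence": 0.3, "resources": 0.1},
    ])
    ```

    Here scenario B outranks A despite weaker targeting evidence because it is cheap for the adversary, capturing the idea that adversaries avoid significantly suboptimal scenarios.
    
    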

  13. Theoretical description of methodology in PHASER (Probabilistic hybrid analytical system evaluation routine)

    SciTech Connect (OSTI)

    Cooper, J.A.

    1996-01-01

    Probabilistic safety analyses (PSAs) frequently depend on fault tree and event tree models, using probabilities of 'events' as inputs. Uncertainty or variability is sometimes included by assuming that the input probabilities vary independently and according to an assumed stochastic probability distribution model. Evidence is accumulating that this methodology does not apply well to some situations, most significantly when the inputs contain a degree of subjectivity or are dependent. This report documents the current status of an investigation into methods for effectively incorporating subjectivity and dependence in PSAs and into the possibility of incorporating inputs that are partly subjective and partly stochastic. One important byproduct of this investigation was a computer routine that combines conventional PSA techniques with newly developed subjective techniques in a 'hybrid' (subjective and conventional PSA) program. This program (PHASER) and a user's manual are now available for beta use.
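    The hybrid idea, propagating a mix of stochastic and subjectively elicited inputs through the same fault tree, can be caricatured with triangular (min, max, mode) judgments feeding a tiny AND/OR tree. This is a sketch of the general pattern, not the PHASER routine; the tree, distributions, and values are assumptions.

    ```python
    import random

    def top_event_probability(n=50_000, seed=3):
        """Propagate stochastic and subjective inputs through a small fault
        tree: TOP = (A AND B) OR C. B and C are subjective judgments given
        as triangular(low, high, mode); independence is assumed at the gates."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            a = rng.triangular(0.01, 0.05, 0.02)    # stochastic component rate
            b = rng.triangular(0.10, 0.60, 0.30)    # subjective: operator error
            c = rng.triangular(0.001, 0.01, 0.003)  # subjective: common cause
            and_gate = a * b
            total += 1.0 - (1.0 - and_gate) * (1.0 - c)  # OR gate
        return total / n

    p_top = top_event_probability()
    ```

    PHASER's subjective techniques go further than simple triangular sampling (notably in handling dependence), but the sample-and-propagate skeleton is the same.
    
    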

  14. Methodology for Augmenting Existing Paths with Additional Parallel Transects

    SciTech Connect (OSTI)

    Wilson, John E.

    2013-09-30

    Visual Sample Plan (VSP) is sample planning software that is used, among other purposes, to plan transect sampling paths to detect areas that were potentially used for munition training. This module was developed for application on a large site where existing roads and trails were to be used as primary sampling paths. Gap areas between these primary paths needed to be found and covered with parallel transect paths. These gap areas represent areas on the site that are more than a specified distance from a primary path. These added parallel paths needed to optionally be connected together into a single path, the shortest path possible. The paths also needed to optionally be attached to existing primary paths, again with the shortest possible path. Finally, the process must be repeatable and predictable so that the same inputs (primary paths, specified distance, and path options) will result in the same set of new paths every time. This methodology was developed to meet those specifications.
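    The first step, identifying gap areas more than a specified distance from any primary path, reduces to point-to-segment distance tests. The sketch below checks a grid of candidate points against one road segment; it is a simplified illustration (real primary paths are polylines), not the VSP module's implementation.

    ```python
    import math

    def point_segment_distance(p, a, b):
        """Euclidean distance from 2D point p to segment ab."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx*dx + dy*dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def gap_points(grid_pts, primary_segments, max_dist):
        """Grid points farther than max_dist from every primary path segment:
        the 'gap areas' that need added parallel transects."""
        return [p for p in grid_pts
                if all(point_segment_distance(p, a, b) > max_dist
                       for a, b in primary_segments)]

    road = [((0.0, 0.0), (10.0, 0.0))]   # one existing road segment
    grid = [(float(x), float(y)) for x in range(11) for y in range(6)]
    gaps = gap_points(grid, road, max_dist=2.0)
    ```

    Connecting the resulting parallel transects into a single shortest path (the module's optional step) is a separate routing problem on top of this gap set.
    
    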

  15. Aggregate Building Simulator (ABS) Methodology Development, Application, and User Manual

    SciTech Connect (OSTI)

    Dirks, James A.; Gorrissen, Willy J.

    2011-11-30

    As the relationship between the national building stock and various global energy issues becomes a greater concern, it has been deemed necessary to develop a system of predicting the energy consumption of large groups of buildings. Ideally, this system is to take advantage of the most advanced energy simulation software available, be able to execute runs quickly, and provide concise and useful results at a level of detail that meets the user's needs without inundating them with data. The resulting methodology that was developed allows the user to quickly develop and execute energy simulations of many buildings simultaneously, taking advantage of parallel processing to greatly reduce total simulation times. The results of these simulations can then be rapidly condensed and presented in a useful and intuitive manner.
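    The dispatch-in-parallel-then-condense pattern described above can be sketched with a worker pool. The stand-in "simulation" and its inputs are hypothetical; a real deployment would invoke the detailed simulation engine (and use processes or a cluster rather than the threads used here for brevity).

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def simulate_building(params):
        """Stand-in for one detailed energy simulation; returns annual kWh.
        A real run would shell out to the simulation engine here."""
        area_m2, kwh_per_m2 = params
        return area_m2 * kwh_per_m2

    def simulate_stock(buildings, workers=4):
        """Fan the building runs out across a worker pool, then condense the
        per-building results to a single aggregate figure."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(simulate_building, buildings))

    # Three hypothetical buildings: (floor area m^2, intensity kWh/m^2/yr).
    total_kwh = simulate_stock([(1000.0, 15.0), (2500.0, 12.0), (800.0, 20.0)])
    ```

    Because each building simulation is independent, the speedup from adding workers is close to linear until the machine's cores (or the simulation engine's I/O) saturate.
    
    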

  16. Damage prognosis of adhesively-bonded joints in laminated composite structural components of unmanned aerial vehicles

    SciTech Connect (OSTI)

    Farrar, Charles R; Gobbato, Maurizio; Conte, Joel; Kosmatke, John; Oliver, Joseph A

    2009-01-01

    The extensive use of lightweight advanced composite materials in unmanned aerial vehicles (UAVs) drastically increases the sensitivity to both fatigue- and impact-induced damage of their critical structural components (e.g., wings and tail stabilizers) during service life. The spar-to-skin adhesive joints are considered one of the most fatigue-sensitive subcomponents of a lightweight UAV composite wing, with damage progressively evolving from the wing root. This paper presents a comprehensive probabilistic methodology for predicting the remaining service life of adhesively-bonded joints in laminated composite structural components of UAVs. Non-destructive evaluation techniques and Bayesian inference are used to (i) assess the current state of damage of the system and (ii) update the probability distribution of the damage extent at various locations. A probabilistic model for future loads and a mechanics-based damage model are then used to stochastically propagate damage through the joint. Combined local (e.g., exceedance of a critical damage size) and global (e.g., flutter instability) failure criteria are finally used to compute the probability of component failure at future times. The applicability and the partial validation of the proposed methodology are then briefly discussed by analyzing the debonding propagation, along a pre-defined adhesive interface, in a simply supported laminated composite beam with solid rectangular cross section, subjected to a concentrated load applied at mid-span. A specially developed Euler-Bernoulli beam finite element with interlaminar slip along the damageable interface is used in combination with a cohesive zone model to study the fatigue-induced degradation in the adhesive material. The preliminary numerical results presented are promising for the future validation of the methodology.
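    The Bayesian-update step, revising the probability distribution of damage extent after a noisy NDE measurement, can be sketched on a discrete grid of damage sizes. The prior, measurement, and Gaussian error model below are hypothetical illustrations, not the paper's models.

    ```python
    import math

    def update_damage_belief(prior, measurement, noise_sd):
        """Grid-based Bayes update of a discrete belief over damage extent
        (e.g., debond length in mm) given one NDE measurement with assumed
        Gaussian error: posterior ~ prior * likelihood, renormalized."""
        posterior = {}
        for size, p in prior.items():
            z = (measurement - size) / noise_sd
            posterior[size] = p * math.exp(-0.5 * z * z)
        total = sum(posterior.values())
        return {s: v / total for s, v in posterior.items()}

    # Hypothetical prior over debond length at one joint location (mm).
    prior = {0.0: 0.4, 5.0: 0.3, 10.0: 0.2, 15.0: 0.1}
    post = update_damage_belief(prior, measurement=9.0, noise_sd=3.0)
    ```

    The updated belief would then seed the stochastic damage-propagation step, with the failure criteria evaluated on the propagated distributions at future times.
    
    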

  17. STRUCtural Simulator

    Energy Science and Technology Software Center (OSTI)

    2004-07-01

    STRUC-ANL is a derivative of the FLUSTR-ANL finite element code. It contains only the structural capabilities of the original fluid-structural FLUSTR code.

  18. Structural Materials

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Structural Materials Development enables advanced technologies through the discovery, development, and demonstration of cost-effective advanced structural materials for use in ...

  19. Model documentation: Natural Gas Transmission and Distribution Model of the National Energy Modeling System; Volume 1

    SciTech Connect (OSTI)

    1994-02-24

    The Natural Gas Transmission and Distribution Model (NGTDM) is a component of the National Energy Modeling System (NEMS) used to represent the domestic natural gas transmission and distribution system. NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the Energy Information Administration (EIA) and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. This report documents the archived version of NGTDM that was used to produce the natural gas forecasts used in support of the Annual Energy Outlook 1994, DOE/EIA-0383(94). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic design, provides detail on the methodology employed, and describes the model inputs, outputs, and key assumptions. It is intended to fulfill the legal obligation of the EIA to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). This report represents Volume 1 of a two-volume set. (Volume 2 will report on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.) 
Subsequent chapters of this report provide: (1) an overview of the NGTDM (Chapter 2); (2) a description of the interface between the National Energy Modeling System (NEMS) and the NGTDM (Chapter 3); (3) an overview of the solution methodology of the NGTDM (Chapter 4); (4) the solution methodology for the Annual Flow Module (Chapter 5); (5) the solution methodology for the Distributor Tariff Module (Chapter 6); (6) the solution methodology for the Capacity Expansion Module (Chapter 7); (7) the solution methodology for the Pipeline Tariff Module (Chapter 8); and (8) a description of model assumptions, inputs, and outputs (Chapter 9).

  20. Application of Direct Assessment Approaches and Methodologies to Cathodically Protected Nuclear Waste Transfer Lines

    SciTech Connect (OSTI)

    Dahl, Megan M.; Pikas, Joseph; Edgemon, Glenn L.; Philo, Sarah

    2013-01-22

The U.S. Department of Energy's (DOE) Hanford Site is responsible for the safe storage, retrieval, treatment, and disposal of approximately 54 million gallons (204 million liters) of radioactive waste generated since the site's inception in 1943. Today, the major structures involved in waste management at Hanford include 149 carbon-steel single-shell tanks, 28 carbon-steel double-shell tanks, plus a network of buried metallic transfer lines and ancillary systems (pits, vaults, catch tanks, etc.) required to store, retrieve, and transfer waste within the tank farm system. Many of the waste management systems at Hanford are still in use today. In response to uncertainties regarding the structural integrity of these systems, an independent, comprehensive integrity assessment of the Hanford Site piping system was performed. It was found that regulators do not require the cathodically protected pipelines located within the Hanford Site to be assessed by External Corrosion Direct Assessment (ECDA) or any other method used to ensure integrity. However, a case study is presented discussing the application of the direct assessment process on pipelines in such a nuclear environment. Assessment methodology and assessment results are contained herein. An approach is described for the monitoring, integration of outside data, and analysis of this information in order to identify whether coating deterioration accompanied by external corrosion is a threat to these waste transfer lines.

  1. Residential appliance data, assumptions and methodology for end-use forecasting with EPRI-REEPS 2.1

    SciTech Connect (OSTI)

    Hwang, R.J.; Johnson, F.X.; Brown, R.E.; Hanford, J.W.; Koomey, J.G.

    1994-05-01

    This report details the data, assumptions and methodology for end-use forecasting of appliance energy use in the US residential sector. Our analysis uses the modeling framework provided by the Appliance Model in the Residential End-Use Energy Planning System (REEPS), which was developed by the Electric Power Research Institute. In this modeling framework, appliances include essentially all residential end-uses other than space conditioning end-uses. We have defined a distinct appliance model for each end-use based on a common modeling framework provided in the REEPS software. This report details our development of the following appliance models: refrigerator, freezer, dryer, water heater, clothes washer, dishwasher, lighting, cooking and miscellaneous. Taken together, appliances account for approximately 70% of electricity consumption and 30% of natural gas consumption in the US residential sector. Appliances are thus important to those residential sector policies or programs aimed at improving the efficiency of electricity and natural gas consumption. This report is primarily methodological in nature, taking the reader through the entire process of developing the baseline for residential appliance end-uses. Analysis steps documented in this report include: gathering technology and market data for each appliance end-use and specific technologies within those end-uses, developing cost data for the various technologies, and specifying decision models to forecast future purchase decisions by households. Our implementation of the REEPS 2.1 modeling framework draws on the extensive technology, cost and market data assembled by LBL for the purpose of analyzing federal energy conservation standards. The resulting residential appliance forecasting model offers a flexible and accurate tool for analyzing the effect of policies at the national level.

  2. An Experiment on Graph Analysis Methodologies for Scenarios

    SciTech Connect (OSTI)

    Brothers, Alan J.; Whitney, Paul D.; Wolf, Katherine E.; Kuchar, Olga A.; Chin, George

    2005-09-30

    Visual graph representations are increasingly used to represent, display, and explore scenarios and the structure of organizations. Graph representations of scenarios are readily understood, and commercial software is available to create and manage them. The purpose of the research presented in this paper is to explore whether these graph representations support quantitative assessments of the underlying scenarios. The information targeted in the experiment is the underlying structure of the scenarios and the extent to which the scenarios are similar in content. An experiment was designed that incorporated both the contents of the scenarios and analysts' graph representations of them. The scenarios' content was represented graphically by analysts, and both the structure and the semantics of the graph representations were used in an attempt to recover that content. In this experiment the structural information was not found to discriminate among the scenarios' contents, but the semantic information was.

  3. Biopolymer structures: Where do they come from? Where are they going? Evolutionary perspectives on biopolymer structure and function

    SciTech Connect (OSTI)

    Goldstein, R.A.; Bornberg-Bauer, E.

    1996-12-31

    This session provides evolutionary perspectives on biopolymer structures, namely DNA, RNA and proteins. Structural models are presented and the structure and function relationships are discussed.

  4. Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology

    SciTech Connect (OSTI)

    Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh

    2010-10-01

    The Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and for power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized as needing improvement: (1) subjective judgment in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence, and use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, the above issues still exist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the derivative of the physical solution with respect to any constant parameter). When parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) quantifying numerical errors: new codes that are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA; (2) quantitative PIRT (Q-PIRT) to reduce subjective judgment and improve efficiency: treat numerical errors as special sensitivities against other physical uncertainties, and consider only parameters whose uncertainties have large effects on design criteria; (3) greatly reducing the computational cost of uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
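The core FSA idea — solving for parameter sensitivities alongside the physical solution — can be sketched on a toy ODE rather than a full reactor PDE system. The problem, parameter value, and step sizes below are illustrative assumptions, not taken from the report:

```python
import math

def forward_sensitivity(alpha, t_end=1.0, n=10_000):
    """Toy FSA: integrate y' = -alpha*y together with its forward
    sensitivity s = dy/dalpha, which obeys the differentiated
    equation s' = -alpha*s - y. Forward Euler for simplicity."""
    dt = t_end / n
    y, s = 1.0, 0.0  # y(0) = 1; the initial condition does not depend on alpha
    for _ in range(n):
        # advance both states using the values from the current step
        y, s = y + dt * (-alpha * y), s + dt * (-alpha * s - y)
    return y, s
```

For alpha = 0.5 the analytic solution is y(1) = exp(-0.5) and s(1) = -exp(-0.5); the sketch reproduces both to Euler accuracy, illustrating how one extra (linear) equation per parameter yields the sensitivity without finite-difference reruns of the code.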

  5. Seismic hazard methodology for the Central and Eastern United...

    Office of Scientific and Technical Information (OSTI)

    of a tectonic framework interpretation from which alternative seismic sources are derived. ... earthquake recurrence model and an extensive assessment of its applicability is provided. ...

  6. Development Methodology for Power-Dense Military Diesel Engine...

    Broader source: Energy.gov (indexed) [DOE]

    Laboratory data and modeling results are presented on a military auxiliary power unit ... More Documents & Publications Oxygen-Enriched Combustion for Military Diesel Engine ...

  7. Classification methodology for tritiated waste requiring interim storage

    SciTech Connect (OSTI)

    Cana, D.; Dall'ava, D.

    2015-03-15

    Fusion machines like the ITER experimental research facility will use tritium as fuel. Therefore, most of the solid radioactive waste will result not only from activation by 14 MeV neutrons, but also from contamination by tritium. As a consequence, optimizing the treatment process for waste containing tritium (tritiated waste) is a major challenge. This paper summarizes the studies conducted in France within the framework of the French national plan for the management of radioactive materials and waste. The paper recommends a reference program for managing this waste based on its sorting, treatment and packaging by the producer. It also recommends setting up a 50-year temporary storage facility to allow for tritium decay and designing future disposal facilities using tritiated radwaste characteristics as input data. This paper first describes this waste program and then details an optimized classification methodology which takes into account tritium decay over a 50-year storage period. The paper also describes a specific application for purely tritiated waste and discusses the set-up expected to be implemented for ITER decommissioning waste (current assumption). Comparison between this optimized approach and other viable detritiation techniques will be drawn. (authors)
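The 50-year interim-storage period is sized against tritium's half-life of roughly 12.3 years (a commonly cited value, assumed here rather than taken from the paper). A quick decay calculation shows why that duration was chosen:

```python
import math

TRITIUM_HALF_LIFE_Y = 12.32  # years; commonly cited value, assumed here

def activity_fraction_remaining(t_years, t_half=TRITIUM_HALF_LIFE_Y):
    """Fraction of the initial tritium activity left after t_years
    of radioactive decay: exp(-ln(2) * t / t_half)."""
    return math.exp(-math.log(2.0) * t_years / t_half)
```

After 50 years the remaining fraction is about 0.06, i.e. roughly 94% of the tritium has decayed, which is the rationale for a storage period of that length before disposal.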

  8. Hanford Technical Basis for Multiple Dosimetry Effective Dose Methodology

    SciTech Connect (OSTI)

    Hill, Robin L.; Rathbone, Bruce A.

    2010-08-01

    The current method at Hanford for dealing with the results from multiple dosimeters worn during non-uniform irradiation is to use a compartmentalization method to calculate the effective dose (E). The method, as documented in the current version of Section 6.9.3 of the 'Hanford External Dosimetry Technical Basis Manual, PNL-MA-842,' is based on the compartmentalization method presented in the 1997 ANSI/HPS N13.41 standard, 'Criteria for Performing Multiple Dosimetry.' The adoption of the ICRP 60 methodology in the 2007 revision to 10 CFR 835 brought changes that have a direct effect on the compartmentalization method described in the 1997 ANSI/HPS N13.41 standard and, thus, on the method used at Hanford. The ANSI/HPS N13.41 standard committee is in the process of updating the standard, but the changes have not yet been approved, and the drafts of the revised standard tend to align more with ICRP 60 than with the changes specified in the 2007 revision to 10 CFR 835. Therefore, a revised method for calculating effective dose from non-uniform external irradiation using a compartmental method was developed, using the tissue weighting factors and remainder organs specified in 10 CFR 835 (2007).
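The compartmental calculation itself is a weighted sum: the body is divided into compartments, each compartment is assigned the Hp(10) reading of the dosimeter covering it, and the effective dose is the sum of weighted compartment doses. A minimal sketch follows; the four-compartment split and the weights are illustrative assumptions only — the actual Hanford method builds its compartment weights from the ICRP 60 tissue weighting factors and remainder organs in 10 CFR 835 (2007):

```python
# Illustrative compartment weights summing to 1.0. These are NOT the
# Hanford values; the real method derives compartment weights from
# the tissue weighting factors in 10 CFR 835 (2007).
COMPARTMENT_WEIGHTS = {
    "head_neck": 0.10,
    "thorax": 0.40,
    "abdomen_pelvis": 0.45,
    "extremities": 0.05,
}

def effective_dose(hp10_by_compartment):
    """Weighted sum E = sum_c w_c * Hp(10)_c, doses in mSv.

    hp10_by_compartment maps each compartment name to the Hp(10)
    reading of the dosimeter assigned to that compartment.
    """
    return sum(COMPARTMENT_WEIGHTS[c] * h
               for c, h in hp10_by_compartment.items())
```

Under uniform irradiation (every dosimeter reading 1 mSv) the weights sum to one and E reduces to 1 mSv, while a reading concentrated on one compartment is scaled down by that compartment's weight — the intended behavior for non-uniform fields.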

  9. Proposed methodologies for evaluating grid benefits of distributed generation

    SciTech Connect (OSTI)

    Skowronski, M.J.

    1999-11-01

    As new Distributed Generation technologies are brought to the market, new hurdles to successful commercialization of these promising forms of on-site generation are becoming apparent. The impetus to commercialize these technologies has, up to now, been the value and benefits that the end user derives from installing Distributed Generation. These benefits are primarily economic, as Distributed Generation is normally installed to reduce the customer's utility bill. There are, however, benefits of Distributed Generation beyond the reduction in the cost of electric service, and these benefits normally accrue to the system or the system operator. The purpose of this paper is to evaluate and suggest methodologies to quantify these ancillary benefits that the grid and/or connecting utility derive from customer on-site generation. Specifically, the following are discussed: reliability of service; transmission loss reduction; spinning and non-spinning reserve margin; peak shaving and interruptible loads; transmission and distribution deferral; VAR support/power quality; cogeneration capability; improvement in utility load factor; fuel diversity; emission reductions; and qualitative factors -- reduced energy congestion, less societal disruption, faster response time, black start capability, and system operation benefits.

  10. Partial oxidation of landfill leachate in supercritical water: Optimization by response surface methodology

    SciTech Connect (OSTI)

    Gong, Yanmeng; Wang, Shuzhong; Xu, Haidong; Guo, Yang; Tang, Xingying

    2015-09-15

    Highlights: • Partial oxidation of landfill leachate in supercritical water was investigated. • The process was optimized by Box–Behnken design and response surface methodology. • GY{sub H2}, TRE and CR reached up to 14.32 mmol·gTOC{sup −1}, 82.54% and 94.56%. • Small amounts of oxidant can decrease the generation of tar and char. - Abstract: To achieve the maximum H{sub 2} yield (GY{sub H2}), TOC removal rate (TRE) and carbon recovery rate (CR), response surface methodology was applied to optimize the process parameters for supercritical water partial oxidation (SWPO) of landfill leachate in a batch reactor. Quadratic polynomial models for GY{sub H2}, CR and TRE were established with a Box–Behnken design. GY{sub H2}, CR and TRE reached up to 14.32 mmol·gTOC{sup −1}, 82.54% and 94.56%, respectively, under optimum conditions. TRE was invariably above 91.87%. In contrast, the TC removal rate (TR) only changed from 8.76% to 32.98%. Furthermore, carbonate and bicarbonate were the most abundant carbonaceous substances in the product, whereas CO{sub 2} and H{sub 2} were the most abundant gaseous products. As a product of nitrogen-containing organics, NH{sub 3} has an important effect on gas composition. The carbon balance cannot be closed due to the formation of tar and char. CR increased with increasing temperature and oxidation coefficient.
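The response-surface step in this kind of study amounts to least-squares fitting of a full second-order polynomial in the coded factors and then optimizing it. The sketch below shows the fit for two factors; note that a true Box–Behnken design requires at least three factors, so a two-factor grid of coded levels is used here purely to illustrate the quadratic model, and none of the numbers are from the paper:

```python
import numpy as np

def quadratic_design_matrix(X):
    """Model terms for two coded factors: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

def fit_quadratic_rsm(X, y):
    """Least-squares fit of the full second-order response surface."""
    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    return beta

def predict(beta, X):
    return quadratic_design_matrix(X) @ beta
```

Once the coefficients are estimated from the designed runs, the fitted surface is optimized (analytically or numerically) to locate the factor settings that maximize the response, which is how optimum conditions such as the reported GY{sub H2} and TRE values are identified.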

  11. Basement Structure and Implications for Hydrothermal Circulation...

    Open Energy Info (EERE)

    California Abstract Detailed surface mapping, subsurface drill hole data, and geophysical modeling are the basis of a structural and hydrothermal model for the western part of Long...

  12. Modeling the U.S. Rooftop Photovoltaics Market

    SciTech Connect (OSTI)

    Drury, E.; Denholm, P.; Margolis, R.

    2010-09-01

    Global rooftop PV markets are growing rapidly, fueled by a combination of declining PV prices and several policy-based incentives. The future growth, and size, of the rooftop market is highly dependent on continued PV cost reductions, financing options, net metering policy, carbon prices and future incentives. Several PV market penetration models, sharing a similar structure and methodology, have been developed over the last decade to quantify the impacts of these factors on market growth. This study uses a geospatially rich, bottom-up, PV market penetration model--the Solar Deployment Systems (SolarDS) model developed by the National Renewable Energy Laboratory--to explore key market and policy-based drivers for residential and commercial rooftop PV markets. The identified drivers include a range of options from traditional incentives, to attractive customer financing options, to net metering and carbon policy.

  13. Methodology for Use of Reclaimed Water at Federal Locations | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Methodology for Use of Reclaimed Water at Federal Locations. Fact sheet offers guidelines to help Federal agencies use reclaimed water as part of Executive Order 13514 and other water-reduction requirements and goals. Download the Methodology for Use of Reclaimed Water at Federal Locations fact sheet (873.11 KB). More Documents & Publications: Reclaimed Wastewater; FUPWG Winter 2014 Meeting Agenda, Report, and Presentations

  14. GREET 1.5 - transportation fuel-cycle model - Vol. 1 : methodology...

    Office of Scientific and Technical Information (OSTI)

    ... SYSTEMS; FUEL CONSUMPTION; ETHERS; GREENHOUSE GASES; LIQUEFIED NATURAL GAS; AIR POLLUTION; FLY ASH; DIESEL FUELS; GASOLINE; LIQUEFIED PETROLEUM GASES; METHANOL; FUEL ...

  15. Natural Circulation in Water Cooled Nuclear Power Plants Phenomena, models, and methodology for system reliability assessments

    SciTech Connect (OSTI)

    Jose Reyes

    2005-02-14

    In recent years it has been recognized that the application of passive safety systems (i.e., those whose operation takes advantage of natural forces such as convection and gravity) can contribute to simplification and potentially to improved economics of new nuclear power plant designs. In 1991 the IAEA Conference on ''The Safety of Nuclear Power: Strategy for the Future'' noted that for new plants the use of passive safety features is ''a desirable method of achieving simplification and increasing the reliability of the performance of essential safety functions, and should be used wherever appropriate''.

  16. Application of the NUREG/CR-6850 EPRI/NRC Fire PRA Methodology to a DOE Facility

    SciTech Connect (OSTI)

    Tom Elicson; Bentley Harwood; Richard Yorg; Heather Lucek; Jim Bouchard; Ray Jukkola; Duan Phan

    2011-03-01

    The application of the NUREG/CR-6850 EPRI/NRC fire PRA methodology to a DOE facility presented several challenges. This paper documents the process and discusses several insights gained during development of the fire PRA. A brief review of the tasks performed is provided, with particular focus on the following: • Tasks 5 and 14: Fire-induced risk model and fire risk quantification. A key lesson learned was to begin model development and quantification as early as possible in the project, using screening values and simplified modeling if necessary. • Tasks 3 and 9: Fire PRA cable selection and detailed circuit failure analysis. In retrospect, it would have been beneficial to perform the model development and quantification in two phases, with detailed circuit analysis applied during phase 2. This would have allowed development of a robust model and quantification earlier in the project and would have provided insights into where to focus the detailed circuit analysis efforts. • Tasks 8 and 11: Scoping fire modeling and detailed fire modeling. More focus should be placed on detailed fire modeling and less on scoping fire modeling; this was the approach taken for the fire PRA. • Task 14: Fire risk quantification. Typically, multiple safe shutdown (SSD) components fail during a given fire scenario, so dependent failure analysis is critical to obtaining a meaningful fire risk quantification. Dependent failure analysis for the fire PRA presented several challenges, which are discussed in the full paper.

  17. Measurement of laminar burning speeds and Markstein lengths using a novel methodology

    SciTech Connect (OSTI)

    Tahtouh, Toni; Halter, Fabien; Mounaim-Rousselle, Christine [Institut PRISME, Universite d'Orleans, 8 rue Leonard de Vinci-45072, Orleans Cedex 2 (France)

    2009-09-15

    Three different methodologies used for the extraction of laminar flame information are compared and discussed. Starting from an asymptotic analysis assuming a linear relation between the propagation speed and the stretch acting on the flame front, temporal radius evolutions of spherically expanding laminar flames are post-processed to obtain laminar burning velocities and Markstein lengths. The first methodology fits the temporal radius evolution with a polynomial function, while the newly proposed methodology uses as the fit the exact solution of the linear relation linking flame speed and stretch. The last methodology consists of an analytical resolution of the problem. To test the different methodologies, experiments were carried out in a stainless steel combustion chamber with methane/air mixtures at atmospheric pressure and ambient temperature. The equivalence ratio was varied from 0.55 to 1.3. The classical shadowgraph technique was used to detect the reaction zone. The new methodology proved to be the most robust and provides the most accurate results, while the polynomial methodology introduces some errors due to the differentiation process. As original radii are used in the analytical methodology, it is more affected by the experimental radius determination. Finally, laminar burning velocity and Markstein length values determined with the new methodology are compared with results reported in the literature. (author)
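The linear relation at the heart of all three methodologies is dR/dt = S0 − Lb·κ, where the stretch of a spherical flame is κ = (2/R)·dR/dt, S0 is the unstretched flame speed, and Lb the Markstein length. The sketch below implements the classical differentiate-and-regress variant for illustration (the abstract's new method instead fits the exact solution of this relation, avoiding the differentiation step); the data handling is an assumption, not the paper's code:

```python
def flame_speed_from_radius(t, R):
    """Extract unstretched flame speed S0 and Markstein length Lb
    from a radius history R(t), using the linear relation
    dR/dt = S0 - Lb * kappa with stretch kappa = (2/R) * dR/dt.
    Classical linear-extrapolation variant, shown for illustration."""
    n = len(t)
    # central-difference estimate of dR/dt at interior points
    drdt = [(R[i + 1] - R[i - 1]) / (t[i + 1] - t[i - 1])
            for i in range(1, n - 1)]
    kappa = [2.0 * d / R[i] for i, d in zip(range(1, n - 1), drdt)]
    # least-squares line dR/dt = S0 - Lb * kappa
    m = float(len(kappa))
    sk, sy = sum(kappa), sum(drdt)
    skk = sum(k * k for k in kappa)
    sky = sum(k * y for k, y in zip(kappa, drdt))
    slope = (m * sky - sk * sy) / (m * skk - sk * sk)
    s0 = (sy - slope * sk) / m
    return s0, -slope  # Lb = -slope
```

The differentiation step is exactly where the abstract says the polynomial approach picks up error: noise in R(t) is amplified by the finite differences, which motivates fitting the undifferentiated radius record instead.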

  18. Methodology for Allocating Municipal Solid Waste to Biogenic and Non-Biogenic Energy

    Reports and Publications (EIA)

    2007-01-01

    This report summarizes the methodology used to split the heat content of municipal solid waste (MSW) into its biogenic and non-biogenic shares.
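The split is, in essence, a weighted sum over waste components: each component's contribution to total heat input is its mass fraction times its heat content, tagged biogenic or non-biogenic. The component names, fractions, and heat contents below are hypothetical placeholders; the EIA report derives the actual values from waste-characterization and heat-content data:

```python
# Hypothetical MSW composition: component -> (mass fraction,
# heat content in MMBtu per ton, biogenic?). Illustrative only.
COMPOSITION = {
    "paper_paperboard":        (0.35, 14.0, True),
    "yard_food_wood":          (0.25, 6.0,  True),
    "plastics":                (0.12, 32.0, False),
    "rubber_leather_textiles": (0.08, 18.0, False),
    "other_noncombustible":    (0.20, 0.0,  False),
}

def split_heat_content(tons, composition=COMPOSITION):
    """Return (biogenic, non_biogenic) heat input in MMBtu for a
    given MSW tonnage: sum of tons * mass_fraction * heat_content."""
    bio = sum(tons * f * h for f, h, b in composition.values() if b)
    non = sum(tons * f * h for f, h, b in composition.values() if not b)
    return bio, non
```

With these placeholder numbers, one ton of MSW carries 6.4 MMBtu of biogenic and 5.28 MMBtu of non-biogenic heat input, i.e. a biogenic share of roughly 55% — the kind of ratio the methodology is designed to estimate from real composition data.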

  19. High-Throughput Methodology for Discovery of Metal-Organic Frameworks...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Methodology for Discovery of Metal-Organic Frameworks with a High Hydrogen Binding Enthalpy Steven S. Kaye, Satoshi Horike, and Jeffrey R. Long Department of Chemistry, University ...

  20. LOCA analysis evaluation model with TRAC-PF1/NEM

    SciTech Connect (OSTI)

    Orive Moreno, Raul; Gallego Cabezon, Ines; Garcia Sedano, Pablo

    2004-07-01

    Regulatory rules and code-model development are currently progressing toward the use of best-estimate approximations in licensing applications. Within this framework, IBERDROLA is developing a PWR LOCA analysis methodology along two parallel tracks: on one side, the development of an Evaluation Model (upper-bounding model) that conservatively covers the different aspects of PWR LOCA phenomenology, and on the other, a proposal for a CSAU (Code Scaling, Applicability and Uncertainty)-type evaluation methodology that strictly meets the 95/95 criterion on Peak Cladding Temperature. A structured method is established that basically involves the following steps: 1. Selection of the Large Break LOCA as the accident to analyze and of the TRAC-PF1/MOD2 V99.1 NEM (PSU version) computer code as the analysis tool. 2. Code assessment, identifying the most significant phenomena (PIRT, Phenomena Identification and Ranking Tabulation) and estimating possible code bias and the uncertainties associated with the specific models that control these phenomena (critical mass flow, heat transfer, countercurrent flow, etc.). 3. Evaluation of an overall PCT uncertainty, taking into account code uncertainty, reactor initial conditions, and accident boundary conditions. Quantifying the uncertainties requires a careful selection of experiments that allows a complete evaluation matrix to be defined, and comparison of the simulation results with the measured experimental data, as well as consideration of the scaling of these phenomena. To simulate these experiments it was necessary to modify the original code, because it was not able to reproduce, even qualitatively, the expected phenomenology. It can be concluded that there is good agreement between the TRAC-PF1/NEM results and the experimental data. 
Once average error ({epsilon}) and standard deviation ({sigma}) for those correlations under study are obtained, these factors could be used to correct in a conservative