National Library of Energy BETA

Sample records for analysis base cases

  1. 1980 Base case and feasibility analysis

    SciTech Connect (OSTI)

    1993-03-01

    This report describes a task of documenting a "base case" and performing a feasibility analysis for a national residential energy efficiency program for new homes. The principal objective of the task was to estimate the energy consumption of typical homes built in 1980 and then to identify and assess the feasibility of methods to reduce that consumption by 50%. The goal of the program by the year 2000 is to reduce heating and cooling energy use in new homes built under the program to one-half of the energy use in typical new homes built in 1980. The task also calls for determining whether the program goal should be revised, based on the analysis.

  2. 1980 Base case and feasibility analysis

    SciTech Connect (OSTI)

    Not Available

    1993-03-01

    This report describes a task of documenting a "base case" and performing a feasibility analysis for a national residential energy efficiency program for new homes. The principal objective of the task was to estimate the energy consumption of typical homes built in 1980 and then to identify and assess the feasibility of methods to reduce that consumption by 50%. The goal of the program by the year 2000 is to reduce heating and cooling energy use in new homes built under the program to one-half of the energy use in typical new homes built in 1980. The task also calls for determining whether the program goal should be revised, based on the analysis.

  3. Definition of the base analysis case of the interim performance assessment

    SciTech Connect (OSTI)

    Mann, F.M.

    1995-12-01

    The base analysis case for the "Hanford Low-Level Tank Waste Interim Performance Assessment" is defined. Brief descriptions of the sensitivity cases are also given.

  4. Fuel Cycle Analysis Framework Base Cases for the IAEA/INPRO GAINS Collaborative Project

    SciTech Connect (OSTI)

    Brent Dixon

    2012-09-01

    Thirteen countries participated in the Collaborative Project GAINS (Global Architecture of Innovative Nuclear Energy Systems Based on Thermal and Fast Reactors Including a Closed Fuel Cycle), which was the primary activity within IAEA/INPRO Program Area B, Global Vision on Sustainable Nuclear Energy, for the last three years. The overall objective of GAINS was to develop a standard framework for assessing future nuclear energy systems taking into account sustainable development, and to validate results through sample analyses. This paper details the eight scenarios that constitute the GAINS framework base cases for analysis of the transition to future innovative nuclear energy systems. The framework base cases provide a reference for users of the framework to start from in developing and assessing their own alternate systems. Each base case is described along with performance results against the GAINS sustainability evaluation metrics. The eight cases include four using a moderate growth projection and four using a high growth projection for global nuclear electricity generation through 2100. The cases are divided into two sets, addressing homogeneous and heterogeneous scenarios developed by GAINS to model global fuel cycle strategies. The heterogeneous world scenario considers three separate nuclear groups based on their fuel cycle strategies, with non-synergistic and synergistic cases. The framework base case analysis results show the impact of these different fuel cycle strategies while providing references for future users of the GAINS framework. A large number of scenario alterations are possible and can be used to assess different strategies, different technologies, and different assumptions about possible futures of nuclear power. Results can be compared to the framework base cases to assess where these alternate cases perform differently against the sustainability indicators.

  5. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha

    2015-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.

  6. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    SciTech Connect (OSTI)

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha

    2015-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.

  7. Geographically-Based Infrastructure Analysis

    Broader source: Energy.gov (indexed) [DOE]

    January 26, 2006 Geographically-Based Infrastructure Analysis (GIA) Utilizes GIS, ... Geographically-based Infrastructure Analysis GIS Transportation Technologies & Systems ...

  8. ARM - Field Campaign - CASES Data Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Campaign: CASES Data Analysis, 2004.07.01 - 2009.06.30. Lead Scientist: Margaret LeMone. Abstract: CASES Data Analysis: Potential Benefits. Diurnal variation of the Atmospheric Boundary Layer. Taken together, the two Cooperative Atmosphere Surface Exchange Study (CASES) field programs, CASES-97 (morning and evening) and CASES-99 (evening, night, morning), provide a robust

  9. Final base case community analysis: Indian Springs, Nevada for the Clark County socioeconomic impact assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada

    SciTech Connect (OSTI)

    1992-06-18

    This document provides a base case description of the rural Clark County community of Indian Springs in anticipation of change associated with the proposed high-level nuclear waste repository at Yucca Mountain. As the community closest to the proposed site, Indian Springs may be seen by site characterization workers, as well as workers associated with later repository phases, as a logical place to live. This report develops and updates information relating to a broad spectrum of socioeconomic variables, thereby providing a "snapshot" or "base case" look at Indian Springs in early 1992. With this as a background, future repository-related developments may be analytically separated from changes brought about by other factors, thus allowing for the assessment of the magnitude of local changes associated with the proposed repository. Given the size of the community, changes that may be considered small in an absolute sense may have relatively large impacts at the local level. Indian Springs is, in many respects, a unique community and a community of contrasts. An unincorporated town, it is a small yet important enclave of workers on large federal projects and home to employees of small-scale businesses and services. It is a rural community, but it is also close to the urbanized Las Vegas Valley. It is a desert community, but has good water resources. It is on flat terrain, but it is located within 20 miles of the tallest mountains in Nevada. It is a town in which various interest groups diverge on issues of local importance, but a sense of community remains an important feature of life. Finally, it has a sociodemographic history of both surface transience and underlying stability. If local land becomes available, Indian Springs has some room for growth but must first consider the historical effects of growth on the town and its desired direction for the future.

  10. Analysis of Restricted Natural Gas Supply Cases

    Reports and Publications (EIA)

    2004-01-01

    The four cases examined in this study have progressively greater impacts on overall natural gas consumption, prices, and supply. Compared to the Annual Energy Outlook 2004 reference case, the no Alaska pipeline case has the least impact; the low liquefied natural gas case has more impact; the low unconventional gas recovery case has even more impact; and the combined case has the most impact.

  11. Analysis of design tradeoffs for display case evaporators

    SciTech Connect (OSTI)

    Bullard, Clark

    2004-08-11

    A model for simulating a display case evaporator under frosting conditions has been developed, using a quasi-steady, finite-volume approach and a Newton-Raphson based solution algorithm. It is capable of simulating evaporators with multiple modules having different geometries, e.g., tube and fin thicknesses and pitch. The model was validated against data taken at two-minute intervals from a well-instrumented medium-temperature vertical display case, for two evaporators having very different configurations. The data from these experiments provided both the inputs to the model and the measurements against which modeling results were compared. The validated model has been used to generate some general guidelines for coil design. Effects of various geometrical parameters were quantified, and compressor performance data were used to express the results in terms of total power consumption. Using these general guidelines, a new prototype evaporator was designed for the subject display case, keeping in mind the current packaging restrictions and tube and fin availabilities. It is an optimum coil for the given external load conditions. Subsequently, the validated model was used in a more extensive analysis to design prototype coils with some of the current tube and fin spacing restrictions removed. A new microchannel-based suction line heat exchanger was installed in the display case system. The performance of this suction line heat exchanger is reported.
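
    As an illustration of the Newton-Raphson iteration that a quasi-steady, finite-volume evaporator model of this kind relies on, the short Python sketch below closes the air-side/refrigerant-side energy balance for a single module. The residual form, UA value, and operating states are hypothetical placeholders chosen for the example, not the authors' model.

      # Hypothetical sketch: Newton-Raphson solve for the exit air temperature of one
      # evaporator module so that the air-side heat loss matches UA times an
      # approximate mean temperature difference to the refrigerant.
      def solve_module(t_air_in, t_evap, m_dot_air, cp_air=1006.0, UA=150.0,
                       tol=1e-6, max_iter=50):
          t_out = t_air_in - 1.0  # initial guess, K
          for _ in range(max_iter):
              mtd = 0.5 * (t_air_in + t_out) - t_evap           # arithmetic-mean temperature difference
              f = m_dot_air * cp_air * (t_air_in - t_out) - UA * mtd
              dfdt = -m_dot_air * cp_air - 0.5 * UA             # analytical df/dt_out
              step = f / dfdt
              t_out -= step
              if abs(step) < tol:
                  return t_out
          raise RuntimeError("Newton-Raphson did not converge")

      print(solve_module(t_air_in=276.0, t_evap=268.0, m_dot_air=0.3))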

  12. Chapter 11. Community analysis-based methods

    SciTech Connect (OSTI)

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for microbial source tracking (MST) applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.
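
    As a rough illustration of the multivariate, community-profile matching idea (not any of the specific PLFA, DGGE, TRFLP, or PhyloChip workflows above), the Python sketch below classifies a synthetic water-sample profile against a small reference library of host-source profiles; the data, source names, and the nearest-neighbour classifier are all assumptions made for the example.

      # Hypothetical sketch: match a sample's community profile (e.g., fragment
      # relative abundances) to candidate fecal sources with a simple
      # nearest-neighbour classifier. Data and labels are synthetic.
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(1)
      sources = ["human", "cattle", "gull"]
      centers = rng.dirichlet(np.ones(40), size=3)         # one "typical" profile per source
      X = np.vstack([rng.dirichlet(c * 200) for c in centers for _ in range(30)])
      y = np.repeat(sources, 30)                           # 30 reference profiles per source

      clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
      unknown = rng.dirichlet(centers[1] * 200, size=1)    # a cattle-like water sample
      print(clf.predict(unknown))                          # expected: ['cattle']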

  13. Network-based Analysis and Insights | NISAC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NISAC Network-based Analysis and Insights: Chemical Supply Chain Analysis (posted Mar 1, 2012). Chemical Supply Chain Analysis: NISAC has...

  14. Economic Analysis Case Studies of Battery Energy Storage with...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Economic Analysis Case Studies of Battery Energy Storage with SAM Nicholas DiOrio, Aron Dobos, ... to use the storage system to increase the system value and mitigate demand charges. ...

  15. Integrated fire analysis: Application to offshore cases

    SciTech Connect (OSTI)

    Saubestre, V.; Khalfi, J.P.; Paygnard, J.C.

    1995-12-31

    Evaluating thermal loads from different fire scenarios, and then the response of the structure to these loads, covers several fields. It is also difficult and time-consuming to implement: interfaces are necessary between the heat calculation, transient propagation, and structural analysis software packages. Nevertheless, it is necessary to design structures to accommodate heat loads in order to meet safety requirements or functional specifications. Elf, along with several operators and organizations, has sponsored a research project on this topic. The project, managed by SINTEF NBL (Norwegian Fire Research Laboratory), has delivered an integrated fire analysis software package which can be used to address design-to-fire-related issues in various contexts. The core modules of the integrated package are robust, well-validated analysis tools. This paper describes some benefits (technical or cost-related) of using an integrated approach to assess the response of a structure to thermal loads. Three examples are described: the consequences of an accidental scenario for the living quarters of an offshore complex, the need to reinforce a flare boom following a change in process, and the evaluation of the amount of insulation needed for a topside process primary structure. The paper focuses on the importance for the operator of having a practical tool which can lead to substantial cost savings while reducing the uncertainty linked to safety issues.

  16. Byfl: Compiler-based Application Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Byfl: Compiler-based Application Analysis. Byfl is a productivity tool that helps computational scientists analyze their code for...

  17. Well casing-based geophysical sensor apparatus, system and method

    DOE Patents [OSTI]

    Daily, William D.

    2010-03-09

    A geophysical sensor apparatus, system, and method for use in, for example, oil well operations, in which a network of sensors emplaced along and outside oil well casings monitors critical parameters in an oil reservoir and provides geophysical data remote from the wells. Centralizers are affixed to the well casings, and the sensors are located in the protective spheres afforded by the centralizers to keep them from being damaged during casing emplacement. In this manner, geophysical data may be collected for a sub-surface volume, e.g., an oil reservoir, and transmitted for analysis. Preferably, data from multiple sensor types, such as ERT and seismic data, are combined to provide real-time knowledge of the reservoir and of processes such as primary and secondary oil recovery.

  18. Geographically-Based Infrastructure Analysis for California

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Geographically-Based Infrastructure Analysis for California Joan Ogden Institute of Transportation Studies University of California, Davis Presented at the USDOE Hydrogen Transition Analysis Meeting Washington, DC August 9-10, 2006 Acknowledgments UC Davis Researchers: Michael Nicholas Dr. Marc Melaina Dr. Marshall Miller Dr. Chris Yang USDOE: Dr. Sig Gronich Research support: USDOE; H2 Pathways Program sponsors at UC Davis * Refueling station siting and sizing are key aspects of designing H2

  19. Business Case Analysis of Prototype Fabrication Division Recapitalization Plan. Summary

    SciTech Connect (OSTI)

    Booth, Steven Richard; Benson, Faith Ann; Dinehart, Timothy Grant

    2015-04-30

    Business case studies were completed to support procurement of new machines and capital equipment in the Prototype Fabrication (PF) Division SM-39 and TA-03-0102 machine shops. Economic analysis was conducted for replacing the Mazak 30Y Mill-Turn Machine in SM-39, the Haas Vertical CNC Mill in Building 102, and the Hardinge Q10/65-SP Lathe in SM-39. Analysis was also conducted for adding a NanoTech Lathe in Building 102 and a new electrical discharge machine (EDM) in SM-39 to augment current capabilities. To determine the value of switching machinery, a baseline scenario was compared with a future scenario where new machinery was purchased and installed. Costs and benefits were defined via interviews with subject matter experts.

  20. Chiller condition monitoring using topological case-based modeling

    SciTech Connect (OSTI)

    Tsutsui, Hiroaki; Kamimura, Kazuyuki

    1996-11-01

    To increase energy efficiency and economy, commercial building projects now often utilize centralized, shared sources of heat such as district heating and cooling (DHC) systems. To maintain efficiency, precise monitoring and scheduling of maintenance for chillers and heat pumps is essential. Low-performance operation results in energy loss, while unnecessary maintenance is expensive and wasteful. Plant supervisors are responsible for scheduling and supervising maintenance. Modeling systems that assist in analyzing system deterioration are of great benefit for these tasks. Topological case-based modeling (TCBM) (Tsutsui et al. 1993; Tsutsui 1995) is an effective tool for chiller performance deterioration monitoring. This paper describes TCBM and its application to this task using recorded historical performance data.
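
    The following Python sketch illustrates the general case-based idea of predicting expected performance from historical operating data and flagging a shortfall; it uses a plain nearest-neighbour regression as a stand-in on synthetic data and is not the TCBM algorithm itself.

      # Hypothetical sketch: predict the expected chiller COP from historical cases at
      # similar operating conditions and flag a measured value that falls well below it.
      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(3)
      # historical cases: [part-load ratio, condenser water temperature (C)] -> measured COP
      X = np.column_stack([rng.uniform(0.3, 1.0, 2000), rng.uniform(20.0, 35.0, 2000)])
      cop = 6.5 - 2.5 * (X[:, 1] - 20.0) / 15.0 - 1.0 * (1.0 - X[:, 0]) + rng.normal(0, 0.05, 2000)

      model = KNeighborsRegressor(n_neighbors=25).fit(X, cop)
      expected = model.predict([[0.8, 28.0]])[0]
      measured = 4.2
      if measured < 0.9 * expected:
          print(f"possible deterioration: expected COP {expected:.2f}, measured {measured:.2f}")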

  1. Economic Analysis Case Studies of Battery Energy Storage with SAM

    SciTech Connect (OSTI)

    DiOrio, Nicholas; Dobos, Aron; Janzou, Steven

    2015-11-01

    Interest in energy storage has continued to increase as states like California have introduced mandates and subsidies to spur adoption. This energy storage includes customer-sited, behind-the-meter storage coupled with photovoltaics (PV). This paper presents case study results from California and Tennessee, which were performed to assess the economic benefit of customer-installed systems. Different dispatch strategies, including manual scheduling and automated peak-shaving, were explored to determine ideal ways to use the storage system to increase the system value and mitigate demand charges. Incentives, complex electric tariffs, and site-specific load and PV data were used to perform detailed analysis. The analysis was performed using the free, publicly available System Advisor Model (SAM) tool. We find that installation of photovoltaics with a lithium-ion battery system priced at $300/kWh in Los Angeles under a high demand charge utility rate structure and dispatched using perfect day-ahead forecasting yields a positive net present value, while all other scenarios cost the customer more than the savings accrued.
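
    A minimal sketch of the peak-shaving idea behind such dispatch strategies is shown below; the demand target, battery sizes, and hourly data are invented for the example, and this is a simplification rather than SAM's actual dispatch logic.

      # Hypothetical sketch of peak-shaving dispatch: discharge whenever net load
      # exceeds a demand target, recharge when there is headroom below it.
      def peak_shave(load_kw, pv_kw, target_kw, capacity_kwh, power_kw, dt_h=1.0):
          soc = capacity_kwh  # state of charge, start full
          shaved = []
          for load, pv in zip(load_kw, pv_kw):
              net = load - pv
              if net > target_kw:        # discharge to hold the demand target
                  dis = min(net - target_kw, power_kw, soc / dt_h)
                  soc -= dis * dt_h
                  net -= dis
              elif net < target_kw:      # recharge using headroom below the target
                  chg = min(target_kw - net, power_kw, (capacity_kwh - soc) / dt_h)
                  soc += chg * dt_h
                  net += chg
              shaved.append(net)
          return shaved

      # peak net load without and with the battery for four illustrative hours
      loads, pv = [50, 80, 120, 90], [0, 10, 30, 5]
      print(max(l - p for l, p in zip(loads, pv)),
            max(peak_shave(loads, pv, target_kw=70, capacity_kwh=40, power_kw=30)))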

  2. A review of recent NEPA alternatives analysis case law

    SciTech Connect (OSTI)

    Smith, Michael D.

    2007-03-15

    According to the Council on Environmental Quality (CEQ) Regulations for implementing the National Environmental Policy Act (NEPA), the analysis and comparison of alternatives is considered the 'heart' of the NEPA process. Although over 20 years have passed since the original mandate to construct and assess a 'reasonable range' of alternatives appeared in the CEQ Regulations, there is a perception that there is still a significant amount of confusion about what exactly constitutes a legally compliant alternatives analysis. One manifestation of this confusion is the increasing amount of litigation over the alternatives analysis in NEPA documents. This study examined decisions on challenges to alternatives analyses contained in federal agency NEPA documents in federal Courts of Appeals for the ten-year period 1996-2005. The results show that federal agencies are overwhelmingly successful against such challenges, winning 30 of the 37 cases. The most common challenge was that federal agencies had not included a full reasonable range of alternatives, while the second most frequent was that agencies had improperly constructed the purpose and need for their projects. Brief descriptions of several of the key court decisions are provided that illustrate the main factors that led to agencies being successful, as well as unsuccessful, in their court challenges. The results provide little support for recent calls to amend the NEPA Statute and the CEQ Regulations to better clarify the requirements for alternatives analysis. The conclusion to the study focuses on practical steps NEPA practitioners can take to prepare their alternatives analyses in a manner that fulfills the requirements of the NEPA Statute and the CEQ Regulations and makes them less vulnerable to an unfavorable court decision if legally challenged.

  3. Economic Analysis for Conceptual Design of Supercritical O2-Based...

    Office of Scientific and Technical Information (OSTI)

    Economic Analysis for Conceptual Design of Supercritical O2-Based PC Boiler Citation Details In-Document Search Title: Economic Analysis for Conceptual Design of Supercritical ...

  4. Preliminary Analysis of Texas Instrument Hercules Flash-based...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Preliminary Analysis of Texas Instrument Hercules Flash-based Microcontroller Citation Details In-Document Search Title: Preliminary Analysis of Texas Instrument ...

  5. Topology-based Feature Definition and Analysis

    SciTech Connect (OSTI)

    Weber, Gunther H.; Bremer, Peer-Timo; Gyulassy, Attila; Pascucci, Valerio

    2010-12-10

    Defining high-level features, detecting them, tracking them and deriving quantities based on them is an integral aspect of modern data analysis and visualization. In combustion simulations, for example, burning regions, which are characterized by high fuel-consumption, are a possible feature of interest. Detecting these regions makes it possible to derive statistics about their size and track them over time. However, features of interest in scientific simulations are extremely varied, making it challenging to develop cross-domain feature definitions. Topology-based techniques offer an extremely flexible means for general feature definitions and have proven useful in a variety of scientific domains. This paper will provide a brief introduction into topological structures like the contour tree and Morse-Smale complex and show how to apply them to define features in different science domains such as combustion. The overall goal is to provide an overview of these powerful techniques and start a discussion how these techniques can aid in the analysis of astrophysical simulations.
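
    As a much-simplified stand-in for this kind of feature definition (simple thresholding rather than contour trees or Morse-Smale complexes), the Python sketch below labels connected "burning regions" above a fuel-consumption threshold on a synthetic 2D field and reports their sizes; the field and threshold are arbitrary choices for the example.

      # Hypothetical sketch: define "burning regions" as connected components above a
      # threshold on a smooth random field, then derive simple statistics about them.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(2)
      field = ndimage.gaussian_filter(rng.random((200, 200)), sigma=8)  # stand-in fuel-consumption field
      labels, n = ndimage.label(field > np.percentile(field, 95))       # top 5% of values
      sizes = np.bincount(labels.ravel())[1:]                           # cells per labeled region
      print(f"{n} regions, largest covers {sizes.max()} cells")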

  6. Bismuth-based electrochemical stripping analysis

    DOE Patents [OSTI]

    Wang, Joseph

    2004-01-27

    Method and apparatus for trace metal detection and analysis using bismuth-coated electrodes and electrochemical stripping analysis. Both anodic stripping voltammetry and adsorptive stripping analysis may be employed.

  7. Geographically Based Hydrogen Demand and Infrastructure Analysis...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation by NREL's Margo Melendez at the 2010 - 2025 Scenario Analysis for Hydrogen ... More Documents & Publications 2010 - 2025 Scenario Analysis Meeting Agenda for August 9 - ...

  8. Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity. This project will develop a model for seismicity-based reservoir characterization (SBRC) by combining rock mechanics, finite element modeling, and geo-statistical concepts to establish

  9. Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity

    Broader source: Energy.gov [DOE]

    Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity presentation at the April 2013 peer review meeting held in Denver, Colorado.

  10. 20th International Conference on Case Based Reasoning | GE Global...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Efficiency of Scientific Data Analysis: Scientific ... other traditional Artificial Intelligence (AI) algorithms out there. ... Basically, the big take away is that while most AI ...

  11. Load flow analysis: Base cases, data, diagrams, and results ...

    Office of Scientific and Technical Information (OSTI)

    The report summarizes the load flow model construction, simulation, and validation and describes the general capabilities of an information query system designed to access load ...

  12. Geographically-Based Infrastructure Analysis for California

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Geographically-Based Infrastructure Analysis for California. Presentation by Joan Ogden of the University of California at the 2010 - 2025 Scenario Analysis for Hydrogen Fuel Cell Vehicles and Infrastructure Meeting on August 9 - 10, 2006 in Washington, D.C. ogden_geo_infrastructure_analysis.pdf (5.39 MB)

  13. Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based Stochastic Analysis of Injection-induced Seismicity; 2010 Geothermal Technology Program Peer Review Report. DOE 2010 Geothermal Technologies Program Peer Review.

  14. Cluster Analysis-Based Approaches for Geospatiotemporal Data...

    Office of Scientific and Technical Information (OSTI)

    Cluster Analysis-Based Approaches for Geospatiotemporal Data Mining of Massive Data Sets for Identification of Forest Threats Mills, Richard T ORNL; Hoffman, Forrest M...

  15. NETL - Petroleum-Based Fuels Life Cycle Greenhouse Gas Analysis...

    Open Energy Info (EERE)

    search Tool Summary LAUNCH TOOL Name: NETL - Petroleum-Based Fuels Life Cycle Greenhouse Gas Analysis 2005 Baseline Model AgencyCompany Organization: National Energy Technology...

  16. Physics-Based Constraints in the Forward Modeling Analysis of...

    Office of Scientific and Technical Information (OSTI)

    Image Data, (Long Version) Citation Details In-Document Search Title: Physics-Based Constraints in the Forward Modeling Analysis of Time-Correlated Image Data, (Long Version) ...

  17. Copula-Based Flood Frequency Analysis at Ungauged Basin Confluences...

    Office of Scientific and Technical Information (OSTI)

    SciTech Connect Search Results Journal Article: Copula-Based Flood Frequency Analysis at ... sustain user needs while also posing an increased flooding risk from multiple tributaries. ...

  18. Physics-based constraints in the forward modeling analysis of...

    Office of Scientific and Technical Information (OSTI)

    Conference: Physics-based constraints in the forward modeling analysis of time-correlated image data Citation Details In-Document Search Title: Physics-based constraints in the ...

  19. Physics-Based Constraints in the Forward Modeling Analysis of...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Physics-Based Constraints in the Forward Modeling Analysis of Time-Correlated Image Data, (Long Version) Citation Details In-Document Search Title: Physics-Based ...

  20. Topology-based Visualization and Analysis of High-dimensional...

    Office of Scientific and Technical Information (OSTI)

    Topology-based Visualization and Analysis of High-dimensional Data and Time-varying Data at the Extreme Scale Citation Details In-Document Search Title: Topology-based ...

  1. A case study of abnormal conditions and events (ACE) analysis

    SciTech Connect (OSTI)

    Reeves, R.; Hicks, G.; Karrasch, B.

    1995-08-01

    In August of 1993, EPRI initiated a project to evaluate the application of various methodologies for performing Abnormal Conditions and Events (ACE) analysis on computer systems used in nuclear plants. This paper discusses the application of ACE analysis techniques to two systems designed for the Tennessee Valley Authority (TVA) Browns Ferry Nuclear (BFN) plant. Further details can be obtained from EPRI TR-104595, "Abnormal Conditions and Events Analysis for Instrumentation and Controls Systems," which is scheduled for publication in December 1994.

  2. Geographically Based Hydrogen Demand and Infrastructure Analysis

    Broader source: Energy.gov [DOE]

    Presentation by NREL's Margo Melendez at the 2010 - 2025 Scenario Analysis for Hydrogen Fuel Cell Vehicles and Infrastructure Meeting on August 9 - 10, 2006 in Washington, D.C.

  3. Overview of New Tools to Perform Safety Analysis: BWR Station Black Out Test Case

    SciTech Connect (OSTI)

    D. Mandelli; C. Smith; T. Riley; J. Nielsen; J. Schroeder; C. Rabiti; A. Alfonsi; Cogliati; R. Kinoshita; V. Pascucci; B. Wang; D. Maljovec

    2014-06-01

    Dynamic Probabilistic Risk Assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP, MELCOR) with simulation controller codes (e.g., RAVEN, ADAPT). While system simulator codes accurately model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic, operating procedures) and stochastic (e.g., component failures, parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by: 1) sampling values of a set of parameters from the uncertainty space of interest (using the simulation controller codes), and 2) simulating the system behavior for that specific set of parameter values (using the system simulator codes). For complex systems, one of the major challenges in using DPRA methodologies is to analyze the large amount of information (i.e., the large number of scenarios) generated, where clustering techniques are typically employed to allow users to better organize and interpret the data. In this paper, we focus on the analysis of a nuclear simulation dataset that is part of the Risk Informed Safety Margin Characterization (RISMC) Boiling Water Reactor (BWR) station blackout (SBO) case study. We apply a software tool that provides the domain experts with an interactive analysis and visualization environment for understanding the structures of such high-dimensional nuclear simulation datasets. Our tool encodes traditional and topology-based clustering techniques, where the latter partitions the data points into clusters based on their uniform gradient flow behavior. We demonstrate through our case study that both types of clustering techniques complement each other in bringing enhanced structural understanding of the data.
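
    The Python sketch below illustrates the traditional-clustering half of such an analysis on a synthetic stand-in for scenario data: each row is one simulated scenario summarized by a few outcome features. The feature names and k-means settings are assumptions made for the example, not the RISMC dataset or the topology-based method.

      # Hypothetical sketch: group DPRA scenario outcomes with k-means clustering.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # columns: time to battery depletion (h), time to core damage (h), peak clad temperature (K)
      scenarios = rng.normal(loc=[4.0, 8.0, 1400.0], scale=[1.0, 2.0, 150.0], size=(500, 3))

      features = StandardScaler().fit_transform(scenarios)
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

      for k in range(3):
          print(f"cluster {k}: {np.sum(labels == k)} scenarios, "
                f"mean peak clad temperature {scenarios[labels == k, 2].mean():.0f} K")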

  4. Building America Special Research Project: High-R Walls Case Study Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This report considers a number of promising wall systems with improved thermal control to improve plant-wide performance. Unlike previous studies, it considers performance in a more realistic manner, including some true three-dimensional heat flow and the relative risk of moisture damage.

  5. Analysis of Energy Efficiency Program Impacts Based on Program Spending

    U.S. Energy Information Administration (EIA) Indexed Site

    Analysis of Energy Efficiency Program Impacts Based on Program Spending, May 2015. This report was prepared by the U.S. Energy Information Administration (EIA), the statistical and analytical agency within the U.S. Department of Energy. By law, EIA's data, analyses, and forecasts are

  6. 2007 Wholesale Power Rate Case Final Proposal : Risk Analysis Study.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2006-07-01

    BPA's operating environment is filled with numerous uncertainties, and thus the rate-setting process must take into account a wide spectrum of risks. The objective of the Risk Analysis is to identify, model, and analyze the impacts that key risks have on BPA's net revenue (total revenues less total expenses). This is carried out in two distinct steps: a risk analysis step, in which the distributions, or profiles, of operating and non-operating risks are defined, and a risk mitigation step, in which different rate tools are tested to assess their ability to recover BPA's costs in the face of this uncertainty. Two statistical models are used in the risk analysis step for this rate proposal, the Risk Analysis Model (RiskMod) and the Non-Operating Risk Model (NORM), while a third model, the ToolKit, is used to test the effectiveness of rate tool options in the risk mitigation step. RiskMod is discussed in Sections 2.1 through 2.4, the NORM is discussed in Section 2.5, and the ToolKit is discussed in Section 3. The models function together so that BPA can develop rates that cover all of its costs and provide a high probability of making its Treasury payments on time and in full during the rate period. By law, BPA's payments to Treasury are the lowest priority for revenue application, meaning that payments to Treasury are the first to be missed if financial reserves are insufficient to pay all bills on time. For this reason, BPA measures its potential for recovering costs in terms of the probability of being able to make Treasury payments on time (also known as Treasury Payment Probability, or TPP).
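
    A minimal Monte Carlo sketch of how a Treasury Payment Probability can be estimated from a net-revenue risk profile is given below; the distribution, dollar figures, and reserve logic are invented for the example and are not RiskMod, NORM, or the ToolKit.

      # Hypothetical sketch: estimate a Treasury Payment Probability (TPP) by drawing
      # net-revenue outcomes per year, carrying reserves forward, and counting trials
      # in which every Treasury payment is made in full. Dollar figures are illustrative ($M).
      import numpy as np

      def estimate_tpp(n_trials=100_000, years=3, start_reserves=500.0,
                       treasury_payment=700.0, mean_net_rev=750.0, sd_net_rev=300.0, seed=0):
          rng = np.random.default_rng(seed)
          made_all = 0
          for _ in range(n_trials):
              reserves, ok = start_reserves, True
              for _ in range(years):
                  reserves += rng.normal(mean_net_rev, sd_net_rev)  # cash before the payment
                  if reserves < treasury_payment:                   # payment would be missed
                      ok = False
                      break
                  reserves -= treasury_payment
              made_all += ok
          return made_all / n_trials

      print(f"TPP = {estimate_tpp():.3f}")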

  7. Desiccant-Based Preconditioning Market Analysis

    SciTech Connect (OSTI)

    Fischer, J.

    2001-01-11

    A number of important conclusions can be drawn as a result of this broad, first-phase market evaluation. The more important conclusions include the following: (1) A very significant market opportunity will exist for specialized outdoor air-handling units (SOAHUs) as more construction and renovation projects are designed to incorporate the recommendations made by the ASHRAE 62-1989 standard. Based on this investigation, the total potential market is currently $725,000,000 annually (see Table 6, Sect. 3). Based on the market evaluations completed, it is estimated that approximately $398,000,000 (55%) of this total market could be served by DBC systems if they were made cost-effective through mass production. Approximately $306,000,000 (42%) of the total can be served by a non-regenerated, desiccant-based total recovery approach, based on the information provided by this investigation. Approximately $92,000,000 (13%) can be served by a regenerated desiccant-based cooling approach (see Table 7, Sect. 3). (2) A projection of the market selling price of various desiccant-based SOAHU systems was prepared using prices provided by Trane for central-station, air-handling modules currently manufactured. The wheel-component pricing was added to these components by SEMCO. This resulted in projected pricing for these systems that is significantly less than that currently offered by custom suppliers (see Table 4, Sect. 2). Estimated payback periods for all SOAHU approaches were quite short when compared with conventional over-cooling and reheat systems. Actual paybacks may vary significantly depending on site-specific considerations. (3) In comparing cost vs benefit of each SOAHU approach, it is critical that the total system design be evaluated. For example, the cost premium of a DBC system is very significant when compared to a conventional air handling system, yet the reduced chiller, boiler, cooling tower, and other expense often equals or exceeds this premium, resulting in a

  8. Sandia National Laboratories analysis code data base

    SciTech Connect (OSTI)

    Peterson, C.W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code "ownership" and release status, and references describing the physical models and numerical implementation.

  9. 2007 Wholesale Power Rate Case Initial Proposal : Risk Analysis Study.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2005-11-01

    The Federal Columbia River Power System (FCRPS), operated on behalf of the ratepayers of the PNW by BPA and other Federal agencies, faces many uncertainties during the FY 2007-2009 rate period. Among these uncertainties, the largest revolve around hydro conditions, market prices, and river operations for fish recovery. In order to provide a high probability of making its U.S. Treasury payments, BPA performs a Risk Analysis as part of its rate-making process. In this Risk Analysis, BPA identifies key risks, models their relationships, and then analyzes their impacts on net revenues (total revenues less expenses). BPA subsequently evaluates in the ToolKit Model the Treasury Payment Probability (TPP) resulting from the rates, risks, and risk mitigation measures described here and in the Wholesale Power Rate Development Study (WPRDS). If the TPP falls short of BPA's standard, additional risk mitigation revenues, such as PNRR and CRAC revenues, are incorporated in the modeling in ToolKit until the TPP standard is met. Increased wholesale market price volatility and six years of drought have significantly changed the profile of risk and uncertainty facing BPA and its stakeholders. These present new challenges for BPA in its effort to keep its power rates as low as possible while fully meeting its obligations to the U.S. Treasury. As a result, BPA faces a greater risk of not receiving the level of secondary revenues that have been credited to power rates before those funds are received. In addition to market price volatility, BPA also faces uncertainty around the financial impacts of operations for fish programs in FY 2006 and in the FY 2007-2009 rate period. A new Biological Opinion or possible court-ordered change to river operations in FY 2006 through FY 2009 may reduce BPA's net revenues below those included in this Initial Proposal. Finally, the FY 2007-2009 risk analysis includes new operational risks as well as a more comprehensive analysis of non-operating risks. Both the operational

  10. Analysis of Vehicle-Based Security Operations

    SciTech Connect (OSTI)

    Carter, Jason M; Paul, Nate R

    2015-01-01

    Vehicle-to-vehicle (V2V) communications promises to increase roadway safety by providing each vehicle with 360-degree situational awareness of other vehicles in proximity, and by complementing onboard sensors such as radar or camera in detecting imminent crash scenarios. In the United States, approximately three hundred million automobiles could participate in a fully deployed V2V system if Dedicated Short-Range Communication (DSRC) device use becomes mandatory. The system's reliance on continuous communication, however, provides a potential means for unscrupulous persons to transmit false data in an attempt to cause crashes, create traffic congestion, or simply render the system useless. V2V communications must be highly scalable while retaining robust security and privacy-preserving features to meet the intra-vehicle and vehicle-to-infrastructure communication requirements for a growing vehicle population. Oak Ridge National Laboratory is investigating a Vehicle-Based Security System (VBSS) to provide security and privacy for a fully deployed V2V and V2I system. In the VBSS, an On-board Unit (OBU) generates short-term certificates and signs Basic Safety Messages (BSM) to preserve privacy and enhance security. This work outlines a potential VBSS structure and its operational concepts; it examines how a vehicle-based system might feasibly provide security and privacy, highlights remaining challenges, and explores potential mitigations to address those challenges. Certificate management alternatives that attempt to meet V2V security and privacy requirements have been examined previously by the research community, including privacy-preserving group certificates, shared certificates, and functional encryption. Due to real-world operational constraints, adopting one of these approaches for VBSS V2V communication is difficult. Timely misbehavior detection and revocation are still open problems for any V2V system. We explore the alternative approaches that may be

  11. An approach to model validation and model-based prediction -- polyurethane foam case study.

    SciTech Connect (OSTI)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and

  12. Surveillance data bases, analysis, and standardization program

    SciTech Connect (OSTI)

    Kam, F.B.K.

    1990-09-26

    The traveler presented a paper at the Seventh ASTM-EURATOM Symposium on Reactor Dosimetry and co-chaired an oral session on Computer Codes and Methods. Papers of considerable interest to the NRC Surveillance Dosimetry Program involved statistically based adjustment procedures and uncertainties. The information exchange meetings with Czechoslovakia and Hungary were very enlightening. Lack of large computers has hindered their surveillance programs. They depended very heavily on information from their measurement programs, which were somewhat limited because of the lack of sophisticated electronics. The Nuclear Research Institute at Rez had to rely on expensive mockups of power reactor configurations to test their fluence exposures. Computers, computer codes, and updated nuclear data would advance their technology rapidly, and they were not hesitant to admit this fact. Both eastern-bloc countries said that IBM is providing an IBM 3090 for educational purposes, but research and development studies would have very limited access. They were very apologetic that their currencies were not convertible; any exchange means that they could provide services or pay for US scientists in their respective countries, but funding for their scientists in the United States, or expenses that involved payment in dollars, must come from us.

  13. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    SciTech Connect (OSTI)

    Noonan, Nicholas James

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  14. Business Case Analysis Requirements for Certain Interagency and Agency-Specific Acquisitions

    Office of Energy Efficiency and Renewable Energy (EERE)

    Office of Federal Procurement Policy’s (OFPP) memorandum, dated September 29, 2011, Development, Review and Approval of Business Cases for Certain Interagency and Agency-Specific Acquisitions, outlines required elements of business case analysis as well as a process for developing, reviewing, and approving business cases to support the establishment and renewal of government-wide acquisition contracts (GWACs), certain multi-agency contracts, certain agency-specific contracts, or agency-specific blanket purchase agreement (BPA). Agency-specific vehicles are either indefinite-delivery, indefinite quantity contracts or blanket purchase agreements intended for the use of your contracting activity, the Department of Energy, or another Federal Agency.

  15. Using the DOE Knowledge Base for Special Event Analysis

    SciTech Connect (OSTI)

    Armstrong, H.M.; Harris, J.M.; Young, C.J.

    1998-10-20

    The DOE Knowledge Base is a library of detailed information whose purpose is to support the United States National Data Center (USNDC) in its mission to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the important tasks which the USNDC must accomplish is to periodically perform detailed analysis of events of high interest, so-called "Special Events", to provide the national authority with information needed to make policy decisions. In this paper we investigate some possible uses of the Knowledge Base for Special Event Analysis (SEA), and make recommendations for improving Knowledge Base support for SEA. To analyze an event in detail, there are two basic types of data which must be used: sensor-derived data (waveforms, arrivals, events, etc.) and regionalized contextual data (known sources, geological characteristics, etc.). Currently there is no single package which can provide full access to both types of data, so for our study we use a separate package for each: MatSeis, the Sandia Labs-developed MATLAB-based seismic analysis package, for waveform data analysis, and ArcView, an ESRI product, for contextual data analysis. Both packages are well-suited to prototyping because they provide a rich set of currently available functionality and yet are also flexible and easily extensible. Using these tools and Phase I Knowledge Base data sets, we show how the Knowledge Base can improve both the speed and the quality of SEA. Empirically-derived interpolated correction information can be accessed to improve both location estimates and associated error estimates. This information can in turn be used to identify any known nearby sources (e.g. mines, volcanos), which may then trigger specialized processing of the sensor data. Based on the location estimate, preferred magnitude formulas and discriminants can be retrieved, and any known blockages can be identified to prevent miscalculations. Relevant historic events can be identified either by

  16. Klonos: A Similarity Analysis Based Tool for Software Porting

    Energy Science and Technology Software Center (OSTI)

    2014-07-30

    Klonos is a compiler-based tool that helps users port scientific applications. The tool is based on similarity analysis performed with the help of the OpenUH compiler (a branch of the Open64 compiler). It combines syntactic and cost-model-provided metrics into clusters that aggregate similar subroutines, which can then be ported similarly. The generated porting plan allows programmers and compilers to reuse porting experience as much as possible during the porting process.

  17. Financial Analysis of Incentive Mechanisms to Promote Energy Efficiency: Case Study of a Prototypical Southwest Utility

    SciTech Connect (OSTI)

    Cappers, Peter; Goldman, Charles; Chait, Michele; Edgar, George; Schlegel, Jeff; Shirley, Wayne

    2009-03-04

    alternative incentive approaches on utility shareholders and customers if energy efficiency is implemented under various utility operating, cost, and supply conditions. We used and adapted a spreadsheet-based financial model (the Benefits Calculator) which was developed originally as a tool to support the National Action Plan for Energy Efficiency (NAPEE). The major steps in our analysis are displayed graphically in Figure ES-1. Two main inputs are required: (1) a characterization of the utility, which includes its initial financial and physical market position and a forecast of the utility's future sales, peak demand, and resource strategy to meet projected growth; and (2) a characterization of the Demand-Side Resource (DSR) portfolio: projected electricity and demand savings, costs, and economic lifetime of a portfolio of energy efficiency (and/or demand response) programs that the utility is planning or considering implementing during the analysis period. The Benefits Calculator also estimates total resource costs and benefits of the DSR portfolio using a forecast of avoided capacity and energy costs. The Benefits Calculator then uses inputs provided in the utility characterization to produce a "business-as-usual" base case as well as alternative scenarios that include energy efficiency resources, including the corresponding utility financial budgets required in each case. If a decoupling and/or a shareholder incentive mechanism is instituted, the Benefits Calculator readjusts the utility's revenue requirement and retail rates accordingly. Finally, for each scenario, the Benefits Calculator produces several metrics that provide insights on how energy efficiency resources, decoupling, and/or a shareholder incentive mechanism impact utility shareholders (e.g., overall earnings, return on equity), ratepayers (e.g., average customer bills and rates), and society (e.g., net resource benefits).
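
    The Python sketch below shows, in a deliberately simplified form, the kind of rate readjustment a decoupling mechanism implies in such a model: the program-adjusted revenue requirement is recovered over the lower post-efficiency sales. All figures and the bill calculation are illustrative assumptions, not the Benefits Calculator itself.

      # Hypothetical sketch: retail rate under revenue decoupling after an efficiency
      # portfolio reduces sales; all inputs are illustrative.
      def decoupled_rate(revenue_requirement, baseline_sales_mwh, ee_savings_mwh, ee_program_cost):
          sales = baseline_sales_mwh - ee_savings_mwh
          return (revenue_requirement + ee_program_cost) / sales   # $/MWh

      base_rate = 1_000_000_000 / 10_000_000   # business-as-usual: $100/MWh
      new_rate = decoupled_rate(1_000_000_000, 10_000_000, 500_000, 20_000_000)
      # For a customer taking the full 5% savings the unit rate rises, and with program
      # costs recovered in rates the bill here rises about 2%; net resource benefits
      # would come from avoided supply costs, which this sketch does not model.
      print(f"rate {base_rate:.2f} -> {new_rate:.2f} $/MWh; "
            f"bill change {(new_rate * 0.95 / base_rate - 1) * 100:+.1f}%")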

  18. Tariff-based analysis of commercial building electricity prices

    SciTech Connect (OSTI)

    Coughlin, Katie M.; Bolduc, Chris A.; Rosenquist, Greg J.; VanBuskirk, Robert D.; McMahon, James E.

    2008-03-28

    This paper presents the results of a survey and analysis of electricity tariffs and marginal electricity prices for commercial buildings. The tariff data come from a survey of 90 utilities and 250 tariffs for non-residential customers collected in 2004 as part of the Tariff Analysis Project at LBNL. The goals of this analysis are to provide useful summary data on the marginal electricity prices commercial customers actually see, and insight into the factors that are most important in determining prices under different circumstances. We provide a new, empirically-based definition of several marginal prices: the effective marginal price and energy-only and demand-only prices, and derive a simple formula that expresses the dependence of the effective marginal price on the marginal load factor. The latter is a variable that can be used to characterize the load impacts of a particular end-use or efficiency measure. We calculate all these prices for eleven regions within the continental U.S.
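
    One plausible form of that dependence (an assumed reconstruction for illustration, not necessarily the exact formula derived in the report) treats the effective marginal price as the energy charge plus the demand charge spread over the marginal energy, as in the Python sketch below.

      # Hypothetical sketch: effective marginal price as a function of the marginal
      # load factor, mlf = dE / (dP * hours). Tariff numbers are illustrative.
      def effective_marginal_price(p_energy, p_demand, mlf, hours_per_month=730.0):
          """p_energy in $/kWh, p_demand in $/kW-month; returns $/kWh."""
          return p_energy + p_demand / (mlf * hours_per_month)

      # a lighting retrofit (high mlf) vs. a peak-coincident cooling measure (low mlf)
      for mlf in (0.9, 0.3):
          print(f"mlf={mlf}: {effective_marginal_price(0.08, 12.0, mlf):.3f} $/kWh")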

  19. Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    May 18, 2010. Geothermal Technologies Program 2013 Peer Review. Ghassemi, 2002. Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity. Principal Investigator: Ahmad Ghassemi. EGS Component R&D: Stimulation Prediction Models. This presentation does not contain any proprietary, confidential, or otherwise restricted information. April 2013. Relevance/Impact of Research: Develop a model for

  20. Identification and Prioritization of Analysis Cases for Marine and Hydrokinetic Energy Risk Screening

    SciTech Connect (OSTI)

    Anderson, Richard M.; Unwin, Stephen D.; Van Cleve, Frances B.

    2010-06-16

    In this report we describe the development of the Environmental Risk Evaluation System (ERES), a risk-informed analytical process for estimating the environmental risks associated with the construction and operation of marine and hydrokinetic energy generation projects. The development process consists of two main phases of analysis. In the first phase, preliminary risk analyses will take the form of screening studies in which key environmental impacts and the uncertainties that create risk are identified, leading to a better-focused characterization of the relevant environmental effects. Existence of critical data gaps will suggest areas in which specific modeling and/or data collection activities should take place. In the second phase, more detailed quantitative risk analyses will be conducted, with residual uncertainties providing the basis for recommending risk mitigation and monitoring activities. We also describe the process used for selecting three cases for fiscal year 2010 risk screening analysis using the ERES. A case is defined as a specific technology deployed in a particular location involving certain environmental receptors specific to that location. The three cases selected satisfy a number of desirable criteria: 1) they correspond to real projects whose deployment is likely to take place in the foreseeable future; 2) the technology developers are willing to share technology and project-related data; 3) the projects represent a diversity of technology-site-receptor characteristics; 4) the projects are of national interest, and 5) environmental effects data may be available for the projects.

  1. Techno-Economic Analysis of Biofuels Production Based on Gasification

    SciTech Connect (OSTI)

    Swanson, R. M.; Platon, A.; Satrio, J. A.; Brown, R. C.; Hsu, D. D.

    2010-11-01

    This study compares capital and production costs of two biomass-to-liquid production plants based on gasification. The first biorefinery scenario is an oxygen-fed, low-temperature (870°C), non-slagging, fluidized bed gasifier. The second scenario is an oxygen-fed, high-temperature (1,300°C), slagging, entrained flow gasifier. Both are followed by catalytic Fischer-Tropsch synthesis and hydroprocessing to naphtha-range (gasoline blend stock) and distillate-range (diesel blend stock) liquid fractions. Process modeling software (Aspen Plus) is utilized to organize the mass and energy streams and cost estimation software is used to generate equipment costs. Economic analysis is performed to estimate the capital investment and operating costs. Results show that the total capital investment required for nth plant scenarios is $610 million and $500 million for high-temperature and low-temperature scenarios, respectively. Product value (PV) for the high-temperature and low-temperature scenarios is estimated to be $4.30 and $4.80 per gallon of gasoline equivalent (GGE), respectively, based on a feedstock cost of $75 per dry short ton. Sensitivity analysis is also performed on process and economic parameters. This analysis shows that total capital investment and feedstock cost are among the most influential parameters affecting the PV.

  2. Knowledge representation and the application of case-based reasoning in engineering design

    SciTech Connect (OSTI)

    Bhangal, J.S.; Esat, I.

    1996-12-31

    This paper assesses the requirements for applying Case-Based Reasoning (CBR) to engineering design. It discusses the ways in which a CBR system can assist a designer who is presented with a problem specification, and the various methods that need to be understood before attempting to build such an expert system. The problem is twofold: first, the methods of utilizing CBR are varied, and second, the method of representing design knowledge also needs to be established. How a design is represented differs for each application, and this is a decision that needs to be made when setting up the case memory; the methods used are discussed here. CBR itself can be utilized in various ways, and previous applications have shown that a hybrid approach can produce the best results.

  3. Lossless droplet transfer of droplet-based microfluidic analysis

    DOE Patents [OSTI]

    Kelly, Ryan T (West Richland, WA); Tang, Keqi (Richland, WA); Page, Jason S (Kennewick, WA); Smith, Richard D (Richland, WA)

    2011-11-22

    A transfer structure for droplet-based microfluidic analysis is characterized by a first conduit containing a first stream having at least one immiscible droplet of aqueous material and a second conduit containing a second stream comprising an aqueous fluid. The interface between the first conduit and the second conduit can define a plurality of apertures, wherein the apertures are sized to prevent exchange of the first and second streams between conduits while allowing lossless transfer of droplets from the first conduit to the second conduit through contact between the first and second streams.

  4. Landslide hazard analysis for pipelines: The case of the Simonette river crossing

    SciTech Connect (OSTI)

    Grivas, D.A.; Schultz, B.C.; O`Neil, G.; Rizkalla, M.; McGuffey, V.C.

    1995-12-31

    The overall objective of this study is to develop a probabilistic methodology to analyze landslide hazards and their effects on the safety of buried pipelines. The methodology incorporates a range of models that can accommodate differences in the ground movement modes and the amount and type of information available at various site locations. Two movement modes are considered, namely (a) instantaneous (catastrophic) slides, and (b) gradual ground movement which may result in cumulative displacements over the pipeline design life (30--40 years) that are in excess of allowable values. Probabilistic analysis is applied in each case to address the uncertainties associated with important factors that control slope stability. Availability of information ranges from relatively well-studied, instrumented installations to cases where data is limited to what can be derived from topographic and geologic maps. The methodology distinguishes between procedures applied where there is little information and those that can be used when relatively extensive data is available. Important aspects of the methodology are illustrated in a case study involving a pipeline located in Northern Alberta, Canada, in the Simonette river valley.

  5. A self-adaptive case-based reasoning system for dose planning in prostate cancer radiotherapy

    SciTech Connect (OSTI)

    Mishra, Nishikant; Petrovic, Sanja; Sundar, Santhanam

    2011-12-15

    Purpose: Prostate cancer is the most common cancer in the male population. Radiotherapy is often used in the treatment for prostate cancer. In radiotherapy treatment, the oncologist makes a trade-off between the risk and benefit of the radiation, i.e., the task is to deliver a high dose to the prostate cancer cells and minimize side effects of the treatment. The aim of our research is to develop a software system that will assist the oncologist in planning new treatments. Methods: A nonlinear case-based reasoning system is developed to capture the expertise and experience of oncologists in treating previous patients. The importance (weights) of different clinical parameters in dose planning is determined by the oncologist based on past experience, and is highly subjective. The weights are usually fixed in the system. In this research, the weights are updated automatically each time after generating a treatment plan for a new patient, using a group-based simulated annealing approach. Results: The developed approach is analyzed on the real data set collected from the Nottingham University Hospitals NHS Trust, City Hospital Campus, UK. Extensive experiments show that the dose plan suggested by the proposed method is consistent with, or even better than, the dose plan prescribed by an experienced oncologist. Conclusions: The developed case-based reasoning system enables the use of knowledge and experience gained by the oncologist in treating new patients. This system may play a vital role in assisting the oncologist to make better decisions in less computational time; it utilizes the success rate of previously treated patients and it can also be used in teaching and training processes.
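
    A minimal sketch of the general idea, weighted-distance case retrieval whose attribute weights are tuned by simulated annealing against an accepted plan, is shown below; the attributes, case base, and scoring are hypothetical and not the authors' implementation.

    ```python
    # Illustrative sketch of weight adaptation for case-based retrieval using
    # simulated annealing; the attributes and doses are made up.
    import math
    import random

    cases = [  # (clinical attribute vector, prescribed dose in Gy), invented examples
        ([0.7, 0.2, 0.5], 74.0),
        ([0.6, 0.3, 0.4], 70.0),
        ([0.2, 0.8, 0.1], 64.0),
    ]

    def retrieve(query, weights):
        """Return the dose of the nearest stored case under a weighted distance."""
        dist = lambda case: sum(w * (q - x) ** 2 for w, q, x in zip(weights, query, case[0]))
        return min(cases, key=dist)[1]

    def error(weights, patient, accepted_dose):
        return abs(retrieve(patient, weights) - accepted_dose)

    def anneal_weights(weights, patient, accepted_dose, steps=200, temp=1.0, seed=7):
        random.seed(seed)
        cur, cur_err = list(weights), error(weights, patient, accepted_dose)
        for k in range(steps):
            cand = [max(1e-3, w + random.gauss(0, 0.1)) for w in cur]
            cand_err = error(cand, patient, accepted_dose)
            t = temp * (1 - k / steps) + 1e-9
            # accept improvements always, worse moves with a temperature-dependent probability
            if cand_err < cur_err or random.random() < math.exp((cur_err - cand_err) / t):
                cur, cur_err = cand, cand_err
        return cur

    new_weights = anneal_weights([1.0, 1.0, 1.0], patient=[0.65, 0.25, 0.45], accepted_dose=72.0)
    print("adapted attribute weights:", [round(w, 2) for w in new_weights])
    ```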

  6. Knowledge base navigator facilitating regional analysis inter-tool communication.

    SciTech Connect (OSTI)

    Hampton, Jeffery Wade; Chael, Eric Paul; Hart, Darren M.; Merchant, Bion John; Chown, Matthew N.

    2004-08-01

    To make use of some portions of the National Nuclear Security Administration (NNSA) Knowledge Base (KB) for which no current operational monitoring applications were available, Sandia National Laboratories have developed a set of prototype regional analysis tools (MatSeis, EventID Tool, CodaMag Tool, PhaseMatch Tool, Dendro Tool, Infra Tool, etc.), and we continue to maintain and improve these. Individually, these tools have proven effective in addressing specific monitoring tasks, but collectively their number and variety tend to overwhelm KB users, so we developed another application - the KB Navigator - to launch the tools and facilitate their use for real monitoring tasks. The KB Navigator is a flexible, extensible java application that includes a browser for KB data content, as well as support to launch any of the regional analysis tools. In this paper, we will discuss the latest versions of KB Navigator and the regional analysis tools, with special emphasis on the new overarching inter-tool communication methodology that we have developed to make the KB Navigator and the tools function together seamlessly. We use a peer-to-peer communication model, which allows any tool to communicate with any other. The messages themselves are passed as serialized XML, and the conversion from Java to XML (and vice versa) is done using Java Architecture for XML Binding (JAXB).

  7. Feature-Based Statistical Analysis of Combustion Simulation Data

    SciTech Connect (OSTI)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion
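
    The fragment below is a minimal, illustrative stand-in for the feature-based workflow: segment a scalar field at one threshold and report per-feature statistics (the paper's merge trees generalize this to all thresholds at once); the synthetic field and the temperature mapping are invented for the example.

    ```python
    # Minimal sketch of threshold-based feature extraction with per-feature statistics.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    scalar = ndimage.gaussian_filter(rng.random((128, 128)), sigma=4)   # stand-in field
    temperature = 300.0 + 50.0 * scalar                                 # stand-in scalar

    mask = scalar > np.percentile(scalar, 90)     # one threshold choice
    labels, n = ndimage.label(mask)               # connected "features"

    for i in range(1, n + 1):
        sel = labels == i
        print(f"feature {i:2d}: size={sel.sum():5d}  "
              f"mean T={temperature[sel].mean():6.1f}  max T={temperature[sel].max():6.1f}")
    ```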
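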

  8. A Research Roadmap for Computation-Based Human Reliability Analysis

    SciTech Connect (OSTI)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  9. A component analysis based on serial results analyzing performance of parallel iterative programs

    SciTech Connect (OSTI)

    Richman, S.C.

    1994-12-31

    This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, the number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from data obtained by examining certain cases experimentally, two questions remain: (1) How can the results obtained be explained? (2) How can the results be extended from those particular cases to general cases? To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. Then, one separately examines the performance of the basic functions and the weighting distributions of the iterative methods, from which two independent sets of information are obtained when solving a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of the iterative method, while that given by the weightings is independent of the parallel technique, parallel machine, and number of processors.
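
    A toy version of the component idea is sketched below: predict parallel runtime as a weighted sum over basic kernels, with weights derived from serial costs; the kernel names, costs, and per-kernel speedups are hypothetical.

    ```python
    # Illustrative sketch: estimate an iterative solver's parallel time from its
    # basic kernels, weighting each kernel by its share of the serial cost.
    serial_cost = {"linked_triad": 4.0, "inner_product": 1.0, "tri_solve": 3.0}    # seconds (made up)
    speedup_on_p = {"linked_triad": 14.0, "inner_product": 6.0, "tri_solve": 2.5}  # per-kernel speedups (made up)

    total_serial = sum(serial_cost.values())
    weights = {k: c / total_serial for k, c in serial_cost.items()}   # weighting from serial costs only

    parallel_time = sum(serial_cost[k] / speedup_on_p[k] for k in serial_cost)
    overall_speedup = total_serial / parallel_time

    print("weights:", {k: round(w, 2) for k, w in weights.items()})
    print(f"predicted parallel time: {parallel_time:.2f} s, speedup: {overall_speedup:.1f}x")
    ```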

  10. SYSTEM DESIGN AND ANALYSIS FOR CONCEPTUAL DESIGN OF OXYGEN-BASED PC BOILER

    SciTech Connect (OSTI)

    Zhen Fan; Andrew Seltzer

    2003-11-01

    The objective of the system design and analysis task of the Conceptual Design of Oxygen-Based PC Boiler study is to optimize the PC boiler plant by maximizing system efficiency. Simulations of the oxygen-fired plant with CO₂ sequestration were conducted using Aspen Plus and were compared to a reference air-fired 460 MW plant. Flue gas recycle is used in the O₂-fired PC to control the flame temperature. Parametric runs were made to determine the effect of flame temperature on system efficiency and required waterwall material and thickness. The degree of improvement in system efficiency from various modifications, including hot gas recycle, purge gas recycle, flue gas feedwater recuperation, and recycle purge gas expansion, was investigated. The selected O₂-fired design case has a system efficiency of 30.1% compared to the air-fired system efficiency of 36.7%. The design O₂-fired case requires T91 waterwall material and has a waterwall surface area of only 44% of the air-fired reference case. Compared to other CO₂ sequestration technologies, the O₂-fired PC is substantially better than both natural gas combined cycles and post-CO₂-removal PCs and is slightly better than integrated gasification combined cycles.

  11. Discrete Mathematical Approaches to Graph-Based Traffic Analysis

    SciTech Connect (OSTI)

    Joslyn, Cliff A.; Cowley, Wendy E.; Hogan, Emilie A.; Olsen, Bryan K.

    2014-04-01

    Modern cyber defense and analytics require general, formal models of cyber systems. Multi-scale network models are prime candidates for such formalisms, using discrete mathematical methods based on hierarchically-structured directed multigraphs which also include rich sets of labels. An exemplar of an application of such an approach is traffic analysis, that is, observing and analyzing connections between clients, servers, hosts, and actors within IP networks, over time, to identify characteristic or suspicious patterns. Towards that end, NetFlow (or more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. In this paper, we consider traffic analysis of NetFlow using both basic graph statistics and two new mathematical measures involving labeled degree distributions and time interval overlap measures. We do all of this over the VAST test data set of 96M synthetic NetFlow graph edges, against which we can identify characteristic patterns of simulated ground-truth network attacks.
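
    As a small illustration of one of the graph statistics mentioned above, the sketch below builds plain and label-restricted out-degree distributions from NetFlow-like records; the field layout and sample flows are hypothetical.

    ```python
    # Minimal sketch: degree distributions from NetFlow-like records.
    from collections import Counter, defaultdict

    flows = [  # (src_ip, dst_ip, dst_port), invented sample records
        ("10.0.0.5", "10.0.1.2", 443),
        ("10.0.0.5", "10.0.1.3", 443),
        ("10.0.0.5", "10.0.1.4", 22),
        ("10.0.0.7", "10.0.1.2", 443),
    ]

    out_neighbors = defaultdict(set)
    labeled_out = defaultdict(set)          # degree restricted to one edge label (port 22)
    for src, dst, port in flows:
        out_neighbors[src].add(dst)
        if port == 22:
            labeled_out[src].add(dst)

    degree_dist = Counter(len(v) for v in out_neighbors.values())
    print("out-degree distribution:", dict(degree_dist))          # {3: 1, 1: 1}
    print("ssh out-degree per host:", {k: len(v) for k, v in labeled_out.items()})
    ```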

  12. Statistical Analysis of Microarray Data with Replicated Spots: A Case Study with Synechococcus WH8102

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Thomas, E. V.; Phillippy, K. H.; Brahamsha, B.; Haaland, D. M.; Timlin, J. A.; Elbourne, L. D. H.; Palenik, B.; Paulsen, I. T.

    2009-01-01

    Until recently microarray experiments often involved relatively few arrays with only a single representation of each gene on each array. A complete genome microarray with multiple spots per gene (spread out spatially across the array) was developed in order to compare the gene expression of a marine cyanobacterium and a knockout mutant strain in a defined artificial seawater medium. Statistical methods were developed for analysis in the special situation of this case study where there is gene replication within an array and where relatively few arrays are used, which can be the case with current array technology. Due in part to the replication within an array, it was possible to detect very small changes in the levels of expression between the wild type and mutant strains. One interesting biological outcome of this experiment is the indication of the extent to which the phosphorus regulatory system of this cyanobacterium affects the expression of multiple genes beyond those strictly involved in phosphorus acquisition.

  13. Statistical Analysis of Microarray Data with Replicated Spots: A Case Study with Synechococcus WH8102

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Thomas, E. V.; Phillippy, K. H.; Brahamsha, B.; Haaland, D. M.; Timlin, J. A.; Elbourne, L. D. H.; Palenik, B.; Paulsen, I. T.

    2009-01-01

    Until recently microarray experiments often involved relatively few arrays with only a single representation of each gene on each array. A complete genome microarray with multiple spots per gene (spread out spatially across the array) was developed in order to compare the gene expression of a marine cyanobacterium and a knockout mutant strain in a defined artificial seawater medium. Statistical methods were developed for analysis in the special situation of this case study where there is gene replication within an array and where relatively few arrays are used, which can be the case with current array technology. Due in part to the replication within an array, it was possible to detect very small changes in the levels of expression between the wild type and mutant strains. One interesting biological outcome of this experiment is the indication of the extent to which the phosphorus regulatory system of this cyanobacterium affects the expression of multiple genes beyond those strictly involved in phosphorus acquisition.

  14. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    SciTech Connect (OSTI)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
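
    One of the surveyed procedures, sample propagation followed by a rank-correlation sensitivity measure, can be sketched as follows; the three-parameter test model is a made-up stand-in, and the pairing of Latin hypercube sampling with Spearman correlation is just one combination from the survey.

    ```python
    # Hedged sketch: propagate Latin hypercube samples through a model and rank
    # inputs by Spearman rank correlation with the output.
    import numpy as np
    from scipy.stats import qmc, spearmanr

    def model(x):                        # hypothetical analysis: y depends mostly on x1
        return 5.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * np.sin(x[:, 2])

    sampler = qmc.LatinHypercube(d=3, seed=1)
    X = qmc.scale(sampler.random(n=500), [0, 0, 0], [1, 1, 1])   # uniform inputs on [0, 1]
    y = model(X)

    for i in range(X.shape[1]):
        rho, _ = spearmanr(X[:, i], y)
        print(f"x{i + 1}: rank correlation with output = {rho:+.2f}")
    ```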

  15. Feature-based Analysis of Plasma-based Particle Acceleration Data

    SciTech Connect (OSTI)

    Ruebel, Oliver; Geddes, Cameron G.R.; Chen, Min; Cormier-Michel, Estelle; Bethel, E. Wes

    2013-07-05

    Plasma-based particle accelerators can produce and sustain thousands of times stronger acceleration fields than conventional particle accelerators, providing a potential solution to the problem of the growing size and cost of conventional particle accelerators. To facilitate scientific knowledge discovery from the ever growing collections of accelerator simulation data generated by accelerator physicists to investigate next-generation plasma-based particle accelerator designs, we describe a novel approach for automatic detection and classification of particle beams and beam substructures due to temporal differences in the acceleration process, here called acceleration features. The automatic feature detection in combination with a novel visualization tool for fast, intuitive, query-based exploration of acceleration features enables an effective top-down data exploration process, starting from a high-level, feature-based view down to the level of individual particles. We describe the application of our analysis in practice to analyze simulations of single pulse and dual and triple colliding pulse accelerator designs, and to study the formation and evolution of particle beams, to compare substructures of a beam and to investigate transverse particle loss.

  16. Geography-based structural analysis of the Internet

    SciTech Connect (OSTI)

    Kasiviswanathan, Shiva; Eidenbenz, Stephan; Yan, Guanhua

    2010-01-01

    In this paper, we study some geographic aspects of the Internet. We base our analysis on a large set of geolocated IP hop-level session data (including about 300,000 backbone routers, 150 million end hosts, and 1 billion sessions) that we synthesized from a variety of different input sources such as US census data, computer usage statistics, Internet market share data, IP geolocation data sets, CAIDA's Skitter data set for backbone connectivity, and BGP routing tables. We use this model to perform a nationwide and statewide geographic analysis of the Internet. Our main observations are: (1) There is a dominant coast-to-coast pattern in US Internet traffic. In fact, in many instances, even if the end devices are not near either coast, the traffic between them still takes a long detour through the coasts. (2) More than half of the Internet paths are inflated by 100% or more compared to their corresponding geometric straight-line distance. This circuitousness makes the average ratio between the routing distance and geometric distance large (around 10). (3) The weighted mean hop count is around 5, but the hop counts are very loosely correlated with the distances. The weighted mean AS count (number of ASes traversed) is around 3. (4) The AS size and the AS location number distributions are heavy-tailed and strongly correlated. Most of the ASes are medium sized and there is a wide variability in the geographic dispersion size (measured in terms of the convex hull area) of these ASes.
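
    The circuitousness metric described above can be illustrated with a short sketch that compares the summed great-circle length of a routed path to the direct end-to-end great-circle distance; the hop coordinates below are hypothetical.

    ```python
    # Illustrative sketch of path inflation: routed distance (sum of great-circle
    # hop lengths) divided by the end-to-end great-circle distance.
    import math

    def haversine_km(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(h))

    # hypothetical router locations along one path (lat, lon)
    path = [(33.75, -84.39), (40.71, -74.01), (37.77, -122.42), (34.05, -118.24)]

    routed = sum(haversine_km(path[i], path[i + 1]) for i in range(len(path) - 1))
    direct = haversine_km(path[0], path[-1])
    print(f"routed {routed:.0f} km vs direct {direct:.0f} km, inflation {routed / direct:.1f}x")
    ```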

  17. Precipitation Estimate Using NEXRAD Ground-Based Radar Images: Validation, Calibration and Spatial Analysis

    SciTech Connect (OSTI)

    Zhang, Xuesong

    2012-12-17

    Precipitation is an important input variable for hydrologic and ecological modeling and analysis. Next Generation Radar (NEXRAD) can provide precipitation products that cover most of the continental United States at a spatial resolution of approximately 4 km × 4 km. Two major issues concerning the applications of NEXRAD data are (1) the lack of a NEXRAD geo-processing and geo-referencing program and (2) bias correction of NEXRAD estimates. In this chapter, geographic information system (GIS) based software that can automatically support processing of NEXRAD data for hydrologic and ecological models is presented. Some geostatistical approaches to calibrating NEXRAD data using rain gauge data are introduced, and two case studies on evaluating the accuracy of the NEXRAD Multisensor Precipitation Estimator (MPE) and calibrating MPE with rain-gauge data are presented. The first case study examines the performance of MPE in a mountainous region versus southern plains and in the cold season versus the warm season, as well as the effect of sub-grid variability and temporal scale on NEXRAD performance. From the results of the first case study, the performance of MPE was found to be influenced by complex terrain, frozen precipitation, sub-grid variability, and temporal scale. Overall, the assessment of MPE indicates the importance of removing bias from the MPE precipitation product before its application, especially in complex mountainous regions. The second case study examines the performance of three MPE calibration methods using rain gauge observations in the Little River Experimental Watershed in Georgia. The comparison results show that no one method performs better than the others in terms of all evaluation coefficients and for all time steps. For practical estimation of precipitation distribution, implementation of multiple methods to predict spatial precipitation is suggested.
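
    As a simplified illustration of radar-gauge calibration (much simpler than the geostatistical methods discussed in the chapter), the sketch below applies a multiplicative mean-field bias correction so that radar totals match collocated gauge totals; the accumulation values are invented.

    ```python
    # Hedged sketch: mean-field bias correction of radar precipitation estimates
    # using collocated rain-gauge accumulations.
    import numpy as np

    gauge_mm = np.array([12.0, 8.5, 20.1, 3.2])     # gauge accumulations (made up)
    radar_mm = np.array([9.0, 6.0, 15.5, 2.0])      # collocated NEXRAD/MPE pixels (made up)

    bias = gauge_mm.sum() / radar_mm.sum()          # multiplicative mean-field bias
    corrected = radar_mm * bias

    print(f"bias factor: {bias:.2f}")
    print("corrected pixels:", np.round(corrected, 1))
    ```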

  18. System planning analysis applied to OTEC: initial cases by Florida Power Corporation. Task II report No. FC-5237-2

    SciTech Connect (OSTI)

    1980-03-01

    The objective of the task was to exercise the FPC system planning methodology on: (1) a Base Case, a 10-year generation expansion plan with coal plants providing base load expansion, and (2) the same plan, but with 400 MW of OTEC substituting for coal-burning units with equal resultant system reliability. OTEC inputs were based on reasonable economic projections of direct capital cost and O and M costs for first-generation large commercial plants; these inputs are discussed in Section 2. The Base Case conditions for the FPC system planning methodology involved base-load coal-fueled additions during the 1980s and early 1990s. The first trial runs of the PROMOD system planning model substituted OTEC for 400 MW purchases of coal-generated power during 1988-1989 and then for 400 MW of coal capacity thereafter. Results showed higher system reliability than the Base Case runs. Reruns with greater coal-fueled capacity displacement showed that OTEC could substitute for the 400 MW purchases in 1988-1989 and replace the 800 MW coal unit scheduled for 1990 to yield equivalent system reliability. However, a 1995 unit would need to be moved to 1994. Production costing computer model runs were used as input to the Corporate Model to examine corporate financial impact. The present value of total revenue requirements was the primary indication of relative competitiveness between the Base Case and OTEC. Results show the present value of total revenue requirements to be unfavorable to OTEC as compared to coal units. The disparity was in excess of the allowable range for possible consideration.

  19. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum samples needed are 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient

  20. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    SciTech Connect (OSTI)

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum samples needed are 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient
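
    To make the screening idea concrete, here is a simplified, illustrative sketch in the spirit of Morris One-At-a-Time elementary effects (not the PSUADE implementation, and the test function is a stand-in rather than SAC-SMA).

    ```python
    # Simplified one-at-a-time elementary-effects screening; averages absolute
    # perturbation effects over a handful of random base points.
    import numpy as np

    def model(x):                                    # hypothetical 3-parameter model
        return 4.0 * x[0] + x[1] ** 2 + 0.05 * x[2]

    def oat_screening(model, dim, trajectories=20, delta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        effects = np.zeros(dim)
        for _ in range(trajectories):
            x = rng.uniform(0, 1 - delta, size=dim)
            base = model(x)
            for i in range(dim):                     # perturb one parameter at a time
                xp = x.copy()
                xp[i] += delta
                effects[i] += abs(model(xp) - base) / delta
        return effects / trajectories                # mu*-style importance measure

    print(np.round(oat_screening(model, dim=3), 2))  # x1 dominates, x3 is negligible
    ```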

  1. The IAEA Coordinated Research Program on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis: Description of the Benchmark Test Cases and Phases

    SciTech Connect (OSTI)

    Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov

    2012-10-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTR analysis tools are today typically assessed with sensitivity analysis, and then a few important input uncertainties (typically based on a PIRT process) are varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracies of coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is a broader acceptance of the use of uncertainty analysis even in safety studies, and it has been accepted by regulators in some cases to replace the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project is built on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined: the prismatic type is represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper, more detail on the benchmark cases, the different specific phases and tasks, and the latest

  2. Reduced order model based on principal component analysis for process simulation and optimization

    SciTech Connect (OSTI)

    Lang, Y.; Malacina, A.; Biegler, L.; Munteanu, S.; Madsen, J.; Zitney, S.

    2009-01-01

    It is well known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than the conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
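
    A minimal sketch of the PCA-based ROM idea follows: project snapshot outputs onto a few principal components and fit an inexpensive map from operating inputs to the component coefficients; the synthetic snapshots and the linear input-to-coefficient map are simplifying assumptions, not the paper's CFD workflow.

    ```python
    # Illustrative PCA-based reduced order model on synthetic "snapshot" data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    inputs = rng.uniform(0.5, 1.5, size=(40, 2))             # e.g., flow rate, inlet temperature
    grid = np.linspace(0, 1, 200)
    snapshots = np.array([a * np.exp(-grid / b) for a, b in inputs])   # stand-in for CFD fields

    pca = PCA(n_components=3).fit(snapshots)                 # reduced basis from snapshots
    coeffs = pca.transform(snapshots)
    rom = LinearRegression().fit(inputs, coeffs)             # cheap map: inputs -> PCA coefficients

    x_new = np.array([[1.1, 0.9]])
    field = pca.inverse_transform(rom.predict(x_new))        # ROM evaluation is nearly instant
    truth = 1.1 * np.exp(-grid / 0.9)
    print("max abs error vs truth:", float(np.max(np.abs(field - truth))))
    ```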

  3. Model-based performance monitoring: Review of diagnostic methods and chiller case study

    SciTech Connect (OSTI)

    Haves, Phil; Khalsa, Sat Kartar

    2000-05-01

    The paper commences by reviewing the variety of technical approaches to the problem of detecting and diagnosing faulty operation in order to improve the actual performance of buildings. The review covers manual and automated methods, active testing and passive monitoring, the different classes of models used in fault detection, and methods of diagnosis. The process of model-based fault detection is then illustrated by describing the use of relatively simple empirical models of chiller energy performance to monitor equipment degradation and control problems. The CoolTools(trademark) chiller model identification package is used to fit the DOE-2 chiller model to on-site measurements from a building instrumented with high quality sensors. The need for simple algorithms to reject transient data, detect power surges and identify control problems is discussed, as is the use of energy balance checks to detect sensor problems. The accuracy with which the chiller model can be expected to predict performance is assessed from the goodness of fit obtained, and the implications for fault detection sensitivity and sensor accuracy requirements are discussed. A case study is described in which the model was applied retroactively to high-quality data collected in a San Francisco office building as part of a related project (Piette et al. 1999).
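
    The monitoring step described above can be illustrated with a short sketch that compares measured chiller power to a fitted part-load performance curve and flags residuals outside a tolerance band; the quadratic curve, coefficients, and data are hypothetical, not the DOE-2 chiller model.

    ```python
    # Illustrative residual check for model-based performance monitoring.
    import numpy as np

    def predicted_kw(load_frac, coeffs=(0.15, 0.55, 0.30), rated_kw=350.0):
        a, b, c = coeffs                             # hypothetical part-load curve
        return rated_kw * (a + b * load_frac + c * load_frac ** 2)

    load = np.array([0.4, 0.6, 0.8, 0.9])            # measured part-load ratios (made up)
    measured_kw = np.array([145.0, 203.0, 290.0, 312.0])

    residual = measured_kw - predicted_kw(load)
    threshold = 0.05 * predicted_kw(load)            # 5% band, tuned to sensor accuracy
    for l, r, t in zip(load, residual, threshold):
        flag = "FAULT?" if abs(r) > t else "ok"
        print(f"load {l:.1f}: residual {r:+6.1f} kW ({flag})")
    ```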

  4. Habitat-Lite: A GSC case study based on free text terms for environmental metadata

    SciTech Connect (OSTI)

    Kyrpides, Nikos; Hirschman, Lynette; Clark, Cheryl; Cohen, K. Bretonnel; Mardis, Scott; Luciano, Joanne; Kottmann, Renzo; Cole, James; Markowitz, Victor; Kyrpides, Nikos; Field, Dawn

    2008-04-01

    There is an urgent need to capture metadata on the rapidly growing number of genomic, metagenomic and related sequences, such as 16S ribosomal genes. This need is a major focus within the Genomic Standards Consortium (GSC), and Habitat is a key metadata descriptor in the proposed 'Minimum Information about a Genome Sequence' (MIGS) specification. The goal of the work described here is to provide a light-weight, easy-to-use (small) set of terms ('Habitat-Lite') that captures high-level information about habitat while preserving a mapping to the recently launched Environment Ontology (EnvO). Our motivation for building Habitat-Lite is to meet the needs of multiple users, such as annotators curating these data, database providers hosting the data, and biologists and bioinformaticians alike who need to search and employ such data in comparative analyses. Here, we report a case study based on semi-automated identification of terms from GenBank and GOLD. We estimate that the terms in the initial version of Habitat-Lite would provide useful labels for over 60% of the kinds of information found in the GenBank isolation-source field, and around 85% of the terms in the GOLD habitat field. We present a revised version of Habitat-Lite and invite the community's feedback on its further development in order to provide a minimum list of terms to capture high-level habitat information and to provide classification bins needed for future studies.

  5. Geographically Based Hydrogen Demand and Infrastructure Rollout Scenario Analysis

    Broader source: Energy.gov [DOE]

    Presentation by Margo Melendez at the 2010-2025 Scenario Analysis for Hydrogen Fuel Cell Vehicles and Infrastructure meeting on January 31, 2007.

  6. An Integrated Analysis of a NERVA Based Nuclear Thermal Propulsion...

    Office of Scientific and Technical Information (OSTI)

    require that self-consistent neutronic/thermal-hydraulic/stress analyses be carried out. ... SYSTEMS; PULSES; REACTOR SAFETY; STRESS ANALYSIS; THERMAL HYDRAULICS; WATER; ...

  7. FAQS Gap Analysis Qualification Card – General Technical Base

    Broader source: Energy.gov [DOE]

    Functional Area Qualification Standard Gap Analysis Qualification Cards outline the differences between the previous and the latest versions of the FAQ Standard.

  8. Strategic backdrop analysis for fossil fuel planning. Task 1. Default Case. Report 468-117-07/01

    SciTech Connect (OSTI)

    Not Available

    1980-06-01

    This report presents data describing a default case analysis performed using the strategic backdrop analytical framework developed to facilitate fossil fuel planning within the DOE. Target years are 1985 and 2025. Residential, commercial, and industrial energy demands are forecast as well as the impacts of energy technology implementation and market penetration using a set of energy technology assumptions. (DMC)

  9. Strategic backdrop analysis for fossil fuel planning. Task 1. Default Case. Report 468-117-07/02

    SciTech Connect (OSTI)

    Not Available

    1980-06-01

    This report presents data describing a default case analysis performed using the strategic backdrop analytical framework developed to facilitate fossil fuel planning within the DOE. Target years are 1985, 2000, and 2025. Residential, commercial, and industrial energy demands and impacts of energy technology implementation and market penetration are forecast using a set of energy technology assumptions. (DMC)

  10. Strategic backdrop analysis for fossil fuel planning. Task 1. Default Case. Report 468-117-07/03

    SciTech Connect (OSTI)

    Not Available

    1980-06-01

    This report presents data describing a default case analysis performed using the strategic backdrop analytical framework developed to facilitate fossil fuel planning within the DOE. Target years are 1985, 2000, and 2025. Residential, commercial, and industrial energy demands and impacts of energy technology implementation and market penetration are forecast using a set of energy technology assumptions.

  11. Algorithms and tools for high-throughput geometry-based analysis...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Algorithms and tools for high-throughput geometry-based analysis of crystalline porous materials. Thomas F. Willems, Chris H. Rycroft, Michaeel Kazi, Juan C....

  12. Algorithms and tools for high-throughput geometry-based analysis...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Algorithms and tools for high-throughput geometry-based analysis of crystalline porous materials ... Research Org: Energy Frontier Research Centers (EFRC); ...

  13. Prevalence and contribution of BRCA1 mutations in breast cancer and ovarian cancer: Results from three US population-based case-control studies of ovarian cancer

    SciTech Connect (OSTI)

    Whittemore, A.S.; Gong, G.; Itnyre, J.

    1997-03-01

    We investigate the familial risks of cancers of the breast and ovary, using data pooled from three population-based case-control studies of ovarian cancer that were conducted in the United States. We base estimates of the frequency of mutations of BRCA1 (and possibly other genes) on the reported occurrence of breast cancer and ovarian cancer in the mothers and sisters of 922 women with incident ovarian cancer (cases) and in 922 women with no history of ovarian cancer (controls). Segregation analysis and goodness-of-fit testing of genetic models suggest that rare mutations (frequency .0014; 95% confidence interval .0002-.011) account for all the observed aggregation of breast cancer and ovarian cancer in these families. The estimated risk of breast cancer by age 80 years is 73.5% in mutation carriers and 6.8% in noncarriers. The corresponding estimates for ovarian cancer are 27.8% in carriers and 1.8% in noncarriers. For cancer risk in carriers, these estimates are lower than those obtained from families selected for high cancer prevalence. The estimated proportion of all U.S. cancer diagnoses, by age 80 years, that are due to germ-line BRCA1 mutations is 3.0% for breast cancer and 4.4% for ovarian cancer. Aggregation of breast cancer and ovarian cancer was less evident in the families of 169 cases with borderline ovarian cancers than in the families of cases with invasive cancers. Familial aggregation did not differ by the ethnicity of the probands, although the number of non-White and Hispanic cases (N = 99) was sparse. 14 refs., 3 figs., 6 tabs.

  14. SYSTEM DESIGN AND ANALYSIS FOR CONCEPTUAL DESIGN OF OXYGEN-BASED...

    Office of Scientific and Technical Information (OSTI)

    Title: SYSTEM DESIGN AND ANALYSIS FOR CONCEPTUAL DESIGN OF OXYGEN-BASED PC BOILER. The objective of the system ...

  15. Global Trade Analysis Project (GTAP) Data Base | Open Energy...

    Open Energy Info (EERE)

    TOOL Name: GTAP 6 Data Base. Agency/Company/Organization: Purdue University. Sector: Energy. Topics: Policies/deployment programs, Co-benefits assessment, Macroeconomic, ...

  16. A High Resolution Hydrometer Phase Classifier Based on Analysis...

    Office of Scientific and Technical Information (OSTI)

    Satellite-based retrievals of cloud phase in high latitudes are often hindered by the highly reflecting ice-covered ground and persistent temperature inversions. From the ...

  17. Psychosocial Modeling of Insider Threat Risk Based on Behavioral and Word Use Analysis

    SciTech Connect (OSTI)

    Greitzer, Frank L.; Kangas, Lars J.; Noonan, Christine F.; Brown, Christopher R.; Ferryman, Thomas A.

    2013-10-01

    In many insider crimes, managers and other coworkers observed that the offenders had exhibited signs of stress, disgruntlement, or other issues, but no alarms were raised. Barriers to using such psychosocial indicators include the inability to recognize the signs and the failure to record the behaviors so that they can be assessed. A psychosocial model was developed to assess an employee’s behavior associated with an increased risk of insider abuse. The model is based on case studies and research literature on factors/correlates associated with precursor behavioral manifestations of individuals committing insider crimes. A complementary Personality Factor modeling approach was developed based on analysis to derive relevant personality characteristics from word use. Several implementations of the psychosocial model were evaluated by comparing their agreement with judgments of human resources and management professionals; the personality factor modeling approach was examined using email samples. If implemented in an operational setting, these models should be part of a set of management tools for employee assessment to identify employees who pose a greater insider threat.

  18. Microsoft PowerPoint - Microbial Genome and Metagenome Analysis Case Study (NERSC Workshop - May 7-8, 2009).ppt [Compatibility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Microbial Genome & Metagenome Analysis: Computational Challenges. Natalia N. Ivanova*, Nikos C. Kyrpides*, Victor M. Markowitz** (*Genome Biology Program, Joint Genome Institute; **Lawrence Berkeley National Lab). Microbial genome & metagenome analysis. General aims: understand microbial life; apply to agriculture, bioremediation, biofuels, human health. Specific aims include: predict biochemistry & physiology of organisms based on genome sequence; explain known

  19. Code cases for implementing risk-based inservice testing in the ASME OM code

    SciTech Connect (OSTI)

    Rowley, C.W.

    1996-12-01

    Historically inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly-conservative scope for IST components is to use the PRA and plant expert panels to create a two tier IST component categorization scheme. The PRA provides the quantitative risk information and the plant expert panel blends the quantitative and deterministic information to place the IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique for the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for a plant to use. One Code Case will be for Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategy for type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices.

  20. CHAPTER 7. BERYLLIUM ANALYSIS BY NON-PLASMA BASED METHODS

    SciTech Connect (OSTI)

    Ekechukwu, A

    2009-04-20

    The most common method of analysis for beryllium is inductively coupled plasma atomic emission spectrometry (ICP-AES). This method, along with inductively coupled plasma mass spectrometry (ICP-MS), is discussed in Chapter 6. However, other methods exist and have been used for different applications. These methods include spectroscopic, chromatographic, colorimetric, and electrochemical. This chapter provides an overview of beryllium analysis methods other than plasma spectrometry (inductively coupled plasma atomic emission spectrometry or mass spectrometry). The basic methods, detection limits and interferences are described. Specific applications from the literature are also presented.

  1. Digital Instrumentation and Control Failure Events Derivation and Analysis by Frame-Based Technique

    SciTech Connect (OSTI)

    Hui-Wen Huang; Chunkuan Shih [National Tsing Hua University, 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan 30013 (China); Swu Yih [DML International, 18F-1 295, Section 2 Kuang Fu Road, Hsinchu, Taiwan (China); Yen-Chang Tzeng; Ming-Huei Chen [Institute of Nuclear Energy Research, No. 1000, Wunhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan (China)

    2006-07-01

    A frame-based technique, including physical frame, logical frame, and cognitive frame, was adopted to perform digital I and C failure event derivation and analysis for the generic ABWR. The physical frame was structured with a modified PCTran-ABWR plant simulation code, which was extended and enhanced on the feedwater system, recirculation system, and steam line system. The logical model is structured with MATLAB, which was incorporated into PCTran-ABWR to improve the pressure control system, feedwater control system, recirculation control system, and automated power regulation control system. As a result, the software failure of these digital control systems can be properly simulated and analyzed. The cognitive frame was simulated by the operator awareness status in the scenarios. Moreover, via an internal characteristics tuning technique, the modified PCTran-ABWR can precisely reflect the characteristics of the power-core flow. Hence, in addition to the transient plots, the analysis results can be demonstrated on the power-core flow map. A number of postulated I and C system software failure events were derived to support the dynamic analyses. The basis for event derivation includes the published classification for software anomalies, the digital I and C design data for ABWR, chapter 15 accident analysis of the generic SAR, and reported NPP I and C software failure events. The case study of this research includes (1) software CMF analysis for the major digital control systems; and (2) derivation of postulated ABWR digital I and C software failure events from actual non-ABWR digital I and C software failure events reported to the LER of USNRC or the IRS of IAEA. These events were analyzed by PCTran-ABWR. Conflicts among plant status, computer status, and human cognitive status are successfully identified. The operator might not easily recognize the abnormal condition, because the computer status seems to progress normally. However, a well

  2. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    SciTech Connect (OSTI)

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  3. Rapid analysis of steels using laser-based techniques

    SciTech Connect (OSTI)

    Cremers, D.A.; Archuleta, F.L.; Dilworth, H.C.

    1985-01-01

    Based on the data obtained by this study, we conclude that laser-based techniques can be used to provide at least semi-quantitative information about the elemental composition of molten steel. Of the two techniques investigated here, the Sample-Only method appears preferable to the LIBS (laser-induced breakdown spectroscopy) method because of its superior analytical performance. In addition, the Sample-Only method would probably be easier to incorporate into a steel plant environment. However, before either technique can be applied to steel monitoring, additional research is needed.

  4. Temperature-based Instanton Analysis: Identifying Vulnerability in Transmission Networks

    SciTech Connect (OSTI)

    Kersulis, Jonas; Hiskens, Ian; Chertkov, Michael; Backhaus, Scott N.; Bienstock, Daniel

    2015-04-08

    A time-coupled instanton method for characterizing transmission network vulnerability to wind generation fluctuation is presented. To extend prior instanton work to multiple-time-step analysis, line constraints are specified in terms of temperature rather than current. An optimization formulation is developed to express the minimum wind forecast deviation such that at least one line is driven to its thermal limit. Results are shown for an IEEE RTS-96 system with several wind-farms.
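    The abstract describes time-coupled temperature constraints: a line's thermal limit is reached only after heating accumulates over several time steps. The sketch below is a hypothetical single-line toy, not the paper's optimization: conductor temperature follows a simple heat balance, and bisection finds the smallest constant wind-forecast deviation that drives the line to an assumed temperature limit within the horizon. All constants are illustrative.

    ```python
    # Hypothetical toy illustration of a time-coupled thermal line limit:
    # find the smallest constant wind deviation (MW) that drives a single
    # line to its temperature limit within the planning horizon.
    import numpy as np

    def line_temperature(deviation_mw, horizon_hours=24):
        """Integrate a simple conductor heat balance dT/dt = a*P^2 - b*(T - T_amb)."""
        a, b, t_amb = 4e-4, 0.8, 35.0              # illustrative heating/cooling constants
        base_flow = np.full(horizon_hours, 80.0)   # forecast flow on the line (MW)
        flow = base_flow + deviation_mw            # deviation shifts the whole profile
        temp = t_amb
        peak = temp
        for p in flow:
            temp += a * p**2 - b * (temp - t_amb)  # one-hour explicit Euler step
            peak = max(peak, temp)
        return peak

    def smallest_critical_deviation(t_limit=90.0, lo=0.0, hi=500.0, tol=0.1):
        """Bisection on the deviation magnitude until the peak temperature hits t_limit."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if line_temperature(mid) >= t_limit:
                hi = mid
            else:
                lo = mid
        return hi

    print(f"instanton-like deviation: {smallest_critical_deviation():.1f} MW")
    ```

    In the actual formulation this search is an optimization over the whole network and the full wind time series; the toy only conveys why the limit is expressed in temperature rather than current.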

  5. Intelligent Control in Automation Based on Wireless Traffic Analysis

    SciTech Connect (OSTI)

    Kurt Derr; Milos Manic

    2007-09-01

Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology led to Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why real-time interpretation and understanding of Bluetooth traffic behavior is critical both to maintaining the integrity of computer systems and to increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented, along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as making its use more secure.
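    As a rough sketch of the fuzzy half of such an approach (the neural tuning of rule parameters is omitted), the snippet below applies triangular membership functions to packet inter-arrival times and combines them with a simple rule. The membership functions, thresholds, and weights are invented for illustration and are not the paper's algorithm.

    ```python
    # Illustrative fuzzy-rule evaluation over Bluetooth packet inter-arrival times.
    # Membership functions and rule weights are hypothetical, not from the paper.
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function peaking at b on support [a, c]."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def classify_traffic(inter_arrival_ms):
        x = np.asarray(inter_arrival_ms, dtype=float)
        short = tri(x, 0.0, 1.0, 5.0).mean()        # bursty traffic
        medium = tri(x, 2.0, 10.0, 50.0).mean()     # steady control traffic
        long_ = tri(x, 20.0, 100.0, 500.0).mean()   # idle / keep-alive traffic
        # Simple rule base: bursty and steady evidence vs. idle evidence.
        anomaly_score = 0.7 * short + 0.3 * medium - 0.5 * long_
        return {"short": short, "medium": medium, "long": long_, "score": anomaly_score}

    print(classify_traffic([0.8, 1.2, 3.0, 45.0, 120.0]))
    ```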

  6. Intelligent Control in Automation Based on Wireless Traffic Analysis

    SciTech Connect (OSTI)

    Kurt Derr; Milos Manic

    2007-08-01

Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology led to Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why real-time interpretation and understanding of Bluetooth traffic behavior is critical both to maintaining the integrity of computer systems and to increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented, along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as making its use more secure.

  7. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    SciTech Connect (OSTI)

    Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego

    2014-12-01

    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  8. Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Case Studies. The following case studies will be included in the HEP report. Final case studies are due January 7, 2013. Lattice Gauge Theories - Lead: Doug Toussaint; Simulations for Cosmic Frontier Experiments - Leads: Peter Nugent & Andrew Connelly; Cosmic Microwave Background Data Analysis - Lead: Julian Borrill; Cosmological Simulations - Lead: Salman Habib; Plasma Accelerator Simulation Using Laser and Particle Beam Drivers - Leads: Cameron Geddes & Frank Tsung; Community

  9. Aerosol transport and wet scavenging in deep convective clouds: a case study and model evaluation using a multiple passive tracer analysis approach

    SciTech Connect (OSTI)

    Yang, Qing; Easter, Richard C.; Campuzano-Jost, Pedro; Jimenez, Jose L.; Fast, Jerome D.; Ghan, Steven J.; Wang, Hailong; Berg, Larry K.; Barth, Mary; Liu, Ying; Shrivastava, ManishKumar B.; Singh, Balwinder; Morrison, H.; Fan, Jiwen; Ziegler, Conrad L.; Bela, Megan; Apel, Eric; Diskin, G. S.; Mikoviny, Tomas; Wisthaler, Armin

    2015-08-20

    The effect of wet scavenging on ambient aerosols in deep, continental convective clouds in the mid-latitudes is studied for a severe storm case in Oklahoma during the Deep Convective Clouds and Chemistry (DC3) field campaign. A new passive-tracer based transport analysis framework is developed to characterize the convective transport based on the vertical distribution of several slowly reacting and nearly insoluble trace gases. The passive gas concentration in the upper troposphere convective outflow results from a mixture of 47% from the lower level (0-3 km), 21% entrained from the upper troposphere, and 32% from mid-atmosphere based on observations. The transport analysis framework is applied to aerosols to estimate aerosol transport and wet-scavenging efficiency. Observations yield high overall scavenging efficiencies of 81% and 68% for aerosol mass (Dp < 1μm) and aerosol number (0.03< Dp < 2.5μm), respectively. Little chemical selectivity to wet scavenging is seen among observed submicron sulfate (84%), organic (82%), and ammonium (80%) aerosols, while nitrate has a much lower scavenging efficiency of 57% likely due to the uptake of nitric acid. Observed larger size particles (0.15 - 2.5μm) are scavenged more efficiently (84%) than smaller particles (64%; 0.03 - 0.15μm). The storm is simulated using the chemistry version of the WRF model. Compared to the observation based analysis, the standard model underestimates the wet scavenging efficiency for both mass and number concentrations with low biases of 31% and 40%, respectively. Adding a new treatment of secondary activation significantly improves simulation results, so that the bias in scavenging efficiency in mass and number concentrations is reduced to <10%. This supports the hypothesis that secondary activation is an important process for wet removal of aerosols in deep convective storms.
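    The mixing fractions quoted above (47% boundary layer, 21% upper troposphere, 32% mid-levels) define the "no scavenging" expectation against which the observed outflow is compared. The sketch below shows that arithmetic; the aerosol concentrations are made-up illustrative numbers, not the DC3 observations.

    ```python
    # Scavenging-efficiency sketch using the passive-tracer mixing fractions quoted
    # in the abstract (47% boundary layer, 21% upper troposphere, 32% mid-levels).
    # Aerosol concentrations below are hypothetical illustrative values.

    fractions = {"low (0-3 km)": 0.47, "upper troposphere": 0.21, "mid-levels": 0.32}
    aerosol_in = {"low (0-3 km)": 4.0, "upper troposphere": 0.3, "mid-levels": 1.2}  # ug/m3

    # Expected outflow concentration if aerosol were transported like a passive tracer
    expected = sum(fractions[k] * aerosol_in[k] for k in fractions)

    observed_outflow = 0.45  # ug/m3, hypothetical measurement in the convective outflow
    scavenging_efficiency = 1.0 - observed_outflow / expected

    print(f"expected (no scavenging): {expected:.2f} ug/m3")
    print(f"scavenging efficiency: {scavenging_efficiency:.0%}")
    ```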

  10. ANL/DIS/TM-40 Load Flow Analysis: Base Cases, Data, Diagrams,...

    Office of Scientific and Technical Information (OSTI)

... set. 4.4 IMPLICATIONS OF THE ADOPTED MODELING APPROACH By representing the MW and ... those that would result if ComEd were simulated in the context of the full MAIN model. ...

  11. U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis...

    Open Energy Info (EERE)

OpenEI Reference Library Report: U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis

  12. Proteomics based compositional analysis of complex cellulase-hemicellulase mixtures

    SciTech Connect (OSTI)

    Chundawat, Shishir P.; Lipton, Mary S.; Purvine, Samuel O.; Uppugundla, Nirmal; Gao, Dahai; Balan, Venkatesh; Dale, Bruce E.

    2011-10-07

    Efficient deconstruction of cellulosic biomass to fermentable sugars for fuel and chemical production is accomplished by a complex mixture of cellulases, hemicellulases and accessory enzymes (e.g., >50 extracellular proteins). Cellulolytic enzyme mixtures, produced industrially mostly using fungi like Trichoderma reesei, are poorly characterized in terms of their protein composition and its correlation to hydrolytic activity on cellulosic biomass. The secretomes of commercial glycosyl hydrolase producing microbes was explored using a proteomics approach with high-throughput quantification using liquid chromatography-tandem mass spectrometry (LC-MS/MS). Here, we show that proteomics based spectral counting approach is a reasonably accurate and rapid analytical technique that can be used to determine protein composition of complex glycosyl hydrolase mixtures that also correlates with the specific activity of individual enzymes present within the mixture. For example, a strong linear correlation was seen between Avicelase activity and total cellobiohydrolase content. Reliable, quantitative and cheaper analytical methods that provide insight into the cellulosic biomass degrading fungal and bacterial secretomes would lead to further improvements towards commercialization of plant biomass derived fuels and chemicals.
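    A minimal sketch of the spectral-counting idea follows: spectral counts are length-normalized into normalized spectral abundance factors (NSAF), and the abundance of a protein family is correlated with a measured activity (the abstract's Avicelase-cellobiohydrolase example). The protein names, counts, lengths, and activities below are invented for illustration.

    ```python
    # Normalized spectral abundance factor (NSAF) sketch for a cellulase mixture.
    # Spectral counts, protein lengths, and Avicelase activities are illustrative.
    import numpy as np

    proteins = ["CBH1", "CBH2", "EG1", "XYN2"]
    spectral_counts = np.array([[520, 310, 120, 40],   # secretome A
                                [400, 250, 180, 90],   # secretome B
                                [650, 420, 90, 30]])   # secretome C
    lengths = np.array([513, 471, 459, 222])           # residues per protein

    saf = spectral_counts / lengths                     # counts per residue
    nsaf = saf / saf.sum(axis=1, keepdims=True)         # normalize within each sample

    cbh_content = nsaf[:, :2].sum(axis=1)               # total cellobiohydrolase fraction
    avicelase = np.array([1.00, 0.78, 1.15])            # relative Avicelase activity

    r = np.corrcoef(cbh_content, avicelase)[0, 1]
    print("CBH NSAF per sample:", np.round(cbh_content, 3))
    print(f"Pearson r with Avicelase activity: {r:.2f}")
    ```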

  13. A Monte Carlo based spent fuel analysis safeguards strategy assessment

    SciTech Connect (OSTI)

    Fensin, Michael L; Tobin, Stephen J; Swinhoe, Martyn T; Menlove, Howard O; Sandoval, Nathan P

    2009-01-01

    assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling time dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the results of the assessment will yield an adequate spent fuel analysis strategy knowledge that will help the down-select process for other reactor types.

  14. Analyzing simulation-based PRA data through traditional and topological clustering: A BWR station blackout case study

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Maljovec, D.; Liu, S.; Wang, B.; Mandelli, D.; Bremer, P. -T.; Pascucci, V.; Smith, C.

    2015-07-14

Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.
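    As a sketch of the "traditional clustering" half of that analysis (the topological, Morse-Smale side is not reproduced), the snippet below runs a plain k-means over synthetic scenario features such as battery depletion time and peak clad temperature. The data, feature choices, and cluster count are hypothetical.

    ```python
    # Plain NumPy k-means over synthetic DPRA scenario features (illustrative only;
    # the paper's topological clustering is not reproduced here).
    import numpy as np

    rng = np.random.default_rng(0)
    # Two synthetic scenario groups: (battery depletion time [h], peak clad temp [K])
    recovered = rng.normal([6.0, 1000.0], [1.0, 60.0], size=(200, 2))
    core_damage = rng.normal([3.0, 1500.0], [0.8, 80.0], size=(100, 2))
    scenarios = np.vstack([recovered, core_damage])

    def kmeans(data, k=2, iters=50):
        centers = data[rng.choice(len(data), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((data[:, None, :] - centers) ** 2).sum(-1), axis=1)
            new_centers = []
            for j in range(k):
                members = data[labels == j]
                new_centers.append(members.mean(axis=0) if len(members) else centers[j])
            centers = np.array(new_centers)
        return labels, centers

    labels, centers = kmeans(scenarios)
    print("cluster centers (hours, K):\n", np.round(centers, 1))
    print("cluster sizes:", np.bincount(labels))
    ```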

  15. Analyzing simulation-based PRA data through traditional and topological clustering: A BWR station blackout case study

    SciTech Connect (OSTI)

    Maljovec, D.; Liu, S.; Wang, B.; Mandelli, D.; Bremer, P. -T.; Pascucci, V.; Smith, C.

    2015-07-14

    Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.

  16. EVENT TREE ANALYSIS AT THE SAVANNAH RIVER SITE: A CASE HISTORY

    SciTech Connect (OSTI)

    Williams, R

    2009-05-25

At the Savannah River Site (SRS), a Department of Energy (DOE) installation in west-central South Carolina, a unique geologic stratum exists at depth that has the potential to cause surface settlement resulting from a seismic event. In the past, the stratum in question has been remediated via pressure grouting; however, the benefits of remediation have always been debatable. Recently, the SRS has attempted to frame the issue in terms of risk via an event tree or logic tree analysis. This paper describes that analysis, including the input data required.

  17. Business Case Analysis for Replacing the Mazak 30Y Mill-Turn Machine in SM-39. Summary

    SciTech Connect (OSTI)

    Booth, Steven Richard; Dinehart, Timothy Grant; Benson, Faith Ann

    2015-03-19

Business case studies are being developed to support procurement of new machines and capital equipment in the SM-39 and TA-03-0102 machine shops. The first effort was an economic analysis of replacing the Mazak 30Y Mill-Turn Machine located in SM-39. To determine the value of switching machinery, a baseline scenario was compared with a future scenario in which new machinery was purchased and installed. The conditions under the two scenarios were defined via interviews with subject matter experts in terms of one-time and periodic costs. The results of the analysis were compiled in a life-cycle cost/benefit table. The costs of procuring, installing, and maintaining a new machine were balanced against the costs avoided by replacing older machinery. Productivity savings were included as a measure of the costs avoided by being able to produce parts at a quicker and more efficient pace.

  18. Macroalgae Analysis A National GIS-based Analysis of Macroalgae Production Potential Summary Report and Project Plan

    SciTech Connect (OSTI)

    Roesijadi, Guritno; Coleman, Andre M.; Judd, Chaeli; Van Cleve, Frances B.; Thom, Ronald M.; Buenau, Kate E.; Tagestad, Jerry D.; Wigmosta, Mark S.; Ward, Jeffrey A.

    2011-12-01

    The overall project objective is to conduct a strategic analysis to assess the state of macroalgae as a feedstock for biofuels production. The objective in FY11 is to develop a multi-year systematic national assessment to evaluate the U.S. potential for macroalgae production using a GIS-based assessment tool and biophysical growth model developed as part of these activities. The initial model development for both resource assessment and constraints was completed and applied to the demonstration areas. The model for macroalgal growth was extended to the EEZ off the East and West Coasts of the United States, and a plan to merge the findings for an initial composite assessment was developed. In parallel, an assessment of land-based, port, and offshore infrastructure needs based on published and grey literature was conducted. Major information gaps and challenges encountered during this analysis were identified. Also conducted was an analysis of the type of local, state, and federal requirements that pertain to permitting land-based facilities and nearshore/offshore culture operations

  19. UXO detection and identification based on intrinsic target polarizabilities: A case history

    SciTech Connect (OSTI)

    Gasperikova, E.; Smith, J.T.; Morrison, H.F.; Becker, A.; Kappler, K.

    2008-07-15

    Electromagnetic induction data parameterized in time dependent object intrinsic polarizabilities allow discrimination of unexploded ordnance (UXO) from false targets (scrap metal). Data from a cart-mounted system designed for discrimination of UXO with 20 mm to 155 mm diameters are used. Discrimination of UXO from irregular scrap metal is based on the principal dipole polarizabilities of a target. A near-intact UXO displays a single major polarizability coincident with the long axis of the object and two equal smaller transverse polarizabilities, whereas metal scraps have distinct polarizability signatures that rarely mimic those of elongated symmetric bodies. Based on a training data set of known targets, object identification was made by estimating the probability that an object is a single UXO. Our test survey took place on a military base where both 4.2-inch mortar shells and scrap metal were present. The results show that we detected and discriminated correctly all 4.2-inch mortars, and in that process we added 7%, and 17%, respectively, of dry holes (digging scrap) to the total number of excavations in two different survey modes. We also demonstrated a mode of operation that might be more cost effective than the current practice.

  20. Energy-water analysis of the 10-year WECC transmission planning study cases.

    SciTech Connect (OSTI)

    Tidwell, Vincent Carroll; Passell, Howard David; Castillo, Cesar; Moreland, Barbara

    2011-11-01

    calculating water withdrawal and consumption for current and planned electric power generation; projected water demand from competing use sectors; and, surface and groundwater availability. WECC's long range planning is organized according to two target planning horizons, a 10-year and a 20-year. This study supports WECC in the 10-year planning endeavor. In this case the water implications associated with four of WECC's alternative future study cases (described below) are calculated and reported. In future phases of planning we will work with WECC to craft study cases that aim to reduce the thermoelectric footprint of the interconnection and/or limit production in the most water stressed regions of the West.

  1. Aminoindazole PDK1 Inhibitors: A Case Study in Fragment-Based Drug Discovery

    SciTech Connect (OSTI)

    Medina, Jesus R.; Blackledge, Charles W.; Heerding, Dirk A.; Campobasso, Nino; Ward, Paris; Briand, Jacques; Wright, Lois; Axten, Jeffrey M.

    2012-05-29

    Fragment screening of phosphoinositide-dependent kinase-1 (PDK1) in a biochemical kinase assay afforded hits that were characterized and prioritized based on ligand efficiency and binding interactions with PDK1 as determined by NMR. Subsequent crystallography and follow-up screening led to the discovery of aminoindazole 19, a potent leadlike PDK1 inhibitor with high ligand efficiency. Well-defined structure-activity relationships and protein crystallography provide a basis for further elaboration and optimization of 19 as a PDK1 inhibitor.
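    Ligand efficiency, used above to prioritize fragment hits, is commonly approximated as LE ≈ 1.37 × pIC50 / N_heavy (kcal/mol per heavy atom near room temperature). The worked example below uses invented potencies and atom counts, not the paper's data, to show how a weak fragment and an elaborated lead can share a similar LE.

    ```python
    # Ligand efficiency (LE) sketch used to rank fragment hits:
    # LE ~= 1.37 * pIC50 / N_heavy  (kcal/mol per heavy atom, at ~300 K).
    # The fragments and potencies below are hypothetical, not from the paper.
    import math

    def ligand_efficiency(ic50_molar, heavy_atoms):
        pIC50 = -math.log10(ic50_molar)
        return 1.37 * pIC50 / heavy_atoms

    fragments = {
        "fragment hit": (250e-6, 13),              # 250 uM, 13 heavy atoms
        "elaborated aminoindazole": (50e-9, 26),   # 50 nM, 26 heavy atoms
    }
    for name, (ic50, n) in fragments.items():
        print(f"{name}: LE = {ligand_efficiency(ic50, n):.2f} kcal/mol per heavy atom")
    ```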

  2. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    SciTech Connect (OSTI)

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.; Pebay, Philippe Pierre; Gentile, Ann C.; Thompson, David C.; Roe, Diana C.; De Sapio, Vincent; Brandt, James M.

    2010-08-01

The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
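    A minimal sketch of the graph-synthesis idea, assuming a simplified job record: jobs become vertices, and weighted edges combine shared compute nodes with temporal overlap. The field names and the edge-weight rule are illustrative placeholders, not the report's ontology.

    ```python
    # Sketch of synthesizing a job-relationship graph from queue/execution records.
    # Record fields and edge-weight rule are illustrative, not the report's ontology.
    import itertools
    import networkx as nx

    jobs = [
        {"id": "job1", "user": "alice", "nodes": {"n01", "n02"}, "start": 0, "end": 4},
        {"id": "job2", "user": "bob",   "nodes": {"n02", "n03"}, "start": 3, "end": 7},
        {"id": "job3", "user": "alice", "nodes": {"n09"},        "start": 10, "end": 12},
    ]

    g = nx.Graph()
    for job in jobs:
        g.add_node(job["id"], user=job["user"])

    for a, b in itertools.combinations(jobs, 2):
        shared = len(a["nodes"] & b["nodes"])                               # shared compute nodes
        overlap = max(0, min(a["end"], b["end"]) - max(a["start"], b["start"]))  # hours of overlap
        weight = shared + 0.1 * overlap          # illustrative combination of the two relations
        if weight > 0:
            g.add_edge(a["id"], b["id"], weight=weight)

    print(g.edges(data=True))
    ```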

  3. On the Existence of Our Metals-Based Civilization: I. Phase Space Analysis

    SciTech Connect (OSTI)

    D.D. Macdonald

    2005-06-22

The stability of the barrier layers of bilayer passive films that form on metal and alloy surfaces, when in contact with oxidizing aqueous environments, is explored within the framework of the Point Defect Model (PDM) using phase-space analysis (PSA), in which the rate of growth of the barrier layer into the metal, dL+/dt, and the barrier layer dissolution rate, dL-/dt, are plotted simultaneously against the barrier layer thickness. A point of intersection of dL-/dt with dL+/dt indicates the existence of a metastable barrier layer with a steady-state thickness greater than zero. If dL-/dt > (dL+/dt) evaluated at L = 0, where the latter quantity is the barrier layer growth rate at zero barrier layer thickness, the barrier layer cannot exist, even as a metastable phase, as the resulting thickness would be negative. Under these conditions, the surface is depassivated and the metal may corrode at a rapid rate. Depassivation may result from a change in the oxidation state of the cation upon dissolution of the barrier layer, such that the dissolution rate becomes highly potential dependent (as in the case of transpassive dissolution of chromium-containing alloys, for example, in which the reaction Cr2O3 + 5H2O -> 2CrO4^2- + 10H+ + 6e- results in the destruction of the film), or by the action of some solution-phase species (e.g., H+, Cl-) that enhances the dissolution rate to the extent that dL-/dt > (dL+/dt) at L = 0. The boundaries for depassivation may be plotted in potential-pH space to develop Kinetic Stability Diagrams (KSDs) as alternatives to the classical Pourbaix diagrams for describing the conditions under which metals or alloys exist in contact with an aqueous environment. The advantage of KSDs is that they provide kinetic descriptions of the state of a metal or alloy that are in much closer concert with the kinetic phenomena of passivity and depassivation.
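    A minimal numerical sketch of the phase-space construction described above, assuming (as is typical of PDM-like treatments) that the growth rate decays exponentially with thickness while the dissolution rate is thickness-independent: the steady state is their intersection, and depassivation corresponds to dissolution exceeding growth at zero thickness. The rate constants are illustrative, not fitted PDM parameters.

    ```python
    # Phase-space sketch for a bilayer passive film: growth rate dL+/dt decays with
    # thickness L, dissolution rate dL-/dt is constant. Constants are illustrative.
    import numpy as np

    A, a = 1.0e-3, 2.0          # growth prefactor (nm/s) and decay constant (1/nm)
    dissolution_rate = 2.0e-4   # nm/s

    def growth_rate(L):
        return A * np.exp(-a * L)

    # A steady-state thickness exists only if dissolution < growth rate at L = 0.
    if dissolution_rate < growth_rate(0.0):
        L_ss = np.log(A / dissolution_rate) / a
        print(f"metastable barrier layer, steady-state thickness ~ {L_ss:.2f} nm")
    else:
        print("dL-/dt exceeds (dL+/dt) at L = 0: the surface depassivates")
    ```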

  4. Dynamic Slope Stability Analysis of Mine Tailing Deposits: the Case of Raibl Mine

    SciTech Connect (OSTI)

    Roberto, Meriggi; Marco, Del Fabbro; Erica, Blasone; Erica, Zilli

    2008-07-08

Over the last few years, many embankments and levees have collapsed during strong earthquakes or floods. In the Friuli Venezia Giulia Region (North-Eastern Italy), the main source of this type of risk is a slag deposit of about 2×10^6 m^3 deriving from galena and lead mining activity until 1991 in the village of Raibl. For the final remedial action plan, several in situ tests were performed: five boreholes equipped with piezometers, four CPTE and some geophysical tests with different approaches (refraction, ReMi and HVSR). Laboratory tests were conducted on the collected samples: geotechnical classification, triaxial compression tests and constant head permeability tests in triaxial cell. Pressure plate tests were also done on unsaturated slag to evaluate the characteristic soil-water curve useful for transient seepage analysis. A seepage analysis was performed in order to obtain the maximum pore water pressures during the intense rainfall event which hit the area on 29th August 2003. The results highlight that the slag low permeability prevents the infiltration of rainwater, which instead seeps easily through the boundary levees built with coarse materials. For this reason pore water pressures inside the deposits are not particularly influenced by rainfall intensity and frequency. Seismic stability analysis was performed with both the pseudo-static method, coupled with Newmark's method, and dynamic methods, using as design earthquake the one registered in Tolmezzo (Udine) on 6th May 1976. The low reduction of safety factors and the development of very small cumulative displacements show that the stability of embankments is assured even if an earthquake of magnitude 6.4 and a daily rainfall of 141.6 mm occur at the same time.
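    A minimal sketch of the Newmark rigid-block method mentioned above: ground acceleration exceeding the slope's yield acceleration is integrated twice to obtain a cumulative displacement. The synthetic acceleration record and yield value are illustrative; they are not the Tolmezzo 1976 record or the Raibl slope parameters.

    ```python
    # Newmark sliding-block sketch: integrate ground acceleration exceeding the
    # yield acceleration to get cumulative displacement. Synthetic record only.
    import numpy as np

    dt = 0.01                                   # s
    t = np.arange(0.0, 20.0, dt)
    accel = 0.25 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.15 * t)  # m/s^2
    a_yield = 0.10 * 9.81                       # yield acceleration of the slope (m/s^2)

    velocity = 0.0
    displacement = 0.0
    for a_g in accel:
        # The block accelerates only while ground acceleration exceeds yield,
        # or while it is still sliding (velocity > 0).
        if a_g > a_yield or velocity > 0.0:
            velocity = max(0.0, velocity + (a_g - a_yield) * dt)
            displacement += velocity * dt

    print(f"cumulative Newmark displacement: {displacement * 100:.1f} cm")
    ```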

  5. Appendix E: Other NEMS-MP results for the base case and scenarios.

    SciTech Connect (OSTI)

    Plotkin, S. E.; Singh, M. K.; Energy Systems

    2009-12-03

    The NEMS-MP model generates numerous results for each run of a scenario. (This model is the integrated National Energy Modeling System [NEMS] version used for the Multi-Path Transportation Futures Study [MP].) This appendix examines additional findings beyond the primary results reported in the Multi-Path Transportation Futures Study: Vehicle Characterization and Scenario Analyses (Reference 1). These additional results are provided in order to help further illuminate some of the primary results. Specifically discussed in this appendix are: (1) Energy use results for light vehicles (LVs), including details about the underlying total vehicle miles traveled (VMT), the average vehicle fuel economy, and the volumes of the different fuels used; (2) Resource fuels and their use in the production of ethanol, hydrogen (H{sub 2}), and electricity; (3) Ethanol use in the scenarios (i.e., the ethanol consumption in E85 vs. other blends, the percent of travel by flex fuel vehicles on E85, etc.); (4) Relative availability of E85 and H2 stations; (5) Fuel prices; (6) Vehicle prices; and (7) Consumer savings. These results are discussed as follows: (1) The three scenarios (Mixed, (P)HEV & Ethanol, and H2 Success) when assuming vehicle prices developed through literature review; (2) The three scenarios with vehicle prices that incorporate the achievement of the U.S. Department of Energy (DOE) program vehicle cost goals; (3) The three scenarios with 'literature review' vehicle prices, plus vehicle subsidies; and (4) The three scenarios with 'program goals' vehicle prices, plus vehicle subsidies. The four versions or cases of each scenario are referred to as: Literature Review No Subsidies, Program Goals No Subsidies, Literature Review with Subsidies, and Program Goals with Subsidies. Two additional points must be made here. First, none of the results presented for LVs in this section include Class 2B trucks. Results for this class are included occasionally in Reference 1. They

  6. The Business Case for SEP | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Facilities pursue certification to Superior Energy Performance® (SEP™) to achieve an attractive return on investment while enhancing sustainability. The business case for SEP is based on detailed accounts from facilities that have implemented ISO 50001 and SEP. Gain an insider's view from these pioneers. Read the cost-benefit analysis and case studies, and view videos and presentations.

  7. An analysis of uranium dispersal and health effects using a Gulf War case study.

    SciTech Connect (OSTI)

    Marshall, Albert Christian

    2005-07-01

    The study described in this report used mathematical modeling to estimate health risks from exposure to depleted uranium (DU) during the 1991 Gulf War for both U.S. troops and nearby Iraqi civilians. The analysis found that the risks of DU-induced leukemia or birth defects are far too small to result in an observable increase in these health effects among exposed veterans or Iraqi civilians. Only a few veterans in vehicles accidentally struck by U.S. DU munitions are predicted to have inhaled sufficient quantities of DU particulate to incur any significant health risk (i.e., the possibility of temporary kidney damage from the chemical toxicity of uranium and about a 1% chance of fatal lung cancer). The health risk to all downwind civilians is predicted to be extremely small. Recommendations for monitoring are made for certain exposed groups. Although the study found fairly large calculational uncertainties, the models developed and used are generally valid. The analysis was also used to assess potential uranium health hazards for workers in the weapons complex. No illnesses are projected for uranium workers following standard guidelines; nonetheless, some research suggests that more conservative guidelines should be considered.

  8. Station Blackout: A case study in the interaction of mechanistic and probabilistic safety analysis

    SciTech Connect (OSTI)

    Curtis Smith; Diego Mandelli; Cristian Rabiti

    2013-11-01

The ability to better characterize and quantify safety margins is important to improved decision making about nuclear power plant design, operation, and plant life extension. As research and development (R&D) in the Light Water Reactor Sustainability (LWRS) Program and other collaborative efforts yield new data, sensors, and improved scientific understanding of the physical processes that govern the aging and degradation of plant SSCs, needs and opportunities to better optimize plant safety and performance will become known. The purpose of the Risk Informed Safety Margin Characterization (RISMC) Pathway R&D is to support plant decisions for risk-informed margin management, with the aim of improving the economics and reliability, and sustaining the safety, of current NPPs. In this paper, we describe the RISMC analysis process, illustrating how mechanistic and probabilistic approaches are combined to estimate a safety margin. We use the scenario of a "station blackout," wherein offsite and onsite power are lost, thereby challenging plant safety systems. We describe the RISMC approach, illustrate the station blackout modeling, and contrast this with traditional risk analysis modeling for this type of accident scenario.
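    A minimal Monte Carlo sketch of the margin idea described above: a "load" distribution (here, peak clad temperature from hypothetical station blackout simulations) is compared against a "capacity" limit, and the exceedance probability summarizes the margin. Both distributions are invented placeholders, not RISMC results.

    ```python
    # Probabilistic safety-margin sketch: probability that a simulated "load"
    # (peak clad temperature) exceeds the "capacity" limit. Distributions invented.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Load: peak clad temperature (K) from hypothetical station-blackout simulations
    peak_clad_temp = rng.normal(loc=1300.0, scale=120.0, size=n)
    # Capacity: a limit, with a small uncertainty band
    limit = rng.normal(loc=1478.0, scale=20.0, size=n)

    margin = limit - peak_clad_temp
    prob_exceed = np.mean(margin < 0.0)

    print(f"mean margin: {margin.mean():.0f} K")
    print(f"probability of exceeding the limit: {prob_exceed:.4f}")
    ```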

  9. Preliminary Analysis and Case Study of Transmission Constraints and Wind Energy in the West: Preprint

    SciTech Connect (OSTI)

    Milligan, M.; Berger, D. P.

    2005-05-01

    Wind developers typically need long-term transmission service to finance their projects; however, most of the capacity on several key paths is reserved by existing firm contracts. Because non-firm contracts are only offered for periods up to 1 year, obtaining financing for the wind project is generally not possible when firm capacity is unavailable. However, sufficient capacity may exist on the constrained paths for new wind projects that can risk curtailment for a small number of hours of the year. This paper presents the results of a study sponsored by the National Renewable Energy Laboratory (NREL), a work group participant in the Rocky Mountain Area Transmission Study (RMATS). Using recent historical power flow data, case studies were conducted on the constrained paths between Wyoming-Colorado (TOT3) and Montana-Northwest, coinciding with areas of exceptional wind resources. The potential curtailment frequency for hypothetical 100-MW and 500-MW wind plants was calculated using hourly wind data. The results from the study indicate that sufficient potential exists for innovative transmission products that can help bring more wind to load centers and increase the efficiency of the existing transmission network.
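    A minimal sketch of the curtailment screening described above: for each hour, the hypothetical wind plant's output is compared against the headroom left on the constrained path, and the hours requiring curtailment are counted. Path ratings, flows, and wind output below are synthetic, not the RMATS power-flow data.

    ```python
    # Curtailment-frequency sketch: hours per year a wind plant on a constrained
    # path would be curtailed. Path flows and wind output are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    hours = 8760
    path_rating = 1600.0                                     # MW, illustrative path limit
    existing_flow = rng.normal(1350.0, 150.0, hours).clip(0, path_rating)
    wind_capacity = 100.0                                    # MW nameplate
    wind_output = wind_capacity * rng.beta(1.5, 3.0, hours)  # crude hourly capacity factor

    headroom = path_rating - existing_flow
    curtailed_mwh = np.maximum(wind_output - headroom, 0.0)
    curtailed_hours = int((curtailed_mwh > 0).sum())

    print(f"hours with curtailment: {curtailed_hours} of {hours}")
    print(f"energy curtailed: {curtailed_mwh.sum():.0f} MWh "
          f"({curtailed_mwh.sum() / wind_output.sum():.1%} of wind energy)")
    ```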

  10. Lipid-Based Nanodiscs as Models for Studying Mesoscale Coalescence: A Transport-Limited Case

    SciTech Connect (OSTI)

    Hu, Andrew; Fan, Tai-Hsi; Katsaras, John; Xia, Yan; Li, Ming; Nieh, Mu-Ping

    2014-01-01

Lipid-based nanodiscs (bicelles) are able to form in mixtures of long- and short-chain lipids. Initially, they are of uniform size but grow upon dilution. Previously, nanodisc growth kinetics have been studied using time-resolved small-angle neutron scattering (SANS), a technique which is not well suited for probing their change in size immediately after dilution. To address this, we have used dynamic light scattering (DLS), a technique which permits the collection of useful data in a short span of time after dilution of the system. The DLS data indicate that the negatively charged lipids in nanodiscs play a significant role in disc stability and growth. Specifically, the charged lipids are most likely drawn out from the nanodiscs into solution, thereby reducing interparticle repulsion and enabling the discs to grow. We describe a population balance model, which takes into account Coulombic interactions and adequately predicts the initial growth of nanodiscs with a single parameter, i.e., surface potential. The results presented here strongly support the notion that the disc coalescence rate strongly depends on nanoparticle charge density. The present system containing low-polydispersity lipid nanodiscs serves as a good model for understanding how charged discoidal micelles coalesce.

  11. Technology Solutions Case Study: Apartment Compartmentalization with an Aerosol-Based Sealing Process

    SciTech Connect (OSTI)

    2015-07-01

Air sealing of building enclosures is a difficult and time-consuming process. Current methods in new construction require laborers to physically locate small and sometimes large holes in multiple assemblies and then manually seal each of them. This research study by the Building America team Consortium for Advanced Residential Buildings (CARB) demonstrated the automated air sealing and compartmentalization of buildings through the use of an aerosolized sealant developed by the Western Cooling Efficiency Center at the University of California, Davis. CARB demonstrated this new technology application in a multifamily building in Queens, NY. The effectiveness of the sealing process was evaluated by three methods: air leakage testing of the overall apartment before and after sealing, point-source testing of individual leaks, and pressure measurements in the walls of the target apartment during sealing. Aerosolized sealing was successful by several measures in this study. Many individual leaks that are labor-intensive to address separately were well sealed by the aerosol particles. In addition, many diffuse leaks that are difficult to identify and treat were also sealed. The aerosol-based sealing process resulted in an average reduction of 71% in air leakage across three apartments and an average apartment airtightness of 0.08 CFM50/SF of enclosure area.

  12. IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases

    SciTech Connect (OSTI)

    Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

    2012-11-01

    Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.

  13. Framework for the Economic Analysis of Hybrid Systems Based on Exergy Consumption

    SciTech Connect (OSTI)

    Cristian Rabiti; Robert S. Cherry; Wesley R. Deason; Piyush Sabharwall; Shannon M. Bragg-Sitton; Richard D. Boardman

    2014-08-01

Starting from an overview of the dynamic behavior of the electricity market, the need to introduce energy users that provide a damping capability to the system is derived, and a qualitative analysis of the impact of uncertainty on both the demand and supply sides is performed. An introduction to investment analysis methodologies based on discounted cash flow follows, and the work concludes with the illustration and application of exergonomic principles to provide a sound methodology for the cost accounting of plant components to be used in the cash flow analysis.

  14. 2007 Wholesale Power Rate Case Initial Proposal : Risk Analysis Study Documentation.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2005-11-01

    The RiskMod Model is comprised of a set of risk simulation models, collectively referred to as RiskSim; a set of computer programs that manages data referred to as Data Management Procedures; and RevSim, a model that calculates net revenues. RiskMod interacts with the AURORA Model, the RAM2007, and the ToolKit Model during the process of performing the Risk Analysis Study. AURORA is the computer model being used to perform the Market Price Forecast Study (see Market Price Forecast Study, WP-07-E-BPA-03); the RAM2007 is the computer model being used to calculate rates (see Wholesale Power Rate Development Study, WP-07-E-BPA-05); and the ToolKit is the computer model being used to develop the risk mitigation package that achieves BPA's 92.6 percent TPP standard (see Section 3 in the Risk Analysis Study, WP-07-E-BPA-04). Variations in monthly loads, resources, natural gas prices, forward market electricity prices, transmission expenses, and aluminum smelter benefit payments are simulated in RiskSim. Monthly spot market electricity prices for the simulated loads, resources, and natural gas prices are estimated by the AURORA Model. Data Management Procedures facilitate the format and movement of data that flow to and/or from RiskSim, AURORA, and RevSim. RevSim estimates net revenues using risk data from RiskSim, spot market electricity prices from AURORA, loads and resources data from the Load Resource Study, WP-07-E-BPA-01, various revenues from the Revenue Forecast component of the Wholesale Power Rate Development Study, WP-07-E-BPA-05, and rates and expenses from the RAM2007. Annual average surplus energy revenues, purchased power expenses, and section 4(h)(10)(C) credits calculated by RevSim are used in the Revenue Forecast and the RAM2007. Heavy Load Hour (HLH) and Light Load Hour (LLH) surplus and deficit energy values from RevSim are used in the Transmission Expense Risk Model. Net revenues estimated for each simulation by RevSim are input into the ToolKit Model

  15. 2007 Wholesale Power Rate Case Final Proposal : Risk Analysis Study Documentation.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2006-07-01

    The RiskMod Model is comprised of a set of risk simulation models, collectively referred to as RiskSim; a set of computer programs that manages data referred to as Data Management Procedures; and RevSim, a model that calculates net revenues. RiskMod interacts with the AURORA Model, the RAM2007, and the ToolKit Model during the process of performing the Risk Analysis Study. AURORA is the computer model being used to perform the Market Price Forecast Study (see Market Price Forecast Study, WP-07-FS-BPA-03); the RAM2007 is the computer model being used to calculate rates (see Wholesale Power Rate Development Study, WP-07-FS-BPA-05); and the ToolKit is the computer model being used to develop the risk mitigation package that achieves BPA's 92.6 percent TPP standard (see Section 3 in the Risk Analysis Study, WP-07-FS-BPA-04). Variations in monthly loads, resources, natural gas prices, forward market electricity prices, transmission expenses, and aluminum smelter benefit payments are simulated in RiskSim. Monthly spot market electricity prices for the simulated loads, resources, and natural gas prices are estimated by the AURORA Model. Data Management Procedures facilitate the format and movement of data that flow to and/or from RiskSim, AURORA, and RevSim. RevSim estimates net revenues using risk data from RiskSim, spot market electricity prices from AURORA, loads and resources data from the Load Resource Study, WP-07-FS-BPA-01, various revenues from the Revenue Forecast component of the Wholesale Power Rate Development Study, WP-07-FSBPA-05, and rates and expenses from the RAM2007. Annual average surplus energy revenues, purchased power expenses, and section 4(h)(10)(C) credits calculated by RevSim are used in the Revenue Forecast and the RAM2007. Heavy Load Hour (HLH) and Light Load Hour (LLH) surplus and deficit energy values from RevSim are used in the Transmission Expense Risk Model. Net revenues estimated for each simulation by RevSim are input into the Tool

  16. Distributed energy resources in practice: A case study analysis and validation of LBNL's customer adoption model

    SciTech Connect (OSTI)

    Bailey, Owen; Creighton, Charles; Firestone, Ryan; Marnay, Chris; Stadler, Michael

    2003-02-01

    This report describes a Berkeley Lab effort to model the economics and operation of small-scale (<500 kW) on-site electricity generators based on real-world installations at several example customer sites. This work builds upon the previous development of the Distributed Energy Resource Customer Adoption Model (DER-CAM), a tool designed to find the optimal combination of installed equipment, and idealized operating schedule, that would minimize the site's energy bills, given performance and cost data on available DER technologies, utility tariffs, and site electrical and thermal loads over a historic test period, usually a recent year. This study offered the first opportunity to apply DER-CAM in a real-world setting and evaluate its modeling results. DER-CAM has three possible applications: first, it can be used to guide choices of equipment at specific sites, or provide general solutions for example sites and propose good choices for sites with similar circumstances; second, it can additionally provide the basis for the operations of installed on-site generation; and third, it can be used to assess the market potential of technologies by anticipating which kinds of customers might find various technologies attractive. A list of approximately 90 DER candidate sites was compiled and each site's DER characteristics and their willingness to volunteer information was assessed, producing detailed information on about 15 sites of which five sites were analyzed in depth. The five sites were not intended to provide a random sample, rather they were chosen to provide some diversity of business activity, geography, and technology. More importantly, they were chosen in the hope of finding examples of true business decisions made based on somewhat sophisticated analyses, and pilot or demonstration projects were avoided. Information on the benefits and pitfalls of implementing a DER system was also presented from an additional ten sites including agriculture, education, health

  17. Analysis of Customer Enrollment Patterns in Time-Based Rate Programs:

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Initial Results from the SGIG Consumer Behavior Studies (July 2013). The U.S. Department of Energy is implementing the Smart Grid Investment Grant (SGIG) program under the American Recovery and Reinvestment

  18. Natural time analysis of critical phenomena: The case of pre-fracture electromagnetic emissions

    SciTech Connect (OSTI)

    Potirakis, S. M.; Karadimitrakis, A.; Eftaxias, K.

    2013-06-15

Criticality of complex systems reveals itself in various ways. One way to monitor a system at critical state is to analyze its observable manifestations using the recently introduced method of natural time. Pre-fracture electromagnetic (EM) emissions, in agreement with laboratory experiments, have been consistently detected in the MHz band prior to significant earthquakes. It has been proposed that these emissions stem from the fracture of the heterogeneous materials surrounding the strong entities (asperities) distributed along the fault, preventing the relative slipping. It has also been proposed that the fracture of heterogeneous material could be described in analogy to the critical phase transitions in statistical physics. In this work, the natural time analysis is for the first time applied to the pre-fracture MHz EM signals revealing their critical nature. Seismicity and pre-fracture EM emissions should be two sides of the same coin concerning the earthquake generation process. Therefore, we also examine the corresponding foreshock seismic activity, as another manifestation of the same complex system at critical state. We conclude that the foreshock seismicity data present criticality features as well.
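    A minimal worked sketch of the natural time transform referenced above: the k-th of N events is mapped to chi_k = k/N and weighted by its normalized energy p_k, and the variance kappa_1 = sum(p_k * chi_k^2) - (sum(p_k * chi_k))^2 is compared with the value near 0.070 that the natural time literature associates with criticality. The event energies below are invented for illustration.

    ```python
    # Natural time sketch: kappa_1 variance of a series of "events" (e.g., MHz EM
    # pulses or foreshocks). Event energies below are invented for illustration.
    import numpy as np

    energies = np.array([2.1, 0.7, 3.5, 1.2, 5.8, 2.4, 9.3, 4.0, 7.7, 12.5])

    N = len(energies)
    chi = np.arange(1, N + 1) / N            # natural time of the k-th event
    p = energies / energies.sum()            # normalized energy of each event

    kappa_1 = np.sum(p * chi**2) - np.sum(p * chi) ** 2
    print(f"kappa_1 = {kappa_1:.3f} (criticality is associated with values near 0.070)")
    ```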

  19. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    SciTech Connect (OSTI)

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    2014-01-01

Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has documented 29 failure scenarios to date. The strategy for the game was developed by analyzing five representative electric sector failure scenarios contained in the AMI functional domain. These five scenarios were characterized into three threat categories affecting confidentiality, integrity, and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  20. Waste-to-wheel analysis of anaerobic-digestion-based renewable natural gas pathways with the GREET model.

    SciTech Connect (OSTI)

    Han, J.; Mintz, M.; Wang, M.

    2011-12-14

In 2009, manure management accounted for 2,356 Gg or 107 billion standard cubic ft of methane (CH4) emissions in the United States, equivalent to 0.5% of U.S. natural gas (NG) consumption. Owing to the high global warming potential of methane, capturing and utilizing this methane source could reduce greenhouse gas (GHG) emissions. The extent of that reduction depends on several factors - most notably, how much of this manure-based methane can be captured, how much GHG is produced in the course of converting it to vehicular fuel, and how much GHG was produced by the fossil fuel it might displace. A life-cycle analysis was conducted to quantify these factors and, in so doing, assess the impact of converting methane from animal manure into renewable NG (RNG) and utilizing the gas in vehicles. Several manure-based RNG pathways were characterized in the GREET (Greenhouse gases, Regulated Emissions, and Energy use in Transportation) model, and their fuel-cycle energy use and GHG emissions were compared to petroleum-based pathways as well as to conventional fossil NG pathways. Results show that despite increased total energy use, both fossil fuel use and GHG emissions decline for most RNG pathways as compared with fossil NG and petroleum. However, GHG emissions for RNG pathways are highly dependent on the specifics of the reference case, as well as on the process energy emissions and methane conversion factors assumed for the RNG pathways. The most critical factors are the share of flared controllable CH4 and the quantity of CH4 lost during NG extraction in the reference case, the magnitude of N2O lost in the anaerobic digestion (AD) process and in AD residue, and the amount of carbon sequestered in AD residue. In many cases, data for these parameters are limited and uncertain. Therefore, more research is needed to gain a better understanding of the range and magnitude of environmental benefits from converting animal manure to RNG via AD.
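    A minimal well-to-wheel arithmetic sketch of the kind of pathway comparison described above: stage GHG intensities, including a credit for avoided manure methane, are summed for an RNG pathway and compared with a fossil NG reference. All stage values are invented placeholders, not GREET results.

    ```python
    # Well-to-wheel GHG sketch (g CO2e per MJ of fuel burned in the vehicle).
    # Stage values are invented placeholders, NOT GREET results.
    rng_pathway = {
        "avoided manure CH4 (credit)": -40.0,
        "anaerobic digestion + upgrading": 12.0,
        "compression and distribution": 6.0,
        "vehicle combustion": 56.0,
    }
    fossil_ng_pathway = {
        "recovery, processing, leakage": 18.0,
        "compression and distribution": 6.0,
        "vehicle combustion": 56.0,
    }

    rng_total = sum(rng_pathway.values())
    ng_total = sum(fossil_ng_pathway.values())
    print(f"RNG pathway:       {rng_total:5.1f} g CO2e/MJ")
    print(f"Fossil NG pathway: {ng_total:5.1f} g CO2e/MJ")
    print(f"reduction vs fossil NG: {(ng_total - rng_total) / ng_total:.0%}")
    ```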

  1. U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis

    Office of Energy Efficiency and Renewable Energy (EERE)

    The National Renewable Energy Laboratory (NREL) routinely estimates the technical potential of specific renewable electricity generation technologies. These are technology-specific estimates of energy generation potential based on renewable resource availability and quality, technical system performance, topographic limitations, environmental, and land-use constraints only. The estimates do not consider (in most cases) economic or market constraints, and therefore do not represent a level of renewable generation that might actually be deployed. Technical potential estimates for six different renewable energy technologies were calculated by NREL, and methods and results for several other renewable technologies from previously published reports are also presented.

  2. Approach to proliferation risk assessment based on multiple objective analysis framework

    SciTech Connect (OSTI)

    Andrianov, A.; Kuptsov, I.

    2013-07-01

An approach to the assessment of proliferation risk using the methods of multi-criteria decision making and multi-objective optimization is presented. The approach allows the specific features of the national nuclear infrastructure and possible proliferation strategies (motivations, intentions, and capabilities) to be taken into account. Three examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (highly enriched uranium) production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Second, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materials circulating in nuclear fuel cycle facilities and thermal reactors. Third, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.

  3. Posters Preliminary Analysis of Ground-Based Microwave and Infrared Radiance Observations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Posters Preliminary Analysis of Ground-Based Microwave and Infrared Radiance Observations During the Pilot Radiation OBservation Experiment E. R. Westwater, Y. Han, J. H. Churnside, and J. B. Snider National Oceanic and Atmospheric Administration Environmental Research Laboratories Environmental Technology Laboratory Boulder, Colorado Introduction During Phase Two of the Pilot Radiation OBservation Experiment (PROBE) held in Kavieng, Papua New Guinea (Renné et al. 1994), the National Oceanic

  4. INEEL Subsurface Disposal Area CERCLA-based Decision Analysis for Technology Screening and Remedial Alternative Evaluation

    SciTech Connect (OSTI)

    Parnell, G. S.; Kloeber, Jr. J.; Westphal, D; Fung, V.; Richardson, John Grant

    2000-03-01

    A CERCLA-based decision analysis methodology for alternative evaluation and technology screening has been developed for application at the Idaho National Engineering and Environmental Laboratory WAG 7 OU13/14 Subsurface Disposal Area (SDA). Quantitative value functions derived from CERCLA balancing criteria in cooperation with State and Federal regulators are presented. A weighted criteria hierarchy is also summarized that relates individual value function numerical values to an overall score for a specific technology alternative.
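    A minimal sketch of the weighted value-function scoring described above: each balancing criterion maps an alternative's performance onto [0, 1] via a value function, and a weighted sum over the criteria hierarchy ranks the alternatives. The criteria weights, alternative names, and scores below are invented, not the SDA evaluation data.

    ```python
    # Weighted value-function sketch for screening remedial alternatives.
    # Criteria weights and raw scores are invented, not the SDA evaluation data.
    criteria_weights = {
        "long-term effectiveness": 0.30,
        "reduction of toxicity/mobility/volume": 0.25,
        "short-term effectiveness": 0.15,
        "implementability": 0.15,
        "cost": 0.15,
    }

    # Value-function outputs on [0, 1] for each alternative and criterion.
    alternatives = {
        "in situ grouting": {
            "long-term effectiveness": 0.7, "reduction of toxicity/mobility/volume": 0.6,
            "short-term effectiveness": 0.8, "implementability": 0.7, "cost": 0.5,
        },
        "retrieval and treatment": {
            "long-term effectiveness": 0.9, "reduction of toxicity/mobility/volume": 0.9,
            "short-term effectiveness": 0.4, "implementability": 0.4, "cost": 0.2,
        },
    }

    for name, scores in alternatives.items():
        total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
        print(f"{name}: weighted score = {total:.2f}")
    ```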

  5. CyberShake 3.0: Physics-based Probabilistic Seismic Hazard Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

[Image caption: A 3D view showing potential source faults for Southern California's next "big one." Dynamic rupture and wave propagation simulations produce a model of ground motion at the earth's surface. Colors indicate possible distributions of displacement across the faults during rupture. Credit: Geoffrey Ely, Southern California Earthquake Center.]

  6. CyberShake 3.0: Physics-Based Probabilistic Seismic Hazard Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne Leadership Computing Facility. CyberShake 3.0: Physics-Based Probabilistic Seismic Hazard Analysis. PI Name: Thomas Jordan; PI Email: tjordan@usc.edu; Institution: University of Southern California; Allocation Program: INCITE; Allocation Hours at ALCF: 2,000,000; Year: 2012; Research Domain: Earth Science. Recent destructive earthquakes, including Haiti (2010), Chile (2010), New Zealand (2011), and Japan (2011), highlight the national and international need for improved seismic hazard

  7. Session Papers Preliminary Analysis of Ground-Based Microwave and Infrared Radiance Observations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Session Papers Preliminary Analysis of Ground-Based Microwave and Infrared Radiance Observations During the Pilot Radiation OBservation Experiment E. R. Westwater, Y. Han, J. H. Churnside, and J. B. Snider National Oceanic and Atmospheric Administration Environmental Research Laboratories Environmental Technology Laboratory Boulder, Colorado Introduction During Phase Two of the Pilot Radiation OBservation Experiment (PROBE) held in Kavieng, Papua New Guinea (Renné et al. 1994), the National

  8. Global Assessment of Hydrogen Technologies – Tasks 3 & 4 Report Economic, Energy, and Environmental Analysis of Hydrogen Production and Delivery Options in Select Alabama Markets: Preliminary Case Studies

    SciTech Connect (OSTI)

    Fouad, Fouad H.; Peters, Robert W.; Sisiopiku, Virginia P.; Sullivan Andrew J.; Gillette, Jerry; Elgowainy, Amgad; Mintz, Marianne

    2007-12-01

    This report documents a set of case studies developed to estimate the cost of producing, storing, delivering, and dispensing hydrogen for light-duty vehicles for several scenarios involving metropolitan areas in Alabama. While the majority of the scenarios focused on centralized hydrogen production and pipeline delivery, alternative delivery modes were also examined. Although Alabama was used as the case study for this analysis, the results provide insights into the unique requirements for deploying hydrogen infrastructure in smaller urban and rural environments that lie outside the DOE’s high priority hydrogen deployment regions. Hydrogen production costs were estimated for three technologies – steam-methane reforming (SMR), coal gasification, and thermochemical water-splitting using advanced nuclear reactors. In all cases examined, SMR has the lowest production cost for the demands associated with metropolitan areas in Alabama. Although other production options may be less costly for larger hydrogen markets, these were not examined within the context of the case studies.

  9. A Raman cell based on hollow core photonic crystal fiber for human breath analysis

    SciTech Connect (OSTI)

    Chow, Kam Kong; Zeng, Haishan; Short, Michael; Lam, Stephen; McWilliams, Annette

    2014-09-15

    Purpose: Breath analysis has a potential prospect to benefit the medical field based on its perceived advantages to become a point-of-care, easy to use, and cost-effective technology. Early studies done by mass spectrometry show that volatile organic compounds from human breath can represent certain disease states of our bodies, such as lung cancer, and revealed the potential of breath analysis. But mass spectrometry is costly and has slow-turnaround time. The authors’ goal is to develop a more portable and cost effective device based on Raman spectroscopy and hollow core-photonic crystal fiber (HC-PCF) for breath analysis. Methods: Raman scattering is a photon-molecular interaction based on the kinetic modes of an analyte which offers unique fingerprint type signals that allow molecular identification. HC-PCF is a novel light guide which allows light to be confined in a hollow core and it can be filled with a gaseous sample. Raman signals generated by the gaseous sample (i.e., human breath) can be guided and collected effectively for spectral analysis. Results: A Raman-cell based on HC-PCF in the near infrared wavelength range was developed and tested in a single pass forward-scattering mode for different gaseous samples. Raman spectra were obtained successfully from reference gases (hydrogen, oxygen, carbon dioxide gases), ambient air, and a human breath sample. The calculated minimum detectable concentration of this system was ∼15 parts per million by volume, determined by measuring the carbon dioxide concentration in ambient air via the characteristic Raman peaks at 1286 and 1388 cm{sup −1}. Conclusions: The results of this study were compared to a previous study using HC-PCF to trap industrial gases and backward-scatter 514.5 nm light from them. The authors found that the method presented in this paper has an advantage to enhance the signal-to-noise ratio (SNR). This SNR advantage, coupled with the better transmission of HC-PCF in the near-IR than in the

  10. Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based Stochastic Analysis of Injection-induced Seismicity; 2010 Geothermal Technology Program Peer Review Report

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    4.5.7 Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based Stochastic Analysis of Injection-induced Seismicity. Presentation Number: 027; Investigator: Ghassemi, Ahmad (Texas A&M University). Objectives: To develop a model for seismicity-based reservoir characterization (SBRC) by combining rock mechanics, finite element modeling, and geostatistical concepts to establish relationships between microseismicity, reservoir flow, and geomechanical characteristics. Average Overall Score:

  11. FERC's acceptance of market-based pricing: An antitrust analysis. [Federal Energy Regulatory Commission

    SciTech Connect (OSTI)

    Harris, B.C.; Frankena, M.W.

    1992-06-01

    In large part, FERC's determination of market power is based on an analysis that focuses on the ability of power suppliers to 'foreclose' other potential power suppliers by withholding transmission access to the buyer. The authors believe that this analysis is flawed because the conditions it considers are neither necessary nor sufficient for the existence of market power. That is, it is possible that market-based rates can be subject to market power even if no transmission supplier has the ability to foreclose some power suppliers; conversely, it is possible that no market power exists despite the ability to foreclose other suppliers. This paper provides a critical analysis of FERC's market-power determinations. The concept of market power is defined and its relationship to competition is discussed in Section 1, while a framework for evaluating the existence of market power is presented in Section 2. In Section 3, FERC's recent order in Terra Comfort is examined using this framework. A brief preview of FERC's order in TECO Power Services comprises Section 4. Overall conclusions are presented in Section 5.

  12. Adapting a GIS-Based Multicriteria Decision Analysis Approach for Evaluating New Power Generating Sites

    SciTech Connect (OSTI)

    Omitaomu, Olufemi A; Blevins, Brandon R; Jochem, Warren C; Mays, Gary T; Belles, Randy; Hadley, Stanton W; Harrison, Thomas J; Bhaduri, Budhendra L; Neish, Bradley S; Rose, Amy N

    2012-01-01

    There is a growing need to site new power generating plants that use cleaner energy sources due to increased regulations on air and water pollution and a sociopolitical desire to develop more clean energy sources. To assist utility and energy companies as well as policy-makers in evaluating potential areas for siting new plants in the contiguous United States, a geographic information system (GIS)-based multicriteria decision analysis approach is presented in this paper. The presented approach has led to the development of the Oak Ridge Siting Analysis for power Generation Expansion (OR-SAGE) tool. The tool takes inputs such as population growth, water availability, environmental indicators, and tectonic and geological hazards to provide an in-depth analysis for siting options. To the utility and energy companies, the tool can quickly and effectively provide feedback on land suitability based on technology specific inputs. However, the tool does not replace the required detailed evaluation of candidate sites. To the policy-makers, the tool provides the ability to analyze the impacts of future energy technology while balancing competing resource use.
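
    A minimal raster-style sketch of this kind of screening (not the OR-SAGE tool itself) is given below: hard exclusion criteria mask out grid cells, and the surviving cells receive a weighted suitability score. The layers, thresholds, and weights on the tiny 4x4 grid are made up for illustration.

    ```python
    # Minimal GIS-style multicriteria screening on a small raster grid.
    import numpy as np

    rng = np.random.default_rng(0)
    shape = (4, 4)

    population_density = rng.uniform(0, 500, shape)   # people per km^2 (synthetic layer)
    water_availability = rng.uniform(0, 1, shape)     # normalized 0-1 (synthetic layer)
    seismic_hazard = rng.uniform(0, 1, shape)         # normalized 0-1, high = bad (synthetic)

    # Hard exclusions: too densely populated or extreme seismic hazard.
    excluded = (population_density > 300) | (seismic_hazard > 0.8)

    # Weighted suitability on the remaining cells (higher = better).
    weights = {"water": 0.6, "seismic": 0.4}
    suitability = weights["water"] * water_availability + weights["seismic"] * (1 - seismic_hazard)
    suitability[excluded] = np.nan

    best = np.unravel_index(np.nanargmax(suitability), shape)
    print("best candidate cell:", best, "score:", round(float(suitability[best]), 2))
    ```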

  13. Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

    System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.

  14. Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    SciTech Connect (OSTI)

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

    System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
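
    The two records above describe the same work; a heavily simplified sketch of the emulator-plus-MCMC idea is given below. It uses a plain Gaussian-process posterior mean (not the FFGP model) to emulate an invented one-parameter "code" and a Metropolis sampler to calibrate that parameter against a synthetic observation; every numerical choice is an illustrative assumption.

    ```python
    # Minimal GP-emulator + Metropolis calibration sketch (toy problem).
    import numpy as np

    rng = np.random.default_rng(1)

    def expensive_code(theta):
        """Stand-in for a slow safety-analysis code: one scalar output."""
        return 0.5 * theta + np.tanh(2.0 * theta)

    # Build a plain GP emulator (RBF kernel, posterior mean only) from a small design.
    train_x = np.linspace(-2.0, 2.0, 15)
    train_y = expensive_code(train_x)

    def rbf(a, b, ell=0.5):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    alpha = np.linalg.solve(rbf(train_x, train_x) + 1e-8 * np.eye(train_x.size), train_y)

    def emulate(theta):
        """GP posterior mean at theta (emulator variance ignored for brevity)."""
        return (rbf(np.atleast_1d(theta), train_x) @ alpha).item()

    # Calibrate theta against one noisy observation with a Metropolis sampler.
    true_theta, noise = 0.8, 0.05
    obs = expensive_code(true_theta) + rng.normal(0.0, noise)

    def log_post(theta):
        if abs(theta) > 2.0:                       # flat prior on [-2, 2]
            return -np.inf
        return -0.5 * ((obs - emulate(theta)) / noise) ** 2

    theta, samples = 0.0, []
    lp = log_post(theta)
    for _ in range(5000):
        prop = theta + rng.normal(0.0, 0.1)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        samples.append(theta)

    print("posterior mean of theta:", round(float(np.mean(samples[1000:])), 3))
    ```

    The point of the emulator is that log_post never calls the slow code inside the MCMC loop; the FFGP formulation in the paper additionally structures the emulator so that its accuracy and cost scale better than a standard GP.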

  15. Methods for simulation-based analysis of fluid-structure interaction.

    SciTech Connect (OSTI)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.

  16. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    SciTech Connect (OSTI)

    Farlotti, M.; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
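
    The core of the fission-matrix step can be sketched as follows (this is not the authors' test code): given a tallied matrix F[i, j] of fission neutrons born in region i per fission neutron started in region j, power iteration on F yields k-effective and the region-wise fission source. The 8-region matrix below, with weak coupling between neighbouring "assemblies", is a made-up stand-in for Monte Carlo tallies.

    ```python
    # Power iteration on an assumed fission matrix to get k-effective and the source.
    import numpy as np

    n = 8
    F = np.zeros((n, n))
    for j in range(n):
        F[j, j] = 0.90                      # self-multiplication within an assembly
        if j > 0:
            F[j - 1, j] = 0.02              # weak coupling to the neighbouring assembly
        if j < n - 1:
            F[j + 1, j] = 0.02

    def power_iteration(F, tol=1e-10, max_iter=10000):
        s = np.full(F.shape[0], 1.0 / F.shape[0])    # flat initial fission source
        k = 0.0
        for _ in range(max_iter):
            s_new = F @ s
            k_new = s_new.sum() / s.sum()            # eigenvalue (k-effective) estimate
            s_new = s_new / s_new.sum()
            if abs(k_new - k) < tol and np.allclose(s_new, s, atol=tol):
                break
            k, s = k_new, s_new
        return k_new, s_new

    k_eff, source = power_iteration(F)
    print("k-effective ~", round(float(k_eff), 4))
    print("normalized fission source:", np.round(source, 3))
    ```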

  17. Impact of x-ray dose on track formation and data analysis for CR-39-based proton diagnostics

    SciTech Connect (OSTI)

    Rinderknecht, H. G.; Rojas-Herrera, J.; Zylstra, A. B.; Frenje, J. A.; Gatu Johnson, M.; Sio, H.; Sinenian, N.; Rosenberg, M. J.; Li, C. K.; Séguin, F. H.; Petrasso, R. D.; Filkins, T.; Steidle, Jessica A.; Traynor, N.; Freeman, C.; Steidle, Jeffrey A.

    2015-12-15

    The nuclear track detector CR-39 is used extensively for charged particle diagnosis, in particular proton spectroscopy, at inertial confinement fusion facilities. These detectors can absorb x-ray doses from the experiments in the order of 1–100 Gy, the effects of which are not accounted for in the previous detector calibrations. X-ray dose absorbed in the CR-39 has previously been shown to affect the track size of alpha particles in the detector, primarily due to a measured reduction in the material bulk etch rate [Rojas-Herrera et al., Rev. Sci. Instrum. 86, 033501 (2015)]. Similar to the previous findings for alpha particles, protons with energies in the range 0.5–9.1 MeV are shown to produce tracks that are systematically smaller as a function of the absorbed x-ray dose in the CR-39. The reduction of track size due to x-ray dose is found to diminish with time between exposure and etching if the CR-39 is stored at ambient temperature, and complete recovery is observed after two weeks. The impact of this effect on the analysis of data from existing CR-39-based proton diagnostics on OMEGA and the National Ignition Facility is evaluated and best practices are proposed for cases in which the effect of x rays is significant.

  18. Impact of x-ray dose on track formation and data analysis for CR-39-based proton diagnostics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Rinderknecht, H. G.; Rojas-Herrera, J.; Zylstra, A. B.; Frenje, J. A.; Gatu Johnson, M.; Sio, H.; Sinenian, N.; Rosenberg, M. J.; Li, C. K.; Seguin, F. H.; et al

    2015-12-23

    The nuclear track detector CR-39 is used extensively for charged particle diagnosis, in particular proton spectroscopy, at inertial confinement fusion facilities. These detectors can absorb x-ray doses from the experiments in the order of 1–100 Gy, the effects of which are not accounted for in the previous detector calibrations. X-ray dose absorbed in the CR-39 has previously been shown to affect the track size of alpha particles in the detector, primarily due to a measured reduction in the material bulk etch rate [Rojas-Herrera et al., Rev. Sci. Instrum. 86, 033501 (2015)]. Similar to the previous findings for alpha particles, protons with energies in the range 0.5–9.1 MeV are shown to produce tracks that are systematically smaller as a function of the absorbed x-ray dose in the CR-39. The reduction of track size due to x-ray dose is found to diminish with time between exposure and etching if the CR-39 is stored at ambient temperature, and complete recovery is observed after two weeks. Lastly, the impact of this effect on the analysis of data from existing CR-39-based proton diagnostics on OMEGA and the National Ignition Facility is evaluated and best practices are proposed for cases in which the effect of x rays is significant.

  19. Genetic Algorithms for Agent-Based Infrastructure Interdependency Modeling and Analysis

    SciTech Connect (OSTI)

    May Permann

    2007-03-01

    Today’s society relies greatly upon an array of complex national and international infrastructure networks such as transportation, electric power, telecommunication, and financial networks. This paper describes initial research combining agent-based infrastructure modeling software and genetic algorithms (GAs) to help optimize infrastructure protection and restoration decisions. This research proposes to apply GAs to the problem of infrastructure modeling and analysis in order to determine the optimum assets to restore or protect from attack or other disaster. This research is just commencing and therefore the focus of this paper is the integration of a GA optimization method with a simulation through the simulation’s agents.
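
    A minimal genetic-algorithm sketch in the spirit of the paper (not the INL implementation) is shown below: a bitstring chromosome marks which assets to protect, fitness rewards avoided consequence subject to a protection budget, and tournament selection, one-point crossover, and mutation evolve the population. Asset values, costs, and GA settings are invented.

    ```python
    # Minimal GA for choosing which assets to protect under a budget (toy data).
    import random

    random.seed(42)
    ASSETS = [  # (consequence avoided if protected, cost to protect)
        (90, 30), (60, 25), (40, 10), (80, 35), (20, 5), (55, 20), (70, 30), (30, 10)
    ]
    BUDGET = 80

    def fitness(chrom):
        cost = sum(c for bit, (_, c) in zip(chrom, ASSETS) if bit)
        if cost > BUDGET:
            return 0.0                     # infeasible: over budget
        return sum(v for bit, (v, _) in zip(chrom, ASSETS) if bit)

    def tournament(pop, k=3):
        return max(random.sample(pop, k), key=fitness)

    def crossover(a, b):
        cut = random.randrange(1, len(a))  # one-point crossover
        return a[:cut] + b[cut:]

    def mutate(chrom, rate=0.05):
        return [bit ^ (random.random() < rate) for bit in chrom]

    pop = [[random.randint(0, 1) for _ in ASSETS] for _ in range(40)]
    for _ in range(60):                    # generations
        pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in pop]

    best = max(pop, key=fitness)
    print("protect assets:", [i for i, bit in enumerate(best) if bit], "fitness:", fitness(best))
    ```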

  20. Analysis of FEL-based CeC amplification at high gain limit

    SciTech Connect (OSTI)

    Wang, G.; Litvinenko, V.; Jing, Y.

    2015-05-03

    An analysis of the Coherent electron Cooling (CeC) amplifier based on 1D Free Electron Laser (FEL) theory was previously performed with an exact solution of the dispersion relation, assuming electrons with a Lorentzian energy distribution. In the high gain limit, the asymptotic behavior of the FEL amplifier can be better understood by Taylor expanding the exact solution of the dispersion relation with respect to the detuning parameter. In this work, we make a quadratic expansion of the dispersion relation for the Lorentzian energy distribution and investigate how longitudinal space charge and the electrons' energy spread affect the FEL amplification process.

  1. Economic Analysis for Conceptual Design of Oxygen-Based PC Boiler

    SciTech Connect (OSTI)

    Andrew Seltzer

    2005-02-01

    The objective of the economic analysis is to prepare a budgetary estimate of capital and operating costs of the O{sub 2}-fired PC power plant as well as for the equivalent conventional PC-fired power plant. Capital and operating costs of conventional steam generation, steam heating, and power generation equipment are estimated based on Foster Wheeler's extensive experience and database. Capital and operating costs of equipment, such as oxygen separation and CO{sub 2} liquefaction, are based on vendor supplied data and FW process plant experience. The levelized cost of electricity is determined for both the air-fired and O{sub 2}-fired power plants as well as the CO{sub 2} mitigation cost. An economic comparison between the O{sub 2}-fired PC and other alternate technologies is presented.

  2. Loading and Regeneration Analysis of a Diesel Particulate Filter with a Radio Frequency-Based Sensor

    SciTech Connect (OSTI)

    Sappok, Alex; Prikhodko, Vitaly Y; Parks, II, James E

    2010-01-01

    Accurate knowledge of diesel particulate filter (DPF) loading is critical for robust and efficient operation of the combined engine-exhaust aftertreatment system. Furthermore, upcoming on-board diagnostics regulations require on-board technologies to evaluate the status of the DPF. This work describes the application of radio frequency (RF) based sensing techniques to accurately measure DPF soot levels and the spatial distribution of the accumulated material. A 1.9L GM turbo diesel engine and a DPF with an RF-sensor were studied. Direct comparisons between the RF measurement and conventional pressure-based methods were made. Further analysis of the particulate matter loading rates was obtained with a mass-based soot emission measurement instrument (TEOM). Comparison with pressure drop measurements shows that the RF technique is unaffected by exhaust flow variations and exhibits a high degree of sensitivity to DPF soot loading and good dynamic response. Additional computational and experimental work further illustrates the spatial resolution of the RF measurements. Based on the experimental results, the RF technique shows significant promise for improving DPF control enabling optimization of the combined engine-aftertreatment system for improved fuel economy and extended DPF service life.

  3. CIRA: A Microcomputer-based energy analysis and auditing tool for residential applications

    SciTech Connect (OSTI)

    Sonderegger, R.C.; Dixon, J.D.

    1983-01-01

    Computerized, Instrumented, Residential Audit (CIRA) is a collection of programs for energy analysis and energy auditing of residential buildings. CIRA is written for microcomputers with a CP/M operating system and 64K RAM. Its principal features are: user-friendliness, dynamic defaults, file-oriented structure, design energy analysis capability, economic optimization of retrofits, graphic and tabular output to screen and printer. To calculate monthly energy consumptions both for design and retrofit analyses, CIRA uses a modified degree-day and degree-night approach, taking into account solar gains, IR losses to the sky, internal gains and ground heat transfer; the concept of solar storage factor addresses the delayed effect of daytime solar gains while the concept of effective thermal mass ensures proper handling of changes in thermostat setting from day to night; air infiltration is modeled using the LBL infiltration model based on effective leakage area; HVAC system performance is modeled using correlations developed for DOE-2.1. For any given budget, CIRA can also develop an optimally sequenced list of retrofits with the highest combined savings. Long run-times necessary for economic optimization of retrofits are greatly reduced by using a method based on partial derivatives of energy consumption with respect to principal building parameters. Energy calculations of CIRA compare well with those of DOE-2.1 and with measured energy consumptions from a sample of monitored houses.
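
    The degree-day core of such a calculation can be sketched very simply (this is not CIRA): internal and solar gains shift the balance-point temperature below the thermostat setpoint, monthly degree-days follow from average outdoor temperatures, and the envelope UA converts them into a heating load. All numbers below are illustrative assumptions.

    ```python
    # Balance-point degree-day heating estimate with made-up inputs.
    UA = 250.0                 # overall heat-loss coefficient, W/K (assumed)
    internal_gains = 600.0     # average internal plus solar gains, W (assumed)
    setpoint_c = 20.0
    balance_point_c = setpoint_c - internal_gains / UA   # gains offset part of the heating

    # Hypothetical average outdoor temperatures (deg C) and days per month.
    months = [("Dec", 1.0, 31), ("Jan", -2.0, 31), ("Feb", 0.5, 28)]

    for name, t_out, days in months:
        hdd = max(balance_point_c - t_out, 0.0) * days        # heating degree-days
        heating_kwh = UA * hdd * 24.0 / 1000.0                # kWh for the month
        print(f"{name}: {hdd:5.1f} degree-days -> {heating_kwh:6.0f} kWh")
    ```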

  4. A knowledge-based approach to the adaptive finite element analysis

    SciTech Connect (OSTI)

    Haghighi, K.; Kang, E.

    1995-12-31

    An automatic and knowledge-based finite element mesh generator (INTELMESH), which makes extensive use of interactive computer graphics techniques, has been developed. INTELMESH is designed for planar domains and axisymmetric 3-D structures of elasticity and heat transfer subjected to mechanical and thermal loading. It intelligently identifies the critical regions/points in the problem domain and utilizes the new concepts of substructuring and wave propagation to choose the proper mesh size for them. INTELMESH generates well-shaped triangular elements by applying triangulation and Laplacian smoothing procedures. The adaptive analysis involves the initial finite element analysis and an efficient a-posteriori error analysis and estimation. Once a problem is defined, the system automatically builds a finite element model and analyzes the problem through an automatic iterative process until the error reaches a desired level. It has been shown that the proposed approach which initiates the process with an a-priori, and near optimum mesh of the object, converges to the desired accuracy in less time and at less cost.

  5. A Laser-Based Method for On-Site Analysis of UF6 at Enrichment Plants

    SciTech Connect (OSTI)

    Anheier, Norman C.; Cannon, Bret D.; Martinez, Alonzo; Barrett, Christopher A.; Taubman, Matthew S.; Anderson, Kevin K.; Smith, Leon E.

    2014-11-23

    The International Atomic Energy Agency’s (IAEA’s) long-term research and development plan calls for more cost-effective and efficient safeguard methods to detect and deter misuse of gaseous centrifuge enrichment plants (GCEPs). The IAEA’s current safeguards approaches at GCEPs are based on a combination of routine and random inspections that include environmental sampling and destructive assay (DA) sample collection from UF6 in-process material and selected cylinders. Samples are then shipped offsite for subsequent laboratory analysis. In this paper, a new DA sample collection and onsite analysis approach that could help to meet challenges in transportation and chain of custody for UF6 DA samples is introduced. This approach uses a handheld sampler concept and a Laser Ablation, Laser Absorbance Spectrometry (LAARS) analysis instrument, both currently under development at the Pacific Northwest National Laboratory. A LAARS analysis instrument could be temporarily or permanently deployed in the IAEA control room of the facility, in the IAEA data acquisition cabinet, for example. The handheld PNNL DA sampler design collects and stabilizes a much smaller DA sample mass compared to current sampling methods. The significantly lower uranium mass reduces the sample radioactivity and the stabilization approach diminishes the risk of uranium and hydrogen fluoride release. These attributes enable safe sample handling needed during onsite LAARS assay and may help ease shipping challenges for samples to be processed at the IAEA’s offsite laboratory. The LAARS and DA sampler implementation concepts will be described and preliminary technical viability results presented.

  6. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    SciTech Connect (OSTI)

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  7. The Milling Assistant, Case-Based Reasoning, and machining strategy: A report on the development of automated numerical control programming systems at New Mexico State University

    SciTech Connect (OSTI)

    Burd, W.; Culler, D.; Eskridge, T.; Cox, L.; Slater, T.

    1993-08-01

    The Milling Assistant (MA) programming system demonstrates the automated development of tool paths for Numerical Control (NC) machine tools. By integrating a Case-Based Reasoning decision processor with a commercial CAD/CAM software, intelligent tool path files for milled and point-to-point features can be created. The operational system is capable of reducing the time required to program a variety of parts and improving product quality by collecting and utilizing "best of practice" machining strategies.

  8. Modeling of electrodes and implantable pulse generator cases for the analysis of implant tip heating under MR imaging

    SciTech Connect (OSTI)

    Acikel, Volkan Atalar, Ergin; Uslubas, Ali

    2015-07-15

    Purpose: The authors’ purpose is to model the case of an implantable pulse generator (IPG) and the electrode of an active implantable medical device using lumped circuit elements in order to analyze their effect on radio frequency induced tissue heating problem during a magnetic resonance imaging (MRI) examination. Methods: In this study, IPG case and electrode are modeled with a voltage source and impedance. Values of these parameters are found using the modified transmission line method (MoTLiM) and the method of moments (MoM) simulations. Once the parameter values of an electrode/IPG case model are determined, they can be connected to any lead, and tip heating can be analyzed. To validate these models, both MoM simulations and MR experiments were used. The induced currents on the leads with the IPG case or electrode connections were solved using the proposed models and the MoTLiM. These results were compared with the MoM simulations. In addition, an electrode was connected to a lead via an inductor. The dissipated power on the electrode was calculated using the MoTLiM by changing the inductance and the results were compared with the specific absorption rate results that were obtained using MoM. Then, MRI experiments were conducted to test the IPG case and the electrode models. To test the IPG case, a bare lead was connected to the case and placed inside a uniform phantom. During a MRI scan, the temperature rise at the lead was measured by changing the lead length. The power at the lead tip for the same scenario was also calculated using the IPG case model and MoTLiM. Then, an electrode was connected to a lead via an inductor and placed inside a uniform phantom. During a MRI scan, the temperature rise at the electrode was measured by changing the inductance and compared with the dissipated power on the electrode resistance. Results: The induced currents on leads with the IPG case or electrode connection were solved for using the combination of the MoTLiM and

  9. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOE Patents [OSTI]

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
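
    For a toy network, the quantities involved can be illustrated by brute force; the patented method is precisely about avoiding this combinatorial enumeration, so the sketch below is only a reference implementation of the definitions. The topology and link reliabilities are invented.

    ```python
    # Brute-force all-terminal reliability and minimal link cut sets for a tiny network.
    from itertools import combinations, product

    NODES = {"A", "B", "C", "D"}
    LINKS = {("A", "B"): 0.95, ("B", "C"): 0.90, ("C", "D"): 0.95,
             ("D", "A"): 0.90, ("A", "C"): 0.85}

    def connected(up_links):
        """True if every node is reachable from 'A' over the working links."""
        seen, frontier = {"A"}, ["A"]
        while frontier:
            node = frontier.pop()
            for u, v in up_links:
                for a, b in ((u, v), (v, u)):
                    if a == node and b not in seen:
                        seen.add(b)
                        frontier.append(b)
        return seen == NODES

    # All-terminal reliability by enumerating every link up/down state.
    reliability = 0.0
    for state in product([True, False], repeat=len(LINKS)):
        up = [link for link, ok in zip(LINKS, state) if ok]
        p = 1.0
        for (link, r), ok in zip(LINKS.items(), state):
            p *= r if ok else 1.0 - r
        if connected(up):
            reliability += p
    print("all-terminal reliability:", round(reliability, 5))

    # Minimal cut sets: smallest link sets whose joint failure disconnects the network.
    cuts = []
    for size in range(1, len(LINKS) + 1):
        for combo in combinations(LINKS, size):
            if not connected([link for link in LINKS if link not in combo]):
                if not any(set(c) <= set(combo) for c in cuts):   # keep only minimal sets
                    cuts.append(combo)
    print("minimal cut sets:", cuts)
    ```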

  10. Ion Trap Array-Based Systems And Methods For Chemical Analysis

    DOE Patents [OSTI]

    Whitten, William B [Oak Ridge, TN; Ramsey, J Michael [Knoxville, TN

    2005-08-23

    An ion trap-based system for chemical analysis includes an ion trap array. The ion trap array includes a plurality of ion traps arranged in a 2-dimensional array for initially confining ions. Each of the ion traps comprise a central electrode having an aperture, a first and second insulator each having an aperture sandwiching the central electrode, and first and second end cap electrodes each having an aperture sandwiching the first and second insulator. A structure for simultaneously directing a plurality of different species of ions out from the ion traps is provided. A spectrometer including a detector receives and identifies the ions. The trap array can be used with spectrometers including time-of-flight mass spectrometers and ion mobility spectrometers.

  11. Difference between healthy children and ADHD based on wavelet spectral analysis of nuclear magnetic resonance images

    SciTech Connect (OSTI)

    González Gómez, Dulce I.; Moreno Barbosa, E.; Martínez Hernández, Mario Iván; Ramos Méndez, José; Hidalgo Tobón, Silvia; Dies Suarez, Pilar; Barragán Pérez, Eduardo; De Celis Alonso, Benito

    2014-11-07

    The main goal of this project was to create a computer algorithm based on wavelet analysis of region of homogeneity images obtained during resting state studies. Ideally it would automatically diagnose ADHD. Because the cerebellum is an area known to be affected by ADHD, this study specifically analysed this region. Male right-handed volunteers (children aged between 7 and 11 years) were studied and compared with age-matched controls. Statistically significant differences (p<0.0015) between the values of the absolute integrated wavelet spectrum were found between groups. This difference might help in the future to distinguish healthy from ADHD patients and therefore diagnose ADHD. Even if results were statistically significant, the small size of the sample limits the applicability of this method as presented here, and further work with larger samples and using freely available datasets must be done.

  12. Microcomputer Spectrum Analysis Models (MSAM) with terrain data base (for microcomputers). Software

    SciTech Connect (OSTI)

    Not Available

    1992-08-01

    The package contains a collection of 14 radio frequency communications engineering and spectrum management programs plus a menu program. An associated terrain elevation data base with 30-second data is provided for the U.S. (less Alaska), Hawaii, Puerto Rico, the Caribbean and border areas of Canada and Mexico. The following programs are included: Bearing/Distance Program (BDIST); Satellite Azimuth Program (SATAZ); Intermodulation Program (INTMOD); NLAMBDA-90 smooth-earth propagation program (NL90); Frequency Dependent Rejection program (FDR); ANNEX I program to evaluate frequency proposals per NTIA Manual (ANNEXI); Antenna Field Intensity program (AFI); Personal Computer Plot 2-D graphics program (PCPLT); Profile 4/3 earth terrain elevation plot program (PROFILE); Horizon radio line-of-sight plot program (HORIZON); Single-Emitter Analysis Mode (SEAM); Terrain Integrated Rough-Earth Model (TIREM); Power Density Display Program to produce power contour map (PDDP); Line-of-Sight antenna coverage map program (SHADO).

  13. Orbit-based analysis of resonant excitations of Alfvén waves in tokamaks

    SciTech Connect (OSTI)

    Bierwage, Andreas; Shinohara, Kouji

    2014-11-15

    The exponential growth phase of fast-ion-driven Alfvénic instabilities is simulated and the resonant wave-particle interactions are analyzed numerically. The simulations are carried out in realistic magnetic geometry and with a realistic particle distribution for a JT-60U plasma driven by negative-ion-based neutral beams. In order to deal with the large magnetic drifts of the fast ions, two new mapping methods are developed and applied. The first mapping yields the radii and pitch angles at the points where the unperturbed orbit of a particle intersects the mid-plane. These canonical coordinates allow analysis results (e.g., drive profiles and resonance widths) to be expressed in a form that is easy to understand and directly comparable to the radial mode structure. The second mapping yields the structure of the wave field along the particle trajectory. This allows us to unify resonance conditions for trapped and passing particles, determine which harmonics are driven, and identify which orders of the resonance are involved. This orbit-based resonance analysis (ORA) method is applied to fast-ion-driven instabilities with toroidal mode numbers n = 1-3. After determining the order and width of each resonance, the kinetic compression of resonant particles and the effect of linear resonance overlap are examined. On the basis of the ORA results, implications for the fully nonlinear regime, for the long-time evolution of the system in the presence of a fast ion source, and for the interpretation of experimental observations are discussed.

  14. Fenestration performance analysis using an interactive graphics-based methodology on a microcomputer

    SciTech Connect (OSTI)

    Sullivan, R.; Selkowitz, S.

    1989-09-01

    We show the development and implementation of a new methodology that can be used to evaluate the energy and comfort performance of fenestration in non-residential buildings. The methodology is based on the definition of a fenestration system "figure of merit." The figure of merit is determined by considering five non-dimensional performance indices representing heating energy, cooling energy, cooling energy peak, thermal comfort, and visual comfort. These indices were derived by performing a regression analysis of several thousand hour-by-hour building heat transfer simulations of a prototypical office building module using the DOE-2 simulation program. The regression analysis resulted in a series of simplified algebraic expressions that related fenestration configuration variables to performance parameters. We implemented these equations in a "hypermedia" environment -- one that integrates graphics, sound, animation, and calculation sequences -- and created a prototype fenestration performance design tool. Inputs required by the program consist of geographic location, building type, perimeter space, and envelope definition. Outputs are the calculated performance indices for electricity and fuel use, peak electric load, and thermal and visual comfort. 6 refs., 7 figs.

  15. Model-Based Analysis of Electric Drive Options for Medium-Duty Parcel Delivery Vehicles: Preprint

    SciTech Connect (OSTI)

    Barnitt, R. A.; Brooker, A. D.; Ramroth, L.

    2010-12-01

    Medium-duty vehicles are used in a broad array of fleet applications, including parcel delivery. These vehicles are excellent candidates for electric drive applications due to their transient-intensive duty cycles, operation in densely populated areas, and relatively high fuel consumption and emissions. The National Renewable Energy Laboratory (NREL) conducted a robust assessment of parcel delivery routes and completed a model-based techno-economic analysis of hybrid electric vehicle (HEV) and plug-in hybrid electric vehicle configurations. First, NREL characterized parcel delivery vehicle usage patterns, most notably daily distance driven and drive cycle intensity. Second, drive-cycle analysis results framed the selection of drive cycles used to test a parcel delivery HEV on a chassis dynamometer. Next, measured fuel consumption results were used to validate simulated fuel consumption values derived from a dynamic model of the parcel delivery vehicle. Finally, NREL swept a matrix of 120 component size, usage, and cost combinations to assess impacts on fuel consumption and vehicle cost. The results illustrated the dependency of component sizing on drive-cycle intensity and daily distance driven and may allow parcel delivery fleets to match the most appropriate electric drive vehicle to their fleet usage profile.

  16. Analysis of laser remote fusion cutting based on a mathematical model

    SciTech Connect (OSTI)

    Matti, R. S.; Ilar, T.; Kaplan, A. F. H.

    2013-12-21

    Laser remote fusion cutting is analyzed by the aid of a semi-analytical mathematical model of the processing front. By local calculation of the energy balance between the absorbed laser beam and the heat losses, the three-dimensional vaporization front can be calculated. Based on an empirical model for the melt flow field, from a mass balance, the melt film and the melting front can be derived, however only in a simplified manner and for quasi-steady state conditions. Front waviness and multiple reflections are not modelled. The model enables comparison of the similarities, differences, and limits between laser remote fusion cutting, laser remote ablation cutting, and even laser keyhole welding. In contrast to the upper part of the vaporization front, the major part only slightly varies with respect to heat flux, laser power density, absorptivity, and angle of front inclination. Statistical analysis shows that for high cutting speed, the domains of high laser power density contribute much more to the formation of the front than for low speed. The semi-analytical modelling approach offers flexibility to simplify part of the process physics while, for example, sophisticated modelling of the complex focused fibre-guided laser beam is taken into account to enable deeper analysis of the beam interaction. Mechanisms like recast layer generation, absorptivity at a wavy processing front, and melt film formation are studied too.

  17. Knowledge-based analysis of microarray gene expression data by using support vector machines

    SciTech Connect (OSTI)

    William Grundy; Manuel Ares, Jr.; David Haussler

    2001-06-18

    The authors introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. They test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, they use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
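
    A minimal sketch of this kind of supervised classification, using scikit-learn rather than the authors' code and a random placeholder "expression matrix", might look like the following; the class signature injected into the synthetic data is purely illustrative.

    ```python
    # SVM classification of (synthetic) expression profiles with cross-validation.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_genes, n_conditions = 200, 40
    X = rng.normal(size=(n_genes, n_conditions))   # rows = genes, columns = conditions
    y = np.zeros(n_genes, dtype=int)               # 1 = member of the functional class
    y[:50] = 1
    X[:50, :10] += 1.5                             # give the class a shared expression signature

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, class_weight="balanced"))
    scores = cross_val_score(model, X, y, cv=5)
    print("cross-validated accuracy per fold:", scores.round(2))
    ```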

  18. NASTRAN-based computer program for structural dynamic analysis of horizontal axis wind turbines

    SciTech Connect (OSTI)

    Lobitz, D.W.

    1984-01-01

    This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWTs). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWTs driven by turbulent winds.
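
    The implicit Newmark-Beta update named in the abstract can be sketched for a generic linear system M a + C v + K u = F(t); the two-degree-of-freedom matrices and load below are invented, and this is of course not the NASTRAN-based code itself.

    ```python
    # Newmark-beta time integration of M a + C v + K u = F(t) for a toy 2-DOF system.
    import numpy as np

    M = np.diag([2.0, 1.0])
    C = np.array([[0.4, -0.1], [-0.1, 0.2]])
    K = np.array([[400.0, -200.0], [-200.0, 200.0]])

    def force(t):
        return np.array([0.0, 10.0 * np.sin(5.0 * t)])

    beta, gamma, dt, n_steps = 0.25, 0.5, 0.01, 500   # average-acceleration scheme
    u = np.zeros(2)
    v = np.zeros(2)
    a = np.linalg.solve(M, force(0.0) - C @ v - K @ u)

    # Effective stiffness is constant for a linear system, so it is formed once.
    K_eff = K + gamma / (beta * dt) * C + 1.0 / (beta * dt**2) * M

    for i in range(1, n_steps + 1):
        t = i * dt
        rhs = (force(t)
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (1.0 / (2.0 * beta) - 1.0) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2.0 * beta) - 1.0) * a))
        u_new = np.linalg.solve(K_eff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1.0 / (2.0 * beta) - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new

    print("displacement at t = 5 s:", np.round(u, 4))
    ```

    With beta = 1/4 and gamma = 1/2 this is the average-acceleration member of the Newmark family, which is unconditionally stable for linear problems.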

  19. Economic analysis of coal-fired cogeneration plants for Air Force bases

    SciTech Connect (OSTI)

    Holcomb, R.S.; Griffin, F.P.

    1990-10-01

    The Defense Appropriations Act of 1986 requires the Department of Defense to use an additional 1,600,000 tons/year of coal at their US facilities by 1995 and also states that the most economical fuel should be used at each facility. In a previous study of Air Force heating plants burning gas or oil, Oak Ridge National Laboratory found that only a small fraction of this target 1,600,000 tons/year could be achieved by converting the plants where coal is economically viable. To identify projects that would use greater amounts of coal, the economic benefits of installing coal-fired cogeneration plants at 7 candidate Air Force bases were examined in this study. A life-cycle cost analysis was performed that included two types of financing (Air Force and private) and three levels of energy escalation for a total of six economic scenarios. Hill, McGuire, and Plattsburgh Air Force Bases were identified as the facilities with the best potential for coal-fired cogeneration, but the actual cost savings will depend strongly on how the projects are financed and to a lesser extent on future energy escalation rates. 10 refs., 11 figs., 27 tabs.

  20. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    SciTech Connect (OSTI)

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
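
    A compact sketch of the rejection-based selection idea (assumptions throughout; this is not the authors' RSSA/SRSSA implementation) is shown below for a two-reaction toy network: propensities are bracketed over a fluctuation interval around the current state, candidate reactions are drawn from the upper bounds, a rejection test restores exactness, and the bounds are recomputed only when the state leaves its interval.

    ```python
    # Rejection-based SSA sketch for A <-> B with mass-action kinetics (toy example).
    import math
    import random

    random.seed(7)
    c1, c2 = 1.0, 0.5                   # rate constants for A -> B and B -> A
    x = {"A": 100, "B": 50}
    delta = 0.1                         # half-width of the fluctuation interval
    t, t_end = 0.0, 10.0

    def propensities(state):
        return [c1 * state["A"], c2 * state["B"]]

    def new_bounds(state):
        lo = {s: math.floor((1 - delta) * n) for s, n in state.items()}
        hi = {s: math.ceil((1 + delta) * n) for s, n in state.items()}
        return lo, hi, propensities(lo), propensities(hi)

    lo, hi, a_lo, a_hi = new_bounds(x)
    while t < t_end:
        a0_hi = sum(a_hi)
        if a0_hi == 0.0:
            break
        # Draw candidates from the upper bounds until one passes the rejection test.
        while True:
            t += random.expovariate(a0_hi)          # every trial advances the clock
            r = random.random() * a0_hi
            j = 0 if r < a_hi[0] else 1
            u = random.random()
            if u <= a_lo[j] / a_hi[j]:
                break                               # accepted without an exact evaluation
            if u <= propensities(x)[j] / a_hi[j]:
                break                               # accepted after evaluating the exact propensity
        # Fire reaction j (for brevity the sketch may slightly overshoot t_end).
        if j == 0:
            x["A"] -= 1
            x["B"] += 1
        else:
            x["A"] += 1
            x["B"] -= 1
        # Recompute bounds only if the state left its fluctuation interval.
        if not all(lo[s] <= x[s] <= hi[s] for s in x):
            lo, hi, a_lo, a_hi = new_bounds(x)

    print("t =", round(t, 2), "state =", x)
    ```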

  1. Monitoring Based Commissioning: Benchmarking Analysis of 24 UC/CSU/IOU Projects

    SciTech Connect (OSTI)

    Mills, Evan; Mathew, Paul

    2009-04-01

    Buildings rarely perform as intended, resulting in energy use that is higher than anticipated. Building commissioning has emerged as a strategy for remedying this problem in non-residential buildings. Complementing traditional hardware-based energy savings strategies, commissioning is a 'soft' process of verifying performance and design intent and correcting deficiencies. Through an evaluation of a series of field projects, this report explores the efficacy of an emerging refinement of this practice, known as monitoring-based commissioning (MBCx). MBCx can also be thought of as monitoring-enhanced building operation that incorporates three components: (1) Permanent energy information systems (EIS) and diagnostic tools at the whole-building and sub-system level; (2) Retro-commissioning based on the information from these tools and savings accounting emphasizing measurement as opposed to estimation or assumptions; and (3) On-going commissioning to ensure efficient building operations and measurement-based savings accounting. MBCx is thus a measurement-based paradigm which affords improved risk-management by identifying problems and opportunities that are missed with periodic commissioning. The analysis presented in this report is based on in-depth benchmarking of a portfolio of MBCx energy savings for 24 buildings located throughout the University of California and California State University systems. In the course of the analysis, we developed a quality-control/quality-assurance process for gathering and evaluating raw data from project sites and then selected a number of metrics to use for project benchmarking and evaluation, including appropriate normalizations for weather and climate, accounting for variations in central plant performance, and consideration of differences in building types. We performed a cost-benefit analysis of the resulting dataset, and provided comparisons to projects from a larger commissioning 'Meta-analysis' database. A total of 1120

  2. Critical analysis of the Hanford spent nuclear fuel project activity based cost estimate

    SciTech Connect (OSTI)

    Warren, R.N.

    1998-09-29

    In 1997, the SNFP developed a baseline change request (BCR) and submitted it to DOE-RL for approval. The schedule was formally evaluated to have a 19% probability of success [Williams, 1998]. In December 1997, DOE-RL Manager John Wagoner approved the BCR contingent upon a subsequent independent review of the new baseline. The SNFP took several actions during the first quarter of 1998 to prepare for the independent review. The project developed the Estimating Requirements and Implementation Guide [DESH, 1998] and trained cost account managers (CAMs) and other personnel involved in the estimating process in activity-based cost (ABC) estimating techniques. The SNFP then applied ABC estimating techniques to develop the basis for the December Baseline (DB) and documented that basis in Basis of Estimate (BOE) books. These BOEs were provided to DOE in April 1998. DOE commissioned Professional Analysis, Inc. (PAI) to perform a critical analysis (CA) of the DB. PAI's review formally began on April 13. PAI performed the CA, provided three sets of findings to the SNFP contractor, and initiated reconciliation meetings. During the course of PAI's review, DOE directed the SNFP to develop a new baseline with a higher probability of success. The contractor transmitted the new baseline, which is referred to as the High Probability Baseline (HPB), to DOE on April 15, 1998 [Williams, 1998]. The HPB was estimated to approach a 90% confidence level on the start of fuel movement [Williams, 1998]. This high probability resulted in an increased cost and a schedule extension. To implement the new baseline, the contractor initiated 26 BCRs with supporting BOEs. PAI's scope was revised on April 28 to add reviewing the HPB and the associated BCRs and BOEs.

  3. Laser based analysis using a passively Q-switched laser employing analysis electronics and a means for detecting atomic optical emission of the laser media

    DOE Patents [OSTI]

    Woodruff, Steven D.; Mcintyre, Dustin L.

    2016-03-29

    A device for Laser based Analysis using a Passively Q-Switched Laser comprising an optical pumping source optically connected to a laser media. The laser media and a Q-switch are positioned between and optically connected to a high reflectivity mirror (HR) and an output coupler (OC) along an optical axis. The output coupler (OC) is optically connected to the output lens along the optical axis. A means for detecting atomic optical emission comprises a filter and a light detector. The optical filter is optically connected to the laser media and the optical detector. A control system is connected to the optical detector and the analysis electronics. The analysis electronics are optically connected to the output lens. Detection of the large-scale laser output triggers the control system to initiate precisely timed data collection from the detector and the analysis electronics.

  4. High-Throughput Genetic Analysis and Combinatorial Chiral Separations Based on Capillary Electrophoresis

    SciTech Connect (OSTI)

    Wenwan Zhong

    2003-08-05

    Capillary electrophoresis (CE) offers many advantages over conventional analytical methods, such as speed, simplicity, high resolution, low cost, and small sample consumption, especially for the separation of enantiomers. However, chiral method developments still can be time consuming and tedious. They designed a comprehensive enantioseparation protocol employing neutral and sulfated cyclodextrins as chiral selectors for common basic, neutral, and acidic compounds with a 96-capillary array system. By using only four judiciously chosen separation buffers, successful enantioseparations were achieved for 49 out of 54 test compounds spanning a large variety of pKs and structures. Therefore, unknown compounds can be screened in this manner to identify optimal enantioselective conditions in just one run. In addition to superior separation efficiency for small molecules, CE is also the most powerful technique for DNA separations. Using the same multiplexed capillary system with UV absorption detection, the sequence of a short DNA template can be acquired without any dye-labels. Two internal standards were utilized to adjust the migration time variations among capillaries, so that the four electropherograms for the A, T, C, G Sanger reactions can be aligned and base calling can be completed with a high level of confidence. The CE separation of DNA can be applied to study differential gene expression as well. Combined with pattern recognition techniques, small variations among electropherograms obtained by the separation of cDNA fragments produced from the total RNA samples of different human tissues can be revealed. These variations reflect the differences in total RNA expression among tissues. Thus, this CE-based approach can serve as an alternative to the DNA array techniques in gene expression analysis.
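
    The internal-standard alignment step can be sketched as follows (synthetic data, not the authors' pipeline): two peaks of known identity in each capillary define a linear mapping of migration time onto a common axis, after which the A, T, C, and G traces can be compared point by point.

    ```python
    # Align electropherograms onto a common axis using two internal standards (synthetic).
    import numpy as np

    rng = np.random.default_rng(3)
    common_axis = np.linspace(0.0, 1.0, 500)   # normalized migration coordinate
    std1_true, std2_true = 0.10, 0.90          # internal-standard positions on the common axis

    def align(times, signal, std1_obs, std2_obs):
        """Map observed migration times onto the common axis via the two standards."""
        scale = (std2_true - std1_true) / (std2_obs - std1_obs)
        mapped = std1_true + (times - std1_obs) * scale
        return np.interp(common_axis, mapped, signal)

    aligned = {}
    for base in "ATCG":
        drift, stretch = rng.normal(0.0, 0.02), rng.normal(1.0, 0.03)
        times = np.linspace(0.0, 1.0, 500) * stretch + drift    # capillary-specific time axis
        signal = rng.random(500)                                # placeholder trace
        aligned[base] = align(times, signal,
                              std1_true * stretch + drift, std2_true * stretch + drift)

    print({base: trace.shape for base, trace in aligned.items()})
    ```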

  5. Structural Analysis of a Highly Glycosylated and Unliganded gp120-Based Antigen Using Mass Spectrometry

    SciTech Connect (OSTI)

    L Wang; Y Qin; S Ilchenko; J Bohon; W Shi; M Cho; K Takamoto; M Chance

    2011-12-31

    Structural characterization of the HIV-1 envelope protein gp120 is very important for providing an understanding of the protein's immunogenicity and its binding to cell receptors. So far, the crystallographic structure of gp120 with an intact V3 loop (in the absence of a CD4 coreceptor or antibody) has not been determined. The third variable region (V3) of the gp120 is immunodominant and contains glycosylation signatures that are essential for coreceptor binding and entry of the virus into T-cells. In this study, we characterized the structure of the outer domain of gp120 with an intact V3 loop (gp120-OD8) purified from Drosophila S2 cells utilizing mass spectrometry-based approaches. We mapped the glycosylation sites and calculated the glycosylation occupancy of gp120-OD8; 11 sites from 15 glycosylation motifs were determined as having high-mannose or hybrid glycosylation structures. The specific glycan moieties of nine glycosylation sites from eight unique glycopeptides were determined by a combination of ECD and CID MS approaches. Hydroxyl radical-mediated protein footprinting coupled with mass spectrometry analysis was employed to provide detailed information about protein structure of gp120-OD8 by directly identifying accessible and hydroxyl radical-reactive side chain residues. Comparison of gp120-OD8 experimental footprinting data with a homology model derived from the ligated CD4-gp120-OD8 crystal structure revealed a flexible V3 loop structure in which the V3 tip may provide contacts with the rest of the protein while residues in the V3 base remain solvent accessible. In addition, the data illustrate interactions between specific sugar moieties and amino acid side chains potentially important to the gp120-OD8 structure.

  6. Science-Based Simulation Model of Human Performance for Human Reliability Analysis

    SciTech Connect (OSTI)

    Dana L. Kelly; Ronald L. Boring; Ali Mosleh; Carol Smidts

    2011-10-01

    Human reliability analysis (HRA), a component of an integrated probabilistic risk assessment (PRA), is the means by which the human contribution to risk is assessed, both qualitatively and quantitatively. However, among the literally dozens of HRA methods that have been developed, most cannot fully model and quantify the types of errors that occurred at Three Mile Island. Furthermore, all of the methods lack a solid empirical basis, relying heavily on expert judgment or empirical results derived in non-reactor domains. Finally, all of the methods are essentially static, and are thus unable to capture the dynamics of an accident in progress. The objective of this work is to begin exploring a dynamic simulation approach to HRA, one whose models have a basis in psychological theories of human performance, and whose quantitative estimates have an empirical basis. This paper highlights a plan to formalize collaboration among the Idaho National Laboratory (INL), the University of Maryland, and The Ohio State University (OSU) to continue development of a simulation model initially formulated at the University of Maryland. Initial work will focus on enhancing the underlying human performance models with the most recent psychological research, and on planning follow-on studies to establish an empirical basis for the model, based on simulator experiments to be carried out at the INL and at the OSU.

  7. Microscopic silicon-based lateral high-aspect-ratio structures for thin film conformality analysis

    SciTech Connect (OSTI)

    Gao, Feng; Arpiainen, Sanna; Puurunen, Riikka L.

    2015-01-15

    Film conformality is one of the major drivers for the interest in atomic layer deposition (ALD) processes. This work presents new silicon-based microscopic lateral high-aspect-ratio (LHAR) test structures for the analysis of the conformality of thin films deposited by ALD and by other chemical vapor deposition means. The microscopic LHAR structures consist of a lateral cavity inside silicon with a roof supported by pillars. The cavity length (e.g., 20–5000 μm) and cavity height (e.g., 200–1000 nm) can be varied, giving aspect ratios of, e.g., 20:1 to 25,000:1. Film conformality can be analyzed with the microscopic LHAR by several means, as demonstrated for the ALD Al2O3 and TiO2 processes from Me3Al/H2O and TiCl4/H2O. The microscopic LHAR test structures introduced in this work open up a new parameter space for thin film conformality investigations, which is expected to prove useful in the development, tuning, and modeling of ALD and other chemical vapor deposition processes.
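
    As a quick consistency check, the quoted extremes follow directly from the stated cavity dimensions; the aspect ratio is simply the cavity length divided by the cavity height in consistent units (a two-line Python sketch):

        # Aspect ratio = cavity length / cavity height (units converted to metres).
        print(20e-6 / 1000e-9)    # shortest cavity, tallest roof  -> 20.0
        print(5000e-6 / 200e-9)   # longest cavity, lowest roof    -> 25000.0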

  8. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-Based Generic Programming

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.

    2012-01-01

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
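
    As a loose illustration of the underlying idea (the work itself uses C++ templates and operator overloading within the Trilinos framework; the sketch below is a Python analogue with purely illustrative names), a simulation residual written once in a generic style can be evaluated with plain floating-point values or with an overloaded scalar type that propagates an extra quantity, here a first derivative via forward-mode dual numbers:

        class Dual:
            """Value plus derivative; operator overloading propagates d/dx alongside the value."""
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val + other.val, self.der + other.der)
            __radd__ = __add__
            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val * other.val,
                            self.der * other.val + self.val * other.der)
            __rmul__ = __mul__

        def residual(u):
            # One generic definition of the model; works for floats and Dual alike.
            return u * u + 3.0 * u + 1.0

        print(residual(2.0))            # plain evaluation: 11.0
        r = residual(Dual(2.0, 1.0))    # seeded evaluation: value and dR/du
        print(r.val, r.der)             # 11.0 and 7.0 (= 2u + 3 at u = 2)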

  9. Arthropod monitoring for fine-scale habitat analysis: A case study of the El Segundo sand dunes

    SciTech Connect (OSTI)

    Mattoni, R.; Longcore, T.; Novotny, V.

    2000-04-01

    Arthropod communities from several habitats on and adjacent to the El Segundo dunes (Los Angeles County, CA) were sampled using pitfall and yellow pan traps to evaluate their possible use as indicators of restoration success. Communities were ordinated and clustered using correspondence analysis, detrended correspondence analysis, two-way indicator species analysis, and Ward's method of agglomerative clustering. The results showed high repeatability among replicates within any sampling arena that permits discrimination of (1) degraded and relatively undisturbed habitat, (2) different dune habitat types, and (3) annual change. Canonical correspondence analysis showed a significant effect of disturbance history on community composition that explained 5--20% of the variation. Replicates of pitfall and yellow pan traps on single sites clustered together reliably when species abundance was considered, whereas clusters using only species incidence did not group replicates as consistently. The broad taxonomic approach seems appropriate for habitat evaluation and monitoring of restoration projects as an alternative to assessments geared to single species or even single families.

  10. Systems Analysis of an Advanced Nuclear Fuel Cycle Based on a Modified UREX+3c Process

    SciTech Connect (OSTI)

    E. R. Johnson; R. E. Best

    2009-12-28

    The research described in this report was performed under a grant from the U.S. Department of Energy (DOE) to describe and compare the merits of two advanced alternative nuclear fuel cycles -- named by this study as the “UREX+3c fuel cycle” and the “Alternative Fuel Cycle” (AFC). Both fuel cycles were assumed to support 100 1,000 MWe light water reactor (LWR) nuclear power plants operating over the period 2020 through 2100, and the fast reactors (FRs) necessary to burn the plutonium and minor actinides generated by the LWRs. Reprocessing in both fuel cycles is assumed to be based on the UREX+3c process reported in earlier work by the DOE. Conceptually, the UREX+3c process provides nearly complete separation of the various components of spent nuclear fuel in order to enable recycle of reusable nuclear materials, and the storage, conversion, transmutation and/or disposal of other recovered components. Output of the process contains substantially all of the plutonium, which is recovered as a 5:1 uranium/plutonium mixture, in order to discourage plutonium diversion. Mixed oxide (MOX) fuel for recycle in LWRs is made using this 5:1 U/Pu mixture plus appropriate makeup uranium. A second process output contains all of the recovered uranium except the uranium in the 5:1 U/Pu mixture. The several other process outputs are various waste streams, including a stream of minor actinides that are stored until they are consumed in future FRs. For this study, the UREX+3c fuel cycle is assumed to recycle only the 5:1 U/Pu mixture to be used in LWR MOX fuel and to use depleted uranium (tails) for the makeup uranium. This fuel cycle is assumed not to use the recovered uranium output stream but to discard it instead. On the other hand, the AFC is assumed to recycle both the 5:1 U/Pu mixture and all of the recovered uranium. In this case, the recovered uranium is reenriched with the level of enrichment being determined by the amount of recovered plutonium and the combined amount

  11. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes - Update to Include Evaluation of Impact of Including a Humidifier Option

    SciTech Connect (OSTI)

    Baxter, Van D

    2007-02-01

    --A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of a centrally ducted integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of legally minimum efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs. Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006a). The present report is an update to that document which summarizes results of an analysis of the impact of adding a humidifier to the HVAC system to maintain minimum levels of space relative humidity (RH) in winter. The space RH in winter has a direct impact on occupant comfort and on control of dust mites, many types of disease bacteria, and 'dry air' electric shocks. Chapter 8 in ASHRAE's 2005 Handbook of Fundamentals (HOF) suggests a 30% lower limit on RH for indoor temperatures in the range of ~68-69 F based on comfort (ASHRAE 2005). Table 3 in chapter 9 of the same reference suggests a 30-55% RH range for winter as established by a Canadian study of exposure limits for residential indoor environments (EHD 1987). Harriman et al. (2001) note that for RH levels of 35% or higher, electrostatic shocks are minimized and that dust mites cannot live at RH levels below 40%. They also indicate that many disease bacteria life spans are minimized when space RH is held within a 30-60% range. From the foregoing it is reasonable to assume that a winter space RH range of 30-40% would be an acceptable compromise between comfort

  12. Emulating a System Dynamics Model with Agent-Based Models: A Methodological Case Study in Simulation of Diabetes Progression

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Schryver, Jack; Nutaro, James; Shankar, Mallikarjun

    2015-10-30

    An agent-based simulation model hierarchy emulating disease states and behaviors critical to the progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, the system dynamics model, which mimics diabetes progression over an aggregated U.S. population, was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating impacts of an elderliness factor, an obesity factor, and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and the diffusion of social norms spreading over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translating complex system dynamics models into agent-based alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.
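
    A minimal sketch of the kind of behavior rule described above (a toy realization of the Theory of Planned Behavior on a random social network; the weights, thresholds, and network are illustrative assumptions, not the paper's DEVS implementation):

        import random

        class Agent:
            def __init__(self, attitude):
                self.attitude = attitude    # fixed individual disposition, 0..1
                self.norm = 0.5             # perceived social norm, updated from neighbors
                self.behavior = 0           # 1 = currently practicing the healthy behavior
                self.neighbors = []

            def update(self, w_attitude=0.6, w_norm=0.4):
                # Intention is a weighted sum of own attitude and the norm observed in the network.
                if self.neighbors:
                    self.norm = sum(n.behavior for n in self.neighbors) / len(self.neighbors)
                intention = w_attitude * self.attitude + w_norm * self.norm
                self.behavior = 1 if intention > 0.5 else 0

        random.seed(0)
        agents = [Agent(random.random()) for _ in range(20)]
        for a in agents:
            a.neighbors = random.sample([x for x in agents if x is not a], 3)
        for _ in range(10):                 # iterate until adoption stabilizes
            for a in agents:
                a.update()
        print(sum(a.behavior for a in agents), "of", len(agents), "agents adopt the behavior")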

  13. Diagnostic and Prognostic Analysis of Battery Performance & Aging based on

    Broader source: Energy.gov (indexed) [DOE]

    Kinetic and Thermodynamic Principles | Department of Energy. es124_gering_2012_o.pdf (9.13 MB)

  14. DEVELOPMENT OF A NOVEL GAS PRESSURIZED STRIPPING (GPS)-BASED TECHNOLOGY FOR CO2 CAPTURE FROM POST-COMBUSTION FLUE GASES Topical Report: Techno-Economic Analysis of GPS-based Technology for CO2 Capture

    SciTech Connect (OSTI)

    Chen, Shiaoguo

    2015-09-30

    This topical report presents the techno-economic analysis, conducted by Carbon Capture Scientific, LLC (CCS) and Nexant, for a nominal 550 MWe supercritical pulverized coal (PC) power plant utilizing CCS patented Gas Pressurized Stripping (GPS) technology for post-combustion carbon capture (PCC). Illinois No. 6 coal is used as fuel. Because of the difference in performance between the GPS-based PCC and the MEA-based CO2 absorption technology, the net power output of this plant is not exactly 550 MWe. DOE/NETL Case 11 supercritical PC plant without CO2 capture and Case 12 supercritical PC plant with benchmark MEA-based CO2 capture are chosen as references. In order to include CO2 compression process for the baseline case, CCS independently evaluated the generic 30 wt% MEA-based PCC process together with the CO2 compression section. The net power produced in the supercritical PC plant with GPS-based PCC is 647 MW, greater than the MEA-based design. The levelized cost of electricity (LCOE) over a 20-year period is adopted to assess techno-economic performance. The LCOE for the supercritical PC plant with GPS-based PCC, not considering CO2 transport, storage and monitoring (TS&M), is 97.4 mills/kWh, or 152% of the Case 11 supercritical PC plant without CO2 capture, equivalent to $39.6/tonne for the cost of CO2 capture. GPS-based PCC is also significantly superior to the generic MEA-based PCC with CO2 compression section, whose LCOE is as high as 109.6 mills/kWh.
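
    The reported figures can be cross-checked with simple arithmetic; the sketch below assumes the quoted capture cost equals the incremental LCOE divided by the CO2 captured per net MWh (the report's full methodology includes capital, O&M, and fuel terms not reproduced here):

        lcoe_gps = 97.4               # mills/kWh, GPS-based capture plant (from the abstract)
        lcoe_ref = lcoe_gps / 1.52    # abstract: GPS LCOE is 152% of the no-capture Case 11 LCOE
        delta = lcoe_gps - lcoe_ref   # incremental cost, mills/kWh (numerically $/MWh)
        capture_cost = 39.6           # $/tonne CO2, as reported
        implied_capture = delta / capture_cost   # implied tonnes CO2 captured per net MWh
        print(round(lcoe_ref, 1), round(delta, 1), round(implied_capture, 2))
        # -> about 64.1 mills/kWh reference, 33.3 $/MWh increment, 0.84 t CO2 per net MWh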

  15. Neutronics and activation analysis of lithium-based ternary alloys in IFE blankets

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jolodosky, Alejandra; Kramer, Kevin; Meier, Wayne; DeMuth, James; Reyes, Susana; Fratoni, Massimiliano

    2016-04-09

    Here we report that an attractive feature of using liquid lithium as the breeder and coolant in fusion blankets is that it has very high tritium solubility and results in very low levels of tritium permeation throughout the facility infrastructure. However, lithium metal vigorously reacts with air and water and presents plant safety concerns. The Lawrence Livermore National Laboratory is carrying out an effort to develop a lithium-based alloy that maintains the beneficial properties of lithium (e.g., high tritium breeding and solubility) and at the same time reduces overall flammability concerns. This study evaluates the neutronics performance of lithium-based alloys in the blanket of an inertial fusion energy chamber in order to inform such development. 3-D Monte Carlo calculations were performed to evaluate two main neutronics performance parameters for the blanket: the tritium breeding ratio (TBR) and the fusion energy multiplication factor (EMF). It was found that elements that exhibit low absorption cross sections and higher q-values, such as lead, tin, and strontium, perform well with those that have high neutron multiplication, such as lead and bismuth. These elements meet TBR constraints ranging from 1.02 to 1.1. However, most alloys do not reach EMFs greater than 1.15. Additionally, it was found that enriching lithium significantly increases the TBR and decreases the minimum lithium concentration by more than 60%. The amount of enrichment depends on how much total lithium is in the alloy to begin with. Alloys that performed well in the TBR and EMF calculations were considered for activation analysis. Activation simulations were executed with 50 years of irradiation and 300 years of cooling. It was discovered that bismuth is a poor choice because it yields the highest decay heat, contact dose rates, and accident doses. In addition, it does not meet the waste disposal ratings (WDR). Some of the activation results for alloys with tin, zinc, and gallium were in

  16. Integrated Experimental and Model-based Analysis Reveals the Spatial Aspects of EGFR Activation Dynamics

    SciTech Connect (OSTI)

    Shankaran, Harish; Zhang, Yi; Chrisler, William B.; Ewald, Jonathan A.; Wiley, H. S.; Resat, Haluk

    2012-10-02

    The epidermal growth factor receptor (EGFR) belongs to the ErbB family of receptor tyrosine kinases, and controls a diverse set of cellular responses relevant to development and tumorigenesis. ErbB activation is a complex process involving receptor-ligand binding, receptor dimerization, phosphorylation, and trafficking (internalization, recycling and degradation), which together dictate the spatio-temporal distribution of active receptors within the cell. The ability to predict this distribution, and elucidation of the factors regulating it, would help to establish a mechanistic link between ErbB expression levels and the cellular response. Towards this end, we constructed mathematical models for deconvolving the contributions of receptor dimerization and phosphorylation to EGFR activation, and to examine the dependence of these processes on sub-cellular location. We collected experimental datasets for EGFR activation dynamics in human mammary epithelial cells, with the specific goal of model parameterization, and used the data to estimate parameters for several alternate models. Model-based analysis indicated that: 1) signal termination via receptor dephosphorylation in late endosomes, prior to degradation, is an important component of the response, 2) less than 40% of the receptors in the cell are phosphorylated at any given time, even at saturating ligand doses, and 3) receptor dephosphorylation rates at the cell surface and early endosomes are comparable. We validated the last finding by measuring EGFR dephosphorylation rates at various times following ligand addition both in whole cells, and in endosomes using ELISAs and fluorescent imaging. Overall, our results provide important information on how EGFR phosphorylation levels are regulated within cells. Further, the mathematical model described here can be extended to determine receptor dimer abundances in cells co-expressing various levels of ErbB receptors. This study demonstrates that an iterative cycle of

  17. Analysis on fuel breeding capability of FBR core region based on minor actinide recycling doping

    SciTech Connect (OSTI)

    Permana, Sidik; Novitrian,; Waris, Abdul; Ismail; Suzuki, Mitsutoshi; Saito, Masaki

    2014-09-30

    Nuclear fuel breeding can be achieved through the conversion of fertile materials into fissile materials during nuclear reaction processes; the main fissile materials are U-233, U-235, Pu-239, and Pu-241, and the fertile materials are Th-232, U-238, and Pu-240 as well as Pu-238. The minor actinide (MA) loading option, which consists of neptunium, americium, and curium, gives an additional contribution from MA converted into plutonium, such as the conversion of Np-237 into Pu-238; the Pu-238 thus produced is converted to Pu-239 via neutron capture. An increasing Pu-238 inventory can therefore be used to produce the fissile material Pu-239 as an additional contribution. Trans-uranium (TRU) fuel (a mixed loading of MOX (U-Pu) and MA) and mixed oxide (MOX) fuel compositions are compared in order to show the effect of MA on plutonium production in the core in terms of reactor criticality and fuel breeding capability. In the present study, the neptunium (Np) nuclide is used as a representative of MA in the trans-uranium (TRU) fuel composition, as an Np-MOX fuel type. Loaded into the core region, it contributes significantly to reducing the excess reactivity in comparison with mixed oxide (MOX) fuel and at the same time increases the nuclear fuel breeding capability of the reactor. The neptunium fuel loading scheme in the FBR core region gives significant production of Pu-238, a fertile material that absorbs neutrons to reduce excess reactivity and provides an additional contribution to fuel breeding.

  18. Five case studies of multifamily weatherization programs

    SciTech Connect (OSTI)

    Kinney, L; Wilson, T.; Lewis, G.; MacDonald, M.

    1997-12-31

    The multifamily case studies that are the subject of this report were conducted to provide a better understanding of the approach taken by program operators in weatherizing large buildings. Because of significant variations in building construction and energy systems across the country, five states were selected based on their high level of multifamily weatherization. This report summarizes findings from case studies conducted by multifamily weatherization operations in five cities. The case studies were conducted between January and November 1994. Each of the case studies involved extensive interviews with the staff of weatherization subgrantees conducting multifamily weatherization, the inspection of 4 to 12 buildings weatherized between 1991 and 1993, and the analysis of savings and costs. The case studies focused on innovative techniques which appear to work well.

  19. Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ESnet Science Engagement case studies, including OSCARS case studies, Science DMZ case studies, and a multi-facility workflow case study.

  20. Assessment of effectiveness of geologic isolation systems. Test case release consequence analysis for a spent fuel repository in bedded salt

    SciTech Connect (OSTI)

    Raymond, J.R.; Bond, F.W.; Cole, C.R.; Nelson, R.W.; Reisenauer, A.E.; Washburn, J.F.; Norman, N.A.; Mote, P.A.; Segol, G.

    1980-01-01

    Geologic and geohydrologic data for the Paradox Basin have been used to simulate movement of ground water and radioactive contaminants from a hypothetical nuclear reactor spent fuel repository after an assumed accidental release. The pathlines, travel times, and velocity of the ground water from the repository to the discharge locale (river) were determined after the disruptive event by use of a two-dimensional finite difference hydrologic model. The concentration of radioactive contaminants in the ground water was calculated along a series of flow tubes by use of a one-dimensional mass transport model which takes into account convection, dispersion, contaminant/media interactions, and radioactive decay. For the hypothetical site location and specific parameters used in this demonstration, it is found that Iodine-129 (I-129) is the only isotope reaching the Colorado River in significant concentration. This concentration occurs about 8.0 x 10^5 years after the repository has been breached. This I-129 ground-water concentration is about 0.3 of the drinking water standard for uncontrolled use. The ground-water concentration would then be diluted by the Colorado River. None of the actinide elements reach more than half the distance from the repository to the Colorado River in the two-million-year model run time. This exercise demonstrates that the WISAP model system is applicable for analysis of contaminant transport. The results presented in this report, however, are valid only for one particular set of parameters. A complete sensitivity analysis must be performed to evaluate the range of effects from the release of contaminants from a breached repository.
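
    For reference, a one-dimensional mass transport model accounting for convection, dispersion, contaminant/media interaction (linear sorption), and radioactive decay along a flow tube is commonly written in the standard textbook form below; this is not necessarily the exact formulation used in the WISAP system:

        R \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2} - v \frac{\partial C}{\partial x} - \lambda R C

    where C is the dissolved contaminant concentration, v the ground-water pore velocity, D the dispersion coefficient, R the retardation factor describing contaminant/media interaction, and lambda the radioactive decay constant.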

  1. Roof-top solar energy potential under performance-based building energy codes: The case of Spain

    SciTech Connect (OSTI)

    Izquierdo, Salvador; Montanes, Carlos; Dopazo, Cesar; Fueyo, Norberto

    2011-01-15

    The quantification at the regional level of the amount of energy (for thermal uses and for electricity) that can be generated by using solar systems in buildings is hindered by the availability of data for roof area estimation. In this note, we build on an existing geo-referenced method for determining available roof area for solar facilities in Spain to produce a quantitative picture of the likely limits of roof-top solar energy. The installation of solar hot water systems (SHWS) and photovoltaic systems (PV) is considered. After satisfying up to 70% (where possible) of the service hot water demand in every municipality, PV systems are installed in the remaining roof area. Results show that, applying this performance-based criterion, SHWS would contribute up to 1662 ktoe/y of primary energy (or 68.5% of the total thermal-energy demand for service hot water), while PV systems would provide 10 TWh/y of electricity (or 4.0% of the total electricity demand).

  2. Evaluation of food waste disposal options by LCC analysis from the perspective of global warming: Jungnang case, South Korea

    SciTech Connect (OSTI)

    Kim, Mi-Hyung; Song, Yul-Eum; Song, Han-Byul; Kim, Jung-Wk; Hwang, Sun-Jin

    2011-09-15

    Highlights: Various food waste disposal options were evaluated from the perspective of global warming. Costs of the options were compared by the methodology of life cycle assessment and life cycle cost analysis. Carbon price and valuable by-products were used for analyzing environmental credits. The benefit-cost ratio of the wet feeding scenario was the highest. Abstract: The costs associated with eight food waste disposal options - dry feeding, wet feeding, composting, anaerobic digestion, co-digestion with sewage sludge, food waste disposer, incineration, and landfilling - were evaluated from the perspective of global warming and energy and/or resource recovery. An expanded system boundary was employed to compare by-products. Life cycle cost was analyzed through the entire disposal process, covering the discharge, separate collection, transportation, treatment, and final disposal stages, all within the system boundary. Costs and benefits were estimated on an avoided-impact basis. Environmental benefits of each system per 1 tonne of food waste managed were estimated using carbon prices resulting from CO2 reduction by avoided impact, as well as the prices of by-products such as animal feed, compost, and electricity. We found that the cost of landfilling was the lowest, followed by co-digestion. The benefits of wet feeding systems were the highest and those of landfilling the lowest.

  3. U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis

    SciTech Connect (OSTI)

    Lopez, A.; Roberts, B.; Heimiller, D.; Blair, N.; Porro, G.

    2012-07-01

    This report presents the state-level results of a spatial analysis effort calculating energy technical potential, reported in square kilometers of available land, megawatts of capacity, and gigawatt-hours of generation, for six different renewable technologies. For this analysis, the system specific power density (or equivalent), efficiency (capacity factor), and land-use constraints were identified for each technology using independent research, published research, and professional contacts. This report also presents technical potential findings from previous reports.

  4. U.S. Renewable Energy Technical Potentials. A GIS-Based Analysis

    SciTech Connect (OSTI)

    Lopez, Anthony; Roberts, Billy; Heimiller, Donna; Blair, Nate; Porro, Gian

    2012-07-01

    This report presents the state-level results of a spatial analysis effort calculating energy technical potential, reported in square kilometers of available land, megawatts of capacity, and gigawatt-hours of generation, for six different renewable technologies. For this analysis, the system specific power density (or equivalent), efficiency (capacity factor), and land-use constraints were identified for each technology using independent research, published research, and professional contacts. This report also presents technical potential findings from previous reports.
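
    To make the reported units concrete, the core arithmetic of such a technical-potential estimate can be sketched as follows (the power density and capacity factor are illustrative assumptions, not values from the report):

        # Technical potential: available land -> capacity -> annual generation.
        available_land_km2 = 1000.0
        power_density_mw_per_km2 = 30.0     # assumed, e.g. for utility-scale PV
        capacity_factor = 0.20              # assumed

        capacity_mw = available_land_km2 * power_density_mw_per_km2
        generation_gwh = capacity_mw * capacity_factor * 8760 / 1000.0
        print(capacity_mw, round(generation_gwh))    # 30000.0 MW, 52560 GWh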

  5. Integrating multi-criteria decision analysis for a GIS-based hazardous waste landfill sitting in Kurdistan Province, western Iran

    SciTech Connect (OSTI)

    Sharifi, Mozafar; Hadidi, Mosslem; Vessali, Elahe; Mosstafakhani, Parasto; Taheri, Kamal; Shahoie, Saber; Khodamoradpour, Mehran

    2009-10-15

    The evaluation of a hazardous waste disposal site is a complicated process because it requires data from diverse social and environmental fields. These data often involve processing of a significant amount of spatial information, which can be handled by GIS as an important tool for land use suitability analysis. This paper presents a multi-criteria decision analysis alongside a geospatial analysis for the selection of hazardous waste landfill sites in Kurdistan Province, western Iran. The study employs a two-stage analysis to provide a spatial decision support system for hazardous waste management in a typically underdeveloped region. The purpose of the GIS stage was to perform an initial screening process to eliminate unsuitable land, followed by utilization of a multi-criteria decision analysis (MCDA) to identify the most suitable sites using information provided by regional experts with reference to the newly chosen criteria. Using 21 exclusionary criteria as input layers, masked maps were prepared. By creating various intermediate analysis map layers, a final overlay map was obtained representing candidate areas for hazardous waste landfill sites. In order to evaluate the different landfill sites produced by the overlay, a landfill suitability index system was developed representing the cumulative effects of the relative importance (weights) and suitability values of 14 non-exclusionary criteria, including several criteria resulting from field observation. Using this suitability index, 15 different sites were visited, and based on the numerical evaluation provided by the MCDA the most suitable sites were determined.
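
    A minimal sketch of the two-stage logic described above, with made-up layers and weights (the study's 21 exclusionary and 14 graded criteria are not reproduced here):

        import numpy as np

        # Stage 1: exclusionary screening -- a cell survives only if every mask allows it.
        slope_ok = np.array([[1, 1, 0], [1, 1, 1]])    # 1 = passes the criterion
        water_ok = np.array([[1, 0, 1], [1, 1, 1]])
        admissible = slope_ok & water_ok

        # Stage 2: weighted linear combination of graded (0-1) criteria on admissible cells.
        weights = {"distance_to_roads": 0.4, "geology": 0.6}   # relative importance from experts (assumed)
        layers = {"distance_to_roads": np.array([[0.2, 0.9, 0.5], [0.8, 0.4, 0.7]]),
                  "geology":           np.array([[0.6, 0.3, 0.9], [0.9, 0.5, 0.2]])}
        suitability = sum(w * layers[k] for k, w in weights.items()) * admissible
        print(suitability)    # higher values identify the more suitable candidate cells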

  6. Dynamic analysis of the urban-based low-carbon policy using system dynamics: Focused on housing and green space

    SciTech Connect (OSTI)

    Hong, Taehoon; Kim, Jimin; Jeong, Kwangbok; Koo, Choongwan

    2015-02-09

    To systematically manage the energy consumption of existing buildings, the government has to enforce greenhouse gas reduction policies. However, most of these policies are not properly executed because they do not consider various factors from the urban-level perspective. Therefore, this study aimed to conduct a dynamic analysis of an urban-based low-carbon policy using system dynamics, with a specific focus on housing and green space. This study was conducted in the following steps: (i) establishing the variables of urban-based greenhouse gas (GHG) emissions; (ii) creating a stock/flow diagram of urban-based GHG emissions; (iii) conducting an information analysis using system dynamics; and (iv) proposing the urban-based low-carbon policy. If a combined energy policy that uses the housing sector (30%) and the green space sector (30%) at the same time is implemented, 2020 CO2 emissions will be 7.23 million tons (i.e., 30.48% below 2020 business-as-usual), achieving the national carbon emissions reduction target (26.9%). The results of this study could contribute to managing and improving the fundamentals of urban-based low-carbon policies to reduce greenhouse gas emissions.
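
    The quoted percentage implies a 2020 business-as-usual level that can be checked directly (a two-line sketch; the tonnage is taken from the abstract):

        policy_2020 = 7.23e6                     # tonnes CO2 under the combined 30%/30% policy
        bau_2020 = policy_2020 / (1 - 0.3048)    # stated to be 30.48% below business-as-usual
        print(round(bau_2020 / 1e6, 2))          # about 10.4 million tonnes CO2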

  7. Quantitative Analysis of Variability and Uncertainty in Environmental Data and Models. Volume 1. Theory and Methodology Based Upon Bootstrap Simulation

    SciTech Connect (OSTI)

    Frey, H. Christopher; Rhodes, David S.

    1999-04-30

    This is Volume 1 of a two-volume set of reports describing work conducted at North Carolina State University sponsored by Grant Number DE-FG05-95ER30250 by the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.
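
    A minimal bootstrap sketch in the spirit of the methodology (the data and the statistic are illustrative; the report itself develops two-dimensional bootstrap methods that separate variability from uncertainty):

        import numpy as np

        rng = np.random.default_rng(0)
        sample = rng.lognormal(mean=0.0, sigma=0.5, size=50)   # observed (variable) data, assumed

        # Resample with replacement to characterize uncertainty in the sample mean.
        boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                               for _ in range(2000)])
        lo, hi = np.percentile(boot_means, [2.5, 97.5])
        print(f"mean = {sample.mean():.3f}, bootstrap 95% CI = ({lo:.3f}, {hi:.3f})")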

  8. COMMERCIALIZATION OF AN ATMOSPHERIC IRON-BASED CDCL PROCESS FOR POWER PRODUCTION. PHASE I: TECHNOECONOMIC ANALYSIS

    SciTech Connect (OSTI)

    Vargas, Luis

    2013-11-01

    Coal Direct Chemical Looping (CDCL) is an advanced oxy-combustion technology that has potential to enable substantial reductions in the cost and energy penalty associated with carbon dioxide (CO2) capture from coal-fired power plants. Through collaborative efforts, the Babcock & Wilcox Power Generation Group (B&W) and The Ohio State University (OSU) developed a conceptual design for a 550 MWe (net) supercritical CDCL power plant with greater than 90% CO2 capture and compression. Process simulations were completed to enable an initial assessment of its technical performance. A cost estimate was developed following DOE's guidelines as outlined in NETL's report “Quality Guidelines for Energy System Studies: Cost Estimation Methodology for NETL Assessments of Power Plant Performance” (2011/1455). The cost of electricity for the CDCL plant, without CO2 transportation and storage costs, is $102.67 per MWh, which corresponds to a 26.8% increase in cost of electricity (COE) compared with an air-fired pulverized-coal supercritical power plant. The cost of electricity depends strongly on the total plant cost and the cost of the oxygen carrier particles. The CDCL process could realize further savings by increasing the performance of the particles and reducing the plant size. During the techno-economic analysis, the team identified technology and engineering gaps that need to be closed to bring the technology to commercialization. The technology gaps are focused in five critical areas: (i) the moving bed reducer reactor, (ii) the fluidized bed combustor, (iii) the particle riser, (iv) oxygen-carrier particle properties, and (v) process operation. The key technology gaps are related to particle performance, particle manufacturing cost, and the operation of the reducer reactor. These technology gaps are to be addressed during Phase II of the project. The project team is proposing additional lab testing to be completed on the particle and a 3 MWth pilot facility

  9. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes

    SciTech Connect (OSTI)

    Baxter, Van D

    2006-11-01

    The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to have application to buildings constructed before 2020 as well, resulting in substantial reductions in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. Although the energy efficiency of heating, ventilating, and air-conditioning (HVAC) equipment has increased substantially in recent years, new approaches are needed to continue this trend. Dramatic efficiency improvements are necessary to enable progress toward the NZEH goals, and will require a radical rethinking of opportunities to improve system performance. The large reductions in HVAC energy consumption necessary to support the NZEH goals require a systems-oriented analysis approach that characterizes each element of energy consumption, identifies alternatives, and determines the most cost-effective combination of options. In particular, HVAC equipment must be developed that addresses the range of special needs of NZEH applications in the areas of reduced HVAC and water heating energy use, humidity control, ventilation, uniform comfort, and ease of zoning. In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH systems options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development

  10. Real Time Pricing as a Default or Optional Service for C&ICustomers: A Comparative Analysis of Eight Case Studies

    SciTech Connect (OSTI)

    Barbose, Galen; Goldman, Charles; Bharvirkar, Ranjit; Hopper,Nicole; Ting, Michael; Neenan, Bernie

    2005-08-01

    Demand response (DR) has been broadly recognized to be an integral component of well-functioning electricity markets, although currently underdeveloped in most regions. Among the various initiatives undertaken to remedy this deficiency, public utility commissions (PUC) and utilities have considered implementing dynamic pricing tariffs, such as real-time pricing (RTP), and other retail pricing mechanisms that communicate an incentive for electricity consumers to reduce their usage during periods of high generation supply costs or system reliability contingencies. Efforts to introduce DR into retail electricity markets confront a range of basic policy issues. First, a fundamental issue in any market context is how to organize the process for developing and implementing DR mechanisms in a manner that facilitates productive participation by affected stakeholder groups. Second, in regions with retail choice, policymakers and stakeholders face the threshold question of whether it is appropriate for utilities to offer a range of dynamic pricing tariffs and DR programs, or just ''plain vanilla'' default service. Although positions on this issue may be based primarily on principle, two empirical questions may have some bearing--namely, what level of price response can be expected through the competitive retail market, and whether establishing RTP as the default service is likely to result in an appreciable level of DR? Third, if utilities are to have a direct role in developing DR, what types of retail pricing mechanisms are most appropriate and likely to have the desired policy impact (e.g., RTP, other dynamic pricing options, DR programs, or some combination)? Given a decision to develop utility RTP tariffs, three basic implementation issues require attention. First, should it be a default or optional tariff, and for which customer classes? Second, what types of tariff design is most appropriate, given prevailing policy objectives, wholesale market structure, ratemaking

  11. Comparing large scale CCS deployment potential in the USA and China: a detailed analysis based on country-specific CO2 transport & storage cost curves

    SciTech Connect (OSTI)

    Dahowski, Robert T.; Davidson, Casie L.; Dooley, James J.

    2011-04-18

    The United States and China are the two largest emitters of greenhouse gases in the world and their projected continued growth and reliance on fossil fuels, especially coal, make them strong candidates for CCS. Previous work has revealed that both nations have over 1600 large electric utility and other industrial point CO2 sources as well as very large CO2 storage resources on the order of 2,000 billion metric tons (Gt) of onshore storage capacity. In each case, the vast majority of this capacity is found in deep saline formations. In both the USA and China, candidate storage reservoirs are likely to be accessible by most sources with over 80% of these large industrial CO2 sources having a CO2 storage option within just 80 km. This suggests a strong potential for CCS deployment as a meaningful option to efforts to reduce CO2 emissions from these large, vibrant economies. However, while the USA and China possess many similarities with regards to the potential value that CCS might provide, including the range of costs at which CCS may be available to most large CO2 sources in each nation, there are a number of more subtle differences that may help us to understand the ways in which CCS deployment may differ between these two countries in order for the USA and China to work together - and in step with the rest of the world - to most efficiently reduce greenhouse gas emissions. This paper details the first ever analysis of CCS deployment costs in these two countries based on methodologically comparable CO2 source and sink inventories, economic analysis, geospatial source-sink matching and cost curve modeling. This type of analysis provides a valuable insight into the degree to which early and sustained opportunities for climate change mitigation via commercial-scale CCS are available to the two countries, and could facilitate greater collaboration in areas where those opportunities overlap.

  12. Security analysis of boolean algebra based on Zhang-Wang digital signature scheme

    SciTech Connect (OSTI)

    Zheng, Jinbin

    2014-10-06

    In 2005, Zhang and Wang proposed an improved signature scheme that does not use a one-way hash function or message redundancy. In this paper, we show through an analysis based on Boolean algebra (for example, of bitwise exclusive-or operations) that this scheme has potential safety concerns, and we point out, based on analysis of the resulting assembly program segment, that the mapping between assembly instructions and machine code is not actually one-to-one, which can cause safety problems unknown to the software.

  13. Analysis of the multigroup model for muon tomography based threat detection

    SciTech Connect (OSTI)

    Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.

    2014-02-14

    We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
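
    A toy version of the kind of comparison described above: score each simulated scene with one scattering-angle statistic and summarize detectability as the area under the ROC curve (the angle distributions and widths are assumptions for illustration, not simulated muon transport):

        import numpy as np

        rng = np.random.default_rng(2)

        def scene_statistic(has_tungsten, n_muons=200):
            # Heavier material -> broader multiple-scattering angle distribution (widths assumed).
            sigma = 20.0 if has_tungsten else 12.0      # mrad, illustrative only
            theta = rng.normal(0.0, sigma, n_muons)
            return np.mean(theta**2)                    # the "average angle squared" statistic

        scores_bkg = np.array([scene_statistic(False) for _ in range(300)])
        scores_sig = np.array([scene_statistic(True) for _ in range(300)])

        # Area under the ROC curve = probability that a cube scene outscores an empty scene.
        auc = (scores_sig[:, None] > scores_bkg[None, :]).mean()
        print(f"AUC for the average angle-squared statistic: {auc:.3f}")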

  14. Case Studies

    Broader source: Energy.gov [DOE]

    The following case studies are examples of integrating renewable energy into Federal new construction and major renovation projects. Additional renewable energy case studies are also available.

  15. DRIVE Analysis Tool Generates Custom Vehicle Drive Cycles Based on Real-World Data (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2013-04-01

    This fact sheet from the National Renewable Energy Laboratory describes the Drive-Cycle Rapid Investigation, Visualization, and Evaluation (DRIVE) analysis tool, which uses GPS and controller area network data to characterize vehicle operation and produce custom vehicle drive cycles, analyzing thousands of hours of data in a matter of minutes.

  16. CT based computerized identification and analysis of human airways: A review

    SciTech Connect (OSTI)

    Pu Jiantao; Gu Suicheng; Liu Shusen; Zhu Shaocheng; Wilson, David; Siegfried, Jill M.; Gur, David

    2012-05-15

    As one of the most prevalent chronic disorders, airway disease is a major cause of morbidity and mortality worldwide. In order to understand its underlying mechanisms and to enable assessment of therapeutic efficacy of a variety of possible interventions, noninvasive investigation of the airways in a large number of subjects is of great research interest. Due to its high resolution in temporal and spatial domains, computed tomography (CT) has been widely used in clinical practices for studying the normal and abnormal manifestations of lung diseases, albeit there is a need to clearly demonstrate the benefits in light of the cost and radiation dose associated with CT examinations performed for the purpose of airway analysis. Whereas a single CT examination consists of a large number of images, manually identifying airway morphological characteristics and computing features to enable thorough investigations of airway and other lung diseases is very time-consuming and susceptible to errors. Hence, automated and semiautomated computerized analysis of human airways is becoming an important research area in medical imaging. A number of computerized techniques have been developed to date for the analysis of lung airways. In this review, we present a summary of the primary methods developed for computerized analysis of human airways, including airway segmentation, airway labeling, and airway morphometry, as well as a number of computer-aided clinical applications, such as virtual bronchoscopy. Both successes and underlying limitations of these approaches are discussed, while highlighting areas that may require additional work.

  17. Control Limits for Building Energy End Use Based on Engineering Judgment, Frequency Analysis, and Quantile Regression

    SciTech Connect (OSTI)

    Henze, G. P.; Pless, S.; Petersen, A.; Long, N.; Scambos, A. T.

    2014-02-01

    Approaches are needed to continuously characterize the energy performance of commercial buildings to allow for (1) timely response to excess energy use by building operators; and (2) building occupants to develop energy awareness and to actively engage in reducing energy use. Energy information systems, often involving graphical dashboards, are gaining popularity in presenting energy performance metrics to occupants and operators in a (near) real-time fashion. Such an energy information system, called Building Agent, has been developed at NREL and incorporates a dashboard for public display. Each building is, by virtue of its purpose, location, and construction, unique. Thus, assessing building energy performance is possible only in a relative sense, as comparison of absolute energy use out of context is not meaningful. In some cases, performance can be judged relative to average performance of comparable buildings. However, in cases of high-performance building designs, such as NREL's Research Support Facility (RSF) discussed in this report, relative performance is meaningful only when compared to historical performance of the facility or to a theoretical maximum performance of the facility as estimated through detailed building energy modeling.
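
    One common way to turn a building's historical data into control limits of this kind is quantile regression; the sketch below fits 10th and 90th percentile lines of daily energy use against cooling degree-days using synthetic data (the variable names, model form, and data are assumptions, not the RSF's actual dashboard logic):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        temp = rng.uniform(-5, 35, 365)                     # daily mean outdoor temperature, C
        cdd = np.maximum(temp - 18, 0)                      # cooling degree-days, base 18 C
        energy = 50 + 2.0 * cdd + rng.normal(0, 5, 365)     # synthetic daily energy use, kWh
        df = pd.DataFrame({"energy": energy, "cdd": cdd})

        # Fit the 10th and 90th percentile lines; use them as lower/upper control limits.
        limits = {q: smf.quantreg("energy ~ cdd", df).fit(q=q).params for q in (0.1, 0.9)}
        print(limits[0.1], limits[0.9], sep="\n")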

  18. Building America Special Research Project: High-R Walls Case...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Special Research Project: High-R Walls Case Study Analysis. This report considers a ...

  19. Economic Analysis for Conceptual Design of Supercritical O2-Based PC Boiler

    SciTech Connect (OSTI)

    Andrew Seltzer; Archie Robertson

    2006-09-01

    This report determines the capital and operating costs of two different oxygen-based, pulverized coal-fired (PC) power plants and compares their economics to that of a comparable, air-based PC plant. Rather than combust their coal with air, the oxygen-based plants use oxygen to facilitate capture/removal of the plant CO{sub 2} for transport by pipeline to a sequestering site. To provide a consistent comparison of technologies, all three plants analyzed herein operate with the same coal (Illinois No 6), the same site conditions, and the same supercritical pressure steam turbine (459 MWe). In the first oxygen-based plant, the pulverized coal-fired boiler operates with oxygen supplied by a conventional, cryogenic air separation unit, whereas, in the second oxygen-based plant, the oxygen is supplied by an oxygen ion transport membrane. In both oxygen-based plants a portion of the boiler exhaust gas, which is primarily CO{sub 2}, is recirculated back to the boiler to control the combustion temperature, and the balance of the flue gas undergoes drying and compression to pipeline pressure; for consistency, both plants operate with similar combustion temperatures and utilize the same CO{sub 2} processing technologies. The capital and operating costs of the pulverized coal-fired boilers required by the three different plants were estimated by Foster Wheeler and the balance of plant costs were budget priced using published data together with vendor supplied quotations. The cost of electricity produced by each of the plants was determined and oxygen-based plant CO{sub 2} mitigation costs were calculated and compared to each other as well as to values published for some alternative CO{sub 2} capture technologies.

  20. Genome-Based Metabolic Mapping and 13C Flux Analysis Reveal Systematic Properties of an Oleaginous Microalga Chlorella protothecoides

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wu, Chao; Xiong, Wei; Dai, Junbiao; Wu, Qingyu

    2014-12-15

    We report an integrated, genome-based flux balance analysis, metabolomics, and 13C-label profiling of phototrophic and heterotrophic metabolism in Chlorella protothecoides, an oleaginous green alga for biofuel. The green alga Chlorella protothecoides, capable of autotrophic and heterotrophic growth with rapid lipid synthesis, is a promising candidate for biofuel production. Based on the newly available genome knowledge of the alga, we reconstructed the compartmentalized metabolic network consisting of 272 metabolic reactions, 270 enzymes, and 461 encoding genes and simulated the growth in different cultivation conditions with flux balance analysis. Phenotype-phase plane analysis shows the conditions achieving the theoretical maximum of the biomass and corresponding fatty acid production rates for phototrophic cells (the ratio of photon uptake rate to CO2 uptake rate equals 8.4) and heterotrophic ones (the ratio of glucose uptake rate to O2 consumption rate reaches 2.4), respectively. Isotope-assisted liquid chromatography-mass spectrometry/mass spectrometry reveals higher metabolite concentrations in the glycolytic pathway and the tricarboxylic acid cycle in heterotrophic cells compared with autotrophic cells. We also observed consistently enhanced levels of ATP, reduced nicotinamide adenine dinucleotide (phosphate), acetyl-Coenzyme A, and malonyl-Coenzyme A in heterotrophic cells, consistent with strong lipid synthesis activity. To profile the flux map under experimental conditions, we applied nonstationary 13C metabolic flux analysis as a complementary strategy to flux balance analysis. The results reveal negligible photorespiratory fluxes and a metabolically low-active tricarboxylic acid cycle in phototrophic C. protothecoides. In comparison, high throughput of amphibolic reactions and the tricarboxylic acid cycle with no glyoxylate shunt activity was measured for heterotrophic cells. Taken together, the metabolic network modeling assisted
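
    The essence of flux balance analysis is a linear program: maximize a target flux subject to steady-state stoichiometry and uptake bounds. A toy three-reaction example (not the 272-reaction C. protothecoides network; all values are illustrative) can be written as:

        import numpy as np
        from scipy.optimize import linprog

        # Reactions: v1 (substrate uptake -> A), v2 (A -> byproduct), v3 (A -> biomass)
        S = np.array([[1.0, -1.0, -1.0]])            # stoichiometry of the internal metabolite A
        bounds = [(0, 10.0), (0, None), (0, None)]   # uptake capped at 10 (assumed units)
        c = np.array([0.0, 0.0, -1.0])               # linprog minimizes, so negate the biomass flux

        res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
        print(res.x)    # optimal flux distribution, here [10, 0, 10]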

  1. DanteR: an extensible R-based tool for quantitative analysis of -omics data

    SciTech Connect (OSTI)

    Taverner, Thomas; Karpievitch, Yuliya; Polpitiya, Ashoka D.; Brown, Joseph N.; Dabney, Alan R.; Anderson, Gordon A.; Smith, Richard D.

    2012-09-15

    Motivation: The size and complex nature of LC-MS proteomics data sets motivates development of specialized software for statistical data analysis and exploration. We present DanteR, a graphical R package that features extensive statistical and diagnostic functions for quantitative proteomics data analysis, including normalization, imputation, hypothesis testing, interactive visualization and peptide-to-protein rollup. More importantly, users can easily extend the existing functionality by including their own algorithms under the Add-On tab. Availability: DanteR and its associated user guide are available for download at http://omics.pnl.gov/software/. For Windows, a single click automatically installs DanteR along with the R programming environment. For Linux and Mac OS X, users must first install R and then follow instructions on the DanteR web site for package installation.

  2. Method of assessing a lipid-related health risk based on ion mobility analysis of lipoproteins

    DOE Patents [OSTI]

    Benner, W. Henry; Krauss, Ronald M.; Blanche, Patricia J.

    2010-12-14

    A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.

  3. Study of vaneless diffuser rotating stall based on two-dimensional inviscid flow analysis

    SciTech Connect (OSTI)

    Tsujimoto, Yoshinobu; Yoshida, Yoshiki [Osaka Univ., Toyonaka, Osaka (Japan); Mori, Yasumasa [Mitsubishi Motors Corp., Ohta, Tokyo (Japan)

    1996-03-01

    Rotating stalls in vaneless diffusers are studied from the viewpoint that they are basically a two-dimensional inviscid flow instability under the boundary conditions of vanishing velocity disturbance at the diffuser inlet and of vanishing pressure disturbance at the diffuser outlet. The linear analysis in the present report shows that the critical flow angle and the propagation velocity are functions of only the diffuser radius ratio. It is shown that the present analysis can reproduce most of the general characteristics observed in experiments: the critical flow angle, the propagation velocity, and the velocity and pressure disturbance fields. It is shown that the vanishing velocity disturbance at the diffuser inlet is caused by the nature of impellers as a resistance and an inertial resistance, which is generally strong enough to suppress the velocity disturbance at the diffuser inlet. This explains the general experimental observation that vaneless diffuser rotating stalls are not largely affected by the impeller.

  4. Cogeneration: Economic and technical analysis. (Latest citations from the NTIS data base). Published Search

    SciTech Connect (OSTI)

    Not Available

    1992-05-01

    The bibliography contains citations concerning economic and technical analysis of cogeneration systems. Topics include electric power and steam generation, dual-purpose and fuel cell power plants, and on-site power generation. Tower focus power plants, solar cogeneration, biomass conversion, coal liquefaction and gasification, and refuse derived fuels are discussed. References cite feasibility studies, performance and economic evaluation, environmental impacts, and institutional factors. (Contains 250 citations and includes a subject term index and title list.)

  5. Industrial applications of accelerator-based infrared sources: Analysis using infrared microspectroscopy

    SciTech Connect (OSTI)

    Bantignies, J.L.; Fuchs, G.; Wilhelm, C.; Carr, G.L.; Dumas, P.

    1997-09-01

    Infrared microspectroscopy, using a globar source, is now widely employed in the industrial environment for the analysis of various materials. Since synchrotron radiation is a much brighter source, an enhancement of an order of magnitude in lateral resolution can be achieved. Thus, the combination of IR microspectroscopy and synchrotron radiation provides a powerful tool enabling sample regions only a few microns in size to be studied. This opens up the potential for analyzing small particles. Some examples for hair, bitumen, and polymers are presented.

  6. Reservoir characterization based on tracer response and rank analysis of production and injection rates

    SciTech Connect (OSTI)

    Refunjol, B.T.; Lake, L.W.

    1997-08-01

    Quantification of the spatial distribution of properties is important for many reservoir-engineering applications. But, before applying any reservoir-characterization technique, the type of problem to be tackled and the information available should be analyzed. This is important because difficulties arise in reservoirs where production records are the only information for analysis. This paper presents the results of a practical technique to determine preferential flow trends in a reservoir. The technique is a combination of reservoir geology, tracer data, and Spearman rank correlation coefficient analysis. The Spearman analysis, in particular, will prove to be important because it appears to be insightful and uses injection/production data that are prevalent in circumstances where other data are nonexistent. The technique is applied to the North Buck Draw field, Campbell County, Wyoming. This work provides guidelines to assess information about reservoir continuity in interwell regions from widely available measurements of production and injection rates at existing wells. The information gained from the application of this technique can contribute to both the daily reservoir management and the future design, control, and interpretation of subsequent projects in the reservoir, without the need for additional data.
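
    A minimal sketch of the Spearman-rank screening idea described above (not the authors' code): rank-correlate an injector's rate history with each producer's rate history and treat high positive coefficients as evidence of preferential flow. Well names and rates below are illustrative.

        # Spearman rank correlation between injector and producer rate histories.
        import numpy as np
        from scipy.stats import spearmanr

        injector = np.array([500., 520., 480., 610., 650., 700., 690., 720.])   # rates, bbl/day
        producers = {
            "P-1": np.array([210., 215., 205., 260., 280., 300., 295., 310.]),
            "P-2": np.array([150., 149., 152., 148., 151., 150., 149., 150.]),
        }

        for name, rates in producers.items():
            rho, p_value = spearmanr(injector, rates)
            print(f"{name}: Spearman rho = {rho:.2f} (p = {p_value:.3f})")
        # A high positive rho suggests good interwell connectivity with the injector.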

  7. Analysis of ancient-river systems by 3D seismic time-slice technique: A case study in northeast Malay Basin, offshore Terengganu, Malaysia

    SciTech Connect (OSTI)

    Sulaiman, Noorzamzarina; Hamzah, Umar; Samsudin, Abdul Rahim

    2014-09-03

    Fluvial sandstones constitute one of the major clastic petroleum reservoir types in many sedimentary basins around the world. This study is based on the analysis of high-resolution, shallow (seabed to 500 m depth) 3D seismic data, from which three-dimensional (3D) time slices were generated that provide exceptional imaging of the geometry, dimension, and temporal and spatial distribution of fluvial channels. The study area is in the northeast of the Malay Basin, about 280 km to the east of Terengganu offshore. The Malay Basin comprises a thick (> 8 km), rift to post-rift Oligo-Miocene to Pliocene basin-fill. The youngest (Miocene to Pliocene), post-rift succession is dominated by a thick (15 km), cyclic succession of coastal plain and coastal deposits, which accumulated in a humid-tropical climatic setting. This study focuses on the Pleistocene to Recent (500 m thick) succession, which comprises a range of seismic facies; analysis of the two-dimensional (2D) seismic sections mainly reflects changes in fluvial channel style and river architecture. The succession has been divided into four seismic units (Units S1-S4), bounded by basin-wide stratal surfaces. Two types of boundaries have been identified: 1) a boundary defined by a regionally extensive erosion surface at the base of a prominent incised valley (S3 and S4); 2) a sequence boundary defined by more weakly incised, straight and low-sinuosity channels, interpreted as low-stand alluvial bypass channel systems (S1 and S2). Each unit displays a predictable vertical change in channel pattern and scale, with wide low-sinuosity channels at the base passing gradationally upwards into narrow high-sinuosity channels at the top. The wide variation in channel style and size is interpreted to be controlled mainly by sea-level fluctuations on the broadly flat Sundaland Platform.

  8. Structure-sequence based analysis for identification of conserved regions in proteins

    DOE Patents [OSTI]

    Zemla, Adam T; Zhou, Carol E; Lam, Marisa W; Smith, Jason R; Pardes, Elizabeth

    2013-05-28

    Disclosed are computational methods, and associated hardware and software products, for scoring conservation in a protein structure based on a computationally identified family or cluster of protein structures. A method of computationally identifying a family or cluster of protein structures is also disclosed herein.
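
    The patent's structure-based scoring is not reproduced in the abstract; as a generic illustration of scoring conservation across a family, the sketch below computes a per-column conservation score from a set of aligned sequences using normalized Shannon entropy. The alignment is invented for the example.

        # Per-column conservation score (1 = fully conserved) from aligned sequences.
        import math
        from collections import Counter

        alignment = [
            "MKTAYIAKQR",
            "MKTAYVAKQR",
            "MKSAYIAQQR",
        ]

        def column_conservation(column):
            counts = Counter(column)
            n = len(column)
            entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
            max_entropy = math.log2(min(n, 20))  # 20-letter amino acid alphabet
            return 1.0 - entropy / max_entropy if max_entropy > 0 else 1.0

        scores = [column_conservation(col) for col in zip(*alignment)]
        print([round(s, 2) for s in scores])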

  9. GMR-based PhC biosensor: FOM analysis and experimental studies

    SciTech Connect (OSTI)

    Syamprasad, Jagadeesh; Narayanan, Roshni; Joseph, Joby; Takahashi, Hiroki; Sandhu, Adarsh; Jindal, Rajeev

    2014-02-20

    Guided-mode resonance (GMR) based photonic crystal biosensors have many potential applications. In our work, we are trying to improve their figure-of-merit (FOM) values toward an optimum level through design and fabrication techniques. A robust and low-cost alternative to current biosensors is also explored through this research.
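
    As an illustration of the figure-of-merit analysis mentioned above, the sketch below assumes the common resonant-sensor definition FOM = bulk sensitivity / resonance linewidth; the numbers are illustrative, not measured values from this work.

        # Figure of merit for a resonance-based refractive-index sensor.
        sensitivity_nm_per_riu = 150.0   # resonance shift per refractive-index unit
        fwhm_nm = 2.5                    # full width at half maximum of the GMR peak

        fom = sensitivity_nm_per_riu / fwhm_nm
        print(f"FOM = {fom:.0f} per RIU")  # narrower resonance or higher sensitivity raises the FOM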

  10. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    SciTech Connect (OSTI)

    Milani, Gabriele; Valente, Marco

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models, and two different horizontal load distributions; (b) FE kinematic limit analyses performed using non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is considerably lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial in reducing the seismic vulnerability of this kind of historical structure.
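
    The non-dimensionalization described above can be illustrated with a small worked example (values are invented): dividing the base shear at failure by the weight of the structure gives a collapse multiplier, which scales gravity to an estimate of the peak ground acceleration that activates the mechanism.

        # Collapse PGA estimate from a non-dimensional base shear at failure.
        g = 9.81                          # m/s^2
        weight_kN = 12000.0               # total weight of the church (illustrative)
        base_shear_failure_kN = 1500.0    # from pushover or limit analysis (illustrative)

        alpha = base_shear_failure_kN / weight_kN    # non-dimensional collapse multiplier
        pga_collapse = alpha * g
        print(f"collapse multiplier = {alpha:.3f}, PGA ~ {pga_collapse:.2f} m/s^2")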

  11. Semantic Pattern Analysis for Verbal Fluency Based Assessment of Neurological Disorders

    SciTech Connect (OSTI)

    Sukumar, Sreenivas R; Ainsworth, Keela C; Brown, Tyler C

    2014-01-01

    In this paper, we present preliminary results of semantic pattern analysis of verbal fluency tests used for assessing cognitive psychological and neuropsychological disorders. We posit that recent advances in semantic reasoning and artificial intelligence can be combined to create a standardized computer-aided diagnosis tool to automatically evaluate and interpret verbal fluency tests. Towards that goal, we derive novel semantic similarity (phonetic, phonemic and conceptual) metrics and present the predictive capability of these metrics on a de-identified dataset of participants with and without neurological disorders.
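
    The paper's phonetic, phonemic, and conceptual similarity metrics are not given in the abstract; as a crude stand-in, the sketch below scores consecutive responses from a hypothetical fluency test with a normalized string-similarity ratio, the kind of pairwise measure such an analysis builds on.

        # Toy pairwise similarity over a verbal-fluency response sequence.
        import difflib

        responses = ["dog", "dot", "cat", "cap", "horse"]   # hypothetical test output

        def similarity(a, b):
            # 0 = dissimilar, 1 = identical
            return difflib.SequenceMatcher(None, a, b).ratio()

        for a, b in zip(responses, responses[1:]):
            print(f"{a} -> {b}: {similarity(a, b):.2f}")
        # Runs of highly similar items may indicate clustering strategies in the responses.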

  12. Regression Models for Demand Reduction based on Cluster Analysis of Load Profiles

    SciTech Connect (OSTI)

    Yamaguchi, Nobuyuki; Han, Junqiao; Ghatikar, Girish; Piette, Mary Ann; Asano, Hiroshi; Kiliccote, Sila

    2009-06-28

    This paper provides new regression models for the demand reduction achieved in Demand Response programs, for the purposes of ex ante program evaluation and of screening customers for enrollment in the programs. The proposed regression models employ load sensitivity to outside air temperature and representative load patterns derived from cluster analysis of customer baseline load as explanatory variables. The performance of the proposed models was examined in terms of the validity of the explanatory variables and the fit of the regressions, using actual load profile data from Pacific Gas and Electric Company's commercial and industrial customers who participated in the 2008 Critical Peak Pricing program, including Manual and Automated Demand Response.
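
    A minimal sketch of the modeling idea (not the authors' code or data): cluster customer baseline load profiles, then regress observed demand reduction on temperature sensitivity plus cluster membership. All data below are synthetic.

        # Cluster-then-regress sketch for demand-reduction modeling.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        profiles = rng.random((40, 24))            # 40 customers x 24 hourly baseline loads
        temp_sensitivity = rng.random(40)          # load sensitivity to outside air temperature
        reduction = 2.0 * temp_sensitivity + rng.normal(0, 0.1, 40)   # observed DR reduction

        clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
        X = np.column_stack([temp_sensitivity, np.eye(3)[clusters]])  # sensitivity + cluster dummies

        model = LinearRegression().fit(X, reduction)
        print("R^2 =", round(model.score(X, reduction), 3))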

  13. Design and Quasi-Equilibrium Analysis of a Distributed Frequency-Restoration Controller for Inverter-Based Microgrids

    SciTech Connect (OSTI)

    Ainsworth, Nathan G; Grijalva, Prof. Santiago

    2013-01-01

    This paper discusses a proposed frequency restoration controller which operates as an outer loop to frequency droop for voltage-source inverters. By quasi-equilibrium analysis, we show that the proposed controller is able to provide arbitrarily small steady-state frequency error while maintaining power sharing between inverters without the need for communication or centralized control. We derive the rate of convergence, discuss design considerations (including a fundamental trade-off that must be made in design), present a design procedure to meet a maximum frequency error requirement, and show simulation results verifying our analysis and design method. The proposed controller will allow flexible plug-and-play inverter-based networks to meet a specified maximum frequency error requirement.
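
    The paper's controller design is not reproduced here; as a generic illustration of droop plus frequency restoration, the sketch below adds a slow local integral correction to a droop law and drives the steady-state frequency error toward zero. Parameters are illustrative.

        # Droop control with a simple local frequency-restoration (integral) term.
        f_nom = 60.0      # nominal frequency, Hz
        m = 0.05          # droop slope, Hz per per-unit power
        k_i = 0.5         # restoration gain, 1/s
        dt = 0.01         # time step, s

        P = 0.6           # per-unit power supplied by the inverter
        correction = 0.0
        for _ in range(int(10 / dt)):             # simulate 10 s
            f = f_nom - m * P + correction        # droop law plus restoration offset
            correction += k_i * (f_nom - f) * dt  # integrate the frequency error

        print(f"steady-state frequency ~ {f:.4f} Hz")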

  14. Gene identification and analysis: an application of neural network-based information fusion

    SciTech Connect (OSTI)

    Matis, S.; Xu, Y.; Shah, M.B.; Mural, R.J.; Einstein, J.R.; Uberbacher, E.C.

    1996-10-01

    Identifying genes within large regions of uncharacterized DNA is a difficult undertaking and is currently the focus of many research efforts. We describe a gene localization and modeling system called GRAIL. GRAIL is a multiple-sensor, neural network-based system. It localizes genes in anonymous DNA sequence by recognizing gene features related to protein coding and splice sites, and then combines the recognized features using a neural network system. Localized coding regions are then optimally parsed into a gene model. RNA polymerase II promoters can also be predicted. Through years of extensive testing, GRAIL has consistently localized about 90 percent of the coding portions of test genes with a false positive rate of about 10 percent. A number of genes for major genetic diseases have been located through the use of GRAIL, and over 1000 research laboratories worldwide use GRAIL on a regular basis for localization of genes in their newly sequenced DNA.

  15. Moving beyond mass-based parameters for conductivity analysis of sulfonated polymers

    SciTech Connect (OSTI)

    Kim, Yu Seung; Pivovar, Bryan

    2009-01-01

    Proton conductivity of polymer electrolytes is critical for fuel cells and has therefore been studied in significant detail. The conductivity of sulfonated polymers has been linked to material characteristics in order to elucidate trends. Mass-based measurements of water uptake and ion exchange capacity are two of the most common material characteristics used to make comparisons between polymer electrolytes, but they have significant limitations when correlated with proton conductivity. These limitations arise in part because different polymers can have significantly different densities, and conduction happens over length scales more appropriately represented by volume measurements rather than mass. Herein, we establish and review volume-related parameters that can be used to compare the proton conductivity of different polymer electrolytes. Morphological effects on proton conductivity are also considered. Finally, the impact of these phenomena on designing next-generation sulfonated polymers for polymer electrolyte membrane fuel cells is discussed.
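
    The mass-versus-volume point can be made concrete with a small example (values are illustrative): two membranes with the same mass-based ion exchange capacity but different densities carry different numbers of acid groups per unit volume.

        # Volumetric acid concentration from mass-based IEC and polymer density.
        iec_mmol_per_g = 1.4   # identical mass-based ion exchange capacity

        for name, density_g_per_cm3 in [("polymer A", 1.2), ("polymer B", 2.0)]:
            acid_conc_mmol_per_cm3 = iec_mmol_per_g * density_g_per_cm3   # dry-volume basis
            print(f"{name}: {acid_conc_mmol_per_cm3:.2f} mmol acid per cm^3")
        # Equal mass-based IEC, but polymer B packs about 67% more acid groups per unit volume.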

  16. Extending PowerPack for Profiling and Analysis of High Performance Accelerator-Based Systems

    SciTech Connect (OSTI)

    Li, Bo; Chang, Hung-Ching; Song, Shuaiwen; Su, Chun-Yi; Meyer, Timmy; Mooring, John; Cameron, Kirk

    2014-12-01

    Accelerators offer a substantial increase in efficiency for high-performance systems, providing speedups for computational applications that leverage hardware support for highly parallel codes. However, the power use of some accelerators exceeds 200 watts at idle, which means their use at exascale comes with a significant increase in power at a time when we face a power ceiling of about 20 megawatts. Despite the growing domination of accelerator-based systems in the Top500 and Green500 lists of the fastest and most efficient supercomputers, there are few detailed studies comparing the power and energy use of common accelerators. In this work, we conduct detailed experimental studies of the power usage and distribution of Xeon-Phi-based systems in comparison to NVIDIA Tesla- and Sandy Bridge-based systems.

  17. Development of simplified design aids based on the results of simulation analysis

    SciTech Connect (OSTI)

    Balcomb, J.D.

    1980-01-01

    The Solar Load Ratio method for estimating the performance of passive solar heating systems is described. It is a simplified technique which is based on correlating the monthly solar savings fraction in terms of the ratio of monthly solar radiation absorbed by the building to total monthly building thermal load. The effect of differences between actual design parameters and those used to develop the correlations is estimated afterwards using sensitivity curves. The technique is fast and simple and sufficiently accurate for design purposes.
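
    A minimal sketch of the Solar Load Ratio bookkeeping described above; the correlation between solar savings fraction and SLR is system-specific, so the curve used below is a placeholder, not Balcomb's published correlation.

        # Solar Load Ratio (SLR) method, schematic monthly calculation.
        absorbed_solar_MJ = 2800.0    # monthly solar radiation absorbed by the building
        thermal_load_MJ = 4000.0      # total monthly building thermal load

        slr = absorbed_solar_MJ / thermal_load_MJ       # solar load ratio
        ssf = min(1.0, 0.5 * slr)                       # placeholder correlation SSF = f(SLR)
        auxiliary_MJ = (1.0 - ssf) * thermal_load_MJ    # auxiliary (backup) heating required

        print(f"SLR = {slr:.2f}, SSF = {ssf:.2f}, auxiliary heat = {auxiliary_MJ:.0f} MJ")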

  18. Exposure Based Health Issues Project Report: Phase I of High Level Tank Operations, Retrieval, Pretreatment, and Vitrification Exposure Based Health Issues Analysis

    SciTech Connect (OSTI)

    Stenner, Robert D.; Bowers, Harold N.; Kenoyer, Judson L.; Strenge, Dennis L.; Brady, William H.; Ladue, Buffi; Samuels, Joseph K.

    2001-11-30

    The Department of Energy (DOE) has the responsibility to understand the ''big picture'' of worker health and safety, which includes fully recognizing the vulnerabilities and associated programs necessary to protect workers at the various DOE sites across the complex. Exposure analysis and medical surveillance are key aspects for understanding this big picture, as is understanding current health and safety practices and how they may need to change to meet future health and safety management needs. The exposure-based health issues project was initiated to assemble the components necessary to understand potential exposure situations and their medical surveillance and clinical aspects. Phase I focused only on current Hanford tank farm operations and serves as a starting point for the overall project. It is also anticipated that once the pilot is fully developed for Hanford HLW (i.e., current operations, retrieval, pretreatment, vitrification, and disposal), the process and analysis methods developed will be available and applicable to other DOE operations and sites. The purpose of this Phase I project report is to present the health impact information collected regarding ongoing tank waste maintenance operations, to show the various aspects of health and safety involved in protecting workers, and to introduce the reader to the kinds of information that will need to be analyzed in order to effectively manage worker safety.

  19. Thermodynamic analysis of interactions between Ni-based solid oxide fuel cells (SOFC) anodes and trace species in a survey of coal syngas

    SciTech Connect (OSTI)

    Andrew Martinez; Kirk Gerdes; Randall Gemmen; James Postona

    2010-03-20

    A thermodynamic analysis was conducted to characterize the effects of trace contaminants in syngas derived from coal gasification on solid oxide fuel cell (SOFC) anode material. The effluents from 15 different gasification facilities were considered to assess the impact of fuel composition on anode susceptibility to contamination. For each syngas case, the study considers the magnitude of contaminant exposure resulting from operation of a warm gas cleanup unit at two different temperatures and operation of a nickel-based SOFC at three different temperatures. The contaminant elements arsenic (As), phosphorus (P), and antimony (Sb) are predicted to be present in warm gas cleanup effluent and will interact with the nickel (Ni) components of a SOFC anode. Phosphorus is the trace element found in the largest concentration of the three contaminants and is potentially the most detrimental. Poisoning was found to depend on the composition of the syngas as well as on system operating conditions. Results for all trace elements tended to show invariance with cleanup operating temperature, but results were sensitive to syngas bulk composition. Synthesis gas with high steam content tended to resist poisoning.

  20. Greenhouse gas mitigation options in the forestry sector of The Gambia: Analysis based on COMAP model

    SciTech Connect (OSTI)

    Jallow, B.P.

    1996-12-31

    Results of the 1993 Greenhouse Gas Emissions Inventory of The Gambia showed net CO{sub 2} emissions of over 1.66 x 10{sup 6} tons, of which about 1% was offset by uptake by plantations (0.01 x 10{sup 6} tons). This is a clear indication that there is a need to identify changes in land-use policy, law, and tenure that discourage forest clearing while significantly influencing the sustainable distribution of land among forestry, rangeland and livestock, and agriculture. About 11% of the total area of The Gambia is either fallow or barren flats that once supported vegetation and hence are still capable of supporting vegetation. The US Country Study Programme has provided the Government of The Gambia, through the National Climate Committee, with funds to conduct an Assessment of Mitigation Options to Reduce Greenhouse Gas Emissions. The Forestry Sector is one area for which the assessment is being conducted. The assessment is expected to end in September 1996. The Comprehensive Mitigation Analysis Process (COMAP) is one of the models supplied to the National Climate Committee by the Lawrence Berkeley Laboratory on behalf of the US Country Study Programme, and it is being used to conduct the analysis in The Gambia.

  1. Experimental and numerical analysis of metal leaching from fly ash-amended highway bases

    SciTech Connect (OSTI)

    Cetin, Bora; Aydilek, Ahmet H.; Li, Lin

    2012-05-15

    Highlights: evaluation of the leaching potential of fly ash-lime mixed soils; combined experimental and numerical analysis; Zn leaching decreases with increasing fly ash content while Ba, B, and Cu leaching increases; a decrease in lime content promotes leaching of Ba, B, and Cu while Zn leaching increases; numerical analysis predicted lower field metal concentrations. - Abstract: A study was conducted to evaluate the leaching potential of unpaved road materials (URM) mixed with lime-activated high-carbon fly ashes and to evaluate the groundwater impacts of barium, boron, copper, and zinc leaching. This objective was met by a combination of batch water leach tests, column leach tests, and computer modeling. The laboratory tests were conducted on soil alone, fly ash alone, and URM-fly ash-lime kiln dust mixtures. The results indicated that an increase in fly ash and lime content has significant effects on the leaching behavior of heavy metals from the URM-fly ash mixture. An increase in fly ash content and a decrease in lime content promoted leaching of Ba, B, and Cu, whereas Zn leaching was primarily affected by the fly ash content. Numerically predicted field metal concentrations were significantly lower than the peak metal concentrations obtained in laboratory column leach tests, and field concentrations decreased with time and distance due to dispersion in the soil vadose zone.

  2. Analysis of In-Use Fuel Economy Shortfall Based on Voluntarily Reported MPG Estimates

    SciTech Connect (OSTI)

    Greene, David L; Goeltz, Rick; Hopson, Dr Janet L; Tworek, Elzbieta

    2007-01-01

    The usefulness of the Environmental Protection Agency's (EPA) passenger car and light truck fuel economy estimates has been the subject of debate for the past three decades. For the labels on new vehicles and the fuel economy information given to the public, the EPA adjusts dynamometer test results downward by 10% for the city cycle and 22% for the highway cycle to better reflect real world driving conditions. These adjustment factors were developed in 1984 and their continued validity has repeatedly been questioned. In March of 2005 the U.S. Department of Energy (DOE) and EPA's fuel economy information website, www.fueleconomy.gov, began allowing users to voluntarily share fuel economy estimates. This paper presents an initial statistical analysis of more than 3,000 estimates submitted by website users. The analysis suggests two potentially important results: (1) adjusted, combined EPA fuel economy estimates appear to be approximately unbiased estimators of the average fuel economy consumers will experience in actual driving, and (2) the EPA estimates are highly imprecise predictors of any given individual's in-use fuel economy, an approximate 95% confidence interval being +/-7 MPG. These results imply that what is needed is not less biased adjustment factors for the EPA estimates but rather more precise methods of predicting the fuel economy individual consumers will achieve in their own driving.
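
    The label adjustments described above can be illustrated with a short worked example; the 55/45 city/highway harmonic weighting used for the combined figure is an assumption for illustration, not taken from this paper.

        # Adjusted label MPG from dynamometer test values (illustrative numbers).
        city_test_mpg = 30.0
        highway_test_mpg = 40.0

        city_label = city_test_mpg * (1 - 0.10)        # 10% downward city adjustment
        highway_label = highway_test_mpg * (1 - 0.22)  # 22% downward highway adjustment

        combined = 1.0 / (0.55 / city_label + 0.45 / highway_label)   # assumed harmonic weighting
        print(f"label city {city_label:.1f}, highway {highway_label:.1f}, combined {combined:.1f} MPG")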

  3. POD-based analysis of combustion images in optically accessible engines

    SciTech Connect (OSTI)

    Bizon, K.; Continillo, G.; Mancaruso, E.; Merola, S.S.; Vaglieco, B.M.

    2010-04-15

    This paper reports on 2D images of combustion-related luminosity taken in two optically accessible automobile engines of the most recent generation. The results are discussed to elucidate physical phenomena in the combustion chambers. Then, proper orthogonal decomposition (POD) is applied to the acquired images. The coefficients of the orthogonal modes are then used for the analysis of cycle variability, along with data of dynamic in-cylinder pressure and rate of heat release. The advantage is that statistical analysis can be run on a small number of scalar coefficients rather than on the full data set of pixel luminosity values. Statistics of the POD coefficients provide information on cycle variations of the luminosity field. POD modes are then discriminated by means of normality tests, to separate the mean from the coherent and the incoherent parts of the fluctuation of the luminosity field, in a non-truncated representation of the data. The morphology of the fluctuation components can finally be reconstructed by grouping coherent and incoherent modes. The structure of the incoherent component of the fluctuation is consistent with the underlying turbulent field. (author)
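
    A minimal sketch of snapshot POD as applied to an image stack (synthetic data here, not the engine images): each image is flattened into a column, the mean field is removed, and an SVD yields spatial modes plus per-cycle coefficients of the kind used for the cycle-variability statistics.

        # Snapshot POD of an image stack via the singular value decomposition.
        import numpy as np

        rng = np.random.default_rng(1)
        n_cycles, ny, nx = 50, 64, 64
        images = rng.random((n_cycles, ny, nx))             # stand-in for luminosity images

        snapshots = images.reshape(n_cycles, -1).T           # pixels x cycles
        mean_field = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

        modes = U                      # spatial POD modes, one column per mode
        coeffs = (np.diag(s) @ Vt).T   # per-cycle coefficients, one row per cycle
        energy = s**2 / np.sum(s**2)
        print("first 3 modes capture", round(100 * energy[:3].sum(), 1), "% of the fluctuation energy")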

  4. Femtosecond laser ablation-based mass spectrometry. An ideal tool for stoichiometric analysis of thin films

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    LaHaye, Nicole L.; Kurian, Jose; Diwakar, Prasoon K.; Alff, Lambert; Harilal, Sivanandan S.

    2015-08-19

    An accurate and routinely available method for stoichiometric analysis of thin films is a desideratum of modern materials science where a material’s properties depend sensitively on elemental composition. We thoroughly investigated femtosecond laser ablation-inductively coupled plasma-mass spectrometry (fs-LA-ICP-MS) as an analytical technique for determination of the stoichiometry of thin films down to the nanometer scale. The use of femtosecond laser ablation allows for precise removal of material with high spatial and depth resolution that can be coupled to an ICP-MS to obtain elemental and isotopic information. We used molecular beam epitaxy-grown thin films of LaPd(x)Sb2 and T´-La2CuO4 to demonstrate the capacity of fs-LA-ICP-MS for stoichiometric analysis and the spatial and depth resolution of the technique. Here we demonstrate that the stoichiometric information of thin films with a thickness of ~10 nm or lower can be determined. Furthermore, our results indicate that fs-LA-ICP-MS provides precise information on the thin film-substrate interface and is able to detect the interdiffusion of cations.

  5. Femtosecond laser ablation-based mass spectrometry. An ideal tool for stoichiometric analysis of thin films

    SciTech Connect (OSTI)

    LaHaye, Nicole L.; Kurian, Jose; Diwakar, Prasoon K.; Alff, Lambert; Harilal, Sivanandan S.

    2015-08-19

    An accurate and routinely available method for stoichiometric analysis of thin films is a desideratum of modern materials science where a material’s properties depend sensitively on elemental composition. We thoroughly investigated femtosecond laser ablation-inductively coupled plasma-mass spectrometry (fs-LA-ICP-MS) as an analytical technique for determination of the stoichiometry of thin films down to the nanometer scale. The use of femtosecond laser ablation allows for precise removal of material with high spatial and depth resolution that can be coupled to an ICP-MS to obtain elemental and isotopic information. We used molecular beam epitaxy-grown thin films of LaPd(x)Sb2 and T´-La2CuO4 to demonstrate the capacity of fs-LA-ICP-MS for stoichiometric analysis and the spatial and depth resolution of the technique. Here we demonstrate that the stoichiometric information of thin films with a thickness of ~10 nm or lower can be determined. Furthermore, our results indicate that fs-LA-ICP-MS provides precise information on the thin film-substrate interface and is able to detect the interdiffusion of cations.

  6. First principles analysis of lattice dynamics for Fe-based superconductors and entropically-stabilized phases

    SciTech Connect (OSTI)

    Hahn, Steven

    2012-07-20

    Modern calculations are becoming an essential, complementary tool to inelastic x-ray scattering studies, where x-rays are scattered inelastically to resolve meV phonons. Calculations of the inelastic structure factor for any value of Q assist in both planning the experiment and analyzing the results. Moreover, differences between the measured data and theoretical calculations help identify important new physics driving the properties of novel correlated systems. We have used such calculations to better and more efficiently measure the phonon dispersion and elastic constants of several iron pnictide superconductors. This dissertation describes calculations and measurements at room temperature in the tetragonal phase of CaFe{sub 2}As{sub 2} and LaFeAsO. In both cases, spin-polarized calculations imposing the antiferromagnetic order present in the low-temperature orthorhombic phase dramatically improve the agreement between theory and experiment. This is discussed in terms of the strong antiferromagnetic correlations that are known to persist in the tetragonal phase. In addition, we discuss a relatively new approach called self-consistent ab initio lattice dynamics (SCAILD), which goes beyond the harmonic approximation to include phonon-phonon interactions and produce a temperature-dependent phonon dispersion. We used this technique to study the HCP to BCC transition in beryllium.

  7. P2P-based botnets: structural analysis, monitoring, and mitigation

    SciTech Connect (OSTI)

    Yan, Guanhua; Eidenbenz, Stephan; Ha, Duc T; Ngo, Hung Q

    2008-01-01

    Botnets, which are networks of compromised machines controlled by one or a group of attackers, have emerged as one of the most serious security threats on the Internet. With an army of bots at the scale of tens of thousands of hosts, or even as large as 1.5 million PCs, the computational power of botnets can be leveraged to launch large-scale DDoS (Distributed Denial of Service) attacks, send spam emails, steal identities and financial information, and so on. As detection and mitigation techniques against botnets have been stepped up in recent years, attackers are also constantly improving their strategies to operate these botnets. The first generation of botnets typically employed IRC (Internet Relay Chat) channels as their command and control (C&C) centers. Though simple and easy to deploy, the centralized C&C mechanism of such botnets has made them prone to being detected and disabled. Against this backdrop, peer-to-peer (P2P) based botnets have emerged as a new generation of botnets which can conceal their C&C communication. Recently, P2P networks have emerged as a covert communication platform for malicious programs known as bots. As popular distributed systems, they allow bots to communicate easily while protecting the botmaster from being discovered. Existing work on P2P-based botnets mainly focuses on measurement of botnet sizes. In this work, through simulation, we study extensively the structure of P2P networks running Kademlia, one of a few widely used P2P protocols in practice. Our simulation testbed incorporates the actual code of a real Kademlia client software to achieve great realism, and distributed event-driven simulation techniques to achieve high scalability. Using this testbed, we analyze the scaling, reachability, clustering, and centrality properties of P2P-based botnets from a graph-theoretical perspective. We further demonstrate experimentally and theoretically that monitoring bot activities in a P2P network is difficult.
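
    As a toy illustration of the graph-theoretic measurements mentioned above, the sketch below computes clustering and degree centrality on a synthetic random graph standing in for the P2P overlay; it is not the authors' Kademlia testbed.

        # Graph metrics on a synthetic stand-in for a P2P botnet overlay.
        import networkx as nx

        overlay = nx.barabasi_albert_graph(n=1000, m=5, seed=42)   # synthetic overlay topology

        clustering = nx.average_clustering(overlay)
        centrality = nx.degree_centrality(overlay)
        top_hubs = sorted(centrality, key=centrality.get, reverse=True)[:5]

        print(f"average clustering coefficient: {clustering:.3f}")
        print("highest-centrality nodes (candidate monitoring points):", top_hubs)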

  8. C COAST. A PC-based program for the analysis of coastal processes using NOAA coastwatch data

    SciTech Connect (OSTI)

    Miller, R.L.; Decampo, J.

    1994-02-01

    As part of the NOAA Coastal Ocean Program, the CoastWatch program was created to provide low-cost, near real-time remotely sensed data of the coast and Great Lakes region of the United States to decision makers in the public and private sectors. This paper describes a PC-based program developed specifically for the display and analysis of NOAA's CoastWatch sea surface temperature (SST) processed imagery. This program, C COAST, provides an easy-to-use environment for users to incorporate SST images into their activities. 2 refs.

  9. BBRN Factsheet: Case Study: Community Engagement | Department...

    Office of Environmental Management (EM)

    Case Study: Community Engagement, on the Community Home Energy Retrofit Project (CHERP), based in Claremont, California.

  10. Large deformation analysis of laminated composite structures by a continuum-based shell element with transverse deformation

    SciTech Connect (OSTI)

    Wung, Pey Min.

    1989-01-01

    In this work, a finite element formulation and an associated computer program are developed for the transient large deformation analysis of laminated composite plate/shell structures. In order to satisfy the plate/shell surface traction boundary conditions and to obtain an accurate stress description while maintaining the low cost of the analysis, a newly assumed displacement field theory is formulated by adding higher-order terms to the transverse displacement component of the first-order shear deformation theory. The laminated shell theory is formulated using the Updated Lagrangian description of a general continuum-based theory with assumptions on thickness deformation. The transverse deflection is approximated through the thickness by a quartic polynomial of the thickness coordinate. As a result, both the plate/shell surface tractions (including nonzero tangential tractions and nonzero normal pressure) and the interlaminar shear stress continuity conditions at interfaces are satisfied simultaneously. Furthermore, the rotational degrees of freedom become layer-dependent quantities, and the laminate possesses a transverse deformation capability (i.e., the normal strain is no longer zero). Analytical integration through the thickness direction is performed for both the linear and the nonlinear analysis. Resultants of the stress integrations are expressed in terms of the laminate stacking sequence. Consequently, the laminate characteristics in the normal direction can be evaluated precisely and the cost of the overall analysis is reduced. The standard Newmark method and the modified Newton-Raphson method are used for the solution of the nonlinear dynamic equilibrium equations. Finally, a variety of numerical examples are presented to demonstrate the validity and efficiency of the finite element program developed herein.
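
    The quartic through-thickness approximation mentioned above can be written generically as follows; this is the general form implied by the description, not necessarily the exact shape functions used in the dissertation:

        w(x, y, z, t) = \sum_{k=0}^{4} w_k(x, y, t)\, z^{k}

    where z is the thickness coordinate and the w_k are in-plane functions determined by the element's degrees of freedom.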