National Library of Energy BETA

Sample records for base case analysis

  1. 1980 Base case and feasibility analysis

    SciTech Connect (OSTI)

    1993-03-01

    This report describes a task of documenting a "base case" and performing a feasibility analysis for a national residential energy efficiency program for new homes. The principal objective of the task was to estimate the energy consumption of typical homes built in 1980 and then to identify and assess the feasibility of methods to reduce that consumption by 50%. The goal of the program by the year 2000 is to reduce heating and cooling energy use in new homes built under the program to one-half of the energy use in typical new homes built in 1980. The task also calls for determining whether the program goal should be revised, based on the analysis.

  2. 1980 Base case and feasibility analysis

    SciTech Connect (OSTI)

    Not Available

    1993-03-01

    This report describes a task of documenting a "base case" and performing a feasibility analysis for a national residential energy efficiency program for new homes. The principal objective of the task was to estimate the energy consumption of typical homes built in 1980 and then to identify and assess the feasibility of methods to reduce that consumption by 50%. The goal of the program by the year 2000 is to reduce heating and cooling energy use in new homes built under the program to one-half of the energy use in typical new homes built in 1980. The task also calls for determining whether the program goal should be revised, based on the analysis.

  3. Definition of the base analysis case of the interim performance assessment

    SciTech Connect (OSTI)

    Mann, F.M.

    1995-12-01

    The base analysis case for the "Hanford Low-Level Tank Waste Interim Performance Assessment" is defined. Brief descriptions of the sensitivity cases are also given.

  4. Fuel Cycle Analysis Framework Base Cases for the IAEA/INPRO GAINS Collaborative Project

    SciTech Connect (OSTI)

    Brent Dixon

    2012-09-01

    Thirteen countries participated in the Collaborative Project GAINS “Global Architecture of Innovative Nuclear Energy Systems Based on Thermal and Fast Reactors Including a Closed Fuel Cycle”, which was the primary activity within the IAEA/INPRO Program Area B: “Global Vision on Sustainable Nuclear Energy” for the last three years. The overall objective of GAINS was to develop a standard framework for assessing future nuclear energy systems taking into account sustainable development, and to validate results through sample analyses. This paper details the eight scenarios that constitute the GAINS framework base cases for analysis of the transition to future innovative nuclear energy systems. The framework base cases provide a reference for users of the framework to start from in developing and assessing their own alternate systems. Each base case is described along with performance results against the GAINS sustainability evaluation metrics. The eight cases include four using a moderate growth projection and four using a high growth projection for global nuclear electricity generation through 2100. The cases are divided into two sets, addressing homogeneous and heterogeneous scenarios developed by GAINS to model global fuel cycle strategies. The heterogeneous world scenario considers three separate nuclear groups based on their fuel cycle strategies, with non-synergistic and synergistic cases. The framework base case analysis results show the impact of these different fuel cycle strategies while providing references for future users of the GAINS framework. A large number of scenario alterations are possible and can be used to assess different strategies, different technologies, and different assumptions about possible futures of nuclear power. Results can be compared to the framework base cases to assess where these alternate cases perform differently against the sustainability indicators.

  5. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha

    2015-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.

  6. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    SciTech Connect (OSTI)

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha

    2015-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.

  7. Geographically-Based Infrastructure Analysis

    Broader source: Energy.gov (indexed) [DOE]

    January 26, 2006 Geographically-Based Infrastructure Analysis (GIA) Utilizes GIS, ... Geographically-based Infrastructure Analysis GIS Transportation Technologies & Systems ...

  8. ARM - Field Campaign - CASES Data Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Campaign: CASES Data Analysis, 2004.07.01 - 2009.06.30. Lead Scientist: Margaret LeMone. Abstract: CASES Data Analysis: Potential Benefits. Diurnal variation of the Atmospheric Boundary Layer. Taken together, the two Cooperative Atmosphere Surface Exchange Study (CASES) field programs, CASES-97 (morning and evening) and CASES-99 (evening, night, morning), provide a robust

  9. Final base case community analysis: Indian Springs, Nevada for the Clark County socioeconomic impact assessment of the proposed high- level nuclear waste repository at Yucca Mountain, Nevada

    SciTech Connect (OSTI)

    1992-06-18

    This document provides a base case description of the rural Clark County community of Indian Springs in anticipation of change associated with the proposed high-level nuclear waste repository at Yucca Mountain. As the community closest to the proposed site, Indian Springs may be seen by site characterization workers, as well as workers associated with later repository phases, as a logical place to live. This report develops and updates information relating to a broad spectrum of socioeconomic variables, thereby providing a "snapshot" or "base case" look at Indian Springs in early 1992. With this as a background, future repository-related developments may be analytically separated from changes brought about by other factors, thus allowing for the assessment of the magnitude of local changes associated with the proposed repository. Given the size of the community, changes that may be considered small in an absolute sense may have relatively large impacts at the local level. Indian Springs is, in many respects, a unique community and a community of contrasts. An unincorporated town, it is a small yet important enclave of workers on large federal projects and home to employees of small-scale businesses and services. It is a rural community, but it is also close to the urbanized Las Vegas Valley. It is a desert community, but has good water resources. It is on flat terrain, but it is located within 20 miles of the tallest mountains in Nevada. It is a town in which various interest groups diverge on issues of local importance, but a sense of community remains an important feature of life. Finally, it has a sociodemographic history of both surface transience and underlying stability. If local land becomes available, Indian Springs has some room for growth but must first consider the historical effects of growth on the town and its desired direction for the future.

  10. Analysis of Restricted Natural Gas Supply Cases

    Reports and Publications (EIA)

    2004-01-01

    The four cases examined in this study have progressively greater impacts on overall natural gas consumption, prices, and supply. Compared to the Annual Energy Outlook 2004 reference case, the no Alaska pipeline case has the least impact; the low liquefied natural gas case has more impact; the low unconventional gas recovery case has even more impact; and the combined case has the most impact.

  11. Analysis of design tradeoffs for display case evaporators

    SciTech Connect (OSTI)

    Bullard, Clark

    2004-08-11

    A model for simulating a display case evaporator under frosting conditions has been developed, using a quasi-steady, finite-volume approach and a Newton-Raphson based solution algorithm. It is capable of simulating evaporators with multiple modules having different geometries, e.g., tube and fin thicknesses and pitch. The model was validated against data taken at two-minute intervals from a well-instrumented medium-temperature vertical display case, for two evaporators having very different configurations. The data from these experiments provided both the inputs for the model and the measurements against which modeling results were compared. The validated model has been used to generate some general guidelines for coil design. Effects of various geometrical parameters were quantified, and compressor performance data were used to express the results in terms of total power consumption. Using these general guidelines, a new prototype evaporator was designed for the subject display case, keeping in mind the current packaging restrictions and tube and fin availability. It is an optimum coil for the given external load conditions. Subsequently, the validated model was used in a more extensive analysis to design prototype coils with some of the current tube and fin spacing restrictions removed. A new microchannel-based suction line heat exchanger was installed in the display case system. The performance of this suction line heat exchanger is reported.
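
    The abstract above leans on a Newton-Raphson solver for its finite-volume model. As a rough illustration of that solution step alone, here is a minimal sketch applying Newton-Raphson with a finite-difference Jacobian to a toy two-equation residual; the residual function and every number are hypothetical and not taken from the report.

    ```python
    import numpy as np

    def newton_raphson(residual, x0, tol=1e-8, max_iter=50, h=1e-6):
        """Solve residual(x) = 0 via Newton-Raphson with a finite-difference
        Jacobian, as a quasi-steady model might do at each time step."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = residual(x)
            if np.linalg.norm(r) < tol:
                return x
            J = np.empty((len(r), len(x)))
            for j in range(len(x)):          # Jacobian column by column
                xp = x.copy()
                xp[j] += h
                J[:, j] = (residual(xp) - r) / h
            x = x - np.linalg.solve(J, r)
        raise RuntimeError("Newton-Raphson did not converge")

    # Toy two-equation "energy balance" (hypothetical, illustration only)
    f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
    print(newton_raphson(f, [1.0, 1.0]))     # converges to ~[1.0, 2.0]
    ```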

  12. Chapter 11. Community analysis-based methods

    SciTech Connect (OSTI)

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.
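
    Since the abstract stresses multivariate statistics over community profiles, a minimal sketch of the general idea follows: hierarchical clustering of relative-abundance profiles under Bray-Curtis dissimilarity, a common choice for community data. The profiles and host labels below are invented for illustration and do not come from the chapter.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # Hypothetical relative-abundance profiles (rows: samples, cols: taxa)
    profiles = np.array([
        [0.40, 0.30, 0.20, 0.10],   # host A, sample 1
        [0.38, 0.32, 0.18, 0.12],   # host A, sample 2
        [0.05, 0.10, 0.45, 0.40],   # host B, sample 1
        [0.08, 0.12, 0.42, 0.38],   # host B, sample 2
    ])

    # Bray-Curtis dissimilarity, then average-linkage clustering
    d = pdist(profiles, metric="braycurtis")
    clusters = fcluster(linkage(d, method="average"), t=2, criterion="maxclust")
    print(clusters)                 # e.g. [1 1 2 2]: samples group by host
    ```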

  13. Network-based Analysis and Insights | NISAC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Chemical Supply Chain Analysis, posted Mar 1, 2012: NISAC has...

  14. Economic Analysis Case Studies of Battery Energy Storage with...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Economic Analysis Case Studies of Battery Energy Storage with SAM Nicholas DiOrio, Aron Dobos, ... to use the storage system to increase the system value and mitigate demand charges. ...

  15. Integrated fire analysis: Application to offshore cases

    SciTech Connect (OSTI)

    Saubestre, V.; Khalfi, J.P.; Paygnard, J.C.

    1995-12-31

    Evaluating thermal loads from different fire scenarios and then the response of the structure to these loads covers several fields. It is also difficult and time consuming to implement: interfaces are necessary between the heat calculation, transient propagation, and structural analysis software packages. Nevertheless, it is necessary to design structures to accommodate heat loads in order to meet safety requirements or functional specifications. Elf, along with several operators and organizations, has sponsored a research project on this topic. The project, managed by SINTEF NBL (Norwegian Fire Research Laboratory), has delivered an integrated fire analysis software package which can be used to address design-to-fire-related issues in various contexts. The core modules of the integrated package are robust, well-validated analysis tools. This paper describes some benefits (technical or cost related) of using an integrated approach to assess the response of a structure to thermal loads. Three examples are described: the consequence of an accidental scenario on the living quarters in an offshore complex, the necessity for the reinforcement of a flareboom following a change in process, and the evaluation of the amount of insulation needed for a topside process primary structure. The paper focuses on the importance for the operator of having a practical tool which can lead to substantial cost savings while reducing the uncertainty linked to safety issues.

  16. Byfl: Compiler-based Application Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Byfl is a productivity tool that helps computational scientists analyze their code for...

  17. Well casing-based geophysical sensor apparatus, system and method

    DOE Patents [OSTI]

    Daily, William D.

    2010-03-09

    A geophysical sensor apparatus, system, and method for use in, for example, oil well operations, and in particular using a network of sensors emplaced along and outside oil well casings to monitor critical parameters in an oil reservoir and provide geophysical data remote from the wells. Centralizers are affixed to the well casings and the sensors are located in the protective spheres afforded by the centralizers to keep from being damaged during casing emplacement. In this manner, geophysical data may be collected from a sub-surface volume, e.g., an oil reservoir, and transmitted for analysis. Preferably, data from multiple sensor types, such as ERT and seismic data, are combined to provide real time knowledge of the reservoir and processes such as primary and secondary oil recovery.

  18. Geographically-Based Infrastructure Analysis for California

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Geographically-Based Infrastructure Analysis for California. Joan Ogden, Institute of Transportation Studies, University of California, Davis. Presented at the USDOE Hydrogen Transition Analysis Meeting, Washington, DC, August 9-10, 2006. Acknowledgments: UC Davis researchers Michael Nicholas, Dr. Marc Melaina, Dr. Marshall Miller, Dr. Chris Yang; USDOE: Dr. Sig Gronich. Research support: USDOE; H2 Pathways Program sponsors at UC Davis. Refueling station siting and sizing are key aspects of designing H2

  19. Business Case Analysis of Prototype Fabrication Division Recapitalization Plan. Summary

    SciTech Connect (OSTI)

    Booth, Steven Richard; Benson, Faith Ann; Dinehart, Timothy Grant

    2015-04-30

    Business case studies were completed to support procurement of new machines and capital equipment in the Prototype Fabrication (PF) Division SM-39 and TA-03-0102 machine shops. Economic analysis was conducted for replacing the Mazak 30Y Mill-Turn Machine in SM-39, the Haas Vertical CNC Mill in Building 102, and the Hardinge Q10/65-SP Lathe in SM-39. Analysis was also conducted for adding a NanoTech Lathe in Building 102 and a new electrical discharge machine (EDM) in SM-39 to augment current capabilities. To determine the value of switching machinery, a baseline scenario was compared with a future scenario where new machinery was purchased and installed. Costs and benefits were defined via interviews with subject matter experts.

  20. Chiller condition monitoring using topological case-based modeling

    SciTech Connect (OSTI)

    Tsutsui, Hiroaki; Kamimura, Kazuyuki

    1996-11-01

    To increase energy efficiency and economy, commercial building projects now often utilize centralized, shared sources of heat such as district heating and cooling (DHC) systems. To maintain efficiency, precise monitoring and scheduling of maintenance for chillers and heat pumps is essential. Low-performance operation results in energy loss, while unnecessary maintenance is expensive and wasteful. Plant supervisors are responsible for scheduling and supervising maintenance. Modeling systems that assist in analyzing system deterioration are of great benefit for these tasks. Topological case-based modeling (TCBM) (Tsutsui et al. 1993; Tsutsui 1995) is an effective tool for chiller performance deterioration monitoring. This paper describes TCBM and its application to this task using recorded historical performance data.
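
    TCBM itself is described in the cited papers, not here; as a loose stand-in for the underlying idea of predicting expected performance from a base of historical cases, the sketch below uses k-nearest-neighbor regression over invented operating cases and flags a chiller whose measured COP drops below the case-based expectation. The features, data, and 0.2 tolerance are all assumptions.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    # Hypothetical historical cases: [load fraction, ambient temp C] -> COP
    X_hist = np.array([[0.5, 25], [0.7, 30], [0.9, 35], [0.6, 28], [0.8, 32]])
    cop_hist = np.array([4.8, 4.5, 4.0, 4.6, 4.2])
    model = KNeighborsRegressor(n_neighbors=3).fit(X_hist, cop_hist)

    # Compare a new observation with the case-base prediction
    x_now, cop_now = np.array([[0.7, 30]]), 4.1
    expected = model.predict(x_now)[0]
    if cop_now < expected - 0.2:    # tolerance is an assumed tuning parameter
        print(f"possible deterioration: COP {cop_now} vs expected {expected:.2f}")
    ```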

  1. A review of recent NEPA alternatives analysis case law

    SciTech Connect (OSTI)

    Smith, Michael D. E-mail: michael.smith@humboldt.edu

    2007-03-15

    According to the Council on Environmental Quality (CEQ) Regulations for implementing the National Environmental Policy Act (NEPA), the analysis and comparison of alternatives is considered the 'heart' of the NEPA process. Although over 20 years have passed since the original mandate appeared to construct and assess a 'reasonable range' of alternatives contained in the CEQ Regulations, there is a perception that there is still a significant amount of confusion about what exactly constitutes a legally-compliant alternatives analysis. One manifestation of this confusion is the increasing amount of litigation over the alternatives analysis in NEPA documents. This study examined decisions on challenges to alternative analyses contained in federal agency NEPA documents in federal Courts of Appeals for the ten-year period 1996-2005. The results show that federal agencies are overwhelmingly successful against such challenges - winning 30 of the 37 cases. The most common challenge was that federal agencies had not included a full reasonable range of alternatives, while the second most frequent was that agencies had improperly constructed their purpose and need for their projects. Brief descriptions of several of the key court decisions are provided that illustrate the main factors that led to agencies being successful, as well as being unsuccessful, in their court challenges. The results provide little support for recent calls to amend the NEPA Statute and the CEQ Regulations to better clarify the requirements for alternatives analysis. The conclusion to the study focuses on practical steps NEPA practitioners can take to prepare their alternatives analyses in a manner that fulfills the requirements of the NEPA Statute and Council on Environmental Quality (CEQ) Regulations and makes them less vulnerable to an unfavorable court decision if legally challenged.

  2. Economic Analysis Case Studies of Battery Energy Storage with SAM

    SciTech Connect (OSTI)

    DiOrio, Nicholas; Dobos, Aron; Janzou, Steven

    2015-11-01

    Interest in energy storage has continued to increase as states like California have introduced mandates and subsidies to spur adoption. This energy storage includes customer-sited behind-the-meter storage coupled with photovoltaics (PV). This paper presents case study results from California and Tennessee, which were performed to assess the economic benefit of customer-installed systems. Different dispatch strategies, including manual scheduling and automated peak-shaving, were explored to determine ideal ways to use the storage system to increase the system value and mitigate demand charges. Incentives, complex electric tariffs, and site-specific load and PV data were used to perform detailed analysis. The analysis was performed using the free, publicly available System Advisor Model (SAM) tool. We find that installation of photovoltaics with a lithium-ion battery system priced at $300/kWh in Los Angeles under a high demand charge utility rate structure and dispatched using perfect day-ahead forecasting yields a positive net present value, while all other scenarios cost the customer more than the savings accrued.
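
    The dispatch strategies named above are implemented inside SAM; purely to illustrate the peak-shaving idea, the sketch below greedily discharges a battery whenever net load exceeds a demand target and recharges below it. All sizes, efficiencies, and load/PV profiles are invented, and SAM's actual dispatch logic is considerably more detailed.

    ```python
    import numpy as np

    def peak_shave(load_kw, pv_kw, cap_kwh, power_kw, eta=0.95, target_kw=None):
        """Greedy peak-shaving sketch over an hourly horizon."""
        net = np.asarray(load_kw) - np.asarray(pv_kw)
        target = np.percentile(net, 75) if target_kw is None else target_kw
        soc, grid = cap_kwh / 2, []              # start half full (assumption)
        for p in net:
            if p > target:                       # discharge to shave the peak
                d = min(p - target, power_kw, soc * eta)
                soc -= d / eta
                grid.append(p - d)
            else:                                # recharge using grid headroom
                c = min(target - p, power_kw, (cap_kwh - soc) / eta)
                soc += c * eta
                grid.append(p + c)
        return np.array(grid)

    load = np.array([40, 45, 60, 90, 120, 110, 70, 50], dtype=float)  # kW
    pv = np.array([0, 0, 10, 25, 30, 20, 5, 0], dtype=float)          # kW
    print(peak_shave(load, pv, cap_kwh=50, power_kw=25).max())  # shaved peak
    ```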

  3. Economic Analysis for Conceptual Design of Supercritical O2-Based...

    Office of Scientific and Technical Information (OSTI)

    Economic Analysis for Conceptual Design of Supercritical O2-Based PC Boiler ...

  4. Preliminary Analysis of Texas Instrument Hercules Flash-based...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Preliminary Analysis of Texas Instrument Hercules Flash-based Microcontroller ...

  5. Topology-based Feature Definition and Analysis

    SciTech Connect (OSTI)

    Weber, Gunther H.; Bremer, Peer-Timo; Gyulassy, Attila; Pascucci, Valerio

    2010-12-10

    Defining high-level features, detecting them, tracking them and deriving quantities based on them is an integral aspect of modern data analysis and visualization. In combustion simulations, for example, burning regions, which are characterized by high fuel-consumption, are a possible feature of interest. Detecting these regions makes it possible to derive statistics about their size and track them over time. However, features of interest in scientific simulations are extremely varied, making it challenging to develop cross-domain feature definitions. Topology-based techniques offer an extremely flexible means for general feature definitions and have proven useful in a variety of scientific domains. This paper will provide a brief introduction to topological structures like the contour tree and Morse-Smale complex and show how to apply them to define features in different science domains such as combustion. The overall goal is to provide an overview of these powerful techniques and start a discussion of how these techniques can aid in the analysis of astrophysical simulations.
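
    Contour trees and Morse-Smale complexes are substantial data structures; as the simplest topological proxy for the "burning region" example above, the sketch below extracts connected components of a superlevel set of a fuel-consumption field and reports their sizes. The random field and the 90th-percentile threshold are placeholders, not values from the paper.

    ```python
    import numpy as np
    from scipy import ndimage

    # Stand-in 2-D fuel-consumption field from a combustion snapshot
    rng = np.random.default_rng(0)
    field = ndimage.gaussian_filter(rng.random((64, 64)), sigma=4)

    # Superlevel set: cells burning above a chosen threshold
    threshold = np.percentile(field, 90)
    labels, n = ndimage.label(field >= threshold)

    # Per-feature statistics, e.g. the size of each burning region in cells
    sizes = ndimage.sum(np.ones_like(field), labels, index=range(1, n + 1))
    print(n, sizes)
    ```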

  6. Bismuth-based electrochemical stripping analysis

    DOE Patents [OSTI]

    Wang, Joseph

    2004-01-27

    Method and apparatus for trace metal detection and analysis using bismuth-coated electrodes and electrochemical stripping analysis. Both anodic stripping voltammetry and adsorptive stripping analysis may be employed.

  7. Geographically Based Hydrogen Demand and Infrastructure Analysis...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation by NREL's Margo Melendez at the 2010 - 2025 Scenario Analysis for Hydrogen ... More Documents & Publications 2010 - 2025 Scenario Analysis Meeting Agenda for August 9 - ...

  8. Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This project will develop a model for seismicity-based reservoir characterization (SBRC) by combining rock mechanics, finite element modeling, and geo-statistical concepts to establish

  9. Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity

    Broader source: Energy.gov [DOE]

    Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity presentation at the April 2013 peer review meeting held in Denver, Colorado.

  10. 20th International Conference on Case Based Reasoning | GE Global...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Efficiency of Scientific Data Analysis: Scientific ... other traditional Artificial Intelligence (AI) algorithms out there. ... Basically, the big take away is that while most AI ...

  11. Load flow analysis: Base cases, data, diagrams, and results ...

    Office of Scientific and Technical Information (OSTI)

    The report summarizes the load flow model construction, simulation, and validation and describes the general capabilities of an information query system designed to access load ...

  12. Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based Stochastic Analysis of Injection-induced Seismicity; DOE 2010 Geothermal Technologies Program Peer Review Report.

  13. Geographically-Based Infrastructure Analysis for California | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation by Joan Ogden of the University of California at the 2010 - 2025 Scenario Analysis for Hydrogen Fuel Cell Vehicles and Infrastructure Meeting on August 9 - 10, 2006 in Washington, D.C. ogden_geo_infrastructure_analysis.pdf (5.39 MB)

  14. Cluster Analysis-Based Approaches for Geospatiotemporal Data...

    Office of Scientific and Technical Information (OSTI)

    Cluster Analysis-Based Approaches for Geospatiotemporal Data Mining of Massive Data Sets for Identification of Forest Threats. Mills, Richard T (ORNL); Hoffman, Forrest M...

  15. Physics-Based Constraints in the Forward Modeling Analysis of...

    Office of Scientific and Technical Information (OSTI)

    Physics-Based Constraints in the Forward Modeling Analysis of Time-Correlated Image Data (Long Version) ...

  16. NETL - Petroleum-Based Fuels Life Cycle Greenhouse Gas Analysis...

    Open Energy Info (EERE)

    Tool Summary. Name: NETL - Petroleum-Based Fuels Life Cycle Greenhouse Gas Analysis 2005 Baseline Model. Agency/Company/Organization: National Energy Technology...

  17. Copula-Based Flood Frequency Analysis at Ungauged Basin Confluences...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Copula-Based Flood Frequency Analysis at ... sustain user needs while also posing an increased flooding risk from multiple tributaries. ...

  18. A case study of abnormal conditions and events (ACE) analysis

    SciTech Connect (OSTI)

    Reeves, R.; Hicks, G.; Karrasch, B.

    1995-08-01

    In August of 1993, EPRI initiated a project to perform an evaluation of the application of various methodologies for performing Abnormal Conditions and Events (ACE) analysis on computer systems used in nuclear plants. This paper discusses the application of ACE analysis techniques to two systems designed for the Tennessee Valley Authority (TVA) Browns Ferry Nuclear (BFN) plant. Further details can be obtained from EPRI TR-104595, "Abnormal Conditions and Events Analysis for Instrumentation and Controls Systems," which is scheduled for publication in December 1994.

  19. Topology-based Visualization and Analysis of High-dimensional...

    Office of Scientific and Technical Information (OSTI)

    Topology-based Visualization and Analysis of High-dimensional Data and Time-varying Data at the Extreme Scale ...

  20. Physics-based constraints in the forward modeling analysis of...

    Office of Scientific and Technical Information (OSTI)

    Conference: Physics-based constraints in the forward modeling analysis of time-correlated image data ...

  1. Physics-Based Constraints in the Forward Modeling Analysis of...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Physics-Based Constraints in the Forward Modeling Analysis of Time-Correlated Image Data (Long Version) ...

  2. Geographically Based Hydrogen Demand and Infrastructure Analysis

    Broader source: Energy.gov [DOE]

    Presentation by NREL's Margo Melendez at the 2010 - 2025 Scenario Analysis for Hydrogen Fuel Cell Vehicles and Infrastructure Meeting on August 9 - 10, 2006 in Washington, D.C.

  3. Overview of New Tools to Perform Safety Analysis: BWR Station Black Out Test Case

    SciTech Connect (OSTI)

    D. Mandelli; C. Smith; T. Riley; J. Nielsen; J. Schroeder; C. Rabiti; A. Alfonsi; Cogliati; R. Kinoshita; V. Pasucci; B. Wang; D. Maljovec

    2014-06-01

    Dynamic Probabilistic Risk Assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP, MELCOR) with simulation controller codes (e.g., RAVEN, ADAPT). While system simulator codes accurately model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic, operating procedures) and stochastic (e.g., component failures, parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by: 1) sampling values of a set of parameters from the uncertainty space of interest (using the simulation controller codes), and 2) simulating the system behavior for that specific set of parameter values (using the system simulator codes). For complex systems, one of the major challenges in using DPRA methodologies is to analyze the large amount of information (i.e., the large number of scenarios) generated, where clustering techniques are typically employed to allow users to better organize and interpret the data. In this paper, we focus on the analysis of a nuclear simulation dataset that is part of the Risk Informed Safety Margin Characterization (RISMC) Boiling Water Reactor (BWR) station blackout (SBO) case study. We apply a software tool that provides the domain experts with an interactive analysis and visualization environment for understanding the structures of such high-dimensional nuclear simulation datasets. Our tool encodes traditional and topology-based clustering techniques, where the latter partitions the data points into clusters based on their uniform gradient flow behavior. We demonstrate through our case study that both types of clustering techniques complement each other in bringing enhanced structural understanding of the data.
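
    A hedged sketch of the two DPRA steps described above: sample the uncertainty space with a controller-like driver, then run a system simulator once per sampled scenario. The `system_simulator` stand-in and its parameter distributions are invented; a real study would drive RELAP or MELCOR through RAVEN or ADAPT.

    ```python
    import numpy as np

    def system_simulator(params):
        """Stand-in for a system code (hypothetical toy response):
        returns a peak clad temperature for one sampled scenario."""
        t_fail, recovered = params
        return 800.0 + 40.0 * t_fail - 60.0 * recovered

    rng = np.random.default_rng(42)
    n = 1000
    # Step 1: sample the uncertainty space (controller code's job)
    samples = np.column_stack([
        rng.exponential(scale=2.0, size=n),   # component failure time [h]
        rng.binomial(1, 0.8, size=n),         # AC power recovered? (0/1)
    ])
    # Step 2: simulate each scenario (system code's job)
    peak_t = np.array([system_simulator(s) for s in samples])
    print("P(peak T > 1200):", np.mean(peak_t > 1200.0))
    ```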

  4. Building America Special Research Project: High-R Walls Case Study Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This report considers a number of promising wall systems with improved thermal control to improve plant-wide performance. Unlike previous studies, it considers performance in a more realistic manner, including some true three-dimensional heat flow and the relative risk of moisture damage.

  5. Analysis of Energy Efficiency Program Impacts Based on Program Spending

    U.S. Energy Information Administration (EIA) Indexed Site

    Analysis of Energy Efficiency Program Impacts Based on Program Spending, May 2015. This report was prepared by the U.S. Energy Information Administration (EIA), the statistical and analytical agency within the U.S. Department of Energy. By law, EIA's data, analyses, and forecasts are

  6. 2007 Wholesale Power Rate Case Final Proposal : Risk Analysis Study.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2006-07-01

    BPA's operating environment is filled with numerous uncertainties, and thus the rate-setting process must take into account a wide spectrum of risks. The objective of the Risk Analysis is to identify, model, and analyze the impacts that key risks have on BPA's net revenue (total revenues less total expenses). This is carried out in two distinct steps: a risk analysis step, in which the distributions, or profiles, of operating and non-operating risks are defined, and a risk mitigation step, in which different rate tools are tested to assess their ability to recover BPA's costs in the face of this uncertainty. Two statistical models are used in the risk analysis step for this rate proposal, the Risk Analysis Model (RiskMod) and the Non-Operating Risk Model (NORM), while a third model, the ToolKit, is used to test the effectiveness of rate tool options in the risk mitigation step. RiskMod is discussed in Sections 2.1 through 2.4, the NORM is discussed in Section 2.5, and the ToolKit is discussed in Section 3. The models function together so that BPA can develop rates that cover all of its costs and provide a high probability of making its Treasury payments on time and in full during the rate period. By law, BPA's payments to Treasury are the lowest priority for revenue application, meaning that payments to Treasury are the first to be missed if financial reserves are insufficient to pay all bills on time. For this reason, BPA measures its potential for recovering costs in terms of the probability of being able to make Treasury payments on time (also known as Treasury Payment Probability or TPP).
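
    As a toy version of the risk-analysis-to-TPP chain just described, the sketch below Monte Carlo samples net revenue and estimates the probability that starting reserves plus net revenue cover the Treasury payment. The distributions and dollar figures are illustrative assumptions, not BPA's; RiskMod, the NORM, and the ToolKit model far more structure than this.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000

    # Hypothetical annual distributions ($ millions, invented)
    revenues = rng.normal(3300, 400, n)     # hydro/market-price driven
    expenses = rng.normal(2900, 150, n)
    reserves_start = 500.0
    treasury_payment = 700.0

    # TPP: reserves plus net revenue cover the Treasury payment
    tpp = np.mean(reserves_start + (revenues - expenses) >= treasury_payment)
    print(f"TPP ~ {tpp:.1%}")
    ```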

  7. Desiccant-Based Preconditioning Market Analysis

    SciTech Connect (OSTI)

    Fischer, J.

    2001-01-11

    A number of important conclusions can be drawn as a result of this broad, first-phase market evaluation. The more important conclusions include the following: (1) A very significant market opportunity will exist for specialized outdoor air-handling units (SOAHUs) as more construction and renovation projects are designed to incorporate the recommendations made by the ASHRAE 62-1989 standard. Based on this investigation, the total potential market is currently $725,000,000 annually (see Table 6, Sect. 3). Based on the market evaluations completed, it is estimated that approximately $398,000,000 (55%) of this total market could be served by DBC systems if they were made cost-effective through mass production. Approximately $306,000,000 (42%) of the total can be served by a non-regenerated, desiccant-based total recovery approach, based on the information provided by this investigation. Approximately $92,000,000 (13%) can be served by a regenerated desiccant-based cooling approach (see Table 7, Sect. 3). (2) A projection of the market selling price of various desiccant-based SOAHU systems was prepared using prices provided by Trane for central-station, air-handling modules currently manufactured. The wheel-component pricing was added to these components by SEMCO. This resulted in projected pricing for these systems that is significantly less than that currently offered by custom suppliers (see Table 4, Sect. 2). Estimated payback periods for all SOAHU approaches were quite short when compared with conventional over-cooling and reheat systems. Actual paybacks may vary significantly depending on site-specific considerations. (3) In comparing cost vs benefit of each SOAHU approach, it is critical that the total system design be evaluated. For example, the cost premium of a DBC system is very significant when compared to a conventional air handling system, yet the reduced chiller, boiler, cooling tower, and other expense often equals or exceeds this premium, resulting in a
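
    A quick check of the market-share arithmetic quoted in conclusion (1), using only the dollar figures from the abstract; the percentages reproduce after rounding.

    ```python
    # Figures from the abstract; shares rounded as in the original text
    total = 725_000_000
    segments = {
        "DBC systems": 398_000_000,
        "non-regenerated total recovery": 306_000_000,
        "regenerated desiccant cooling": 92_000_000,
    }
    for name, value in segments.items():
        print(f"{name}: {value / total:.0%}")   # -> 55%, 42%, 13%
    ```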

  8. 2007 Wholesale Power Rate Case Initial Proposal : Risk Analysis Study.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2005-11-01

    The Federal Columbia River Power System (FCRPS), operated on behalf of the ratepayers of the PNW by BPA and other Federal agencies, faces many uncertainties during the FY 2007-2009 rate period. Among these uncertainties, the largest revolve around hydro conditions, market prices and river operations for fish recovery. In order to provide a high probability of making its U.S. Treasury payments, BPA performs a Risk Analysis as part of its rate-making process. In this Risk Analysis, BPA identifies key risks, models their relationships, and then analyzes their impacts on net revenues (total revenues less expenses). BPA subsequently evaluates in the ToolKit Model the Treasury Payment Probability (TPP) resulting from the rates, risks, and risk mitigation measures described here and in the Wholesale Power Rate Development Study (WPRDS). If the TPP falls short of BPA's standard, additional risk mitigation revenues, such as PNRR and CRAC revenues, are incorporated in the modeling in ToolKit until the TPP standard is met. Increased wholesale market price volatility and six years of drought have significantly changed the profile of risk and uncertainty facing BPA and its stakeholders. These present new challenges for BPA in its effort to keep its power rates as low as possible while fully meeting its obligations to the U.S. Treasury. As a result, the risk BPA faces in not receiving the level of secondary revenues that have been credited to power rates before receiving those funds is greater. In addition to market price volatility, BPA also faces uncertainty around the financial impacts of operations for fish programs in FY 2006 and in the FY 2007-2009 rate period. A new Biological Opinion or possible court-ordered change to river operations in FY 2006 through FY 2009 may reduce BPA's net revenues included in the Initial Proposal. Finally, the FY 2007-2009 risk analysis includes new operational risks as well as a more comprehensive analysis of non-operating risks. Both the operational

  9. Sandia National Laboratories analysis code data base

    SciTech Connect (OSTI)

    Peterson, C.W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code "ownership" and release status, and references describing the physical models and numerical implementation.

  10. Analysis of Vehicle-Based Security Operations

    SciTech Connect (OSTI)

    Carter, Jason M; Paul, Nate R

    2015-01-01

    Vehicle-to-vehicle (V2V) communications promises to increase roadway safety by providing each vehicle with 360 degree situational awareness of other vehicles in proximity, and by complementing onboard sensors such as radar or camera in detecting imminent crash scenarios. In the United States, approximately three hundred million automobiles could participate in a fully deployed V2V system if Dedicated Short-Range Communication (DSRC) device use becomes mandatory. The system's reliance on continuous communication, however, provides a potential means for unscrupulous persons to transmit false data in an attempt to cause crashes, create traffic congestion, or simply render the system useless. V2V communications must be highly scalable while retaining robust security and privacy preserving features to meet the intra-vehicle and vehicle-to-infrastructure communication requirements for a growing vehicle population. Oak Ridge National Laboratory is investigating a Vehicle-Based Security System (VBSS) to provide security and privacy for a fully deployed V2V and V2I system. In the VBSS an On-board Unit (OBU) generates short-term certificates and signs Basic Safety Messages (BSM) to preserve privacy and enhance security. This work outlines a potential VBSS structure and its operational concepts; it examines how a vehicle-based system might feasibly provide security and privacy, highlights remaining challenges, and explores potential mitigations to address those challenges. Certificate management alternatives that attempt to meet V2V security and privacy requirements have been examined previously by the research community, including privacy-preserving group certificates, shared certificates, and functional encryption. Due to real-world operational constraints, adopting one of these approaches for VBSS V2V communication is difficult. Timely misbehavior detection and revocation are still open problems for any V2V system. We explore the alternative approaches that may be
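
    The OBU's role of signing BSMs under short-term keys can be illustrated with generic ECDSA; the sketch below uses the Python cryptography package with a P-256 key. The payload string is a stand-in (real BSMs are ASN.1-encoded per SAE J2735) and IEEE 1609.2 certificate handling is not modeled.

    ```python
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes

    # Short-term ("pseudonym") key pair the OBU would rotate frequently
    key = ec.generate_private_key(ec.SECP256R1())

    # Stand-in Basic Safety Message payload (illustrative only)
    bsm = b"lat=35.93,lon=-84.31,speed=17.2,heading=92"
    signature = key.sign(bsm, ec.ECDSA(hashes.SHA256()))

    # A receiver verifies with the public key from the short-term certificate;
    # verify() raises InvalidSignature if the BSM was tampered with
    key.public_key().verify(signature, bsm, ec.ECDSA(hashes.SHA256()))
    print("BSM signature verified")
    ```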

  11. An approach to model validation and model-based prediction -- polyurethane foam case study.

    SciTech Connect (OSTI)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and
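
    Point (2) pairs parameter calibration with uncertainty quantification. A minimal sketch, assuming first-order decomposition kinetics as a stand-in model (the report's foam model is far richer): fit a rate constant with scipy's curve_fit, then propagate its uncertainty to a prediction by sampling. All data below are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, k):
        return np.exp(-k * t)       # assumed first-order kinetics

    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])           # time
    y = np.array([1.00, 0.62, 0.36, 0.23, 0.14])      # hypothetical data

    # Calibrate the rate constant and quantify its uncertainty
    k_hat, k_cov = curve_fit(model, t, y, p0=[0.5])
    k_sd = np.sqrt(k_cov[0, 0])
    print(f"k = {k_hat[0]:.3f} +/- {k_sd:.3f}")

    # Propagate parameter uncertainty to a prediction at t = 5
    k_draws = np.random.default_rng(1).normal(k_hat[0], k_sd, 10_000)
    print("95% band:", np.percentile(np.exp(-k_draws * 5.0), [2.5, 97.5]))
    ```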

  12. Surveillance data bases, analysis, and standardization program

    SciTech Connect (OSTI)

    Kam, F.B.K.

    1990-09-26

    The traveler presented a paper at the Seventh ASTM-EURATOM Symposium on Reactor Dosimetry and co-chaired an oral session on Computer Codes and Methods. Papers of considerable interest to the NRC Surveillance Dosimetry Program involved statistically based adjustment procedures and uncertainties. The information exchange meetings with Czechoslovakia and Hungary were very enlightening. Lack of large computers has hindered their surveillance program. They depended heavily on information from their measurement programs, which were somewhat limited because of the lack of sophisticated electronics. The Nuclear Research Institute at Rez had to rely on expensive mockups of power reactor configurations to test their fluence exposures. Computers, computer codes, and updated nuclear data would advance their technology rapidly, and they were not hesitant to admit this fact. Both eastern-bloc countries said that IBM is providing an IBM 3090 for educational purposes but research and development studies would have very limited access. They were very apologetic that their currencies were not convertible, and any exchange means that they could provide services or pay for US scientists in their respective countries, but funding for their scientists in the United States, or expenses that involved payment in dollars, must come from us.

  13. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    SciTech Connect (OSTI)

    Noonan, Nicholas James

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  14. Business Case Analysis Requirements for Certain Interagency and Agency-Specific Acquisitions

    Office of Energy Efficiency and Renewable Energy (EERE)

    Office of Federal Procurement Policy’s (OFPP) memorandum, dated September 29, 2011, Development, Review and Approval of Business Cases for Certain Interagency and Agency-Specific Acquisitions, outlines required elements of business case analysis as well as a process for developing, reviewing, and approving business cases to support the establishment and renewal of government-wide acquisition contracts (GWACs), certain multi-agency contracts, certain agency-specific contracts, or agency-specific blanket purchase agreement (BPA). Agency-specific vehicles are either indefinite-delivery, indefinite quantity contracts or blanket purchase agreements intended for the use of your contracting activity, the Department of Energy, or another Federal Agency.

  15. Using the DOE Knowledge Base for Special Event Analysis

    SciTech Connect (OSTI)

    Armstrong, H.M.; Harris, J.M.; Young, C.J.

    1998-10-20

    The DOE Knowledge Base is a library of detailed information whose purpose is to support the United States National Data Center (USNDC) in its mission to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the important tasks which the USNDC must accomplish is to periodically perform detailed analysis of events of high interest, so-called "Special Events", to provide the national authority with information needed to make policy decisions. In this paper we investigate some possible uses of the Knowledge Base for Special Event Analysis (SEA), and make recommendations for improving Knowledge Base support for SEA. To analyze an event in detail, there are two basic types of data which must be used: sensor-derived data (waveforms, arrivals, events, etc.) and regionalized contextual data (known sources, geological characteristics, etc.). Currently there is no single package which can provide full access to both types of data, so for our study we use a separate package for each: MatSeis, the Sandia Labs-developed MATLAB-based seismic analysis package, for waveform data analysis, and ArcView, an ESRI product, for contextual data analysis. Both packages are well-suited to prototyping because they provide a rich set of currently available functionality and yet are also flexible and easily extensible. Using these tools and Phase I Knowledge Base data sets, we show how the Knowledge Base can improve both the speed and the quality of SEA. Empirically-derived interpolated correction information can be accessed to improve both location estimates and associated error estimates. This information can in turn be used to identify any known nearby sources (e.g. mines, volcanos), which may then trigger specialized processing of the sensor data. Based on the location estimate, preferred magnitude formulas and discriminants can be retrieved, and any known blockages can be identified to prevent miscalculations. Relevant historic events can be identified either by
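
    One concrete use named above is retrieving empirically derived, interpolated correction information to improve location estimates. As a generic illustration only, the sketch below interpolates a travel-time correction surface from a few invented calibration points and evaluates it at a candidate event location.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    # Hypothetical travel-time corrections [s] at known (lat, lon) points
    pts = np.array([[36.0, -116.0], [37.0, -115.0],
                    [36.5, -114.5], [35.5, -115.5]])
    corr = np.array([0.8, -0.3, 0.1, 0.5])

    # Interpolate a correction surface; look up a candidate event location
    event = np.array([[36.2, -115.2]])
    print(griddata(pts, corr, event, method="linear"))
    ```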

  16. Klonos: A Similarity Analysis Based Tool for Software Porting

    Energy Science and Technology Software Center (OSTI)

    2014-07-30

    Klonos is a compiler-based tool that helps users port scientific applications. The tool is based on similarity analysis performed with the help of the OpenUH compiler (a branch of the Open64 compiler). It combines syntactic and cost-model-provided metrics to form clusters that aggregate similar subroutines, which can then be ported in a similar way. The generated porting plan allows programmers and compilers to reuse porting experience as much as possible during the porting process.
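
    A minimal sketch of the similarity-clustering idea (the real Klonos metrics come from the OpenUH compiler and its cost models, which are not reproduced here): cluster subroutines on invented static-feature vectors with cosine distance, so that routines in one cluster could share a porting strategy.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # Hypothetical per-subroutine features: loads, stores, flops, branches, loops
    features = {
        "mat_mul":   [900, 300, 1200, 40, 3],
        "mat_mul_t": [880, 310, 1150, 42, 3],
        "io_dump":   [50, 400, 0, 120, 1],
        "halo_swap": [60, 380, 10, 110, 1],
    }
    names = list(features)
    X = np.array([features[n] for n in names], dtype=float)
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # normalize out routine size

    # Group subroutines that could be ported with a similar strategy
    g = fcluster(linkage(pdist(X, "cosine"), "average"), t=2, criterion="maxclust")
    print(dict(zip(names, g)))      # the two mat_mul variants cluster together
    ```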

  17. Financial Analysis of Incentive Mechanisms to Promote Energy Efficiency: Case Study of a Prototypical Southwest Utility

    SciTech Connect (OSTI)

    Cappers, Peter; Goldman, Charles; Chait, Michele; Edgar, George; Schlegel, Jeff; Shirley, Wayne

    2009-03-04

    alternative incentive approaches on utility shareholders and customers if energy efficiency is implemented under various utility operating, cost, and supply conditions. We used and adapted a spreadsheet-based financial model (the Benefits Calculator) which was developed originally as a tool to support the National Action Plan for Energy Efficiency (NAPEE). The major steps in our analysis are displayed graphically in Figure ES-1. Two main inputs are required: (1) characterization of the utility, which includes its initial financial and physical market position and a forecast of the utility's future sales, peak demand, and resource strategy to meet projected growth; and (2) characterization of the Demand-Side Resource (DSR) portfolio: projected electricity and demand savings, costs, and economic lifetime of a portfolio of energy efficiency (and/or demand response) programs that the utility is planning or considering implementing during the analysis period. The Benefits Calculator also estimates total resource costs and benefits of the DSR portfolio using a forecast of avoided capacity and energy costs. The Benefits Calculator then uses inputs provided in the utility characterization to produce a "business-as-usual" base case as well as alternative scenarios that include energy efficiency resources, including the corresponding utility financial budgets required in each case. If a decoupling and/or a shareholder incentive mechanism is instituted, the Benefits Calculator model readjusts the utility's revenue requirement and retail rates accordingly. Finally, for each scenario, the Benefits Calculator produces several metrics that provide insight into how energy efficiency resources, decoupling, and/or a shareholder incentive mechanism impact utility shareholders (e.g., overall earnings, return on equity), ratepayers (e.g., average customer bills and rates), and society (e.g., net resource benefits).
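
    The revenue-requirement readjustment under decoupling mentioned above reduces to one line of arithmetic in the simplest case: holding allowed revenue fixed while efficiency programs reduce sales raises the per-kWh rate. All figures below are invented; the Benefits Calculator models budgets, rate impacts, and incentives in far more detail.

    ```python
    # Minimal decoupling true-up sketch (illustrative numbers only)
    rev_requirement = 1_000e6   # $/yr allowed revenue (assumed)
    sales_base = 10_000e6       # kWh/yr before efficiency programs (assumed)
    ee_savings = 300e6          # kWh/yr saved by the EE portfolio (assumed)

    rate_base = rev_requirement / sales_base
    rate_decoupled = rev_requirement / (sales_base - ee_savings)
    print(f"rate: {rate_base*100:.3f} -> {rate_decoupled*100:.3f} cents/kWh")
    ```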

  18. Tariff-based analysis of commercial building electricity prices

    SciTech Connect (OSTI)

    Coughlin, Katie M.; Bolduc, Chris A.; Rosenquist, Greg J.; VanBuskirk, Robert D.; McMahon, James E.

    2008-03-28

    This paper presents the results of a survey and analysis of electricity tariffs and marginal electricity prices for commercial buildings. The tariff data come from a survey of 90 utilities and 250 tariffs for non-residential customers collected in 2004 as part of the Tariff Analysis Project at LBNL. The goals of this analysis are to provide useful summary data on the marginal electricity prices commercial customers actually see, and insight into the factors that are most important in determining prices under different circumstances. We provide a new, empirically-based definition of several marginal prices: the effective marginal price and energy-only and demand-only prices, and derive a simple formula that expresses the dependence of the effective marginal price on the marginal load factor. The latter is a variable that can be used to characterize the load impacts of a particular end-use or efficiency measure. We calculate all these prices for eleven regions within the continental U.S.
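
    The paper's exact formula is not reproduced in the abstract; the sketch below shows one plausible form of the dependence it describes, in which an incremental load pays the energy-only price plus a demand-charge term that grows as the load's marginal load factor falls. All prices and hours are illustrative assumptions.

    ```python
    def effective_marginal_price(p_energy, p_demand, mlf, hours=730.0):
        """Plausible decomposition (an assumption, not the paper's formula):
        p_energy [$/kWh], p_demand [$/kW-month],
        mlf = marginal load factor = delta_kWh / (delta_kW * hours)."""
        return p_energy + p_demand / (hours * mlf)

    # A flat (high-MLF) load sees nearly the energy-only price; a peaky
    # (low-MLF) load sees a much higher effective marginal price.
    for mlf in (1.0, 0.5, 0.1):
        print(mlf, round(effective_marginal_price(0.08, 12.0, mlf), 3))
    ```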

  19. Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Geothermal Technologies Program 2013 Peer Review presentation (April 2013). Principal Investigator: Ahmad Ghassemi. EGS Component R&D: Stimulation Prediction Models. This presentation does not contain any proprietary, confidential, or otherwise restricted information. Relevance/Impact of Research: develop a model for [slide text truncated]

  20. Identification and Prioritization of Analysis Cases for Marine and Hydrokinetic Energy Risk Screening

    SciTech Connect (OSTI)

    Anderson, Richard M.; Unwin, Stephen D.; Van Cleve, Frances B.

    2010-06-16

    In this report we describe the development of the Environmental Risk Evaluation System (ERES), a risk-informed analytical process for estimating the environmental risks associated with the construction and operation of marine and hydrokinetic energy generation projects. The development process consists of two main phases of analysis. In the first phase, preliminary risk analyses will take the form of screening studies in which key environmental impacts and the uncertainties that create risk are identified, leading to a better-focused characterization of the relevant environmental effects. Existence of critical data gaps will suggest areas in which specific modeling and/or data collection activities should take place. In the second phase, more detailed quantitative risk analyses will be conducted, with residual uncertainties providing the basis for recommending risk mitigation and monitoring activities. We also describe the process used for selecting three cases for fiscal year 2010 risk screening analysis using the ERES. A case is defined as a specific technology deployed in a particular location involving certain environmental receptors specific to that location. The three cases selected satisfy a number of desirable criteria: 1) they correspond to real projects whose deployment is likely to take place in the foreseeable future; 2) the technology developers are willing to share technology and project-related data; 3) the projects represent a diversity of technology-site-receptor characteristics; 4) the projects are of national interest, and 5) environmental effects data may be available for the projects.

  1. Techno-Economic Analysis of Biofuels Production Based on Gasification

    SciTech Connect (OSTI)

    Swanson, R. M.; Platon, A.; Satrio, J. A.; Brown, R. C.; Hsu, D. D.

    2010-11-01

    This study compares capital and production costs of two biomass-to-liquid production plants based on gasification. The first biorefinery scenario is an oxygen-fed, low-temperature (870°C), non-slagging, fluidized bed gasifier. The second scenario is an oxygen-fed, high-temperature (1,300°C), slagging, entrained flow gasifier. Both are followed by catalytic Fischer-Tropsch synthesis and hydroprocessing to naphtha-range (gasoline blend stock) and distillate-range (diesel blend stock) liquid fractions. Process modeling software (Aspen Plus) is utilized to organize the mass and energy streams and cost estimation software is used to generate equipment costs. Economic analysis is performed to estimate the capital investment and operating costs. Results show that the total capital investment required for nth plant scenarios is $610 million and $500 million for high-temperature and low-temperature scenarios, respectively. Product value (PV) for the high-temperature and low-temperature scenarios is estimated to be $4.30 and $4.80 per gallon of gasoline equivalent (GGE), respectively, based on a feedstock cost of $75 per dry short ton. Sensitivity analysis is also performed on process and economic parameters. This analysis shows that total capital investment and feedstock cost are among the most influential parameters affecting the PV.

  2. Knowledge representation and the application of case-based reasoning in engineering design

    SciTech Connect (OSTI)

    Bhangal, J.S.; Esat, I.

    1996-12-31

    This paper assesses the requirements for applying Case-Based Reasoning (CBR) to engineering design. It discusses the ways in which a CBR system can assist a designer who is presented with a problem specification, and the various methods that must be understood before attempting to build such an expert system. The problem is twofold: first, the ways of utilizing CBR are varied; second, a method of representing design knowledge also needs to be established. How a design is represented differs for each application, and this decision must be made when setting up the case memory; the methods used are discussed here. CBR itself can be utilized in various ways, and previous applications have shown that a hybrid approach can produce the best results.
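
    The first phase of any such system is case retrieval. A generic sketch, assuming design cases are encoded as attribute vectors (the attributes, weights, and cases below are invented for illustration):

    # Generic CBR retrieve step: find the stored design case most similar to a
    # new problem specification. Attributes and weights are invented.
    import math

    case_memory = [
        {"spec": {"load_kN": 50, "span_m": 4, "material": 1}, "design": "beam-A"},
        {"spec": {"load_kN": 80, "span_m": 6, "material": 2}, "design": "truss-B"},
    ]

    def similarity(a, b, weights):
        # Weighted inverse distance over shared numeric attributes.
        d = sum(w * (a[k] - b[k]) ** 2 for k, w in weights.items())
        return 1.0 / (1.0 + math.sqrt(d))

    def retrieve(query, weights):
        return max(case_memory, key=lambda c: similarity(query, c["spec"], weights))

    best = retrieve({"load_kN": 55, "span_m": 4, "material": 1},
                    weights={"load_kN": 0.01, "span_m": 1.0, "material": 2.0})
    print(best["design"])  # beam-A: the closest prior design, to be adapted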

  3. Lossless droplet transfer of droplet-based microfluidic analysis

    DOE Patents [OSTI]

    Kelly, Ryan T (West Richland, WA); Tang, Keqi (Richland, WA); Page, Jason S (Kennewick, WA); Smith, Richard D (Richland, WA)

    2011-11-22

    A transfer structure for droplet-based microfluidic analysis is characterized by a first conduit containing a first stream having at least one immiscible droplet of aqueous material and a second conduit containing a second stream comprising an aqueous fluid. The interface between the first conduit and the second conduit can define a plurality of apertures, wherein the apertures are sized to prevent exchange of the first and second streams between conduits while allowing lossless transfer of droplets from the first conduit to the second conduit through contact between the first and second streams.

  4. Landslide hazard analysis for pipelines: The case of the Simonette river crossing

    SciTech Connect (OSTI)

    Grivas, D.A.; Schultz, B.C.; O'Neil, G.; Rizkalla, M.; McGuffey, V.C.

    1995-12-31

    The overall objective of this study is to develop a probabilistic methodology to analyze landslide hazards and their effects on the safety of buried pipelines. The methodology incorporates a range of models that can accommodate differences in the ground movement modes and in the amount and type of information available at various site locations. Two movement modes are considered, namely (a) instantaneous (catastrophic) slides, and (b) gradual ground movement, which may result in cumulative displacements over the pipeline design life (30-40 years) that are in excess of allowable values. Probabilistic analysis is applied in each case to address the uncertainties associated with important factors that control slope stability. Availability of information ranges from relatively well-studied, instrumented installations to cases where data are limited to what can be derived from topographic and geologic maps. The methodology distinguishes between procedures applied where there is little information and those that can be used when relatively extensive data are available. Important aspects of the methodology are illustrated in a case study involving a pipeline located in Northern Alberta, Canada, in the Simonette river valley.

  5. A self-adaptive case-based reasoning system for dose planning in prostate cancer radiotherapy

    SciTech Connect (OSTI)

    Mishra, Nishikant; Petrovic, Sanja; Sundar, Santhanam

    2011-12-15

    Purpose: Prostate cancer is the most common cancer in the male population. Radiotherapy is often used in the treatment for prostate cancer. In radiotherapy treatment, the oncologist makes a trade-off between the risk and benefit of the radiation, i.e., the task is to deliver a high dose to the prostate cancer cells while minimizing side effects of the treatment. The aim of our research is to develop a software system that will assist the oncologist in planning new treatments. Methods: A nonlinear case-based reasoning system is developed to capture the expertise and experience of oncologists in treating previous patients. The importance (weights) of the different clinical parameters in dose planning is determined by the oncologist based on past experience, and is highly subjective. The weights are usually fixed in the system. In this research, the weights are updated automatically each time a treatment plan is generated for a new patient, using a group-based simulated annealing approach. Results: The developed approach is analyzed on a real data set collected from the Nottingham University Hospitals NHS Trust, City Hospital Campus, UK. Extensive experiments show that the dose plan suggested by the proposed method is consistent with, or even better than, the dose plan prescribed by an experienced oncologist. Conclusions: The developed case-based reasoning system enables the use of knowledge and experience gained by the oncologist in treating new patients. The system may play a vital role in assisting the oncologist to make better decisions in less computational time; it utilizes the success rate of previously treated patients, and it can also be used in teaching and training processes.
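
    The self-adaptive step described above can be sketched generically: perturb the weights, score the resulting plan, and accept or reject the move with a simulated-annealing rule. The quality function and all parameters below are hypothetical placeholders, not the authors' implementation:

    # Generic simulated-annealing update of attribute weights, in the spirit of
    # the self-adaptive step the abstract describes. Everything is hypothetical.
    import math
    import random

    def anneal_weights(weights, quality, steps=200, t0=1.0, cooling=0.98):
        """quality(weights) -> higher is better (e.g. agreement with expert plans)."""
        current, best = list(weights), list(weights)
        q_cur = q_best = quality(current)
        t = t0
        for _ in range(steps):
            cand = [max(0.0, w + random.gauss(0.0, 0.05)) for w in current]
            q_cand = quality(cand)
            # Accept improvements always; accept worse moves with Boltzmann probability.
            if q_cand >= q_cur or random.random() < math.exp((q_cand - q_cur) / t):
                current, q_cur = cand, q_cand
                if q_cur > q_best:
                    best, q_best = list(current), q_cur
            t *= cooling
        return best

    # Toy quality: prefer weights near a hidden 'expert' weighting.
    expert = [0.5, 0.3, 0.2]
    score = lambda w: -sum((a - b) ** 2 for a, b in zip(w, expert))
    print(anneal_weights([1.0, 1.0, 1.0], score))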

  6. Knowledge base navigator facilitating regional analysis inter-tool communication.

    SciTech Connect (OSTI)

    Hampton, Jeffery Wade; Chael, Eric Paul; Hart, Darren M.; Merchant, Bion John; Chown, Matthew N.

    2004-08-01

    To make use of some portions of the National Nuclear Security Administration (NNSA) Knowledge Base (KB) for which no current operational monitoring applications were available, Sandia National Laboratories have developed a set of prototype regional analysis tools (MatSeis, EventID Tool, CodaMag Tool, PhaseMatch Tool, Dendro Tool, Infra Tool, etc.), and we continue to maintain and improve these. Individually, these tools have proven effective in addressing specific monitoring tasks, but collectively their number and variety tend to overwhelm KB users, so we developed another application - the KB Navigator - to launch the tools and facilitate their use for real monitoring tasks. The KB Navigator is a flexible, extensible java application that includes a browser for KB data content, as well as support to launch any of the regional analysis tools. In this paper, we will discuss the latest versions of KB Navigator and the regional analysis tools, with special emphasis on the new overarching inter-tool communication methodology that we have developed to make the KB Navigator and the tools function together seamlessly. We use a peer-to-peer communication model, which allows any tool to communicate with any other. The messages themselves are passed as serialized XML, and the conversion from Java to XML (and vice versa) is done using Java Architecture for XML Binding (JAXB).

  7. Feature-Based Statistical Analysis of Combustion Simulation Data

    SciTech Connect (OSTI)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion
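
    The merge-tree machinery itself is involved; as a minimal stand-in for the per-feature statistics idea, the sketch below segments a scalar field at a threshold with connected-component labeling and computes statistics per feature. The field and threshold are invented; the real framework precomputes merge trees so thresholds can be varied interactively:

    # Minimal stand-in for feature-based analysis: segment a scalar field at a
    # threshold, then compute per-feature statistics.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    temperature = ndimage.gaussian_filter(rng.random((128, 128)), sigma=4)

    threshold = temperature.mean() + temperature.std()
    labels, n_features = ndimage.label(temperature > threshold)

    # Per-feature statistics (size, mean, max) -- the kind of meta-data that
    # can feed histograms or CDFs without touching the raw field again.
    for fid in range(1, n_features + 1):
        region = temperature[labels == fid]
        print(fid, region.size, region.mean(), region.max())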

  8. A Research Roadmap for Computation-Based Human Reliability Analysis

    SciTech Connect (OSTI)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full-scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  9. A component analysis based on serial results analyzing performance of parallel iterative programs

    SciTech Connect (OSTI)

    Richman, S.C.

    1994-12-31

    This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from data obtained after examining certain cases by experiments, two questions remain: (1) How to explain the results obtained? (2) How to extend the results from the certain cases to general cases? To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. Then, one separately examines the performance of basic functions and the weighting distributions of iterative methods, from which two independent sets of information are obtained when solving a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of iterative method, while that given by weightings is independent of parallel technique, parallel machine and number of processors.
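
    The core identity is simple: estimated parallel time is a weighted combination of per-basic-function times, with the weights measured from serial runs. A sketch under that reading, with all numbers hypothetical:

    # Component analysis sketch: estimate an iterative solver's parallel time as
    # a weighted sum of basic-function times. Weights come from serial profiling;
    # per-function parallel costs are measured once per machine/technique.

    # Fraction of serial work spent in each basic function, per method.
    weights = {
        "gmres":    {"linked_triad": 0.35, "inner_product": 0.40, "tri_solve": 0.25},
        "bicgstab": {"linked_triad": 0.55, "inner_product": 0.30, "tri_solve": 0.15},
    }
    # Measured parallel time per unit of serial work for each basic function
    # (inner products parallelize worst here because of global reductions).
    parallel_cost = {"linked_triad": 0.12, "inner_product": 0.45, "tri_solve": 0.30}

    for method, w in weights.items():
        t = sum(frac * parallel_cost[f] for f, frac in w.items())
        print(f"{method}: predicted parallel time {t:.3f} per unit serial work")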

  10. SYSTEM DESIGN AND ANALYSIS FOR CONCEPTUAL DESIGN OF OXYGEN-BASED PC BOILER

    SciTech Connect (OSTI)

    Zhen Fan; Andrew Seltzer

    2003-11-01

    The objective of the system design and analysis task of the Conceptual Design of Oxygen-Based PC Boiler study is to optimize the PC boiler plant by maximizing system efficiency. Simulations of the oxygen-fired plant with CO2 sequestration were conducted using Aspen Plus and were compared to a reference air-fired 460 MW plant. Flue gas recycle is used in the O2-fired PC to control the flame temperature. Parametric runs were made to determine the effect of flame temperature on system efficiency and required waterwall material and thickness. The degree of improvement on system efficiency of various modifications including hot gas recycle, purge gas recycle, flue gas feedwater recuperation, and recycle purge gas expansion were investigated. The selected O2-fired design case has a system efficiency of 30.1% compared to the air-fired system efficiency of 36.7%. The design O2-fired case requires T91 waterwall material and has a waterwall surface area of only 44% of the air-fired reference case. Compared to other CO2 sequestration technologies, the O2-fired PC is substantially better than both natural gas combined cycles and post-CO2-removal PCs and is slightly better than integrated gasification combined cycles.

  11. Discrete Mathematical Approaches to Graph-Based Traffic Analysis

    SciTech Connect (OSTI)

    Joslyn, Cliff A.; Cowley, Wendy E.; Hogan, Emilie A.; Olsen, Bryan K.

    2014-04-01

    Modern cyber defense and analytics require general, formal models of cyber systems. Multi-scale network models are prime candidates for such formalisms, using discrete mathematical methods based in hierarchically-structured directed multigraphs which also include rich sets of labels. An exemplar of an application of such an approach is traffic analysis, that is, observing and analyzing connections between clients, servers, hosts, and actors within IP networks, over time, to identify characteristic or suspicious patterns. Towards that end, NetFlow (or more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. In this paper, we consider traffic analysis of NetFlow using both basic graph statistics and two new mathematical measures involving labeled degree distributions and time interval overlap measures. We do all of this over the VAST test data set of 96M synthetic NetFlow graph edges, against which we can identify characteristic patterns of simulated ground-truth network attacks.
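
    A minimal version of one of the two new measures, a labeled degree distribution, over toy NetFlow-style edges (the field names and records are invented):

    # Labeled degree distribution over toy NetFlow-style edges: for each node,
    # count distinct neighbors per edge label (here, destination port).
    from collections import Counter, defaultdict

    flows = [  # (src_ip, dst_ip, dst_port)
        ("10.0.0.1", "10.0.0.9", 80),
        ("10.0.0.1", "10.0.0.7", 80),
        ("10.0.0.1", "10.0.0.7", 22),
        ("10.0.0.2", "10.0.0.9", 443),
    ]

    labeled_degree = defaultdict(Counter)  # node -> {label: distinct-neighbor count}
    seen = set()
    for src, dst, port in flows:
        if (src, dst, port) not in seen:
            seen.add((src, dst, port))
            labeled_degree[src][port] += 1

    # A host talking to many peers on one port (e.g. a scan) shows up as a
    # heavy single-label degree.
    print(dict(labeled_degree["10.0.0.1"]))  # {80: 2, 22: 1}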

  12. Statistical Analysis of Microarray Data with Replicated Spots: A Case Study with Synechococcus WH8102

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Thomas, E. V.; Phillippy, K. H.; Brahamsha, B.; Haaland, D. M.; Timlin, J. A.; Elbourne, L. D. H.; Palenik, B.; Paulsen, I. T.

    2009-01-01

    Until recently microarray experiments often involved relatively few arrays with only a single representation of each gene on each array. A complete genome microarray with multiple spots per gene (spread out spatially across the array) was developed in order to compare the gene expression of a marine cyanobacterium and a knockout mutant strain in a defined artificial seawater medium. Statistical methods were developed for analysis in the special situation of this case study where there is gene replication within an array and where relatively few arrays are used, which can be the case with current array technology. Due in part to the replication within an array, it was possible to detect very small changes in the levels of expression between the wild type and mutant strains. One interesting biological outcome of this experiment is the indication of the extent to which the phosphorus regulatory system of this cyanobacterium affects the expression of multiple genes beyond those strictly involved in phosphorus acquisition.

  13. Statistical Analysis of Microarray Data with Replicated Spots: A Case Study with Synechococcus WH8102

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Thomas, E. V.; Phillippy, K. H.; Brahamsha, B.; Haaland, D. M.; Timlin, J. A.; Elbourne, L. D. H.; Palenik, B.; Paulsen, I. T.

    2009-01-01

    Until recently microarray experiments often involved relatively few arrays with only a single representation of each gene on each array. A complete genome microarray with multiple spots per gene (spread out spatially across the array) was developed in order to compare the gene expression of a marine cyanobacterium and a knockout mutant strain in a defined artificial seawater medium. Statistical methods were developed for analysis in the special situation of this case study where there is gene replication within an array and where relatively few arrays are used, which can be the case with current array technology. Due in part to the replication within an array, it was possible to detect very small changes in the levels of expression between the wild type and mutant strains. One interesting biological outcome of this experiment is the indication of the extent to which the phosphorus regulatory system of this cyanobacterium affects the expression of multiple genes beyond those strictly involved in phosphorus acquisition.

  14. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    SciTech Connect (OSTI)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
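
    A compact illustration of the pipeline the survey reviews: sample uncertain inputs, propagate them through a model, and rank inputs by rank (Spearman) correlation with the output, one of the rank-transformation techniques listed above. The model here is a toy:

    # Sampling-based sensitivity analysis in miniature: sample inputs, propagate
    # through a (toy) model, rank inputs by Spearman rank correlation with the
    # output.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n = 1000
    x1 = rng.uniform(0, 1, n)          # influential, nonlinear but monotonic
    x2 = rng.uniform(0, 1, n)          # mildly influential
    x3 = rng.uniform(0, 1, n)          # noise only
    y = np.exp(2 * x1) + 0.5 * x2 + rng.normal(0, 0.1, n)

    for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
        rho, p = spearmanr(x, y)
        print(f"{name}: rank correlation {rho:+.2f} (p={p:.1e})")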

  15. Feature-based Analysis of Plasma-based Particle Acceleration Data

    SciTech Connect (OSTI)

    Ruebel, Oliver; Geddes, Cameron G.R.; Chen, Min; Cormier-Michel, Estelle; Bethel, E. Wes

    2013-07-05

    Plasma-based particle accelerators can produce and sustain thousands of times stronger acceleration fields than conventional particle accelerators, providing a potential solution to the problem of the growing size and cost of conventional particle accelerators. To facilitate scientific knowledge discovery from the ever growing collections of accelerator simulation data generated by accelerator physicists to investigate next-generation plasma-based particle accelerator designs, we describe a novel approach for automatic detection and classification of particle beams and beam substructures due to temporal differences in the acceleration process, here called acceleration features. The automatic feature detection in combination with a novel visualization tool for fast, intuitive, query-based exploration of acceleration features enables an effective top-down data exploration process, starting from a high-level, feature-based view down to the level of individual particles. We describe the application of our analysis in practice to analyze simulations of single pulse and dual and triple colliding pulse accelerator designs, and to study the formation and evolution of particle beams, to compare substructures of a beam and to investigate transverse particle loss.

  16. Geography-based structural analysis of the Internet

    SciTech Connect (OSTI)

    Kasiviswanathan, Shiva; Eidenbenz, Stephan; Yan, Guanhua

    2010-01-01

    In this paper, we study some geographic aspects of the Internet. We base our analysis on a large set of geolocated IP hop-level session data (including about 300,000 backbone routers, 150 million end hosts, and 1 billion sessions) that we synthesized from a variety of different input sources such as US census data, computer usage statistics, Internet market share data, IP geolocation data sets, CAIDA's Skitter data set for backbone connectivity, and BGP routing tables. We use this model to perform a nationwide and statewide geographic analysis of the Internet. Our main observations are: (1) There is a dominant coast-to-coast pattern in US Internet traffic; in many instances the traffic between two end-devices takes a long detour through the coasts even when neither device is near a coast. (2) More than half of the Internet paths are inflated by 100% or more compared to their corresponding geometric straight-line distance. This circuitousness makes the average ratio between the routing distance and geometric distance large (around 10). (3) The weighted mean hop count is around 5, but hop counts are only loosely correlated with distances. The weighted mean AS count (number of ASes traversed) is around 3. (4) The AS size and the AS location number distributions are heavy-tailed and strongly correlated. Most of the ASes are medium sized, and there is wide variability in the geographic dispersion size (measured in terms of the convex hull area) of these ASes.
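
    The circuitousness statistic in observation (2) is simply the routed (hop-by-hop) great-circle length divided by the end-to-end great-circle distance. A small helper, with the hop coordinates invented:

    # Circuitousness of a routed path: total great-circle length along the hops
    # divided by the end-to-end great-circle distance.
    import math

    def haversine_km(a, b):
        (lat1, lon1), (lat2, lon2) = map(lambda p: (math.radians(p[0]),
                                                    math.radians(p[1])), (a, b))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(h))

    # A Denver -> Atlanta path detouring via both coasts.
    path = [(39.7, -105.0), (37.8, -122.4), (40.7, -74.0), (33.7, -84.4)]
    routed = sum(haversine_km(path[i], path[i + 1]) for i in range(len(path) - 1))
    direct = haversine_km(path[0], path[-1])
    print(f"inflation: {routed / direct:.1f}x")  # >> 1 indicates a circuitous route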

  17. Precipitation Estimate Using NEXRAD Ground-Based Radar Images: Validation, Calibration and Spatial Analysis

    SciTech Connect (OSTI)

    Zhang, Xuesong

    2012-12-17

    Precipitation is an important input variable for hydrologic and ecological modeling and analysis. Next Generation Radar (NEXRAD) can provide precipitation products that cover most of the continental United States at a high spatial resolution of approximately 4 km × 4 km. Two major issues concerning the applications of NEXRAD data are (1) the lack of a NEXRAD geo-processing and geo-referencing program and (2) bias correction of NEXRAD estimates. In this chapter, geographic information system (GIS) based software that can automatically support processing of NEXRAD data for hydrologic and ecological models is presented. Some geostatistical approaches to calibrating NEXRAD data using rain gauge data are introduced, and two case studies on evaluating the accuracy of the NEXRAD Multisensor Precipitation Estimator (MPE) and calibrating MPE with rain-gauge data are presented. The first case study examines the performance of MPE in a mountainous region versus the southern plains and in the cold season versus the warm season, as well as the effect of sub-grid variability and temporal scale on NEXRAD performance. From the results of the first case study, the performance of MPE was found to be influenced by complex terrain, frozen precipitation, sub-grid variability, and temporal scale. Overall, the assessment of MPE indicates the importance of removing the bias of the MPE precipitation product before its application, especially in complex mountainous regions. The second case study examines the performance of three MPE calibration methods using rain gauge observations in the Little River Experimental Watershed in Georgia. The comparison results show that no one method performs better than the others in terms of all evaluation coefficients and for all time steps. For practical estimation of precipitation distribution, implementation of multiple methods to predict spatial precipitation is suggested.
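
    The chapter's geostatistical calibration methods are more elaborate, but the simplest gauge-based bias correction, a mean-field-bias adjustment, conveys the idea. All values below are invented:

    # Simplest gauge-based bias correction for radar precipitation: scale the
    # radar field by the mean-field bias, the ratio of gauge totals to
    # collocated radar totals.
    import numpy as np

    radar = np.array([[2.0, 3.5, 1.0],
                      [0.5, 4.0, 2.5]])        # mm per 4x4 km cell
    gauges = {(0, 1): 4.2, (1, 2): 3.1}        # gauge obs at (row, col) cells

    bias = sum(gauges.values()) / sum(radar[rc] for rc in gauges)
    corrected = radar * bias

    print(f"mean-field bias {bias:.2f}")
    print(corrected)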

  18. System planning analysis applied to OTEC: initial cases by Florida Power Corporation. Task II report No. FC-5237-2

    SciTech Connect (OSTI)

    1980-03-01

    The objective of the task was to exercise the FPC system planning methodology on: (1) a Base Case, a 10-year generation expansion plan with coal plants providing base load expansion, and (2) the same plan, but with 400 MW of OTEC substituting for coal burning units with equal resultant system reliability. OTEC inputs were based on reasonable economic projections of direct capital cost and O&M costs for first-generation large commercial plants; they are discussed in Section 2. The Base Case conditions for the FPC system planning methodology involved base load coal fueled additions during the 1980s and early 1990s. The first trial runs of the PROMOD system planning model substituted OTEC for 400 MW purchases of coal generated power during 1988-1989 and then 400 MW of coal capacity thereafter. Results showed higher system reliability than the Base Case runs. Reruns with greater coal fueled capacity displacement showed that OTEC could substitute for 400 MW purchases in 1988-1989 and replace the 800 MW coal unit scheduled for 1990 to yield equivalent system reliability. However, a 1995 unit would need to be moved to 1994. Production costing computer model runs were used as input to the Corporate Model to examine corporate financial impact. The present value of total revenue requirements was the primary indicator of relative competitiveness between the Base Case and OTEC. Results show the present value of total revenue requirements to be unfavorable to OTEC as compared to coal units; the disparity was in excess of the allowable range for possible consideration.

  19. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    SciTech Connect (OSTI)

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify the parameter main effects. The McKay method needs about 360 samples to evaluate the main effects and more than 1000 samples to assess the two-way interaction effects. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient

  20. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify the parameter main effects. The McKay method needs about 360 samples to evaluate the main effects and more than 1000 samples to assess the two-way interaction effects. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient

  1. The IAEA Coordinated Research Program on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis: Description of the Benchmark Test Cases and Phases

    SciTech Connect (OSTI)

    Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov

    2012-10-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTGR analysis tools are today typically assessed with sensitivity analysis, and then a few important input uncertainties (typically based on a PIRT process) are varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracies of the coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is broader acceptance of the use of uncertainty analysis even in safety studies, and in some cases regulators have accepted it as a replacement for the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project is built on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined, with the prismatic type design represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper more detail on the benchmark cases, the different specific phases and tasks and the latest

  2. Habitat-Lite: A GSC case study based on free text terms for environmental metadata

    SciTech Connect (OSTI)

    Kyrpides, Nikos; Hirschman, Lynette; Clark, Cheryl; Cohen, K. Bretonnel; Mardis, Scott; Luciano, Joanne; Kottmann, Renzo; Cole, James; Markowitz, Victor; Kyrpides, Nikos; Field, Dawn

    2008-04-01

    There is an urgent need to capture metadata on the rapidly growing number of genomic, metagenomic and related sequences, such as 16S ribosomal genes. This need is a major focus within the Genomic Standards Consortium (GSC), and Habitat is a key metadata descriptor in the proposed 'Minimum Information about a Genome Sequence' (MIGS) specification. The goal of the work described here is to provide a light-weight, easy-to-use (small) set of terms ('Habitat-Lite') that captures high-level information about habitat while preserving a mapping to the recently launched Environment Ontology (EnvO). Our motivation for building Habitat-Lite is to meet the needs of multiple users, such as annotators curating these data, database providers hosting the data, and biologists and bioinformaticians alike who need to search and employ such data in comparative analyses. Here, we report a case study based on semi-automated identification of terms from GenBank and GOLD. We estimate that the terms in the initial version of Habitat-Lite would provide useful labels for over 60% of the kinds of information found in the GenBank isolation-source field, and around 85% of the terms in the GOLD habitat field. We present a revised version of Habitat-Lite and invite the community's feedback on its further development in order to provide a minimum list of terms to capture high-level habitat information and to provide classification bins needed for future studies.

  3. Model-based performance monitoring: Review of diagnostic methods and chiller case study

    SciTech Connect (OSTI)

    Haves, Phil; Khalsa, Sat Kartar

    2000-05-01

    The paper commences by reviewing the variety of technical approaches to the problem of detecting and diagnosing faulty operation in order to improve the actual performance of buildings. The review covers manual and automated methods, active testing and passive monitoring, the different classes of models used in fault detection, and methods of diagnosis. The process of model-based fault detection is then illustrated by describing the use of relatively simple empirical models of chiller energy performance to monitor equipment degradation and control problems. The CoolTools™ chiller model identification package is used to fit the DOE-2 chiller model to on-site measurements from a building instrumented with high quality sensors. The need for simple algorithms to reject transient data, detect power surges and identify control problems is discussed, as is the use of energy balance checks to detect sensor problems. The accuracy with which the chiller model can be expected to predict performance is assessed from the goodness of fit obtained, and the implications for fault detection sensitivity and sensor accuracy requirements are discussed. A case study is described in which the model was applied retroactively to high-quality data collected in a San Francisco office building as part of a related project (Piette et al. 1999).

  4. Reduced order model based on principal component analysis for process simulation and optimization

    SciTech Connect (OSTI)

    Lang, Y.; Malacina, A.; Biegler, L.; Munteanu, S.; Madsen, J.; Zitney, S.

    2009-01-01

    It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
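
    A bare-bones version of the PCA-ROM idea: run the expensive model at sampled inputs, compress the output fields with PCA (via an SVD of the snapshot matrix), and fit a cheap map from inputs to the retained PCA coefficients. The 'expensive model' below is a toy stand-in for CFD:

    # Bare-bones PCA reduced-order model: sample the expensive model, compress
    # output fields via PCA (SVD), fit a cheap polynomial map from inputs to
    # the retained coefficients.
    import numpy as np

    def expensive_model(x, grid=np.linspace(0, 1, 200)):
        return np.sin(3 * grid * x[0]) + x[1] * grid ** 2   # 200-point 'field'

    rng = np.random.default_rng(2)
    X = rng.uniform(0.5, 2.0, size=(40, 2))                 # sampled inputs
    Y = np.array([expensive_model(x) for x in X])           # snapshot matrix

    mean = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    k = 3                                                   # retained components
    coeffs = (Y - mean) @ Vt[:k].T                          # PCA coefficients

    # Cheap map: quadratic least-squares fit from inputs to each coefficient.
    design = np.column_stack([np.ones(len(X)), X, X ** 2, X[:, :1] * X[:, 1:]])
    beta, *_ = np.linalg.lstsq(design, coeffs, rcond=None)

    def rom(x):
        d = np.array([1.0, x[0], x[1], x[0] ** 2, x[1] ** 2, x[0] * x[1]])
        return mean + (d @ beta) @ Vt[:k]                   # seconds vs CPU hours

    print(np.abs(rom([1.0, 1.0]) - expensive_model([1.0, 1.0])).max())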

  5. Strategic backdrop analysis for fossil fuel planning. Task 1. Default Case. Report 468-117-07/01

    SciTech Connect (OSTI)

    Not Available

    1980-06-01

    This report presents data describing a default case analysis performed using the strategic backdrop analytical framework developed to facilitate fossil fuel planning within the DOE. Target years are 1985 and 2025. Residential, commercial, and industrial energy demands are forecast as well as the impacts of energy technology implementation and market penetration using a set of energy technology assumptions. (DMC)

  6. Strategic backdrop analysis for fossil fuel planning. Task 1. Default Case. Report 468-117-07/03

    SciTech Connect (OSTI)

    Not Available

    1980-06-01

    This report presents data describing a default case analysis performed using the strategic backdrop analytical framework developed to facilitate fossil fuel planning within the DOE. Target years are 1985, 2000, and 2025. Residential, commercial, and industrial energy demands and impacts of energy technology implementation and market penetration are forecast using a set of energy technology assumptions.

  7. Strategic backdrop analysis for fossil fuel planning. Task 1. Default Case. Report 468-117-07/02

    SciTech Connect (OSTI)

    Not Available

    1980-06-01

    This report presents data describing a default case analysis performed using the strategic backdrop analytical framework developed to facilitate fossil fuel planning within the DOE. Target years are 1985, 2000, and 2025. Residential, commercial, and industrial energy demands and impacts of energy technology implementation and market penetration are forecast using a set of energy technology assumptions. (DMC)

  8. FAQS Gap Analysis Qualification Card – General Technical Base

    Broader source: Energy.gov [DOE]

    Functional Area Qualification Standard Gap Analysis Qualification Cards outline the differences between the previous and latest versions of the FAQS.

  9. Geographically Based Hydrogen Demand and Infrastructure Rollout Scenario Analysis

    Broader source: Energy.gov [DOE]

    Presentation by Margo Melendez at the 2010-2025 Scenario Analysis for Hydrogen Fuel Cell Vehicles and Infrastructure meeting on January 31, 2007.

  10. An Integrated Analysis of a NERVA Based Nuclear Thermal Propulsion...

    Office of Scientific and Technical Information (OSTI)

    require that self-consistent neutronic/thermal-hydraulic/stress analyses be carried out. ... SYSTEMS; PULSES; REACTOR SAFETY; STRESS ANALYSIS; THERMAL HYDRAULICS; WATER; ...

  11. Algorithms and tools for high-throughput geometry-based analysis...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Algorithms and tools for high-throughput geometry-based analysis of crystalline porous materials. Thomas F. Willems, Chris H. Rycroft, Michaeel Kazi, Juan C....

  12. Algorithms and tools for high-throughput geometry-based analysis...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Algorithms and tools for high-throughput geometry-based analysis of crystalline porous materials ... Research Org: Energy Frontier Research Centers (EFRC); ...

  13. Prevalence and contribution of BRCA1 mutations in breast cancer and ovarian cancer: Results from three US population-based case-control studies of ovarian cancer

    SciTech Connect (OSTI)

    Whittemore, A.S.; Gong, G.; Itnyre, J.

    1997-03-01

    We investigate the familial risks of cancers of the breast and ovary, using data pooled from three population-based case-control studies of ovarian cancer that were conducted in the United States. We base estimates of the frequency of mutations of BRCA1 (and possibly other genes) on the reported occurrence of breast cancer and ovarian cancer in the mothers and sisters of 922 women with incident ovarian cancer (cases) and in 922 women with no history of ovarian cancer (controls). Segregation analysis and goodness-of-fit testing of genetic models suggest that rare mutations (frequency .0014; 95% confidence interval .0002-.011) account for all the observed aggregation of breast cancer and ovarian cancer in these families. The estimated risk of breast cancer by age 80 years is 73.5% in mutation carriers and 6.8% in noncarriers. The corresponding estimates for ovarian cancer are 27.8% in carriers and 1.8% in noncarriers. For cancer risk in carriers, these estimates are lower than those obtained from families selected for high cancer prevalence. The estimated proportion of all U.S. cancer diagnoses, by age 80 years, that are due to germ-line BRCA1 mutations is 3.0% for breast cancer and 4.4% for ovarian cancer. Aggregation of breast cancer and ovarian cancer was less evident in the families of 169 cases with borderline ovarian cancers than in the families of cases with invasive cancers. Familial aggregation did not differ by the ethnicity of the probands, although the number of non-White and Hispanic cases (N = 99) was sparse. 14 refs., 3 figs., 6 tabs.

  14. SYSTEM DESIGN AND ANALYSIS FOR CONCEPTUAL DESIGN OF OXYGEN-BASED...

    Office of Scientific and Technical Information (OSTI)

    Title: SYSTEM DESIGN AND ANALYSIS FOR CONCEPTUAL DESIGN OF OXYGEN-BASED PC BOILER. The objective of the system ...

  15. Global Trade Analysis Project (GTAP) Data Base | Open Energy...

    Open Energy Info (EERE)

    TOOL Name: GTAP 6 Data Base. Agency/Company/Organization: Purdue University. Sector: Energy. Topics: Policies/deployment programs, Co-benefits assessment, Macroeconomic, ...

  16. A High Resolution Hydrometer Phase Classifier Based on Analysis...

    Office of Scientific and Technical Information (OSTI)

    Satellite-based retrievals of cloud phase in high latitudes are often hindered by the highly reflecting ice-covered ground and persistent temperature inversions. From the ...

  17. Psychosocial Modeling of Insider Threat Risk Based on Behavioral and Word Use Analysis

    SciTech Connect (OSTI)

    Greitzer, Frank L.; Kangas, Lars J.; Noonan, Christine F.; Brown, Christopher R.; Ferryman, Thomas A.

    2013-10-01

    In many insider crimes, managers and other coworkers observed that the offenders had exhibited signs of stress, disgruntlement, or other issues, but no alarms were raised. Barriers to using such psychosocial indicators include the inability to recognize the signs and the failure to record the behaviors so that they can be assessed. A psychosocial model was developed to assess an employee's behavior associated with an increased risk of insider abuse. The model is based on case studies and research literature on factors/correlates associated with precursor behavioral manifestations of individuals committing insider crimes. A complementary personality factor modeling approach was developed to derive relevant personality characteristics from word-use analysis. Several implementations of the psychosocial model were evaluated by comparing their agreement with judgments of human resources and management professionals; the personality factor modeling approach was examined using email samples. If implemented in an operational setting, these models should be part of a set of management tools for employee assessment to identify employees who pose a greater insider threat.

  18. Microbial Genome and Metagenome Analysis Case Study (NERSC Workshop, May 7-8, 2009)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Microbial Genome & Metagenome Analysis: Computational Challenges. Natalia N. Ivanova,* Nikos C. Kyrpides,* Victor M. Markowitz** (* Genome Biology Program, Joint Genome Institute; ** Lawrence Berkeley National Lab). General aims: understand microbial life; apply to agriculture, bioremediation, biofuels, and human health. Specific aims include: predict the biochemistry and physiology of organisms based on genome sequence; explain known [slide text truncated]

  19. Code cases for implementing risk-based inservice testing in the ASME OM code

    SciTech Connect (OSTI)

    Rowley, C.W.

    1996-12-01

    Historically inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly-conservative scope for IST components is to use the PRA and plant expert panels to create a two tier IST component categorization scheme. The PRA provides the quantitative risk information and the plant expert panel blends the quantitative and deterministic information to place the IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique for the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for a plant to use. One Code Case will be for Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategy for type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices.

  20. CHAPTER 7. BERYLLIUM ANALYSIS BY NON-PLASMA BASED METHODS

    SciTech Connect (OSTI)

    Ekechukwu, A

    2009-04-20

    The most common method of analysis for beryllium is inductively coupled plasma atomic emission spectrometry (ICP-AES). This method, along with inductively coupled plasma mass spectrometry (ICP-MS), is discussed in Chapter 6. However, other methods exist and have been used for different applications. These methods include spectroscopic, chromatographic, colorimetric, and electrochemical. This chapter provides an overview of beryllium analysis methods other than plasma spectrometry (inductively coupled plasma atomic emission spectrometry or mass spectrometry). The basic methods, detection limits and interferences are described. Specific applications from the literature are also presented.

  1. Digital Instrumentation and Control Failure Events Derivation and Analysis by Frame-Based Technique

    SciTech Connect (OSTI)

    Hui-Wen Huang; Chunkuan Shih [National Tsing Hua University, 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan 30013 (China); Swu Yih [DML International, 18F-1 295, Section 2 Kuang Fu Road, Hsinchu, Taiwan (China); Yen-Chang Tzeng; Ming-Huei Chen [Institute of Nuclear Energy Research, No. 1000, Wunhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan (China)

    2006-07-01

    A frame-based technique, including physical frame, logical frame, and cognitive frame, was adopted to perform digital I&C failure events derivation and analysis for the generic ABWR. The physical frame was structured with a modified PCTran-ABWR plant simulation code, which was extended and enhanced on the feedwater system, recirculation system, and steam line system. The logical model is structured with MATLAB, which was incorporated into PCTran-ABWR to improve the pressure control system, feedwater control system, recirculation control system, and automated power regulation control system. As a result, the software failure of these digital control systems can be properly simulated and analyzed. The cognitive frame was simulated by the operator awareness status in the scenarios. Moreover, via an internal characteristics tuning technique, the modified PCTran-ABWR can precisely reflect the characteristics of the power-core flow. Hence, in addition to the transient plots, the analysis results can then be demonstrated on the power-core flow map. A number of postulated I&C system software failure events were derived to achieve the dynamic analyses. The basis for event derivation includes the published classification for software anomalies, the digital I&C design data for the ABWR, the chapter 15 accident analysis of the generic SAR, and the reported NPP I&C software failure events. The case study of this research includes (1) the software CMF analysis for the major digital control systems; and (2) postulated ABWR digital I&C software failure events derived from actual occurrences of non-ABWR digital I&C software failure events, which were reported to the LER of the USNRC or the IRS of the IAEA. These events were analyzed by PCTran-ABWR. Conflicts among plant status, computer status, and human cognitive status are successfully identified. The operator might not easily recognize the abnormal condition, because the computer status seems to progress normally. However, a well

  2. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    SciTech Connect (OSTI)

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  3. Rapid analysis of steels using laser-based techniques

    SciTech Connect (OSTI)

    Cremers, D.A.; Archuleta, F.L.; Dilworth, H.C.

    1985-01-01

    Based on the data obtained by this study, we conclude that laser-based techniques can be used to provide at least semi-quantitative information about the elemental composition of molten steel. Of the two techniques investigated here, the Sample-Only method appears preferable to the LIBS (laser-induced breakdown spectroscopy) method because of its superior analytical performance. In addition, the Sample-Only method would probably be easier to incorporate into a steel plant environment. However, before either technique can be applied to steel monitoring, additional research is needed.

  4. Temperature-based Instanton Analysis: Identifying Vulnerability in Transmission Networks

    SciTech Connect (OSTI)

    Kersulis, Jonas; Hiskens, Ian; Chertkov, Michael; Backhaus, Scott N.; Bienstock, Daniel

    2015-04-08

    A time-coupled instanton method for characterizing transmission network vulnerability to wind generation fluctuation is presented. To extend prior instanton work to multiple-time-step analysis, line constraints are specified in terms of temperature rather than current. An optimization formulation is developed to express the minimum wind forecast deviation such that at least one line is driven to its thermal limit. Results are shown for an IEEE RTS-96 system with several wind farms.
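
    As a toy illustration of the instanton idea (not the paper's temperature-based, time-coupled formulation), the minimum-norm wind deviation that drives a single line to its limit has a closed form under a linearized DC power-flow assumption; all numbers below are hypothetical.

        import numpy as np

        # Smallest wind-output deviation d (in the least-squares sense)
        # satisfying ptdf @ d = limit - flow0, i.e., driving one line to its limit.
        ptdf = np.array([0.45, 0.30, 0.10])   # line sensitivities to each wind farm (assumed)
        flow0, limit = 80.0, 100.0            # present flow and thermal limit, MW (assumed)

        margin = limit - flow0
        d_star = ptdf * margin / (ptdf @ ptdf)
        print("minimal deviation per farm (MW):", d_star)
        print("resulting line flow (MW):", flow0 + ptdf @ d_star)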

  5. Intelligent Control in Automation Based on Wireless Traffic Analysis

    SciTech Connect (OSTI)

    Kurt Derr; Milos Manic

    2007-09-01

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical both in maintaining the integrity of computer systems and in increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still-new territory of Bluetooth traffic. Further enhancements of this algorithm are presented, along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as making its use more secure.
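
    To give a flavor of the fuzzy side of such an analysis, the sketch below classifies an observed packet rate using triangular membership functions; it is a generic illustration, not the paper's neuro-fuzzy algorithm, and the thresholds are invented.

        # Generic fuzzy classification of a Bluetooth packet rate (invented
        # membership breakpoints; not the paper's algorithm).
        def tri(x, a, b, c):
            """Triangular membership: rises from a to b, falls from b to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def classify(rate):
            grades = {
                "low": tri(rate, -1.0, 0.0, 40.0),
                "medium": tri(rate, 20.0, 60.0, 100.0),
                "high": tri(rate, 80.0, 150.0, 220.0),
            }
            return max(grades, key=grades.get), grades

        print(classify(70.0))   # hypothetical packets per second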

  6. Intelligent Control in Automation Based on Wireless Traffic Analysis

    SciTech Connect (OSTI)

    Kurt Derr; Milos Manic

    2007-08-01

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical both in maintaining the integrity of computer systems and in increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still-new territory of Bluetooth traffic. Further enhancements of this algorithm are presented, along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as making its use more secure.

  7. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    SciTech Connect (OSTI)

    Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego

    2014-12-01

    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  8. Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Case Studies. The following case studies will be included in the HEP report. Final case studies are due January 7, 2013. Lattice Gauge Theories - Lead: Doug Toussaint; Simulations for Cosmic Frontier Experiments - Leads: Peter Nugent & Andrew Connelly; Cosmic Microwave Background Data Analysis - Lead: Julian Borrill; Cosmological Simulations - Lead: Salman Habib; Plasma Accelerator Simulation Using Laser and Particle Beam Drivers - Leads: Cameron Geddes & Frank Tsung; Community 


  9. Aerosol transport and wet scavenging in deep convective clouds: a case study and model evaluation using a multiple passive tracer analysis approach

    SciTech Connect (OSTI)

    Yang, Qing; Easter, Richard C.; Campuzano-Jost, Pedro; Jimenez, Jose L.; Fast, Jerome D.; Ghan, Steven J.; Wang, Hailong; Berg, Larry K.; Barth, Mary; Liu, Ying; Shrivastava, ManishKumar B.; Singh, Balwinder; Morrison, H.; Fan, Jiwen; Ziegler, Conrad L.; Bela, Megan; Apel, Eric; Diskin, G. S.; Mikoviny, Tomas; Wisthaler, Armin

    2015-08-20

    The effect of wet scavenging on ambient aerosols in deep, continental convective clouds in the mid-latitudes is studied for a severe storm case in Oklahoma during the Deep Convective Clouds and Chemistry (DC3) field campaign. A new passive-tracer-based transport analysis framework is developed to characterize the convective transport based on the vertical distribution of several slowly reacting and nearly insoluble trace gases. The passive gas concentration in the upper-troposphere convective outflow results from a mixture of 47% from the lower level (0-3 km), 21% entrained from the upper troposphere, and 32% from the mid-atmosphere, based on observations. The transport analysis framework is applied to aerosols to estimate aerosol transport and wet-scavenging efficiency. Observations yield high overall scavenging efficiencies of 81% and 68% for aerosol mass (Dp < 1 Όm) and aerosol number (0.03 < Dp < 2.5 Όm), respectively. Little chemical selectivity to wet scavenging is seen among observed submicron sulfate (84%), organic (82%), and ammonium (80%) aerosols, while nitrate has a much lower scavenging efficiency of 57%, likely due to the uptake of nitric acid. Observed larger particles (0.15-2.5 Όm) are scavenged more efficiently (84%) than smaller particles (64%; 0.03-0.15 Όm). The storm is simulated using the chemistry version of the WRF model. Compared to the observation-based analysis, the standard model underestimates the wet scavenging efficiency for both mass and number concentrations, with low biases of 31% and 40%, respectively. Adding a new treatment of secondary activation significantly improves simulation results, so that the bias in scavenging efficiency in mass and number concentrations is reduced to <10%. This supports the hypothesis that secondary activation is an important process for wet removal of aerosols in deep convective storms.
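
    The scavenging-efficiency bookkeeping implied above can be sketched as follows; the transport fractions are those quoted in the abstract, while the aerosol concentrations are invented for illustration.

        # Expected outflow if aerosol moved passively (no scavenging), using
        # the abstract's mixing fractions; aerosol concentrations invented.
        fractions = {"lower": 0.47, "upper": 0.21, "mid": 0.32}
        aerosol_in = {"lower": 5.0, "upper": 0.6, "mid": 1.8}   # ug/m3, hypothetical

        expected = sum(fractions[r] * aerosol_in[r] for r in fractions)
        observed = 0.55                                         # ug/m3, hypothetical
        efficiency = 1.0 - observed / expected
        print(f"scavenging efficiency: {efficiency:.0%}")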

  10. ANL/DIS/TM-40 Load Flow Analysis: Base Cases, Data, Diagrams,...

    Office of Scientific and Technical Information (OSTI)

    ... set. 4.4 IMPLICATIONS OF THE ADOPTED MODELING APPROACH By representing the MW and ... those that would result if ComEd were simulated in the context of the full MAIN model. ...

  11. U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis...

    Open Energy Info (EERE)

    OpenEI Reference Library Report: U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis.

  12. Proteomics based compositional analysis of complex cellulase-hemicellulase mixtures

    SciTech Connect (OSTI)

    Chundawat, Shishir P.; Lipton, Mary S.; Purvine, Samuel O.; Uppugundla, Nirmal; Gao, Dahai; Balan, Venkatesh; Dale, Bruce E.

    2011-10-07

    Efficient deconstruction of cellulosic biomass to fermentable sugars for fuel and chemical production is accomplished by a complex mixture of cellulases, hemicellulases, and accessory enzymes (e.g., >50 extracellular proteins). Cellulolytic enzyme mixtures, produced industrially mostly using fungi like Trichoderma reesei, are poorly characterized in terms of their protein composition and its correlation to hydrolytic activity on cellulosic biomass. The secretomes of commercial glycosyl hydrolase-producing microbes were explored using a proteomics approach with high-throughput quantification using liquid chromatography-tandem mass spectrometry (LC-MS/MS). Here, we show that a proteomics-based spectral counting approach is a reasonably accurate and rapid analytical technique that can be used to determine the protein composition of complex glycosyl hydrolase mixtures, and that the composition correlates with the specific activity of individual enzymes present within the mixture. For example, a strong linear correlation was seen between Avicelase activity and total cellobiohydrolase content. Reliable, quantitative, and cheaper analytical methods that provide insight into the cellulosic biomass-degrading fungal and bacterial secretomes would lead to further improvements towards commercialization of plant-biomass-derived fuels and chemicals.
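
    One common way to turn spectral counts into relative composition is the normalized spectral abundance factor (NSAF), which divides each protein's count by its length before normalizing; the sketch below is a generic illustration with invented counts and lengths, not necessarily the normalization used in this study.

        # NSAF_i = (count_i / length_i) / sum_j(count_j / length_j)
        counts = {"CBH1": 420, "CBH2": 180, "EG1": 95, "XYN2": 60}     # spectral counts (invented)
        lengths = {"CBH1": 514, "CBH2": 471, "EG1": 459, "XYN2": 222}  # residues (invented)

        saf = {p: counts[p] / lengths[p] for p in counts}
        total = sum(saf.values())
        nsaf = {p: v / total for p, v in saf.items()}
        for p, v in sorted(nsaf.items(), key=lambda kv: -kv[1]):
            print(f"{p}: {v:.2%} of mixture")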

  13. A Monte Carlo based spent fuel analysis safeguards strategy assessment

    SciTech Connect (OSTI)

    Fensin, Michael L; Tobin, Stephen J; Swinhoe, Martyn T; Menlove, Howard O; Sandoval, Nathan P

    2009-01-01

    
 assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling-time-dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the results of the assessment will yield adequate knowledge of spent fuel analysis strategies to help the down-select process for other reactor types.

  14. Analyzing simulation-based PRA data through traditional and topological clustering: A BWR station blackout case study

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Maljovec, D.; Liu, S.; Wang, B.; Mandelli, D.; Bremer, P. -T.; Pascucci, V.; Smith, C.

    2015-07-14

    Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.
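
    A minimal sketch of the traditional-clustering half of such a workflow groups simulated scenarios by outcome features with k-means (scikit-learn); the features and data below are synthetic, and the topological techniques the paper pairs with this are omitted.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # columns: time to core damage (h), peak clad temperature (K) -- synthetic
        scenarios = np.vstack([
            rng.normal([4.0, 1400.0], [0.5, 60.0], size=(50, 2)),   # recovered runs
            rng.normal([1.5, 1750.0], [0.3, 40.0], size=(50, 2)),   # failure runs
        ])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scenarios)
        print(np.bincount(labels))   # scenarios per cluster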

  15. Analyzing simulation-based PRA data through traditional and topological clustering: A BWR station blackout case study

    SciTech Connect (OSTI)

    Maljovec, D.; Liu, S.; Wang, B.; Mandelli, D.; Bremer, P. -T.; Pascucci, V.; Smith, C.

    2015-07-14

    Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.

  16. EVENT TREE ANALYSIS AT THE SAVANNAH RIVER SITE: A CASE HISTORY

    SciTech Connect (OSTI)

    Williams, R

    2009-05-25

    At the Savannah River Site (SRS), a Department of Energy (DOE) installation in west-central South Carolina, there is a unique geologic stratum at depth that has the potential to cause surface settlement resulting from a seismic event. In the past, the stratum in question has been remediated via pressure grouting; however, the benefits of remediation have always been debatable. Recently the SRS has attempted to frame the issue in terms of risk via an event tree or logic tree analysis. This paper describes that analysis, including the input data required.
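
    Generic event-tree quantification multiplies the branch probabilities along each path to obtain end-state frequencies; the numbers below are invented, not the SRS analysis inputs.

        # Frequency of the damage end state = initiator frequency times the
        # conditional probabilities along the path (all values invented).
        initiator = 1.0e-4            # seismic initiating-event frequency, per year
        branch_probs = {
            "stratum settles": 0.3,
            "settlement damages facility": 0.2,
        }
        p_damage = initiator
        for p in branch_probs.values():
            p_damage *= p
        print(f"damage-state frequency: {p_damage:.1e} per year")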

  17. Business Case Analysis for Replacing the Mazak 30Y Mill-Turn Machine in SM-39. Summary

    SciTech Connect (OSTI)

    Booth, Steven Richard; Dinehart, Timothy Grant; Benson, Faith Ann

    2015-03-19

    Business case studies are being conducted to support procurement of new machines and capital equipment in the SM-39 and TA-03-0102 machine shops. The first effort conducted an economic analysis of replacing the Mazak 30Y Mill-Turn Machine located in SM-39. To determine the value of switching machinery, a baseline scenario was compared with a future scenario in which new machinery was purchased and installed. The conditions under the two scenarios were defined via interviews with subject matter experts in terms of one-time and periodic costs. The results of the analysis were compiled in a life-cycle cost/benefit table. The costs of procuring, installing, and maintaining a new machine were balanced against the costs avoided by replacing older machinery. Productivity savings were included as a measure of the costs avoided by being able to produce parts at a quicker and more efficient pace.
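
    The life-cycle comparison logic amounts to discounting each scenario's annual cash flows and comparing net present values; the figures below are placeholders, not the study's data.

        # Compare "keep old machine" vs. "buy new machine" by NPV (invented costs).
        def npv(rate, cashflows):
            """Net present value of year-0..n cash flows."""
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

        years = 10
        keep_old = [0.0] + [-80_000.0] * years          # rising upkeep and downtime
        buy_new = [-450_000.0] + [-25_000.0] * years    # purchase, then lower upkeep
        rate = 0.07
        print("keep old:", round(npv(rate, keep_old)))
        print("buy new :", round(npv(rate, buy_new)))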

  18. Macroalgae Analysis: A National GIS-Based Analysis of Macroalgae Production Potential - Summary Report and Project Plan

    SciTech Connect (OSTI)

    Roesijadi, Guritno; Coleman, Andre M.; Judd, Chaeli; Van Cleve, Frances B.; Thom, Ronald M.; Buenau, Kate E.; Tagestad, Jerry D.; Wigmosta, Mark S.; Ward, Jeffrey A.

    2011-12-01

    The overall project objective is to conduct a strategic analysis to assess the state of macroalgae as a feedstock for biofuels production. The objective in FY11 is to develop a multi-year systematic national assessment to evaluate the U.S. potential for macroalgae production using a GIS-based assessment tool and biophysical growth model developed as part of these activities. The initial model development for both resource assessment and constraints was completed and applied to the demonstration areas. The model for macroalgal growth was extended to the EEZ off the East and West Coasts of the United States, and a plan to merge the findings for an initial composite assessment was developed. In parallel, an assessment of land-based, port, and offshore infrastructure needs based on published and grey literature was conducted. Major information gaps and challenges encountered during this analysis were identified. Also conducted was an analysis of the type of local, state, and federal requirements that pertain to permitting land-based facilities and nearshore/offshore culture operations

  19. UXO detection and identification based on intrinsic target polarizabilities: A case history

    SciTech Connect (OSTI)

    Gasperikova, E.; Smith, J.T.; Morrison, H.F.; Becker, A.; Kappler, K.

    2008-07-15

    Electromagnetic induction data parameterized in time-dependent, object-intrinsic polarizabilities allow discrimination of unexploded ordnance (UXO) from false targets (scrap metal). Data from a cart-mounted system designed for discrimination of UXO with 20 mm to 155 mm diameters are used. Discrimination of UXO from irregular scrap metal is based on the principal dipole polarizabilities of a target. A near-intact UXO displays a single major polarizability coincident with the long axis of the object and two equal smaller transverse polarizabilities, whereas metal scraps have distinct polarizability signatures that rarely mimic those of elongated symmetric bodies. Based on a training data set of known targets, object identification was made by estimating the probability that an object is a single UXO. Our test survey took place on a military base where both 4.2-inch mortar shells and scrap metal were present. The results show that we detected and discriminated correctly all 4.2-inch mortars, and in that process we added 7% and 17%, respectively, of dry holes (digging scrap) to the total number of excavations in two different survey modes. We also demonstrated a mode of operation that might be more cost-effective than the current practice.
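
    The discrimination rule described above reduces to a simple test on the sorted principal polarizabilities; the tolerance and values below are invented for illustration.

        # An intact, axially symmetric UXO shows one large principal
        # polarizability and two nearly equal smaller ones (invented numbers).
        def looks_like_uxo(polarizabilities, symmetry_tol=0.15):
            p1, p2, p3 = sorted(polarizabilities, reverse=True)
            transverse_symmetric = abs(p2 - p3) / p2 < symmetry_tol
            elongated = p1 > 2.0 * p2
            return transverse_symmetric and elongated

        print(looks_like_uxo([9.1, 2.0, 1.9]))   # mortar-like signature -> True
        print(looks_like_uxo([5.0, 3.9, 1.2]))   # irregular scrap -> False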

  20. Energy-water analysis of the 10-year WECC transmission planning study cases.

    SciTech Connect (OSTI)

    Tidwell, Vincent Carroll; Passell, Howard David; Castillo, Cesar; Moreland, Barbara

    2011-11-01

    
 calculating water withdrawal and consumption for current and planned electric power generation; projected water demand from competing use sectors; and surface and groundwater availability. WECC's long-range planning is organized according to two target planning horizons, a 10-year and a 20-year. This study supports WECC in the 10-year planning endeavor. In this case the water implications associated with four of WECC's alternative future study cases (described below) are calculated and reported. In future phases of planning we will work with WECC to craft study cases that aim to reduce the thermoelectric footprint of the interconnection and/or limit production in the most water-stressed regions of the West.

  1. Aminoindazole PDK1 Inhibitors: A Case Study in Fragment-Based Drug Discovery

    SciTech Connect (OSTI)

    Medina, Jesus R.; Blackledge, Charles W.; Heerding, Dirk A.; Campobasso, Nino; Ward, Paris; Briand, Jacques; Wright, Lois; Axten, Jeffrey M.

    2012-05-29

    Fragment screening of phosphoinositide-dependent kinase-1 (PDK1) in a biochemical kinase assay afforded hits that were characterized and prioritized based on ligand efficiency and binding interactions with PDK1 as determined by NMR. Subsequent crystallography and follow-up screening led to the discovery of aminoindazole 19, a potent leadlike PDK1 inhibitor with high ligand efficiency. Well-defined structure-activity relationships and protein crystallography provide a basis for further elaboration and optimization of 19 as a PDK1 inhibitor.
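
    Ligand efficiency is commonly computed as LE = 1.37 x pIC50 / (heavy-atom count), in kcal/mol per heavy atom; the sketch below uses invented potency values, not compound 19's data.

        import math

        # LE = 1.37 * pIC50 / HA (kcal/mol per heavy atom), the common
        # fragment-based drug discovery metric; inputs invented.
        def ligand_efficiency(ic50_nM, heavy_atoms):
            pic50 = -math.log10(ic50_nM * 1e-9)
            return 1.37 * pic50 / heavy_atoms

        print(round(ligand_efficiency(ic50_nM=100.0, heavy_atoms=20), 2))  # ~0.48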

  2. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    SciTech Connect (OSTI)

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.; Pebay, Philippe Pierre; Gentile, Ann C.; Thompson, David C.; Roe, Diana C.; De Sapio, Vincent; Brandt, James M.

    2010-08-01

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and to diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
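
    A minimal sketch of the graph-synthesis step: jobs become nodes, and jobs that shared compute nodes are connected by an edge weighted by the overlap; the job records are invented (networkx).

        import networkx as nx
        from itertools import combinations

        # Invented job records: which compute nodes each job ran on.
        jobs = {
            "job1": {"n01", "n02", "n03"},
            "job2": {"n02", "n03", "n04"},
            "job3": {"n07"},
        }

        g = nx.Graph()
        g.add_nodes_from(jobs)
        for a, b in combinations(jobs, 2):
            shared = jobs[a] & jobs[b]
            if shared:
                g.add_edge(a, b, weight=len(shared))   # edge weight = shared nodes

        print(list(g.edges(data=True)))   # [('job1', 'job2', {'weight': 2})]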

  3. On the Existence of Our Metals-Based Civilization: I. Phase Space Analysis

    SciTech Connect (OSTI)

    D.D. Macdonald

    2005-06-22

    The stability of the barrier layers of bilayer passive films that form on metal and alloy surfaces, when in contact with oxidizing aqueous environments, is explored within the framework of the Point Defect Model (PDM) using phase-space analysis (PSA), in which the rate of growth of the barrier layer into the metal, dL⁺/dt, and the barrier-layer dissolution rate, dL⁻/dt, are plotted simultaneously against the barrier-layer thickness. A point of intersection of dL⁻/dt with dL⁺/dt indicates the existence of a metastable barrier layer with a steady-state thickness greater than zero. If dL⁻/dt > (dL⁺/dt)|L=0, where the latter quantity is the barrier-layer growth rate at zero barrier-layer thickness, the barrier layer cannot exist, even as a metastable phase, as the resulting thickness would be negative. Under these conditions, the surface is depassivated and the metal may corrode at a rapid rate. Depassivation may result from a change in the oxidation state of the cation upon dissolution of the barrier layer, such that the dissolution rate becomes highly potential dependent (as in the case of transpassive dissolution of chromium-containing alloys, for example, in which the reaction Cr₂O₃ + 5H₂O → 2CrO₄²⁻ + 10H⁺ + 6e⁻ results in the destruction of the film), or by the action of some solution-phase species (e.g., H⁺, Cl⁻) that enhances the dissolution rate to the extent that dL⁻/dt > (dL⁺/dt)|L=0. The boundaries for depassivation may be plotted in potential-pH space to develop Kinetic Stability Diagrams (KSDs) as alternatives to the classical Pourbaix diagrams for describing the conditions under which metals or alloys exist in contact with an aqueous environment. The advantage of KSDs is that they provide kinetic descriptions of the state of a metal or alloy that is in much closer concert with the kinetic phenomenon of passivity and depassivation 

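
    A minimal numerical sketch of this phase-space construction, assuming (as a common PDM simplification) a growth rate that decays exponentially with thickness, dL⁺/dt = a·exp(-b·L), and a thickness-independent dissolution rate dL⁻/dt = c; the constants below are arbitrary.

        import math

        a, b, c = 1.0e-9, 5.0e8, 2.0e-10   # growth prefactor (m/s), 1/m, dissolution rate (m/s)

        if c >= a:
            # dissolution exceeds growth even at L = 0: no steady state, surface depassivates
            print("depassivated")
        else:
            L_ss = math.log(a / c) / b      # thickness where the two rate curves intersect
            print(f"steady-state thickness: {L_ss * 1e9:.2f} nm")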

  4. Dynamic Slope Stability Analysis of Mine Tailing Deposits: the Case of Raibl Mine

    SciTech Connect (OSTI)

    Roberto, Meriggi; Marco, Del Fabbro; Erica, Blasone; Erica, Zilli

    2008-07-08

    Over the last few years, many embankments and levees have collapsed during strong earthquakes or floods. In the Friuli Venezia Giulia Region (North-Eastern Italy), the main source of this type of risk is a slag deposit of about 2×10⁶ mÂł deriving from galena and lead mining activity until 1991 in the village of Raibl. For the final remedial action plan, several in situ tests were performed: five boreholes equipped with piezometers, four CPTE, and some geophysical tests with different approaches (refraction, ReMi, and HVSR). Laboratory tests were conducted on the collected samples: geotechnical classification, triaxial compression tests, and constant-head permeability tests in a triaxial cell. Pressure plate tests were also done on unsaturated slag to evaluate the characteristic soil-water curve useful for transient seepage analysis. A seepage analysis was performed in order to obtain the maximum pore water pressures during the intense rainfall event which hit the area on 29th August 2003. The results highlight that the low permeability of the slag prevents the infiltration of rainwater, which instead seeps easily through the boundary levees built with coarse materials. For this reason pore water pressures inside the deposits are not particularly influenced by rainfall intensity and frequency. Seismic stability analysis was performed with both the pseudo-static method, coupled with Newmark's method, and dynamic methods, using as design earthquake the one registered in Tolmezzo (Udine) on 6th May 1976. The low reduction of safety factors and the development of very small cumulative displacements show that the stability of the embankments is assured even if an earthquake of magnitude 6.4 and a daily rainfall of 141.6 mm occur at the same time.
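
    The Newmark part of such an analysis integrates sliding whenever ground acceleration exceeds the yield acceleration of the slope; the sketch below uses a synthetic record and an invented yield value, not the Tolmezzo 1976 motion.

        import numpy as np

        # Rigid sliding-block (Newmark) displacement for a synthetic record.
        dt = 0.01
        t = np.arange(0.0, 10.0, dt)
        a_ground = 2.5 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)   # m/s^2, synthetic
        a_yield = 1.2                                                     # m/s^2, invented

        v = 0.0   # relative sliding velocity
        d = 0.0   # cumulative displacement
        for a in a_ground:
            if a > a_yield or v > 0.0:                # block slides while velocity persists
                v = max(v + (a - a_yield) * dt, 0.0)
                d += v * dt
        print(f"Newmark displacement: {d * 100:.1f} cm")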

  5. Appendix E: Other NEMS-MP results for the base case and scenarios.

    SciTech Connect (OSTI)

    Plotkin, S. E.; Singh, M. K.; Energy Systems

    2009-12-03

    The NEMS-MP model generates numerous results for each run of a scenario. (This model is the integrated National Energy Modeling System [NEMS] version used for the Multi-Path Transportation Futures Study [MP].) This appendix examines additional findings beyond the primary results reported in the Multi-Path Transportation Futures Study: Vehicle Characterization and Scenario Analyses (Reference 1). These additional results are provided in order to help further illuminate some of the primary results. Specifically discussed in this appendix are: (1) Energy use results for light vehicles (LVs), including details about the underlying total vehicle miles traveled (VMT), the average vehicle fuel economy, and the volumes of the different fuels used; (2) Resource fuels and their use in the production of ethanol, hydrogen (H₂), and electricity; (3) Ethanol use in the scenarios (i.e., the ethanol consumption in E85 vs. other blends, the percent of travel by flex-fuel vehicles on E85, etc.); (4) Relative availability of E85 and H₂ stations; (5) Fuel prices; (6) Vehicle prices; and (7) Consumer savings. These results are discussed as follows: (1) The three scenarios (Mixed, (P)HEV & Ethanol, and H2 Success) when assuming vehicle prices developed through literature review; (2) The three scenarios with vehicle prices that incorporate the achievement of the U.S. Department of Energy (DOE) program vehicle cost goals; (3) The three scenarios with 'literature review' vehicle prices, plus vehicle subsidies; and (4) The three scenarios with 'program goals' vehicle prices, plus vehicle subsidies. The four versions or cases of each scenario are referred to as: Literature Review No Subsidies, Program Goals No Subsidies, Literature Review with Subsidies, and Program Goals with Subsidies. Two additional points must be made here. First, none of the results presented for LVs in this section include Class 2B trucks. Results for this class are included occasionally in Reference 1. They 


  6. The Business Case for SEP | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    The Business Case for SEP. Facilities pursue certification to Superior Energy PerformanceŸ (SEP™) to achieve an attractive return on investment while enhancing sustainability. The business case for SEP is based on detailed accounts from facilities that have implemented ISO 50001 and SEP. Gain an insider's view from these pioneers. Read the cost-benefit analysis and case studies, and view videos and presentations.

  7. Station Blackout: A case study in the interaction of mechanistic and probabilistic safety analysis

    SciTech Connect (OSTI)

    Curtis Smith; Diego Mandelli; Cristian Rabiti

    2013-11-01

    The ability to better characterize and quantify safety margins is important to improved decision making about nuclear power plant design, operation, and plant life extension. As research and development (R&D) in the Light Water Reactor Sustainability (LWRS) Program and other collaborative efforts yield new data, sensors, and improved scientific understanding of the physical processes that govern the aging and degradation of plant SSCs, needs and opportunities to better optimize plant safety and performance will become known. The purpose of the Risk Informed Safety Margin Characterization (RISMC) Pathway R&D is to support plant decisions for risk-informed margin management, with the aim of improving the economics and reliability, and sustaining the safety, of current NPPs. In this paper, we describe the RISMC analysis process, illustrating how mechanistic and probabilistic approaches are combined in order to estimate a safety margin. We use the scenario of a “station blackout” wherein offsite power and onsite power are lost, thereby challenging plant safety systems. We describe the RISMC approach, illustrate the station blackout modeling, and contrast this with traditional risk analysis modeling for this type of accident scenario.
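
    The margin idea can be sketched as comparing a simulated "load" distribution (e.g., peak clad temperature over many station-blackout runs) against a "capacity" limit; the distribution and limit below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        peak_temp = rng.normal(1350.0, 120.0, size=100_000)   # K, stand-in for simulation output
        limit = 1477.0                                        # K, assumed failure criterion

        exceed_prob = np.mean(peak_temp > limit)
        print(f"P(peak clad temperature > limit) = {exceed_prob:.3f}")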

  8. An analysis of uranium dispersal and health effects using a Gulf War case study.

    SciTech Connect (OSTI)

    Marshall, Albert Christian

    2005-07-01

    The study described in this report used mathematical modeling to estimate health risks from exposure to depleted uranium (DU) during the 1991 Gulf War for both U.S. troops and nearby Iraqi civilians. The analysis found that the risks of DU-induced leukemia or birth defects are far too small to result in an observable increase in these health effects among exposed veterans or Iraqi civilians. Only a few veterans in vehicles accidentally struck by U.S. DU munitions are predicted to have inhaled sufficient quantities of DU particulate to incur any significant health risk (i.e., the possibility of temporary kidney damage from the chemical toxicity of uranium and about a 1% chance of fatal lung cancer). The health risk to all downwind civilians is predicted to be extremely small. Recommendations for monitoring are made for certain exposed groups. Although the study found fairly large calculational uncertainties, the models developed and used are generally valid. The analysis was also used to assess potential uranium health hazards for workers in the weapons complex. No illnesses are projected for uranium workers following standard guidelines; nonetheless, some research suggests that more conservative guidelines should be considered.

  9. Preliminary Analysis and Case Study of Transmission Constraints and Wind Energy in the West: Preprint

    SciTech Connect (OSTI)

    Milligan, M.; Berger, D. P.

    2005-05-01

    Wind developers typically need long-term transmission service to finance their projects; however, most of the capacity on several key paths is reserved by existing firm contracts. Because non-firm contracts are only offered for periods up to 1 year, obtaining financing for the wind project is generally not possible when firm capacity is unavailable. However, sufficient capacity may exist on the constrained paths for new wind projects that can risk curtailment for a small number of hours of the year. This paper presents the results of a study sponsored by the National Renewable Energy Laboratory (NREL), a work group participant in the Rocky Mountain Area Transmission Study (RMATS). Using recent historical power flow data, case studies were conducted on the constrained paths between Wyoming-Colorado (TOT3) and Montana-Northwest, coinciding with areas of exceptional wind resources. The potential curtailment frequency for hypothetical 100-MW and 500-MW wind plants was calculated using hourly wind data. The results from the study indicate that sufficient potential exists for innovative transmission products that can help bring more wind to load centers and increase the efficiency of the existing transmission network.
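
    The curtailment-frequency bookkeeping can be sketched as counting the hours in which a plant's output exceeds the headroom left on the constrained path; all series below are synthetic, not the RMATS power-flow data.

        import numpy as np

        rng = np.random.default_rng(2)
        hours = 8760
        path_rating = 1000.0                                  # MW, assumed
        existing_flow = rng.uniform(600.0, 1000.0, hours)     # MW, synthetic firm usage
        wind = 100.0 * np.clip(rng.normal(0.35, 0.25, hours), 0.0, 1.0)   # 100-MW plant, synthetic

        headroom = path_rating - existing_flow
        curtailed = wind > headroom
        print(f"hours curtailed: {curtailed.sum()} ({curtailed.mean():.1%} of the year)")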

  10. Lipid-Based Nanodiscs as Models for Studying Mesoscale Coalescence: A Transport-Limited Case

    SciTech Connect (OSTI)

    Hu, Andrew; Fan, Tai-Hsi; Katsaras, John; Xia, Yan; Li, Ming; Nieh, Mu-Ping

    2014-01-01

    Lipid-based nanodiscs (bicelles) are able to form in mixtures of long- and short-chain lipids. Initially, they are of uniform size but grow upon dilution. Previously, nanodisc growth kinetics have been studied using time-resolved small-angle neutron scattering (SANS), a technique which is not well suited for probing their change in size immediately after dilution. To address this, we have used dynamic light scattering (DLS), a technique which permits the collection of useful data in a short span of time after dilution of the system. The DLS data indicate that the negatively charged lipids in nanodiscs play a significant role in disc stability and growth. Specifically, the charged lipids are most likely drawn out from the nanodiscs into solution, thereby reducing interparticle repulsion and enabling the discs to grow. We describe a population balance model, which takes into account Coulombic interactions and adequately predicts the initial growth of nanodiscs with a single parameter, i.e., the surface potential. The results presented here strongly support the notion that the disc coalescence rate depends strongly on nanoparticle charge density. The present system containing low-polydispersity lipid nanodiscs serves as a good model for understanding how charged discoidal micelles coalesce.

  11. Technology Solutions Case Study: Apartment Compartmentalization with an Aerosol-Based Sealing Process

    SciTech Connect (OSTI)

    2015-07-01

    Air sealing of building enclosures is a difficult and time-consuming process. Current methods in new construction require laborers to physically locate small and sometimes large holes in multiple assemblies and then manually seal each of them. This research study by the Building America team Consortium for Advanced Residential Buildings (CARB) demonstrated the automated air sealing and compartmentalization of buildings through the use of an aerosolized sealant developed by the Western Cooling Efficiency Center at the University of California, Davis. CARB demonstrated this new technology application in a multifamily building in Queens, NY. The effectiveness of the sealing process was evaluated by three methods: air leakage testing of the overall apartment before and after sealing, point-source testing of individual leaks, and pressure measurements in the walls of the target apartment during sealing. Aerosolized sealing was successful by several measures in this study. Many individual leaks that are labor-intensive to address separately were well sealed by the aerosol particles. In addition, many diffuse leaks that are difficult to identify and treat were also sealed. The aerosol-based sealing process resulted in an average reduction of 71% in air leakage across three apartments and an average apartment airtightness of 0.08 CFM50/SF of enclosure area.

  12. IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases

    SciTech Connect (OSTI)

    Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

    2012-11-01

    Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.

  13. Framework for the Economic Analysis of Hybrid Systems Based on Exergy Consumption

    SciTech Connect (OSTI)

    Cristian Rabiti; Robert S. Cherry; Wesley R. Deason; Piyush Sabharwall; Shannon M. Bragg-Sitton; Richard D. Boardman

    2014-08-01

    Starting from an overview of the dynamic behavior of the electricity market, the need to introduce energy users that provide a damping capability to the system is derived, and a qualitative analysis of the impact of uncertainty on both the demand and supply sides is performed. An introduction to investment analysis methodologies based on discounted cash flow follows, and the work concludes with the illustration and application of exergonomic principles to provide a sound methodology for the cost accounting of plant components to be used in the cash flow analysis.

  14. 2007 Wholesale Power Rate Case Initial Proposal : Risk Analysis Study Documentation.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2005-11-01

    The RiskMod Model is comprised of a set of risk simulation models, collectively referred to as RiskSim; a set of computer programs that manages data referred to as Data Management Procedures; and RevSim, a model that calculates net revenues. RiskMod interacts with the AURORA Model, the RAM2007, and the ToolKit Model during the process of performing the Risk Analysis Study. AURORA is the computer model being used to perform the Market Price Forecast Study (see Market Price Forecast Study, WP-07-E-BPA-03); the RAM2007 is the computer model being used to calculate rates (see Wholesale Power Rate Development Study, WP-07-E-BPA-05); and the ToolKit is the computer model being used to develop the risk mitigation package that achieves BPA's 92.6 percent TPP standard (see Section 3 in the Risk Analysis Study, WP-07-E-BPA-04). Variations in monthly loads, resources, natural gas prices, forward market electricity prices, transmission expenses, and aluminum smelter benefit payments are simulated in RiskSim. Monthly spot market electricity prices for the simulated loads, resources, and natural gas prices are estimated by the AURORA Model. Data Management Procedures facilitate the format and movement of data that flow to and/or from RiskSim, AURORA, and RevSim. RevSim estimates net revenues using risk data from RiskSim, spot market electricity prices from AURORA, loads and resources data from the Load Resource Study, WP-07-E-BPA-01, various revenues from the Revenue Forecast component of the Wholesale Power Rate Development Study, WP-07-E-BPA-05, and rates and expenses from the RAM2007. Annual average surplus energy revenues, purchased power expenses, and section 4(h)(10)(C) credits calculated by RevSim are used in the Revenue Forecast and the RAM2007. Heavy Load Hour (HLH) and Light Load Hour (LLH) surplus and deficit energy values from RevSim are used in the Transmission Expense Risk Model. Net revenues estimated for each simulation by RevSim are input into the ToolKit Model

  15. 2007 Wholesale Power Rate Case Final Proposal : Risk Analysis Study Documentation.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2006-07-01

    The RiskMod Model is comprised of a set of risk simulation models, collectively referred to as RiskSim; a set of computer programs that manages data referred to as Data Management Procedures; and RevSim, a model that calculates net revenues. RiskMod interacts with the AURORA Model, the RAM2007, and the ToolKit Model during the process of performing the Risk Analysis Study. AURORA is the computer model being used to perform the Market Price Forecast Study (see Market Price Forecast Study, WP-07-FS-BPA-03); the RAM2007 is the computer model being used to calculate rates (see Wholesale Power Rate Development Study, WP-07-FS-BPA-05); and the ToolKit is the computer model being used to develop the risk mitigation package that achieves BPA's 92.6 percent TPP standard (see Section 3 in the Risk Analysis Study, WP-07-FS-BPA-04). Variations in monthly loads, resources, natural gas prices, forward market electricity prices, transmission expenses, and aluminum smelter benefit payments are simulated in RiskSim. Monthly spot market electricity prices for the simulated loads, resources, and natural gas prices are estimated by the AURORA Model. Data Management Procedures facilitate the format and movement of data that flow to and/or from RiskSim, AURORA, and RevSim. RevSim estimates net revenues using risk data from RiskSim, spot market electricity prices from AURORA, loads and resources data from the Load Resource Study, WP-07-FS-BPA-01, various revenues from the Revenue Forecast component of the Wholesale Power Rate Development Study, WP-07-FS-BPA-05, and rates and expenses from the RAM2007. Annual average surplus energy revenues, purchased power expenses, and section 4(h)(10)(C) credits calculated by RevSim are used in the Revenue Forecast and the RAM2007. Heavy Load Hour (HLH) and Light Load Hour (LLH) surplus and deficit energy values from RevSim are used in the Transmission Expense Risk Model. Net revenues estimated for each simulation by RevSim are input into the ToolKit Model.

  16. Distributed energy resources in practice: A case study analysis and validation of LBNL's customer adoption model

    SciTech Connect (OSTI)

    Bailey, Owen; Creighton, Charles; Firestone, Ryan; Marnay, Chris; Stadler, Michael

    2003-02-01

    This report describes a Berkeley Lab effort to model the economics and operation of small-scale (<500 kW) on-site electricity generators based on real-world installations at several example customer sites. This work builds upon the previous development of the Distributed Energy Resource Customer Adoption Model (DER-CAM), a tool designed to find the optimal combination of installed equipment, and idealized operating schedule, that would minimize the site's energy bills, given performance and cost data on available DER technologies, utility tariffs, and site electrical and thermal loads over a historic test period, usually a recent year. This study offered the first opportunity to apply DER-CAM in a real-world setting and evaluate its modeling results. DER-CAM has three possible applications: first, it can be used to guide choices of equipment at specific sites, or provide general solutions for example sites and propose good choices for sites with similar circumstances; second, it can additionally provide the basis for the operations of installed on-site generation; and third, it can be used to assess the market potential of technologies by anticipating which kinds of customers might find various technologies attractive. A list of approximately 90 DER candidate sites was compiled, and each site's DER characteristics and willingness to volunteer information were assessed, producing detailed information on about 15 sites, of which five were analyzed in depth. The five sites were not intended to provide a random sample; rather, they were chosen to provide some diversity of business activity, geography, and technology. More importantly, they were chosen in the hope of finding examples of true business decisions made based on somewhat sophisticated analyses, and pilot or demonstration projects were avoided. Information on the benefits and pitfalls of implementing a DER system was also presented from an additional ten sites, including agriculture, education, health 


  17. Natural time analysis of critical phenomena: The case of pre-fracture electromagnetic emissions

    SciTech Connect (OSTI)

    Potirakis, S. M.; Karadimitrakis, A.; Eftaxias, K.

    2013-06-15

    Criticality of complex systems reveals itself in various ways. One way to monitor a system at critical state is to analyze its observable manifestations using the recently introduced method of natural time. Pre-fracture electromagnetic (EM) emissions, in agreement with laboratory experiments, have been consistently detected in the MHz band prior to significant earthquakes. It has been proposed that these emissions stem from the fracture of the heterogeneous materials surrounding the strong entities (asperities) distributed along the fault, preventing the relative slipping. It has also been proposed that the fracture of heterogeneous material could be described in analogy to the critical phase transitions in statistical physics. In this work, natural time analysis is for the first time applied to the pre-fracture MHz EM signals, revealing their critical nature. Seismicity and pre-fracture EM emissions should be two sides of the same coin concerning the earthquake generation process. Therefore, we also examine the corresponding foreshock seismic activity, as another manifestation of the same complex system at critical state. We conclude that the foreshock seismicity data present criticality features as well.
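
    In natural time analysis, the k-th of N events is indexed by chi_k = k/N and weighted by its normalized energy p_k = Q_k / sum(Q); the variance kappa_1 = <chi^2> - <chi>^2 serves as the criticality indicator, approaching 0.070 at the critical state. A minimal sketch with synthetic event energies:

        # kappa_1 in natural time (synthetic event energies).
        def kappa1(energies):
            n = len(energies)
            total = sum(energies)
            p = [q / total for q in energies]
            chi = [(k + 1) / n for k in range(n)]
            m1 = sum(pk * xk for pk, xk in zip(p, chi))
            m2 = sum(pk * xk * xk for pk, xk in zip(p, chi))
            return m2 - m1 * m1

        print(round(kappa1([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]), 3))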

  18. Analysis of Customer Enrollment Patterns in Time-Based Rate Programs: Initial Results from the SGIG Consumer Behavior Studies (July 2013)

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Analysis of Customer Enrollment Patterns in Time-Based Rate Programs: Initial Results from the SGIG Consumer Behavior Studies (July 2013) | Department of Energy. The U.S. Department of Energy is implementing the Smart Grid Investment Grant (SGIG) program under the American Recovery and Reinvestment Act 


  19. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    SciTech Connect (OSTI)

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    2014-01-01

    Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. We characterize these five selected scenarios into three specific threat categories affecting confidentiality, integrity, and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  20. Waste-to-wheel analysis of anaerobic-digestion-based renewable natural gas pathways with the GREET model.

    SciTech Connect (OSTI)

    Han, J.; Mintz, M.; Wang, M.

    2011-12-14

    In 2009, manure management accounted for 2,356 Gg, or 107 billion standard cubic feet, of methane (CH₄) emissions in the United States, equivalent to 0.5% of U.S. natural gas (NG) consumption. Owing to the high global warming potential of methane, capturing and utilizing this methane source could reduce greenhouse gas (GHG) emissions. The extent of that reduction depends on several factors - most notably, how much of this manure-based methane can be captured, how much GHG is produced in the course of converting it to vehicular fuel, and how much GHG was produced by the fossil fuel it might displace. A life-cycle analysis was conducted to quantify these factors and, in so doing, assess the impact of converting methane from animal manure into renewable NG (RNG) and utilizing the gas in vehicles. Several manure-based RNG pathways were characterized in the GREET (Greenhouse gases, Regulated Emissions, and Energy use in Transportation) model, and their fuel-cycle energy use and GHG emissions were compared to petroleum-based pathways as well as to conventional fossil NG pathways. Results show that despite increased total energy use, both fossil fuel use and GHG emissions decline for most RNG pathways as compared with fossil NG and petroleum. However, GHG emissions for RNG pathways are highly dependent on the specifics of the reference case, as well as on the process energy emissions and methane conversion factors assumed for the RNG pathways. The most critical factors are the share of flared controllable CH₄ and the quantity of CH₄ lost during NG extraction in the reference case, the magnitude of N₂O lost in the anaerobic digestion (AD) process and in AD residue, and the amount of carbon sequestered in AD residue. In many cases, data for these parameters are limited and uncertain. Therefore, more research is needed to gain a better understanding of the range and magnitude of environmental benefits from converting animal manure to RNG via AD.

  1. U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis

    Office of Energy Efficiency and Renewable Energy (EERE)

    The National Renewable Energy Laboratory (NREL) routinely estimates the technical potential of specific renewable electricity generation technologies. These are technology-specific estimates of energy generation potential based only on renewable resource availability and quality, technical system performance, topographic limitations, and environmental and land-use constraints. The estimates do not consider (in most cases) economic or market constraints, and therefore do not represent a level of renewable generation that might actually be deployed. Technical potential estimates for six different renewable energy technologies were calculated by NREL, and methods and results for several other renewable technologies from previously published reports are also presented.

  2. Approach to proliferation risk assessment based on multiple objective analysis framework

    SciTech Connect (OSTI)

    Andrianov, A.; Kuptsov, I.

    2013-07-01

    The approach to the assessment of proliferation risk using the methods of multi-criteria decision making and multi-objective optimization is presented. The approach allows taking into account the specific features of the national nuclear infrastructure and possible proliferation strategies (motivations, intentions, and capabilities). Three examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (high-enriched uranium) production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Secondly, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materials circulating in nuclear fuel cycle facilities and thermal reactors. Thirdly, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.

  3. INEEL Subsurface Disposal Area CERCLA-based Decision Analysis for Technology Screening and Remedial Alternative Evaluation

    SciTech Connect (OSTI)

    Parnell, G. S.; Kloeber, Jr. J.; Westphal, D; Fung, V.; Richardson, John Grant

    2000-03-01

    A CERCLA-based decision analysis methodology for alternative evaluation and technology screening has been developed for application at the Idaho National Engineering and Environmental Laboratory WAG 7 OU13/14 Subsurface Disposal Area (SDA). Quantitative value functions derived from CERCLA balancing criteria in cooperation with State and Federal regulators are presented. A weighted criteria hierarchy is also summarized that relates individual value function numerical values to an overall score for a specific technology alternative.
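
    The weighted-criteria scoring amounts to a weight-averaged sum of per-criterion value-function scores; the criteria, weights, and scores below are invented for illustration, not the SDA values.

        # Overall score = sum over criteria of weight * value-function score (0-1).
        weights = {"effectiveness": 0.35, "implementability": 0.25,
                   "short_term_risk": 0.20, "cost": 0.20}
        alternatives = {
            "grouting": {"effectiveness": 0.7, "implementability": 0.8,
                         "short_term_risk": 0.6, "cost": 0.5},
            "retrieval": {"effectiveness": 0.9, "implementability": 0.4,
                          "short_term_risk": 0.3, "cost": 0.2},
        }
        for name, scores in alternatives.items():
            total = sum(weights[c] * scores[c] for c in weights)
            print(f"{name}: {total:.2f}")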

  4. CyberShake 3.0: Physics-based Probabilistic Seismic Hazard Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne Leadership Computing Facility. [Figure: A 3D view showing potential source faults for Southern California's next "big one." Dynamic rupture and wave propagation simulations produce a model of ground motion at the earth's surface. Colors indicate possible distributions of displacement across the faults during rupture. Credit: Geoffrey Ely, Southern California Earthquake Center.]

  5. CyberShake3.0: Physics-Based Probabilistic Seismic Hazard Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne Leadership Computing Facility. CyberShake3.0: Physics-Based Probabilistic Seismic Hazard Analysis. PI Name: Thomas Jordan; PI Email: tjordan@usc.edu; Institution: University of Southern California; Allocation Program: INCITE; Allocation Hours at ALCF: 2,000,000; Year: 2012; Research Domain: Earth Science. Recent destructive earthquakes, including Haiti (2010), Chile (2010), New Zealand (2011), and Japan (2011), highlight the national and international need for improved seismic hazard 


6. Posters: Preliminary Analysis of Ground-Based Microwave and Infrared Radiance Observations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Preliminary Analysis of Ground-Based Microwave and Infrared Radiance Observations During the Pilot Radiation OBservation Experiment. E. R. Westwater, Y. Han, J. H. Churnside, and J. B. Snider, National Oceanic and Atmospheric Administration, Environmental Research Laboratories, Environmental Technology Laboratory, Boulder, Colorado. Introduction: During Phase Two of the Pilot Radiation OBservation Experiment (PROBE) held in Kavieng, Papua New Guinea (Renné et al. 1994), the National Oceanic

7. Session Papers: Preliminary Analysis of Ground-Based Microwave and Infrared Radiance Observations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Preliminary Analysis of Ground-Based Microwave and Infrared Radiance Observations During the Pilot Radiation OBservation Experiment. E. R. Westwater, Y. Han, J. H. Churnside, and J. B. Snider, National Oceanic and Atmospheric Administration, Environmental Research Laboratories, Environmental Technology Laboratory, Boulder, Colorado. Introduction: During Phase Two of the Pilot Radiation OBservation Experiment (PROBE) held in Kavieng, Papua New Guinea (Renné et al. 1994), the National

  8. Global Assessment of Hydrogen Technologies – Tasks 3 & 4 Report Economic, Energy, and Environmental Analysis of Hydrogen Production and Delivery Options in Select Alabama Markets: Preliminary Case Studies

    SciTech Connect (OSTI)

Fouad, Fouad H.; Peters, Robert W.; Sisiopiku, Virginia P.; Sullivan, Andrew J.; Gillette, Jerry; Elgowainy, Amgad; Mintz, Marianne

    2007-12-01

    This report documents a set of case studies developed to estimate the cost of producing, storing, delivering, and dispensing hydrogen for light-duty vehicles for several scenarios involving metropolitan areas in Alabama. While the majority of the scenarios focused on centralized hydrogen production and pipeline delivery, alternative delivery modes were also examined. Although Alabama was used as the case study for this analysis, the results provide insights into the unique requirements for deploying hydrogen infrastructure in smaller urban and rural environments that lie outside the DOE’s high priority hydrogen deployment regions. Hydrogen production costs were estimated for three technologies – steam-methane reforming (SMR), coal gasification, and thermochemical water-splitting using advanced nuclear reactors. In all cases examined, SMR has the lowest production cost for the demands associated with metropolitan areas in Alabama. Although other production options may be less costly for larger hydrogen markets, these were not examined within the context of the case studies.

  9. A Raman cell based on hollow core photonic crystal fiber for human breath analysis

    SciTech Connect (OSTI)

    Chow, Kam Kong; Zeng, Haishan; Short, Michael; Lam, Stephen; McWilliams, Annette

    2014-09-15

Purpose: Breath analysis has the potential to benefit the medical field as a point-of-care, easy-to-use, and cost-effective technology. Early studies done by mass spectrometry show that volatile organic compounds from human breath can represent certain disease states of our bodies, such as lung cancer, and revealed the potential of breath analysis. But mass spectrometry is costly and has a slow turnaround time. The authors’ goal is to develop a more portable and cost-effective device based on Raman spectroscopy and hollow-core photonic crystal fiber (HC-PCF) for breath analysis. Methods: Raman scattering is a photon-molecular interaction based on the kinetic modes of an analyte, which offers unique fingerprint-type signals that allow molecular identification. HC-PCF is a novel light guide that confines light in a hollow core, which can be filled with a gaseous sample. Raman signals generated by the gaseous sample (i.e., human breath) can be guided and collected effectively for spectral analysis. Results: A Raman cell based on HC-PCF in the near-infrared wavelength range was developed and tested in a single-pass forward-scattering mode for different gaseous samples. Raman spectra were obtained successfully from reference gases (hydrogen, oxygen, and carbon dioxide), ambient air, and a human breath sample. The calculated minimum detectable concentration of this system was ~15 parts per million by volume, determined by measuring the carbon dioxide concentration in ambient air via the characteristic Raman peaks at 1286 and 1388 cm{sup −1}. Conclusions: The results of this study were compared to a previous study using HC-PCF to trap industrial gases and backward-scatter 514.5 nm light from them. The authors found that the method presented in this paper has an advantage in enhancing the signal-to-noise ratio (SNR). This SNR advantage, coupled with the better transmission of HC-PCF in the near-IR than in the

  10. Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based Stochastic Analysis of Injection-induced Seismicity; 2010 Geothermal Technology Program Peer Review Report

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based Stochastic Analysis of Injection-induced Seismicity. Presentation Number: 027. Investigator: Ghassemi, Ahmad (Texas A&M University). Objectives: To develop a model for seismicity-based reservoir characterization (SBRC) by combining rock mechanics, finite element modeling, and geostatistical concepts to establish relationships between microseismicity, reservoir flow, and geomechanical characteristics. Average Overall Score:

  11. Adapting a GIS-Based Multicriteria Decision Analysis Approach for Evaluating New Power Generating Sites

    SciTech Connect (OSTI)

    Omitaomu, Olufemi A; Blevins, Brandon R; Jochem, Warren C; Mays, Gary T; Belles, Randy; Hadley, Stanton W; Harrison, Thomas J; Bhaduri, Budhendra L; Neish, Bradley S; Rose, Amy N

    2012-01-01

There is a growing need to site new power generating plants that use cleaner energy sources due to increased regulations on air and water pollution and a sociopolitical desire to develop more clean energy sources. To assist utility and energy companies as well as policy-makers in evaluating potential areas for siting new plants in the contiguous United States, a geographic information system (GIS)-based multicriteria decision analysis approach is presented in this paper. The presented approach has led to the development of the Oak Ridge Siting Analysis for power Generation Expansion (OR-SAGE) tool. The tool takes inputs such as population growth, water availability, environmental indicators, and tectonic and geological hazards to provide an in-depth analysis for siting options. To the utility and energy companies, the tool can quickly and effectively provide feedback on land suitability based on technology-specific inputs. However, the tool does not replace the required detailed evaluation of candidate sites. To the policy-makers, the tool provides the ability to analyze the impacts of future energy technology while balancing competing resource use. A simple screening sketch is given below.
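
    A minimal sketch of the kind of exclusion screening such a GIS-based tool performs, assuming three hypothetical raster criteria on a common grid; OR-SAGE itself uses many more layers, real geospatial data, and technology-specific thresholds.

    ```python
    import numpy as np

    # Hypothetical raster layers on a common 100 x 100 grid (illustrative).
    rng = np.random.default_rng(0)
    population_density = rng.uniform(0, 500, (100, 100))    # people / km^2
    water_available = rng.uniform(0, 1, (100, 100)) > 0.3   # boolean layer
    seismic_hazard = rng.uniform(0, 1, (100, 100))          # normalized index

    # Exclusion screening: a cell survives only if it passes every criterion.
    candidate = ((population_density < 200) &
                 water_available &
                 (seismic_hazard < 0.6))

    print(f"{candidate.mean():.1%} of cells remain candidate siting areas")
    ```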

  12. FERC's acceptance of market-based pricing: An antitrust analysis. [Federal Energy Regulatory Commission

    SciTech Connect (OSTI)

Harris, B.C.; Frankena, M.W.

    1992-06-01

In large part, FERC's determination of market power is based on an analysis that focuses on the ability of power suppliers to ``foreclose'' other potential power suppliers by withholding transmission access to the buyer. The authors believe that this analysis is flawed because the conditions it considers are neither necessary nor sufficient for the existence of market power. That is, it is possible that market-based rates can be subject to market power even if no transmission supplier has the ability to foreclose some power suppliers; conversely, it is possible that no market power exists despite the ability to foreclose other suppliers. This paper provides a critical analysis of FERC's market-power determinations. The concept of market power is defined and its relationship to competition is discussed in Section 1, while a framework for evaluating the existence of market power is presented in Section 2. In Section 3, FERC's recent order in Terra Comfort is examined using this framework. A brief preview of FERC's order in TECO Power Services comprises Section 4. Overall conclusions are presented in Section 5.

  13. Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP-based process.

  14. Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    SciTech Connect (OSTI)

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP-based process.
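
    As an illustration of the general workflow described in the two records above (a standard GP emulator plus Metropolis MCMC, not the FFGP model itself), the sketch below trains an emulator on a few runs of a stand-in "code" and then samples the posterior of a friction-factor-like parameter. The physics function, observation, noise level, and prior range are all hypothetical.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Stand-in for an expensive safety code: pressure drop vs. friction factor.
    def code_output(theta):
        return 0.5 * theta * 1.8 ** 2          # placeholder physics

    # 1) Train the emulator on a handful of code runs.
    train_theta = np.linspace(0.01, 0.05, 8).reshape(-1, 1)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.01), alpha=1e-10)
    gp.fit(train_theta, code_output(train_theta).ravel())

    # 2) Metropolis MCMC against a hypothetical noisy measurement, calling
    #    the fast emulator instead of the code inside the likelihood.
    obs, sigma = 0.032, 0.002

    def log_post(theta):
        if not 0.01 <= theta <= 0.05:          # flat prior on the design range
            return -np.inf
        pred = gp.predict(np.array([[theta]]))[0]
        return -0.5 * ((pred - obs) / sigma) ** 2

    rng = np.random.default_rng(1)
    theta, chain = 0.03, []
    for _ in range(3000):
        prop = theta + rng.normal(0.0, 0.002)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        chain.append(theta)
    print(f"posterior mean friction factor ~ {np.mean(chain[500:]):.4f}")
    ```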

  15. Methods for simulation-based analysis of fluid-structure interaction.

    SciTech Connect (OSTI)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.

  16. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    SciTech Connect (OSTI)

    Farlotti, M.; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
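
    A minimal sketch of the eigenvalue step: once Monte Carlo transport has tallied a fission matrix, the dominant eigenpair (k-effective and the assembly-wise fission source) follows from power iteration. The 8x8 matrix entries below are illustrative placeholders, not tallied values from the paper.

    ```python
    import numpy as np

    # Hypothetical 8-assembly fission matrix: F[i, j] = expected fission
    # neutrons born in assembly i per fission neutron born in assembly j.
    n = 8
    F = 0.02 * np.ones((n, n))                        # weak long-range coupling
    F += 0.08 * (np.eye(n, k=1) + np.eye(n, k=-1))    # nearest neighbors
    np.fill_diagonal(F, 0.9)                          # strong self-coupling

    # Power iteration for the dominant eigenpair.
    source = np.ones(n) / n
    for _ in range(200):
        new = F @ source
        k_eff = new.sum()          # L1 ratio, since `source` sums to one
        source = new / k_eff
    print(f"k-eff ~ {k_eff:.4f}")
    print("fission source:", np.round(source, 3))
    ```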

  17. Impact of x-ray dose on track formation and data analysis for CR-39-based proton diagnostics

    SciTech Connect (OSTI)

Rinderknecht, H. G.; Rojas-Herrera, J.; Zylstra, A. B.; Frenje, J. A.; Gatu Johnson, M.; Sio, H.; Sinenian, N.; Rosenberg, M. J.; Li, C. K.; SĂ©guin, F. H.; Petrasso, R. D.; Filkins, T.; Steidle, Jessica A.; Traynor, N.; Freeman, C.; Steidle, Jeffrey A.

    2015-12-15

The nuclear track detector CR-39 is used extensively for charged particle diagnosis, in particular proton spectroscopy, at inertial confinement fusion facilities. These detectors can absorb x-ray doses from the experiments on the order of 1–100 Gy, the effects of which are not accounted for in the previous detector calibrations. X-ray dose absorbed in the CR-39 has previously been shown to affect the track size of alpha particles in the detector, primarily due to a measured reduction in the material bulk etch rate [Rojas-Herrera et al., Rev. Sci. Instrum. 86, 033501 (2015)]. Similar to the previous findings for alpha particles, protons with energies in the range 0.5–9.1 MeV are shown to produce tracks that are systematically smaller as a function of the absorbed x-ray dose in the CR-39. The reduction of track size due to x-ray dose is found to diminish with time between exposure and etching if the CR-39 is stored at ambient temperature, and complete recovery is observed after two weeks. The impact of this effect on the analysis of data from existing CR-39-based proton diagnostics on OMEGA and the National Ignition Facility is evaluated and best practices are proposed for cases in which the effect of x rays is significant.

  18. Impact of x-ray dose on track formation and data analysis for CR-39-based proton diagnostics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Rinderknecht, H. G.; Rojas-Herrera, J.; Zylstra, A. B.; Frenje, J. A.; Gatu Johnson, M.; Sio, H.; Sinenian, N.; Rosenberg, M. J.; Li, C. K.; Seguin, F. H.; et al

    2015-12-23

The nuclear track detector CR-39 is used extensively for charged particle diagnosis, in particular proton spectroscopy, at inertial confinement fusion facilities. These detectors can absorb x-ray doses from the experiments on the order of 1–100 Gy, the effects of which are not accounted for in the previous detector calibrations. X-ray dose absorbed in the CR-39 has previously been shown to affect the track size of alpha particles in the detector, primarily due to a measured reduction in the material bulk etch rate [Rojas-Herrera et al., Rev. Sci. Instrum. 86, 033501 (2015)]. Similar to the previous findings for alpha particles, protons with energies in the range 0.5–9.1 MeV are shown to produce tracks that are systematically smaller as a function of the absorbed x-ray dose in the CR-39. The reduction of track size due to x-ray dose is found to diminish with time between exposure and etching if the CR-39 is stored at ambient temperature, and complete recovery is observed after two weeks. Lastly, the impact of this effect on the analysis of data from existing CR-39-based proton diagnostics on OMEGA and the National Ignition Facility is evaluated and best practices are proposed for cases in which the effect of x rays is significant.

  19. Analysis of FEL-based CeC amplification at high gain limit

    SciTech Connect (OSTI)

    Wang, G.; Litvinenko, V.; Jing, Y.

    2015-05-03

An analysis of a Coherent electron Cooling (CeC) amplifier based on 1D Free Electron Laser (FEL) theory was previously performed with an exact solution of the dispersion relation, assuming electrons with a Lorentzian energy distribution. In the high-gain limit, the asymptotic behavior of the FEL amplifier can be better understood by Taylor expanding the exact solution of the dispersion relation with respect to the detuning parameter. In this work, we make a quadratic expansion of the dispersion relation for a Lorentzian energy distribution and investigate how longitudinal space charge and the electrons’ energy spread affect the FEL amplification process.

  20. Genetic Algorithms for Agent-Based Infrastructure Interdependency Modeling and Analysis

    SciTech Connect (OSTI)

    May Permann

    2007-03-01

    Today’s society relies greatly upon an array of complex national and international infrastructure networks such as transportation, electric power, telecommunication, and financial networks. This paper describes initial research combining agent-based infrastructure modeling software and genetic algorithms (GAs) to help optimize infrastructure protection and restoration decisions. This research proposes to apply GAs to the problem of infrastructure modeling and analysis in order to determine the optimum assets to restore or protect from attack or other disaster. This research is just commencing and therefore the focus of this paper is the integration of a GA optimization method with a simulation through the simulation’s agents.
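
    A toy sketch of the optimization idea, assuming hypothetical asset costs and restoration values: a GA evolves bit strings that select which assets to protect under a budget. The actual research couples the GA to agent-based infrastructure simulations rather than to a closed-form fitness function like the one below.

    ```python
    import random

    random.seed(42)

    # Hypothetical infrastructure assets: (protection cost, protected value).
    assets = [(4, 9), (3, 6), (5, 11), (2, 3), (6, 14), (1, 2), (4, 8), (3, 7)]
    BUDGET = 12

    def fitness(bits):
        cost = sum(c for b, (c, v) in zip(bits, assets) if b)
        value = sum(v for b, (c, v) in zip(bits, assets) if b)
        return value if cost <= BUDGET else 0    # infeasible -> worthless

    def crossover(a, b):
        cut = random.randrange(1, len(a))        # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(bits, rate=0.1):
        return [b ^ (random.random() < rate) for b in bits]   # bit flips

    pop = [[random.randint(0, 1) for _ in assets] for _ in range(30)]
    for _ in range(100):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]                         # keep the best, breed the rest
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(20)]
    best = max(pop, key=fitness)
    print("protect assets:", [i for i, b in enumerate(best) if b],
          "value:", fitness(best))
    ```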

  1. Economic Analysis for Conceptual Design of Oxygen-Based PC Boiler

    SciTech Connect (OSTI)

    Andrew Seltzer

    2005-02-01

    The objective of the economic analysis is to prepare a budgetary estimate of capital and operating costs of the O{sub 2}-fired PC power plant as well as for the equivalent conventional PC-fired power plant. Capital and operating costs of conventional steam generation, steam heating, and power generation equipment are estimated based on Foster Wheeler's extensive experience and database. Capital and operating costs of equipment, such as oxygen separation and CO{sub 2} liquefaction, are based on vendor supplied data and FW process plant experience. The levelized cost of electricity is determined for both the air-fired and O{sub 2}-fired power plants as well as the CO{sub 2} mitigation cost. An economic comparison between the O{sub 2}-fired PC and other alternate technologies is presented.

  2. Loading and Regeneration Analysis of a Diesel Particulate Filter with a Radio Frequency-Based Sensor

    SciTech Connect (OSTI)

    Sappok, Alex; Prikhodko, Vitaly Y; Parks, II, James E

    2010-01-01

Accurate knowledge of diesel particulate filter (DPF) loading is critical for robust and efficient operation of the combined engine-exhaust aftertreatment system. Furthermore, upcoming on-board diagnostics regulations require on-board technologies to evaluate the status of the DPF. This work describes the application of radio frequency (RF) based sensing techniques to accurately measure DPF soot levels and the spatial distribution of the accumulated material. A 1.9L GM turbo diesel engine and a DPF with an RF-sensor were studied. Direct comparisons between the RF measurement and conventional pressure-based methods were made. Further analysis of the particulate matter loading rates was obtained with a mass-based soot emission measurement instrument (TEOM). Comparisons with pressure drop measurements show that the RF technique is unaffected by exhaust flow variations and exhibits a high degree of sensitivity to DPF soot loading and good dynamic response. Additional computational and experimental work further illustrates the spatial resolution of the RF measurements. Based on the experimental results, the RF technique shows significant promise for improving DPF control, enabling optimization of the combined engine-aftertreatment system for improved fuel economy and extended DPF service life.

  3. CIRA: A Microcomputer-based energy analysis and auditing tool for residential applications

    SciTech Connect (OSTI)

    Sonderegger, R.C.; Dixon, J.D.

    1983-01-01

Computerized, Instrumented, Residential Audit (CIRA) is a collection of programs for energy analysis and energy auditing of residential buildings. CIRA is written for microcomputers with a CP/M operating system and 64K RAM. Its principal features are: user-friendliness, dynamic defaults, file-oriented structure, design energy analysis capability, economic optimization of retrofits, and graphic and tabular output to screen and printer. To calculate monthly energy consumptions for both design and retrofit analyses, CIRA uses a modified degree-day and degree-night approach, taking into account solar gains, IR losses to the sky, internal gains, and ground heat transfer; the concept of a solar storage factor addresses the delayed effect of daytime solar gains, while the concept of effective thermal mass ensures proper handling of changes in thermostat setting from day to night; air infiltration is modeled using the LBL infiltration model based on effective leakage area; and HVAC system performance is modeled using correlations developed for DOE-2.1. For any given budget, CIRA can also develop an optimally sequenced list of retrofits with the highest combined savings. Long run-times necessary for economic optimization of retrofits are greatly reduced by using a method based on partial derivatives of energy consumption with respect to principal building parameters. Energy calculations of CIRA compare well with those of DOE-2.1 and with measured energy consumptions from a sample of monitored houses.
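
    A minimal sketch of a degree-day heating estimate of the kind CIRA computes monthly (without the degree-night split, solar storage factor, or effective thermal mass refinements); the UA value, balance-point degree-days, gains fraction, and efficiency are hypothetical round numbers.

    ```python
    # Hypothetical whole-house parameters (illustrative round numbers).
    UA = 250.0                 # overall loss coefficient, W/K
    efficiency = 0.80          # seasonal heating system efficiency
    gains_fraction = 0.15      # load offset by solar and internal gains
    monthly_hdd = {"Jan": 650.0, "Feb": 540.0, "Mar": 430.0}  # K-day, base 18.3 C

    for month, hdd in monthly_hdd.items():
        load_kwh = UA * hdd * 24.0 / 1000.0    # W/K * K-day * h/day -> kWh
        purchased = load_kwh * (1.0 - gains_fraction) / efficiency
        print(f"{month}: ~{purchased:.0f} kWh purchased heating energy")
    ```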

  4. A knowledge-based approach to the adaptive finite element analysis

    SciTech Connect (OSTI)

    Haghighi, K.; Kang, E.

    1995-12-31

An automatic and knowledge-based finite element mesh generator (INTELMESH), which makes extensive use of interactive computer graphics techniques, has been developed. INTELMESH is designed for planar domains and axisymmetric 3-D structures of elasticity and heat transfer subjected to mechanical and thermal loading. It intelligently identifies the critical regions/points in the problem domain and utilizes the new concepts of substructuring and wave propagation to choose the proper mesh size for them. INTELMESH generates well-shaped triangular elements by applying triangulation and Laplacian smoothing procedures. The adaptive analysis involves the initial finite element analysis and an efficient a posteriori error analysis and estimation. Once a problem is defined, the system automatically builds a finite element model and analyzes the problem through an automatic iterative process until the error reaches a desired level. It has been shown that the proposed approach, which initiates the process with an a priori, near-optimum mesh of the object, converges to the desired accuracy in less time and at less cost.

  5. A Laser-Based Method for On-Site Analysis of UF6 at Enrichment Plants

    SciTech Connect (OSTI)

    Anheier, Norman C.; Cannon, Bret D.; Martinez, Alonzo; Barrett, Christopher A.; Taubman, Matthew S.; Anderson, Kevin K.; Smith, Leon E.

    2014-11-23

    The International Atomic Energy Agency’s (IAEA’s) long-term research and development plan calls for more cost-effective and efficient safeguard methods to detect and deter misuse of gaseous centrifuge enrichment plants (GCEPs). The IAEA’s current safeguards approaches at GCEPs are based on a combination of routine and random inspections that include environmental sampling and destructive assay (DA) sample collection from UF6 in-process material and selected cylinders. Samples are then shipped offsite for subsequent laboratory analysis. In this paper, a new DA sample collection and onsite analysis approach that could help to meet challenges in transportation and chain of custody for UF6 DA samples is introduced. This approach uses a handheld sampler concept and a Laser Ablation, Laser Absorbance Spectrometry (LAARS) analysis instrument, both currently under development at the Pacific Northwest National Laboratory. A LAARS analysis instrument could be temporarily or permanently deployed in the IAEA control room of the facility, in the IAEA data acquisition cabinet, for example. The handheld PNNL DA sampler design collects and stabilizes a much smaller DA sample mass compared to current sampling methods. The significantly lower uranium mass reduces the sample radioactivity and the stabilization approach diminishes the risk of uranium and hydrogen fluoride release. These attributes enable safe sample handling needed during onsite LAARS assay and may help ease shipping challenges for samples to be processed at the IAEA’s offsite laboratory. The LAARS and DA sampler implementation concepts will be described and preliminary technical viability results presented.

  6. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    SciTech Connect (OSTI)

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  7. The Milling Assistant, Case-Based Reasoning, and machining strategy: A report on the development of automated numerical control programming systems at New Mexico State University

    SciTech Connect (OSTI)

    Burd, W.; Culler, D.; Eskridge, T.; Cox, L.; Slater, T.

    1993-08-01

    The Milling Assistant (MA) programming system demonstrates the automated development of tool paths for Numerical Control (NC) machine tools. By integrating a Case-Based Reasoning decision processor with a commercial CAD/CAM software, intelligent tool path files for milled and point-to-point features can be created. The operational system is capable of reducing the time required to program a variety of parts and improving product quality by collecting and utilizing ``best of practice`` machining strategies.

  8. Modeling of electrodes and implantable pulse generator cases for the analysis of implant tip heating under MR imaging

    SciTech Connect (OSTI)

Acikel, Volkan; Atalar, Ergin; Uslubas, Ali

    2015-07-15

    Purpose: The authors’ purpose is to model the case of an implantable pulse generator (IPG) and the electrode of an active implantable medical device using lumped circuit elements in order to analyze their effect on radio frequency induced tissue heating problem during a magnetic resonance imaging (MRI) examination. Methods: In this study, IPG case and electrode are modeled with a voltage source and impedance. Values of these parameters are found using the modified transmission line method (MoTLiM) and the method of moments (MoM) simulations. Once the parameter values of an electrode/IPG case model are determined, they can be connected to any lead, and tip heating can be analyzed. To validate these models, both MoM simulations and MR experiments were used. The induced currents on the leads with the IPG case or electrode connections were solved using the proposed models and the MoTLiM. These results were compared with the MoM simulations. In addition, an electrode was connected to a lead via an inductor. The dissipated power on the electrode was calculated using the MoTLiM by changing the inductance and the results were compared with the specific absorption rate results that were obtained using MoM. Then, MRI experiments were conducted to test the IPG case and the electrode models. To test the IPG case, a bare lead was connected to the case and placed inside a uniform phantom. During a MRI scan, the temperature rise at the lead was measured by changing the lead length. The power at the lead tip for the same scenario was also calculated using the IPG case model and MoTLiM. Then, an electrode was connected to a lead via an inductor and placed inside a uniform phantom. During a MRI scan, the temperature rise at the electrode was measured by changing the inductance and compared with the dissipated power on the electrode resistance. Results: The induced currents on leads with the IPG case or electrode connection were solved for using the combination of the MoTLiM and

  9. Ion Trap Array-Based Systems And Methods For Chemical Analysis

    DOE Patents [OSTI]

Whitten, William B. [Oak Ridge, TN]; Ramsey, J. Michael [Knoxville, TN]

    2005-08-23

An ion trap-based system for chemical analysis includes an ion trap array. The ion trap array includes a plurality of ion traps arranged in a 2-dimensional array for initially confining ions. Each of the ion traps comprises a central electrode having an aperture, first and second insulators each having an aperture and sandwiching the central electrode, and first and second end cap electrodes each having an aperture and sandwiching the first and second insulators. A structure for simultaneously directing a plurality of different species of ions out from the ion traps is provided. A spectrometer including a detector receives and identifies the ions. The trap array can be used with spectrometers including time-of-flight mass spectrometers and ion mobility spectrometers.

  10. Microcomputer Spectrum Analysis Models (MSAM) with terrain data base (for microcomputers). Software

    SciTech Connect (OSTI)

    Not Available

    1992-08-01

    The package contains a collection of 14 radio frequency communications engineering and spectrum management programs plus a menu program. An associated terrain elevation data base with 30-second data is provided for the U.S. (less Alaska), Hawaii, Puerto Rico, the Caribbean and border areas of Canada and Mexico. The following programs are included: Bearing/Distance Program (BDIST); Satellite Azimuth Program (SATAZ); Intermodulation Program (INTMOD); NLAMBDA-90 smooth-earth propagation program (NL90); Frequency Dependent Rejection program (FDR); ANNEX I program to evaluate frequency proposals per NTIA Manual (ANNEXI); Antenna Field Intensity program (AFI); Personal Computer Plot 2-D graphics program (PCPLT); Profile 4/3 earth terrain elevation plot program (PROFILE); Horizon radio line-of-sight plot program (HORIZON); Single-Emitter Analysis Mode (SEAM); Terrain Integrated Rough-Earth Model (TIREM); Power Density Display Program to produce power contour map (PDDP); Line-of-Sight antenna coverage map program (SHADO).

  11. Difference between healthy children and ADHD based on wavelet spectral analysis of nuclear magnetic resonance images

    SciTech Connect (OSTI)

Gonzålez Gómez, Dulce I.; Moreno Barbosa, E.; Martínez Hernåndez, Mario Ivån; Ramos Méndez, José; Hidalgo Tobón, Silvia; Dies Suarez, Pilar; Barragån Pérez, Eduardo; De Celis Alonso, Benito

    2014-11-07

The main goal of this project was to create a computer algorithm based on wavelet analysis of regional homogeneity images obtained during resting-state studies. Ideally it would diagnose ADHD automatically. Because the cerebellum is an area known to be affected by ADHD, this study specifically analysed this region. Right-handed male volunteers (children aged between 7 and 11 years) were studied and compared with age-matched controls. The values of the absolute integrated wavelet spectrum showed significant differences (p<0.0015) between groups. This difference might help in the future to distinguish healthy subjects from ADHD patients and therefore to diagnose ADHD. Although the results were statistically significant, the small sample size limits the applicability of the method as presented here, and further work with larger samples and freely available datasets must be done.
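
    One plausible reading of the "absolute integrated wavelet spectrum" metric (an assumption, since the abstract does not define it) is sketched below on a synthetic time series using PyWavelets: a multilevel wavelet decomposition followed by per-level integration of absolute coefficient values.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # Synthetic stand-in for a voxel time series from the cerebellum.
    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 1.0, 256) + 0.5 * np.sin(np.arange(256) * 0.2)

    coeffs = pywt.wavedec(signal, "db4", level=4)     # multilevel decomposition
    spectrum = [np.abs(c).sum() for c in coeffs]      # per-level integration
    print("absolute integrated wavelet spectrum:", np.round(spectrum, 1))
    print("total:", round(sum(spectrum), 1))
    ```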

  12. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOE Patents [OSTI]

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
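
    A brute-force sketch of the core idea on a five-link toy network: enumerate minimal cut sets from link failures only, then bound all-terminal unreliability by summing cut-set failure probabilities (a union bound). The patented search algorithm and quantification scheme are far more efficient than this exhaustive enumeration.

    ```python
    from itertools import combinations

    # Toy 4-node network; links are frozensets of their two endpoints.
    NODES = {0, 1, 2, 3}
    links = [frozenset(p) for p in [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]]

    def connected(up_links):
        """All-terminal check: every node reachable over surviving links."""
        seen, stack = {0}, [0]
        while stack:
            node = stack.pop()
            for link in up_links:
                if node in link:
                    other = next(iter(link - {node}))
                    if other not in seen:
                        seen.add(other)
                        stack.append(other)
        return seen == NODES

    # Minimal cut sets: failing the set disconnects the network, and no
    # smaller already-found cut set is contained in it.
    cuts = []
    for r in range(1, len(links) + 1):
        for combo in combinations(links, r):
            survivors = [l for l in links if l not in combo]
            if not connected(survivors) and \
                    not any(set(c) <= set(combo) for c in cuts):
                cuts.append(combo)

    p = 0.01   # illustrative per-link failure probability
    print(len(cuts), "minimal cut sets;",
          f"all-terminal unreliability <~ {sum(p ** len(c) for c in cuts):.2e}")
    ```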

13. Orbit-based analysis of resonant excitations of Alfvén waves in tokamaks

    SciTech Connect (OSTI)

    Bierwage, Andreas; Shinohara, Kouji

    2014-11-15

The exponential growth phase of fast-ion-driven Alfvénic instabilities is simulated and the resonant wave-particle interactions are analyzed numerically. The simulations are carried out in realistic magnetic geometry and with a realistic particle distribution for a JT-60U plasma driven by negative-ion-based neutral beams. In order to deal with the large magnetic drifts of the fast ions, two new mapping methods are developed and applied. The first mapping yields the radii and pitch angles at the points where the unperturbed orbit of a particle intersects the mid-plane. These canonical coordinates allow analysis results (e.g., drive profiles and resonance widths) to be expressed in a form that is easy to understand and directly comparable to the radial mode structure. The second mapping yields the structure of the wave field along the particle trajectory. This allows us to unify resonance conditions for trapped and passing particles, determine which harmonics are driven, and which orders of the resonance are involved. This orbit-based resonance analysis (ORA) method is applied to fast-ion-driven instabilities with toroidal mode numbers n = 1-3. After determining the order and width of each resonance, the kinetic compression of resonant particles and the effect of linear resonance overlap are examined. On the basis of the ORA results, implications for the fully nonlinear regime, for the long-time evolution of the system in the presence of a fast-ion source, and for the interpretation of experimental observations are discussed.

  14. Fenestration performance analysis using an interactive graphics-based methodology on a microcomputer

    SciTech Connect (OSTI)

    Sullivan, R.; Selkowitz, S.

    1989-09-01

We show the development and implementation of a new methodology that can be used to evaluate the energy and comfort performance of fenestration in non-residential buildings. The methodology is based on the definition of a fenestration system ``figure of merit.'' The ``figure of merit'' is determined by considering five non-dimensional performance indices representing heating energy, cooling energy, cooling energy peak, thermal comfort, and visual comfort. These indices were derived by performing a regression analysis of several thousand hour-by-hour building heat transfer simulations of a prototypical office building module using the DOE-2 simulation program. The regression analysis resulted in a series of simplified algebraic expressions that related fenestration configuration variables to performance parameters. We implemented these equations in a ``hypermedia'' environment -- one that integrates graphics, sound, animation, and calculation sequences -- and created a prototype fenestration performance design tool. Inputs required by the program consist of geographic location, building type, perimeter space, and envelope definition. Outputs are the calculated performance indices for electricity and fuel use, peak electric load, and thermal and visual comfort. 6 refs., 7 figs.

  15. Model-Based Analysis of Electric Drive Options for Medium-Duty Parcel Delivery Vehicles: Preprint

    SciTech Connect (OSTI)

    Barnitt, R. A.; Brooker, A. D.; Ramroth, L.

    2010-12-01

    Medium-duty vehicles are used in a broad array of fleet applications, including parcel delivery. These vehicles are excellent candidates for electric drive applications due to their transient-intensive duty cycles, operation in densely populated areas, and relatively high fuel consumption and emissions. The National Renewable Energy Laboratory (NREL) conducted a robust assessment of parcel delivery routes and completed a model-based techno-economic analysis of hybrid electric vehicle (HEV) and plug-in hybrid electric vehicle configurations. First, NREL characterized parcel delivery vehicle usage patterns, most notably daily distance driven and drive cycle intensity. Second, drive-cycle analysis results framed the selection of drive cycles used to test a parcel delivery HEV on a chassis dynamometer. Next, measured fuel consumption results were used to validate simulated fuel consumption values derived from a dynamic model of the parcel delivery vehicle. Finally, NREL swept a matrix of 120 component size, usage, and cost combinations to assess impacts on fuel consumption and vehicle cost. The results illustrated the dependency of component sizing on drive-cycle intensity and daily distance driven and may allow parcel delivery fleets to match the most appropriate electric drive vehicle to their fleet usage profile.

  16. Analysis of laser remote fusion cutting based on a mathematical model

    SciTech Connect (OSTI)

    Matti, R. S.; Ilar, T.; Kaplan, A. F. H.

    2013-12-21

Laser remote fusion cutting is analyzed with the aid of a semi-analytical mathematical model of the processing front. By local calculation of the energy balance between the absorbed laser beam and the heat losses, the three-dimensional vaporization front can be calculated. Based on an empirical model for the melt flow field, the melt film and the melting front can be derived from a mass balance, although only in a simplified manner and for quasi-steady-state conditions. Front waviness and multiple reflections are not modelled. The model enables comparison of the similarities, differences, and limits between laser remote fusion cutting, laser remote ablation cutting, and even laser keyhole welding. In contrast to the upper part of the vaporization front, the major part varies only slightly with respect to heat flux, laser power density, absorptivity, and angle of front inclination. Statistical analysis shows that for high cutting speeds, the domains of high laser power density contribute much more to the formation of the front than for low speeds. The semi-analytical modelling approach offers flexibility to simplify part of the process physics while, for example, sophisticated modelling of the complex focused fibre-guided laser beam is taken into account to enable deeper analysis of the beam interaction. Mechanisms like recast layer generation, absorptivity at a wavy processing front, and melt film formation are studied too.

  17. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    SciTech Connect (OSTI)

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
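
    A minimal sketch of the rejection mechanism on a two-reaction toy network: propensity upper bounds are computed once per "fluctuation interval" around the current state, candidate firings are drawn from the bounds, and thinning accepts them at the true rate, so propensity updates are postponed until the state leaves the interval. The network, rate constants, and the 10% interval width are illustrative, and this sketch omits the lower-bound shortcut and other refinements of the published RSSA.

    ```python
    import math, random

    random.seed(7)

    # Toy reversible isomerization: A -> B (k1), B -> A (k2), mass action.
    k = (1.0, 0.5)
    stoich = [(-1, +1), (+1, -1)]
    state = [100, 0]                          # counts of A and B

    def propensities(s):
        return [k[0] * s[0], k[1] * s[1]]

    def interval(s, delta=0.1):
        lo = [math.floor(x * (1 - delta)) for x in s]
        hi = [math.ceil(x * (1 + delta)) for x in s]
        return lo, hi

    t, t_end = 0.0, 5.0
    lo, hi = interval(state)
    a_hi = propensities(hi)                   # upper bounds on the interval
    while t < t_end:
        total = sum(a_hi)
        t += random.expovariate(total)        # candidates arrive at rate `total`
        u, j = random.uniform(0.0, total), 0  # pick candidate prop. to its bound
        while u > a_hi[j]:
            u -= a_hi[j]
            j += 1
        # Thinning: accept with a_j(x)/a_hi_j, so j fires at its true rate.
        if random.random() < propensities(state)[j] / a_hi[j]:
            state = [x + d for x, d in zip(state, stoich[j])]
            if not all(l <= x <= h for l, x, h in zip(lo, state, hi)):
                lo, hi = interval(state)      # left the interval: refresh bounds
                a_hi = propensities(hi)
    print(f"t = {t:.2f}, [A, B] = {state}")
    ```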

  18. Knowledge-based analysis of microarray gene expression data by using support vector machines

    SciTech Connect (OSTI)

    William Grundy; Manuel Ares, Jr.; David Haussler

    2001-06-18

    The authors introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. They test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, they use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
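
    A minimal sketch of the classification setup on synthetic data standing in for microarray measurements (genes as examples, expression conditions as features); the RBF similarity function is one choice among those the paper compares, and the sizes, labels, and signal strength below are illustrative.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Synthetic stand-in for microarray data: 60 genes x 20 conditions, with
    # labels marking membership in one (hypothetical) functional class.
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, (60, 20))
    y = np.zeros(60, dtype=int)
    y[:20] = 1
    X[:20, :5] += 1.5            # class genes co-express in five conditions

    idx = rng.permutation(60)    # shuffle, then hold out the last 15 genes
    X, y = X[idx], y[idx]

    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[:45], y[:45])
    print("held-out accuracy:", (clf.predict(X[45:]) == y[45:]).mean())
    ```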

  19. NASTRAN-based computer program for structural dynamic analysis of horizontal axis wind turbines

    SciTech Connect (OSTI)

    Lobitz, D.W.

    1984-01-01

    This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWTs). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWTs driven by turbulent winds.
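
    The implicit time-domain solve can be sketched compactly. Below is a standard Newmark-Beta integrator in its average-acceleration form (beta = 1/4, gamma = 1/2) applied to a hypothetical 2-DOF system M a + C v + K x = f(t); the matrices and forcing are illustrative stand-ins, not the NASTRAN-derived tower and rotor models of the paper.

    ```python
    import numpy as np

    M = np.diag([2.0, 1.0])
    K = np.array([[400.0, -200.0], [-200.0, 200.0]])
    C = 0.02 * K                               # stiffness-proportional damping
    beta, gamma, dt = 0.25, 0.5, 0.01

    f = lambda t: np.array([0.0, 10.0 * np.sin(5.0 * t)])  # wind-like forcing
    x, v = np.zeros(2), np.zeros(2)
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ x)

    # Effective stiffness is constant for constant dt, so form it once.
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
    for step in range(1, 501):
        t = step * dt
        rhs = (f(t)
               + M @ (x / (beta * dt ** 2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + C @ (gamma / (beta * dt) * x + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        x_new = np.linalg.solve(K_eff, rhs)
        a_new = ((x_new - x) / (beta * dt ** 2) - v / (beta * dt)
                 - (1 / (2 * beta) - 1) * a)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        x, v, a = x_new, v_new, a_new
    print("displacement at t = 5 s:", np.round(x, 4))
    ```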

  20. Economic analysis of coal-fired cogeneration plants for Air Force bases

    SciTech Connect (OSTI)

    Holcomb, R.S.; Griffin, F.P.

    1990-10-01

    The Defense Appropriations Act of 1986 requires the Department of Defense to use an additional 1,600,000 tons/year of coal at their US facilities by 1995 and also states that the most economical fuel should be used at each facility. In a previous study of Air Force heating plants burning gas or oil, Oak Ridge National Laboratory found that only a small fraction of this target 1,600,000 tons/year could be achieved by converting the plants where coal is economically viable. To identify projects that would use greater amounts of coal, the economic benefits of installing coal-fired cogeneration plants at 7 candidate Air Force bases were examined in this study. A life-cycle cost analysis was performed that included two types of financing (Air Force and private) and three levels of energy escalation for a total of six economic scenarios. Hill, McGuire, and Plattsburgh Air Force Bases were identified as the facilities with the best potential for coal-fired cogeneration, but the actual cost savings will depend strongly on how the projects are financed and to a lesser extent on future energy escalation rates. 10 refs., 11 figs., 27 tabs.

  1. Monitoring Based Commissioning: Benchmarking Analysis of 24 UC/CSU/IOU Projects

    SciTech Connect (OSTI)

    Mills, Evan; Mathew, Paul

    2009-04-01

    Buildings rarely perform as intended, resulting in energy use that is higher than anticipated. Building commissioning has emerged as a strategy for remedying this problem in non-residential buildings. Complementing traditional hardware-based energy savings strategies, commissioning is a 'soft' process of verifying performance and design intent and correcting deficiencies. Through an evaluation of a series of field projects, this report explores the efficacy of an emerging refinement of this practice, known as monitoring-based commissioning (MBCx). MBCx can also be thought of as monitoring-enhanced building operation that incorporates three components: (1) Permanent energy information systems (EIS) and diagnostic tools at the whole-building and sub-system level; (2) Retro-commissioning based on the information from these tools and savings accounting emphasizing measurement as opposed to estimation or assumptions; and (3) On-going commissioning to ensure efficient building operations and measurement-based savings accounting. MBCx is thus a measurement-based paradigm which affords improved risk-management by identifying problems and opportunities that are missed with periodic commissioning. The analysis presented in this report is based on in-depth benchmarking of a portfolio of MBCx energy savings for 24 buildings located throughout the University of California and California State University systems. In the course of the analysis, we developed a quality-control/quality-assurance process for gathering and evaluating raw data from project sites and then selected a number of metrics to use for project benchmarking and evaluation, including appropriate normalizations for weather and climate, accounting for variations in central plant performance, and consideration of differences in building types. We performed a cost-benefit analysis of the resulting dataset, and provided comparisons to projects from a larger commissioning 'Meta-analysis' database. A total of 1120

  2. Critical analysis of the Hanford spent nuclear fuel project activity based cost estimate

    SciTech Connect (OSTI)

    Warren, R.N.

    1998-09-29

In 1997, the SNFP developed a baseline change request (BCR) and submitted it to DOE-RL for approval. The schedule was formally evaluated to have a 19% probability of success [Williams, 1998]. In December 1997, DOE-RL Manager John Wagoner approved the BCR contingent upon a subsequent independent review of the new baseline. The SNFP took several actions during the first quarter of 1998 to prepare for the independent review. The project developed the Estimating Requirements and Implementation Guide [DESH, 1998] and trained cost account managers (CAMs) and other personnel involved in the estimating process in activity-based cost (ABC) estimating techniques. The SNFP then applied ABC estimating techniques to develop the basis for the December Baseline (DB) and documented that basis in Basis of Estimate (BOE) books. These BOEs were provided to DOE in April 1998. DOE commissioned Professional Analysis, Inc. (PAI) to perform a critical analysis (CA) of the DB. PAI's review formally began on April 13. PAI performed the CA, provided three sets of findings to the SNFP contractor, and initiated reconciliation meetings. During the course of PAI's review, DOE directed the SNFP to develop a new baseline with a higher probability of success. The contractor transmitted the new baseline, which is referred to as the High Probability Baseline (HPB), to DOE on April 15, 1998 [Williams, 1998]. The HPB was estimated to approach a 90% confidence level on the start of fuel movement [Williams, 1998]. This high probability resulted in an increased cost and a schedule extension. To implement the new baseline, the contractor initiated 26 BCRs with supporting BOEs. PAI's scope was revised on April 28 to add reviewing the HPB and the associated BCRs and BOEs.

  3. Laser based analysis using a passively Q-switched laser employing analysis electronics and a means for detecting atomic optical emission of the laser media

    DOE Patents [OSTI]

    Woodruff, Steven D.; Mcintyre, Dustin L.

    2016-03-29

A device for laser-based analysis using a passively Q-switched laser, comprising an optical pumping source optically connected to a laser media. The laser media and a Q-switch are positioned between and optically connected to a high-reflectivity mirror (HR) and an output coupler (OC) along an optical axis. The output coupler (OC) is optically connected to the output lens along the optical axis. A means for detecting atomic optical emission comprises a filter and a light detector. The optical filter is optically connected to the laser media and the optical detector. A control system is connected to the optical detector and the analysis electronics. The analysis electronics are optically connected to the output lens. Detection of large-scale laser output production triggers the control system to initiate the precise timing and data collection from the detector and the analysis electronics.

  4. High-Throughput Genetic Analysis and Combinatorial Chiral Separations Based on Capillary Electrophoresis

    SciTech Connect (OSTI)

    Wenwan Zhong

    2003-08-05

Capillary electrophoresis (CE) offers many advantages over conventional analytical methods, such as speed, simplicity, high resolution, low cost, and small sample consumption, especially for the separation of enantiomers. However, chiral method development can still be time-consuming and tedious. They designed a comprehensive enantioseparation protocol employing neutral and sulfated cyclodextrins as chiral selectors for common basic, neutral, and acidic compounds with a 96-capillary array system. By using only four judiciously chosen separation buffers, successful enantioseparations were achieved for 49 out of 54 test compounds spanning a large variety of pKs and structures. Therefore, unknown compounds can be screened in this manner to identify optimal enantioselective conditions in just one run. In addition to superior separation efficiency for small molecules, CE is also the most powerful technique for DNA separations. Using the same multiplexed capillary system with UV absorption detection, the sequence of a short DNA template can be acquired without any dye-labels. Two internal standards were utilized to adjust the migration time variations among capillaries, so that the four electropherograms for the A, T, C, and G Sanger reactions can be aligned and base calling can be completed with a high level of confidence. The CE separation of DNA can be applied to study differential gene expression as well. Combined with pattern recognition techniques, small variations among electropherograms obtained by the separation of cDNA fragments produced from the total RNA samples of different human tissues can be revealed. These variations reflect the differences in total RNA expression among tissues. Thus, this CE-based approach can serve as an alternative to DNA array techniques in gene expression analysis.

  5. Structural Analysis of a Highly Glycosylated and Unliganded gp120-Based Antigen Using Mass Spectrometry

    SciTech Connect (OSTI)

    L Wang; Y Qin; S Ilchenko; J Bohon; W Shi; M Cho; K Takamoto; M Chance

    2011-12-31

    Structural characterization of the HIV-1 envelope protein gp120 is very important for providing an understanding of the protein's immunogenicity and its binding to cell receptors. So far, the crystallographic structure of gp120 with an intact V3 loop (in the absence of a CD4 coreceptor or antibody) has not been determined. The third variable region (V3) of the gp120 is immunodominant and contains glycosylation signatures that are essential for coreceptor binding and entry of the virus into T-cells. In this study, we characterized the structure of the outer domain of gp120 with an intact V3 loop (gp120-OD8) purified from Drosophila S2 cells utilizing mass spectrometry-based approaches. We mapped the glycosylation sites and calculated the glycosylation occupancy of gp120-OD8; 11 sites from 15 glycosylation motifs were determined as having high-mannose or hybrid glycosylation structures. The specific glycan moieties of nine glycosylation sites from eight unique glycopeptides were determined by a combination of ECD and CID MS approaches. Hydroxyl radical-mediated protein footprinting coupled with mass spectrometry analysis was employed to provide detailed information about protein structure of gp120-OD8 by directly identifying accessible and hydroxyl radical-reactive side chain residues. Comparison of gp120-OD8 experimental footprinting data with a homology model derived from the ligated CD4-gp120-OD8 crystal structure revealed a flexible V3 loop structure in which the V3 tip may provide contacts with the rest of the protein while residues in the V3 base remain solvent accessible. In addition, the data illustrate interactions between specific sugar moieties and amino acid side chains potentially important to the gp120-OD8 structure.

  6. Arthropod monitoring for fine-scale habitat analysis: A case study of the El Segundo sand dunes

    SciTech Connect (OSTI)

    Mattoni, R.; Longcore, T.; Novotny, V.

    2000-04-01

    Arthropod communities from several habitats on and adjacent to the El Segundo dunes (Los Angeles County, CA) were sampled using pitfall and yellow pan traps to evaluate their possible use as indicators of restoration success. Communities were ordinated and clustered using correspondence analysis, detrended correspondence analysis, two-way indicator species analysis, and Ward's method of agglomerative clustering. The results showed high repeatability among replicates within any sampling arena, which permits discrimination of (1) degraded and relatively undisturbed habitat, (2) different dune habitat types, and (3) annual change. Canonical correspondence analysis showed a significant effect of disturbance history on community composition that explained 5-20% of the variation. Replicates of pitfall and yellow pan traps on single sites clustered together reliably when species abundance was considered, whereas clusters using only species incidence did not group replicates as consistently. The broad taxonomic approach appears appropriate for habitat evaluation and monitoring of restoration projects as an alternative to assessments geared to single species or even single families.
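
    The replicate-clustering step can be sketched in a few lines. The abundance matrix below is invented, and the log transform and Euclidean distance are plausible choices rather than the study's documented ones:

    ```python
    # Sketch: Ward clustering of trap replicates from a replicate-by-species
    # abundance matrix (all data invented for illustration).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    # rows = trap replicates, columns = arthropod species counts
    abundance = rng.poisson(lam=[4, 1, 7, 2, 5], size=(8, 5))

    # log-transform counts to damp the influence of very abundant species
    X = np.log1p(abundance)

    Z = linkage(pdist(X, metric="euclidean"), method="ward")
    groups = fcluster(Z, t=2, criterion="maxclust")  # cut tree into 2 clusters
    print(groups)
    ```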

  7. Science-Based Simulation Model of Human Performance for Human Reliability Analysis

    SciTech Connect (OSTI)

    Dana L. Kelly; Ronald L. Boring; Ali Mosleh; Carol Smidts

    2011-10-01

    Human reliability analysis (HRA), a component of an integrated probabilistic risk assessment (PRA), is the means by which the human contribution to risk is assessed, both qualitatively and quantitatively. However, among the literally dozens of HRA methods that have been developed, most cannot fully model and quantify the types of errors that occurred at Three Mile Island. Furthermore, all of the methods lack a solid empirical basis, relying heavily on expert judgment or empirical results derived in non-reactor domains. Finally, all of the methods are essentially static, and are thus unable to capture the dynamics of an accident in progress. The objective of this work is to begin exploring a dynamic simulation approach to HRA, one whose models have a basis in psychological theories of human performance, and whose quantitative estimates have an empirical basis. This paper highlights a plan to formalize collaboration among the Idaho National Laboratory (INL), the University of Maryland, and The Ohio State University (OSU) to continue development of a simulation model initially formulated at the University of Maryland. Initial work will focus on enhancing the underlying human performance models with the most recent psychological research, and on planning follow-on studies to establish an empirical basis for the model, based on simulator experiments to be carried out at the INL and at the OSU.

  8. Microscopic silicon-based lateral high-aspect-ratio structures for thin film conformality analysis

    SciTech Connect (OSTI)

    Gao, Feng; Arpiainen, Sanna; Puurunen, Riikka L.

    2015-01-15

    Film conformality is one of the major drivers for the interest in atomic layer deposition (ALD) processes. This work presents new silicon-based microscopic lateral high-aspect-ratio (LHAR) test structures for the analysis of the conformality of thin films deposited by ALD and by other chemical vapor deposition means. The microscopic LHAR structures consist of a lateral cavity inside silicon with a roof supported by pillars. The cavity length (e.g., 20–5000 ÎŒm) and cavity height (e.g., 200–1000 nm) can be varied, giving aspect ratios of, e.g., 20:1 to 25 000:1. Film conformality can be analyzed with the microscopic LHAR by several means, as demonstrated for the ALD Al{sub 2}O{sub 3} and TiO{sub 2} processes from Me{sub 3}Al/H{sub 2}O and TiCl{sub 4}/H{sub 2}O. The microscopic LHAR test structures introduced in this work open up a new parameter space for thin-film conformality investigations that is expected to prove useful in the development, tuning, and modeling of ALD and other chemical vapor deposition processes.

  9. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-Based Generic Programming

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.

    2012-01-01

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
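
    The core idea is easiest to see in miniature. The sketch below transcribes it to Python, where duck typing stands in for C++ templates: a kernel written once against ordinary numbers also propagates derivatives when fed overloaded Dual values. This illustrates the technique, not code from the Trilinos packages:

    ```python
    # Sketch of operator overloading for embedded analysis: the same generic
    # "simulation" kernel computes values for floats and value+derivative
    # pairs for Dual numbers (forward-mode differentiation).
    class Dual:
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value * other.value,
                        self.value * other.deriv + self.deriv * other.value)

        __rmul__ = __mul__

    def residual(x):
        # written once, generically; works for float or Dual inputs
        return 3 * x * x + 2 * x + 1

    x = Dual(2.0, 1.0)          # seed dx/dx = 1
    r = residual(x)
    print(r.value, r.deriv)     # 17.0 and d/dx = 6x + 2 = 14.0
    ```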

  10. Systems Analysis of an Advanced Nuclear Fuel Cycle Based on a Modified UREX+3c Process

    SciTech Connect (OSTI)

    E. R. Johnson; R. E. Best

    2009-12-28

    The research described in this report was performed under a grant from the U.S. Department of Energy (DOE) to describe and compare the merits of two advanced alternative nuclear fuel cycles -- named by this study as the “UREX+3c fuel cycle” and the “Alternative Fuel Cycle” (AFC). Both fuel cycles were assumed to support 100 light water reactor (LWR) nuclear power plants of 1,000 MWe each, operating over the period 2020 through 2100, and the fast reactors (FRs) necessary to burn the plutonium and minor actinides generated by the LWRs. Reprocessing in both fuel cycles is assumed to be based on the UREX+3c process reported in earlier work by the DOE. Conceptually, the UREX+3c process provides nearly complete separation of the various components of spent nuclear fuel in order to enable recycle of reusable nuclear materials, and the storage, conversion, transmutation and/or disposal of other recovered components. One output of the process contains substantially all of the plutonium, which is recovered as a 5:1 uranium/plutonium mixture in order to discourage plutonium diversion. Mixed oxide (MOX) fuel for recycle in LWRs is made using this 5:1 U/Pu mixture plus appropriate makeup uranium. A second process output contains all of the recovered uranium except the uranium in the 5:1 U/Pu mixture. The several other process outputs are various waste streams, including a stream of minor actinides that are stored until they are consumed in future FRs. For this study, the UREX+3c fuel cycle is assumed to recycle only the 5:1 U/Pu mixture to be used in LWR MOX fuel and to use depleted uranium (tails) for the makeup uranium. This fuel cycle is assumed not to use the recovered uranium output stream but to discard it instead. On the other hand, the AFC is assumed to recycle both the 5:1 U/Pu mixture and all of the recovered uranium. In this case, the recovered uranium is reenriched with the level of enrichment being determined by the amount of recovered plutonium and the combined amount

  11. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes - Update to Include Evaluation of Impact of Including a Humidifier Option

    SciTech Connect (OSTI)

    Baxter, Van D

    2007-02-01

    --A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of a centrally ducted integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of legally minimum efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs. Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006a). The present report is an update to that document; it summarizes results of an analysis of the impact of adding a humidifier to the HVAC system to maintain minimum levels of space relative humidity (RH) in winter. Winter space RH has a direct impact on occupant comfort and on control of dust mites, many types of disease bacteria, and 'dry air' electric shocks. Chapter 8 of ASHRAE's 2005 Handbook of Fundamentals (HOF) suggests a 30% lower limit on RH for indoor temperatures in the range of {approx}68-69F based on comfort (ASHRAE 2005). Table 3 in chapter 9 of the same reference suggests a 30-55% RH range for winter, as established by a Canadian study of exposure limits for residential indoor environments (EHD 1987). Harriman et al. (2001) note that electrostatic shocks are minimized at RH levels of 35% or higher and that dust mites cannot live at RH levels below 40%. They also indicate that the life spans of many disease bacteria are minimized when space RH is held within a 30-60% range. From the foregoing it is reasonable to assume that a winter space RH range of 30-40% would be an acceptable compromise between comfort

  12. Emulating a System Dynamics Model with Agent-Based Models: A Methodological Case Study in Simulation of Diabetes Progression

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Schryver, Jack; Nutaro, James; Shankar, Mallikarjun

    2015-10-30

    An agent-based simulation model hierarchy emulating disease states and behaviors critical to the progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, the aggregated U.S. population over which the system dynamics model tracks diabetes progression was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating impacts of an elderliness factor, an obesity factor, and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and the diffusion of social norms spreading over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach for translating complex system dynamics models into agent-based alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.
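
    A minimal sketch of the behavioral rule described above; the linear form, the weights, and all values are illustrative assumptions rather than the paper's parameters:

    ```python
    # Sketch: an agent's adoption probability as a joint function of its own
    # attitude and the social norm over its network (invented parameters).
    import random

    def adoption_probability(attitude, neighbor_behaviors, w_att=0.6, w_norm=0.4):
        """attitude in [0, 1]; neighbor_behaviors is a list of 0/1 states."""
        norm = (sum(neighbor_behaviors) / len(neighbor_behaviors)
                if neighbor_behaviors else 0.0)
        return w_att * attitude + w_norm * norm

    agent_attitude = 0.7
    neighbors = [1, 0, 1, 1]                 # current behavior of the network
    p = adoption_probability(agent_attitude, neighbors)
    behaves = random.random() < p            # stochastic adoption this time step
    print(p, behaves)
    ```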

  13. Diagnostic and Prognostic Analysis of Battery Performance & Aging based on

    Broader source: Energy.gov (indexed) [DOE]

    Kinetic and Thermodynamic Principles | Department of Energy. es124_gering_2012_o.pdf (9.13 MB). Related: Diagnostic Testing and Analysis Toward Understanding Aging Mechanisms and Related Path Dependence.

  14. DEVELOPMENT OF A NOVEL GAS PRESSURIZED STRIPPING (GPS)-BASED TECHNOLOGY FOR CO2 CAPTURE FROM POST-COMBUSTION FLUE GASES Topical Report: Techno-Economic Analysis of GPS-based Technology for CO2 Capture

    SciTech Connect (OSTI)

    Chen, Shiaoguo

    2015-09-30

    This topical report presents the techno-economic analysis, conducted by Carbon Capture Scientific, LLC (CCS) and Nexant, of a nominal 550 MWe supercritical pulverized coal (PC) power plant utilizing CCS's patented Gas Pressurized Stripping (GPS) technology for post-combustion carbon capture (PCC). Illinois No. 6 coal is used as fuel. Because of the difference in performance between the GPS-based PCC and the MEA-based CO2 absorption technology, the net power output of this plant is not exactly 550 MWe. The DOE/NETL Case 11 supercritical PC plant without CO2 capture and the Case 12 supercritical PC plant with benchmark MEA-based CO2 capture are chosen as references. In order to include the CO2 compression process in the baseline case, CCS independently evaluated the generic 30 wt% MEA-based PCC process together with the CO2 compression section. The net power produced in the supercritical PC plant with GPS-based PCC is 647 MW, greater than that of the MEA-based design. The levelized cost of electricity (LCOE) over a 20-year period is adopted to assess techno-economic performance. The LCOE for the supercritical PC plant with GPS-based PCC, not considering CO2 transport, storage and monitoring (TS&M), is 97.4 mills/kWh, or 152% of that of the Case 11 supercritical PC plant without CO2 capture, equivalent to $39.6/tonne for the cost of CO2 capture. GPS-based PCC is also significantly superior to the generic MEA-based PCC with CO2 compression, whose LCOE is as high as 109.6 mills/kWh.
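
    As a rough consistency check, the reported figures can be tied together with the usual cost-of-capture relation. This is a sketch; the implied capture intensity is an inference, not a number from the report:

    ```python
    # Cost of CO2 captured is commonly computed as the LCOE premium over the
    # no-capture reference divided by the CO2 captured per MWh.
    lcoe_gps = 97.4               # $/MWh (mills/kWh), GPS-based capture plant
    lcoe_ref = lcoe_gps / 1.52    # Case 11 reference: GPS LCOE is 152% of it
    premium = lcoe_gps - lcoe_ref # $/MWh attributable to capture

    # capture intensity implied by the reported $39.6/tonne
    tonnes_per_mwh = premium / 39.6
    print(f"reference LCOE ~{lcoe_ref:.1f} $/MWh; "
          f"implied capture ~{tonnes_per_mwh:.2f} t CO2/MWh")
    ```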

  15. Neutronics and activation analysis of lithium-based ternary alloys in IFE blankets

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jolodosky, Alejandra; Kramer, Kevin; Meier, Wayne; DeMuth, James; Reyes, Susana; Fratoni, Massimiliano

    2016-04-09

    Here we report that an attractive feature of using liquid lithium as the breeder and coolant in fusion blankets is its very high tritium solubility, which results in very low levels of tritium permeation throughout the facility infrastructure. However, lithium metal reacts vigorously with air and water and presents plant safety concerns. The Lawrence Livermore National Laboratory is carrying out an effort to develop a lithium-based alloy that maintains the beneficial properties of lithium (e.g., high tritium breeding and solubility) while reducing overall flammability concerns. This study evaluates the neutronics performance of lithium-based alloys in the blanket of an inertial fusion energy chamber in order to inform such development. 3-D Monte Carlo calculations were performed to evaluate two main neutronics performance parameters for the blanket: the tritium breeding ratio (TBR) and the fusion energy multiplication factor (EMF). It was found that elements that exhibit low absorption cross sections and higher q-values, such as lead, tin, and strontium, perform well together with those that have high neutron multiplication, such as lead and bismuth. These elements meet TBR constraints ranging from 1.02 to 1.1. However, most alloys do not reach EMFs greater than 1.15. Additionally, it was found that enriching lithium significantly increases the TBR and decreases the minimum lithium concentration by more than 60%. The amount of enrichment needed depends on how much total lithium is in the alloy to begin with. Alloys that performed well in the TBR and EMF calculations were considered for activation analysis. Activation simulations were executed with 50 years of irradiation and 300 years of cooling. It was found that bismuth is a poor choice because it yields the highest decay heat, contact dose rates, and accident doses; in addition, it does not meet the waste disposal ratings (WDR). Some of the activation results for alloys with tin, zinc, and gallium were in
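
    For reference, the two figures of merit can be written in their textbook forms (the paper's exact normalization may differ):

    ```latex
    % Textbook definitions; the study's exact normalization may differ.
    \[
    \mathrm{TBR} = \frac{\text{rate of tritium production in the blanket}}
                        {\text{rate of tritium consumption in the fusion source}},
    \qquad
    \mathrm{EMF} = \frac{\text{energy deposited in the blanket}}
                        {\text{fusion neutron energy incident on it}}
    \]
    ```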

  16. Integrated Experimental and Model-based Analysis Reveals the Spatial Aspects of EGFR Activation Dynamics

    SciTech Connect (OSTI)

    Shankaran, Harish; Zhang, Yi; Chrisler, William B.; Ewald, Jonathan A.; Wiley, H. S.; Resat, Haluk

    2012-10-02

    The epidermal growth factor receptor (EGFR) belongs to the ErbB family of receptor tyrosine kinases, and controls a diverse set of cellular responses relevant to development and tumorigenesis. ErbB activation is a complex process involving receptor-ligand binding, receptor dimerization, phosphorylation, and trafficking (internalization, recycling and degradation), which together dictate the spatio-temporal distribution of active receptors within the cell. The ability to predict this distribution, and elucidation of the factors regulating it, would help to establish a mechanistic link between ErbB expression levels and the cellular response. Towards this end, we constructed mathematical models for deconvolving the contributions of receptor dimerization and phosphorylation to EGFR activation, and to examine the dependence of these processes on sub-cellular location. We collected experimental datasets for EGFR activation dynamics in human mammary epithelial cells, with the specific goal of model parameterization, and used the data to estimate parameters for several alternate models. Model-based analysis indicated that: 1) signal termination via receptor dephosphorylation in late endosomes, prior to degradation, is an important component of the response, 2) less than 40% of the receptors in the cell are phosphorylated at any given time, even at saturating ligand doses, and 3) receptor dephosphorylation rates at the cell surface and early endosomes are comparable. We validated the last finding by measuring EGFR dephosphorylation rates at various times following ligand addition both in whole cells, and in endosomes using ELISAs and fluorescent imaging. Overall, our results provide important information on how EGFR phosphorylation levels are regulated within cells. Further, the mathematical model described here can be extended to determine receptor dimer abundances in cells co-expressing various levels of ErbB receptors. This study demonstrates that an iterative cycle of
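
    A minimal sketch of the kind of compartmental model described above: surface receptors are activated by ligand, internalize, and are dephosphorylated or degraded. The structure and rate constants are illustrative placeholders, not the paper's fitted models:

    ```python
    # Toy three-state EGFR trafficking/phosphorylation ODE (invented rates;
    # the recycling pool of dephosphorylated endosomal receptors is omitted).
    import numpy as np
    from scipy.integrate import solve_ivp

    def egfr_model(t, y, L=1.0, k_act=0.5, k_dp_s=0.3, k_dp_e=0.3,
                   k_int=0.1, k_deg=0.05):
        R_s, Rp_s, Rp_e = y   # surface R, surface phospho-R, endosomal phospho-R
        dR_s  = -k_act * L * R_s + k_dp_s * Rp_s
        dRp_s =  k_act * L * R_s - (k_dp_s + k_int) * Rp_s
        dRp_e =  k_int * Rp_s - (k_dp_e + k_deg) * Rp_e
        return [dR_s, dRp_s, dRp_e]

    sol = solve_ivp(egfr_model, (0.0, 60.0), y0=[1.0, 0.0, 0.0])
    phospho_fraction = (sol.y[1] + sol.y[2]) / sol.y.sum(axis=0)
    print(f"peak phosphorylated fraction: {phospho_fraction.max():.2f}")
    ```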

  17. Analysis on fuel breeding capability of FBR core region based on minor actinide recycling doping

    SciTech Connect (OSTI)

    Permana, Sidik; Novitrian,; Waris, Abdul; Ismail; Suzuki, Mitsutoshi; Saito, Masaki

    2014-09-30

    Nuclear fuel breeding can be achieved through the conversion of fertile materials (Th-232, U-238, Pu-240, and Pu-238) into the main fissile materials (U-233, U-235, Pu-239, and Pu-241) during reactor operation. A minor actinide (MA) loading option, in which neptunium, americium, and curium are added to the fuel, contributes additional plutonium through MA conversion: Np-237 is converted into Pu-238, and the Pu-238 thus produced is converted to Pu-239 via neutron capture. The increased Pu-238 inventory therefore serves as an additional source of the fissile material Pu-239. Trans-uranium (TRU) fuel (a mixed loading of MOX (U-Pu) with an MA component) and mixed oxide (MOX) fuel compositions are analyzed comparatively in order to show the effect of MA on in-core plutonium production in terms of reactor criticality and fuel breeding capability. In the present study, the neptunium (Np) nuclide is used as a representative of MA in the TRU fuel composition, as an Np-MOX fuel type. Loading it into the core region significantly reduces the excess reactivity compared with MOX fuel and at the same time increases the fuel breeding capability of the reactor. The neptunium loading scheme in the FBR core region yields significant production of Pu-238, a fertile material that absorbs neutrons, thereby reducing excess reactivity and providing an additional contribution to fuel breeding.

  18. Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ESnet Science Engagement case studies: OSCARS Case Studies, Science DMZ Case Studies, and a Multi-facility Workflow Case Study (Home » Science Engagement » Case Studies). Technical Assistance: 1 800-33-ESnet (Inside US) / 1 800-333-7638.

  19. Five case studies of multifamily weatherization programs

    SciTech Connect (OSTI)

    Kinney, L; Wilson, T.; Lewis, G.; MacDonald, M.

    1997-12-31

    The multifamily case studies that are the subject of this report were conducted to provide a better understanding of the approach taken by program operators in weatherizing large buildings. Because of significant variations in building construction and energy systems across the country, five states were selected based on their high level of multifamily weatherization activity. This report summarizes findings from case studies of multifamily weatherization operations in five cities. The case studies were conducted between January and November 1994. Each of the case studies involved extensive interviews with the staff of weatherization subgrantees conducting multifamily weatherization, the inspection of 4 to 12 buildings weatherized between 1991 and 1993, and the analysis of savings and costs. The case studies focused on innovative techniques that appear to work well.

  20. Assessment of effectiveness of geologic isolation systems. Test case release consequence analysis for a spent fuel repository in bedded salt

    SciTech Connect (OSTI)

    Raymond, J.R.; Bond, F.W.; Cole, C.R.; Nelson, R.W.; Reisenauer, A.E.; Washburn, J.F.; Norman, N.A.; Mote, P.A.; Segol, G.

    1980-01-01

    Geologic and geohydrologic data for the Paradox Basin have been used to simulate movement of ground water and radioactive contaminants from a hypothetical nuclear reactor spent fuel repository after an assumed accidental release. The pathlines, travel times and velocity of the ground water from the repository to the discharge locale (river) were determined after the disruptive event by use of a two-dimensional finite difference hydrologic model. The concentration of radioactive contaminants in the ground water was calculated along a series of flow tubes by use of a one-dimensional mass transport model which takes into account convection, dispersion, contaminant/media interactions and radioactive decay. For the hypothetical site location and specific parameters used in this demonstration, it is found that Iodine-129 (I-129) is the only isotope reaching the Colorado River in significant concentration. This concentration occurs about 8.0 x 10{sup 5} years after the repository has been breached. This I-129 ground-water concentration is about 0.3 of the drinking water standard for uncontrolled use. The ground-water concentration would then be diluted by the Colorado River. None of the actinide elements reach more than half the distance from the repository to the Colorado River in the two-million-year model run time. This exercise demonstrates that the WISAP model system is applicable for analysis of contaminant transport. The results presented in this report, however, are valid only for one particular set of parameters. A complete sensitivity analysis must be performed to evaluate the range of effects from the release of contaminants from a breached repository.

  1. Roof-top solar energy potential under performance-based building energy codes: The case of Spain

    SciTech Connect (OSTI)

    Izquierdo, Salvador; Montanes, Carlos; Dopazo, Cesar; Fueyo, Norberto

    2011-01-15

    The quantification at regional level of the amount of energy (for thermal uses and for electricity) that can be generated by using solar systems in buildings is hindered by the limited availability of data for roof-area estimation. In this note, we build on an existing geo-referenced method for determining available roof area for solar facilities in Spain to produce a quantitative picture of the likely limits of roof-top solar energy. The installation of solar hot water systems (SHWS) and photovoltaic systems (PV) is considered. After satisfying up to 70% (if possible) of the service hot water demand in every municipality, PV systems are installed in the remaining roof area. Results show that, applying this performance-based criterion, SHWS would contribute up to 1662 ktoe/y of primary energy (or 68.5% of the total thermal-energy demand for service hot water), while PV systems would provide 10 TWh/y of electricity (or 4.0% of the total electricity demand).
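
    The allocation rule is simple enough to sketch; the function name, yields, and numbers below are illustrative assumptions, not values from the study:

    ```python
    # Sketch: per municipality, roof area first serves solar hot water up to
    # 70% of demand; the remainder goes to PV (all inputs invented).
    def allocate_roof(roof_m2, shw_demand_kwh, shw_yield_kwh_m2, pv_yield_kwh_m2):
        # roof area needed to cover 70% of service hot water demand
        shw_area_needed = 0.7 * shw_demand_kwh / shw_yield_kwh_m2
        shw_area = min(roof_m2, shw_area_needed)
        pv_area = roof_m2 - shw_area
        return shw_area * shw_yield_kwh_m2, pv_area * pv_yield_kwh_m2

    shw_kwh, pv_kwh = allocate_roof(roof_m2=5_000, shw_demand_kwh=2_000_000,
                                    shw_yield_kwh_m2=500, pv_yield_kwh_m2=180)
    print(shw_kwh, pv_kwh)
    ```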

  2. Evaluation of food waste disposal options by LCC analysis from the perspective of global warming: Jungnang case, South Korea

    SciTech Connect (OSTI)

    Kim, Mi-Hyung; Song, Yul-Eum; Song, Han-Byul; Kim, Jung-Wk; Hwang, Sun-Jin

    2011-09-15

    Highlights: > Various food waste disposal options were evaluated from the perspective of global warming. > Costs of the options were compared using life cycle assessment and life cycle cost analysis. > Carbon prices and valuable by-products were used to analyze environmental credits. > The benefit-cost ratio of the wet feeding scenario was the highest. - Abstract: The costs associated with eight food waste disposal options (dry feeding, wet feeding, composting, anaerobic digestion, co-digestion with sewage sludge, food waste disposer, incineration, and landfilling) were evaluated from the perspective of global warming and energy and/or resource recovery. An expanded system boundary was employed to compare by-products. Life cycle cost was analyzed through the entire disposal process; the discharge, separate collection, transportation, treatment, and final disposal stages were all included in the system boundary. Costs and benefits were estimated on an avoided-impact basis. The environmental benefits of each system per tonne of food waste managed were estimated using carbon prices applied to the CO{sub 2} reduction from avoided impacts, as well as the prices of by-products such as animal feed, compost, and electricity. We found that the cost of landfilling was the lowest, followed by co-digestion. The benefits of the wet feeding system were the highest and those of landfilling the lowest.

  3. U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis

    SciTech Connect (OSTI)

    Lopez, A.; Roberts, B.; Heimiller, D.; Blair, N.; Porro, G.

    2012-07-01

    This report presents the state-level results of a spatial analysis effort calculating energy technical potential, reported in square kilometers of available land, megawatts of capacity, and gigawatt-hours of generation, for six different renewable technologies. For this analysis, the system specific power density (or equivalent), efficiency (capacity factor), and land-use constraints were identified for each technology using independent research, published research, and professional contacts. This report also presents technical potential findings from previous reports.

  4. U.S. Renewable Energy Technical Potentials. A GIS-Based Analysis

    SciTech Connect (OSTI)

    Lopez, Anthony; Roberts, Billy; Heimiller, Donna; Blair, Nate; Porro, Gian

    2012-07-01

    This report presents the state-level results of a spatial analysis effort calculating energy technical potential, reported in square kilometers of available land, megawatts of capacity, and gigawatt-hours of generation, for six different renewable technologies. For this analysis, the system specific power density (or equivalent), efficiency (capacity factor), and land-use constraints were identified for each technology using independent research, published research, and professional contacts. This report also presents technical potential findings from previous reports.

  5. Integrating multi-criteria decision analysis for a GIS-based hazardous waste landfill siting in Kurdistan Province, western Iran

    SciTech Connect (OSTI)

    Sharifi, Mozafar; Hadidi, Mosslem; Vessali, Elahe; Mosstafakhani, Parasto; Taheri, Kamal; Shahoie, Saber; Khodamoradpour, Mehran

    2009-10-15

    The evaluation of a hazardous waste disposal site is a complicated process because it requires data from diverse social and environmental fields. These data often involve processing a significant amount of spatial information, for which GIS is an important tool in land-use suitability analysis. This paper presents a multi-criteria decision analysis alongside a geospatial analysis for the selection of hazardous waste landfill sites in Kurdistan Province, western Iran. The study employs a two-stage analysis to provide a spatial decision support system for hazardous waste management in a typically underdeveloped region. GIS was used in an initial screening process to eliminate unsuitable land, followed by a multi-criteria decision analysis (MCDA) to identify the most suitable sites using information provided by regional experts with reference to newly chosen criteria. Using 21 exclusionary criteria as input layers, masked maps were prepared. By creating various intermediate analysis map layers, a final overlay map was obtained representing candidate areas for hazardous waste landfill sites. To evaluate the different landfill sites produced by the overlay, a landfill suitability index system was developed representing the cumulative effect of the relative importance (weights) and suitability values of 14 non-exclusionary criteria, including several criteria derived from field observation. Using this suitability index, 15 different sites were visited, and the most suitable sites were determined based on the numerical evaluation provided by the MCDA.
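
    The suitability-index step reduces to a weighted overlay, sketched below with invented weights and scores (the study used 14 criteria and expert-derived weights):

    ```python
    # Sketch: cumulative weighted suitability index per candidate site.
    import numpy as np

    weights = np.array([0.25, 0.20, 0.15, 0.25, 0.15])   # sums to 1 (invented)
    # rows = candidate sites, columns = criterion suitability scores (0-10)
    scores = np.array([
        [8, 6, 7, 9, 5],
        [5, 9, 6, 4, 8],
        [7, 7, 8, 8, 7],
    ])
    suitability_index = scores @ weights
    best_site = int(np.argmax(suitability_index))
    print(suitability_index, "best candidate:", best_site)
    ```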

  6. Dynamic analysis of the urban-based low-carbon policy using system dynamics: Focused on housing and green space

    SciTech Connect (OSTI)

    Hong, Taehoon; Kim, Jimin; Jeong, Kwangbok; Koo, Choongwan

    2015-02-09

    To systematically manage the energy consumption of existing buildings, the government has to enforce greenhouse gas reduction policies. However, most of the policies are not properly executed because they do not consider various factors from the urban level perspective. Therefore, this study aimed to conduct a dynamic analysis of an urban-based low-carbon policy using system dynamics, with a specific focus on housing and green space. This study was conducted in the following steps: (i) establishing the variables of urban-based greenhouse gases (GHGs) emissions; (ii) creating a stock/flow diagram of urban-based GHGs emissions; (iii) conducting an information analysis using the system dynamics; and (iv) proposing the urban-based low-carbon policy. If a combined energy policy that uses the housing sector (30%) and the green space sector (30%) at the same time is implemented, 2020 CO{sub 2} emissions will be 7.23 million tons (i.e., 30.48% below 2020 business-as-usual), achieving the national carbon emissions reduction target (26.9%). The results of this study could contribute to managing and improving the fundamentals of the urban-based low-carbon policies to reduce greenhouse gas emissions.

  7. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes

    SciTech Connect (OSTI)

    Baxter, Van D

    2006-11-01

    The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to have application to buildings constructed before 2020 as well, resulting in substantial reductions in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. Although the energy efficiency of heating, ventilating, and air-conditioning (HVAC) equipment has increased substantially in recent years, new approaches are needed to continue this trend. Dramatic efficiency improvements are necessary to enable progress toward the NZEH goals, and will require a radical rethinking of opportunities to improve system performance. The large reductions in HVAC energy consumption necessary to support the NZEH goals require a systems-oriented analysis approach that characterizes each element of energy consumption, identifies alternatives, and determines the most cost-effective combination of options. In particular, HVAC equipment must be developed that addresses the range of special needs of NZEH applications in the areas of reduced HVAC and water heating energy use, humidity control, ventilation, uniform comfort, and ease of zoning. In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH systems options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development

  8. Quantitative Analysis of Variability and Uncertainty in Environmental Data and Models. Volume 1. Theory and Methodology Based Upon Bootstrap Simulation

    SciTech Connect (OSTI)

    Frey, H. Christopher; Rhodes, David S.

    1999-04-30

    This is Volume 1 of a two-volume set of reports describing work conducted at North Carolina State University sponsored by Grant Number DE-FG05-95ER30250 by the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.
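
    Bootstrap simulation, the method named in the title, can be sketched in a few lines; the data are invented, and the statistic (a mean with a percentile interval) is the simplest instance of the approach:

    ```python
    # Sketch: bootstrap resampling to quantify uncertainty in a statistic.
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.lognormal(mean=1.0, sigma=0.5, size=40)   # skewed sample

    B = 2000
    boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                           for _ in range(B)])
    ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean {data.mean():.2f}, 95% bootstrap CI ({ci_low:.2f}, {ci_high:.2f})")
    ```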

  9. Real Time Pricing as a Default or Optional Service for C&ICustomers: A Comparative Analysis of Eight Case Studies

    SciTech Connect (OSTI)

    Barbose, Galen; Goldman, Charles; Bharvirkar, Ranjit; Hopper,Nicole; Ting, Michael; Neenan, Bernie

    2005-08-01

    Demand response (DR) has been broadly recognized to be an integral component of well-functioning electricity markets, although currently underdeveloped in most regions. Among the various initiatives undertaken to remedy this deficiency, public utility commissions (PUC) and utilities have considered implementing dynamic pricing tariffs, such as real-time pricing (RTP), and other retail pricing mechanisms that communicate an incentive for electricity consumers to reduce their usage during periods of high generation supply costs or system reliability contingencies. Efforts to introduce DR into retail electricity markets confront a range of basic policy issues. First, a fundamental issue in any market context is how to organize the process for developing and implementing DR mechanisms in a manner that facilitates productive participation by affected stakeholder groups. Second, in regions with retail choice, policymakers and stakeholders face the threshold question of whether it is appropriate for utilities to offer a range of dynamic pricing tariffs and DR programs, or just ''plain vanilla'' default service. Although positions on this issue may be based primarily on principle, two empirical questions may have some bearing--namely, what level of price response can be expected through the competitive retail market, and whether establishing RTP as the default service is likely to result in an appreciable level of DR. Third, if utilities are to have a direct role in developing DR, what types of retail pricing mechanisms are most appropriate and likely to have the desired policy impact (e.g., RTP, other dynamic pricing options, DR programs, or some combination)? Given a decision to develop utility RTP tariffs, three basic implementation issues require attention. First, should it be a default or optional tariff, and for which customer classes? Second, what type of tariff design is most appropriate, given prevailing policy objectives, wholesale market structure, ratemaking

  10. COMMERCIALIZATION OF AN ATMOSPHERIC IRON-BASED CDCL PROCESS FOR POWER PRODUCTION. PHASE I: TECHNOECONOMIC ANALYSIS

    SciTech Connect (OSTI)

    Vargas, Luis

    2013-11-01

    Coal Direct Chemical Looping (CDCL) is an advanced oxy-combustion technology that has potential to enable substantial reductions in the cost and energy penalty associated with carbon dioxide (CO2) capture from coal-fired power plants. Through collaborative efforts, the Babcock & Wilcox Power Generation Group (B&W) and The Ohio State University (OSU) developed a conceptual design for a 550 MWe (net) supercritical CDCL power plant with greater than 90% CO2 capture and compression. Process simulations were completed to enable an initial assessment of its technical performance. A cost estimate was developed following DOE's guidelines as outlined in NETL's report “Quality Guidelines for Energy System Studies: Cost Estimation Methodology for NETL Assessments of Power Plant Performance” (2011/1455). The cost of electricity for the CDCL plant without CO2 transportation and storage costs resulted in $102.67 per MWh, which corresponds to a 26.8% increase in cost of electricity (COE) when compared to an air-fired pulverized-coal supercritical power plant. The cost of electricity depends strongly on the total plant cost and the cost of the oxygen carrier particles. The CDCL process could capture further potential savings by increasing the performance of the particles and reducing the plant size. During the techno-economic analysis, the team identified technology and engineering gaps that need to be closed to bring the technology to commercialization. The technology gaps were focused in five critical areas: (i) moving bed reducer reactor, (ii) fluidized bed combustor, (iii) particle riser, (iv) oxygen-carrier particle properties, and (v) process operation. The key technology gaps are related to particle performance, particle manufacturing cost, and the operation of the reducer reactor. These technology gaps are to be addressed during Phase II of the project. The project team is proposing additional lab testing to be completed on the particle and a 3 MWth pilot facility

  11. Comparing large scale CCS deployment potential in the USA and China: a detailed analysis based on country-specific CO2 transport & storage cost curves

    SciTech Connect (OSTI)

    Dahowski, Robert T.; Davidson, Casie L.; Dooley, James J.

    2011-04-18

    The United States and China are the two largest emitters of greenhouse gases in the world, and their projected continued growth and reliance on fossil fuels, especially coal, make them strong candidates for CCS. Previous work has revealed that both nations have over 1600 large electric utility and other industrial point CO2 sources, as well as very large CO2 storage resources on the order of 2,000 billion metric tons (Gt) of onshore storage capacity. In each case, the vast majority of this capacity is found in deep saline formations. In both the USA and China, candidate storage reservoirs are likely to be accessible by most sources, with over 80% of these large industrial CO2 sources having a CO2 storage option within just 80 km. This suggests a strong potential for CCS deployment as a meaningful option in efforts to reduce CO2 emissions from these large, vibrant economies. However, while the USA and China possess many similarities with regard to the potential value of CCS, including the range of costs at which CCS may be available to most large CO2 sources in each nation, there are a number of more subtle differences in how CCS deployment may unfold in each country; understanding these differences matters if the USA and China are to work together - and in step with the rest of the world - to reduce greenhouse gas emissions most efficiently. This paper details the first-ever analysis of CCS deployment costs in these two countries based on methodologically comparable CO2 source and sink inventories, economic analysis, geospatial source-sink matching, and cost curve modeling. This type of analysis provides valuable insight into the degree to which early and sustained opportunities for climate change mitigation via commercial-scale CCS are available to the two countries, and could facilitate greater collaboration in areas where those opportunities overlap.
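
    Cost curve modeling of this kind can be sketched as a greedy source-sink matching; the sketch below ignores sink capacity limits and uses invented data:

    ```python
    # Sketch: build a CO2 transport & storage cost curve by pairing each
    # source with its cheapest sink, sorting by unit cost, and accumulating
    # captured tonnage (all numbers invented; capacity limits omitted).
    import numpy as np

    annual_co2 = np.array([4.0, 2.5, 6.0, 1.5, 3.0])   # Mt/yr per source
    unit_cost = np.array([                              # $/tonne, source x sink
        [12,  9, 20],
        [ 7, 15, 11],
        [18, 10,  8],
        [ 6, 14, 22],
        [ 9,  9, 13],
    ], dtype=float)

    best_cost = unit_cost.min(axis=1)           # cheapest sink per source
    order = np.argsort(best_cost)
    cum_mt = np.cumsum(annual_co2[order])       # x-axis of the cost curve
    curve = list(zip(cum_mt, best_cost[order])) # (cumulative Mt/yr, $/tonne)
    print(curve)
    ```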

  12. Security analysis of boolean algebra based on Zhang-Wang digital signature scheme

    SciTech Connect (OSTI)

    Zheng, Jinbin

    2014-10-06

    In 2005, Zhang and Wang proposed an improved signature scheme that uses neither a one-way hash function nor message redundancy. In this paper, we show through Boolean-algebra analysis (e.g., of the bitwise exclusive-or operation) that this scheme has potential safety concerns. We also point out, by analyzing the output of an assembly program segment, that the mapping between assembly instructions and machine code is in fact not one-to-one, which can cause safety problems unknown to the software.
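
    As a toy illustration of why omitting a one-way hash is risky (this is deliberately not the Zhang-Wang scheme), consider a "signature" that is a bitwise exclusive-or with a secret key; because XOR is its own inverse, one known message/signature pair reveals the key:

    ```python
    # Toy demonstration only, not the analyzed scheme: XOR "signing" without
    # a one-way hash leaks the secret key from a single known pair.
    key = 0b10110011

    def toy_sign(message: int) -> int:
        return message ^ key

    m1 = 0b01010101
    s1 = toy_sign(m1)
    recovered_key = m1 ^ s1               # attacker recovers the secret
    forged = 0b11110000 ^ recovered_key   # valid "signature" for any message
    assert forged == toy_sign(0b11110000)
    ```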

  13. Case Studies

    Broader source: Energy.gov [DOE]

    The following case studies are examples of integrating renewable energy into Federal new construction and major renovation projects. Additional renewable energy case studies are also available.

  14. Analysis of the multigroup model for muon tomography based threat detection

    SciTech Connect (OSTI)

    Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.

    2014-02-14

    We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
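
    The ROC comparison can be sketched with simulated stand-ins for the scene statistics; the angle distributions and sample sizes below are invented, not the experiment's data:

    ```python
    # Sketch: compare two scattering-angle statistics by their ROC AUC on
    # simulated scenes with/without a high-Z target.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)

    def scene_statistics(has_target):
        # wider scattering-angle spread when a dense cube is in the scene
        sigma_mrad = 3.0 if has_target else 2.0
        angles = rng.normal(0.0, sigma_mrad, size=500)
        return np.mean(angles**2), np.median(angles**2)

    labels = np.repeat([0, 1], 100)
    stats = np.array([scene_statistics(bool(y)) for y in labels])
    for name, col in zip(["average of squared angle", "median angle squared"],
                         stats.T):
        print(f"AUC, {name}: {roc_auc_score(labels, col):.3f}")
    ```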

  15. DRIVE Analysis Tool Generates Custom Vehicle Drive Cycles Based on Real-World Data (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2013-04-01

    This fact sheet from the National Renewable Energy Laboratory describes the Drive-Cycle Rapid Investigation, Visualization, and Evaluation (DRIVE) analysis tool, which uses GPS and controller area network data to characterize vehicle operation and produce custom vehicle drive cycles, analyzing thousands of hours of data in a matter of minutes.

  16. CT based computerized identification and analysis of human airways: A review

    SciTech Connect (OSTI)

    Pu Jiantao; Gu Suicheng; Liu Shusen; Zhu Shaocheng; Wilson, David; Siegfried, Jill M.; Gur, David

    2012-05-15

    As one of the most prevalent chronic disorders, airway disease is a major cause of morbidity and mortality worldwide. In order to understand its underlying mechanisms and to enable assessment of therapeutic efficacy of a variety of possible interventions, noninvasive investigation of the airways in a large number of subjects is of great research interest. Due to its high resolution in temporal and spatial domains, computed tomography (CT) has been widely used in clinical practices for studying the normal and abnormal manifestations of lung diseases, albeit there is a need to clearly demonstrate the benefits in light of the cost and radiation dose associated with CT examinations performed for the purpose of airway analysis. Whereas a single CT examination consists of a large number of images, manually identifying airway morphological characteristics and computing features to enable thorough investigations of airway and other lung diseases is very time-consuming and susceptible to errors. Hence, automated and semiautomated computerized analysis of human airways is becoming an important research area in medical imaging. A number of computerized techniques have been developed to date for the analysis of lung airways. In this review, we present a summary of the primary methods developed for computerized analysis of human airways, including airway segmentation, airway labeling, and airway morphometry, as well as a number of computer-aided clinical applications, such as virtual bronchoscopy. Both successes and underlying limitations of these approaches are discussed, while highlighting areas that may require additional work.

  17. Control Limits for Building Energy End Use Based on Engineering Judgment, Frequency Analysis, and Quantile Regression

    SciTech Connect (OSTI)

    Henze, G. P.; Pless, S.; Petersen, A.; Long, N.; Scambos, A. T.

    2014-02-01

    Approaches are needed to continuously characterize the energy performance of commercial buildings so that (1) building operators can respond in a timely manner to excess energy use; and (2) building occupants can develop energy awareness and actively engage in reducing energy use. Energy information systems, often involving graphical dashboards, are gaining popularity in presenting energy performance metrics to occupants and operators in a (near) real-time fashion. Such an energy information system, called Building Agent, has been developed at NREL and incorporates a dashboard for public display. Each building is, by virtue of its purpose, location, and construction, unique. Thus, assessing building energy performance is possible only in a relative sense, as comparison of absolute energy use out of context is not meaningful. In some cases, performance can be judged relative to average performance of comparable buildings. However, in cases of high-performance building designs, such as NREL's Research Support Facility (RSF) discussed in this report, relative performance is meaningful only when compared to historical performance of the facility or to a theoretical maximum performance of the facility as estimated through detailed building energy modeling.
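
    A quantile-regression control band of the kind named in the title can be sketched as follows, with simulated data and an assumed cooling-degree regressor:

    ```python
    # Sketch: fit upper/lower quantiles of daily energy use against a
    # temperature-derived regressor and flag days outside the band
    # (simulated placeholder data, not the RSF's).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    temp = rng.uniform(-5, 35, 365)                   # daily mean outdoor temp
    energy = 50 + 2.0 * np.maximum(temp - 18, 0) + rng.normal(0, 5, 365)

    X = sm.add_constant(np.maximum(temp - 18, 0))     # cooling-degree term
    upper = sm.QuantReg(energy, X).fit(q=0.95).predict(X)
    lower = sm.QuantReg(energy, X).fit(q=0.05).predict(X)
    out_of_control = (energy > upper) | (energy < lower)
    print(f"{out_of_control.sum()} days flagged")
    ```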

  18. Building America Special Research Project: High-R Walls Case...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Special Research Project: High-R Walls Case Study Analysis Building America Special Research Project: High-R Walls Case Study Analysis This report considers a ...

  19. Economic Analysis for Conceptual Design of Supercritical O2-Based PC Boiler

    SciTech Connect (OSTI)

    Andrew Seltzer; Archie Robertson

    2006-09-01

    This report determines the capital and operating costs of two different oxygen-based, pulverized coal-fired (PC) power plants and compares their economics to that of a comparable, air-based PC plant. Rather than combust their coal with air, the oxygen-based plants use oxygen to facilitate capture/removal of the plant CO{sub 2} for transport by pipeline to a sequestering site. To provide a consistent comparison of technologies, all three plants analyzed herein operate with the same coal (Illinois No 6), the same site conditions, and the same supercritical pressure steam turbine (459 MWe). In the first oxygen-based plant, the pulverized coal-fired boiler operates with oxygen supplied by a conventional, cryogenic air separation unit, whereas, in the second oxygen-based plant, the oxygen is supplied by an oxygen ion transport membrane. In both oxygen-based plants a portion of the boiler exhaust gas, which is primarily CO{sub 2}, is recirculated back to the boiler to control the combustion temperature, and the balance of the flue gas undergoes drying and compression to pipeline pressure; for consistency, both plants operate with similar combustion temperatures and utilize the same CO{sub 2} processing technologies. The capital and operating costs of the pulverized coal-fired boilers required by the three different plants were estimated by Foster Wheeler and the balance of plant costs were budget priced using published data together with vendor supplied quotations. The cost of electricity produced by each of the plants was determined and oxygen-based plant CO{sub 2} mitigation costs were calculated and compared to each other as well as to values published for some alternative CO{sub 2} capture technologies.

  20. Genome-Based Metabolic Mapping and 13C Flux Analysis Reveal Systematic Properties of an Oleaginous Microalga Chlorella protothecoides

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wu, Chao; Xiong, Wei; Dai, Junbiao; Wu, Qingyu

    2014-12-15

    We report an integrated, genome-based flux balance analysis, metabolomics, and 13C-label profiling study of phototrophic and heterotrophic metabolism in Chlorella protothecoides, an oleaginous green alga for biofuel. The green alga Chlorella protothecoides, capable of autotrophic and heterotrophic growth with rapid lipid synthesis, is a promising candidate for biofuel production. Based on the newly available genome knowledge of the alga, we reconstructed the compartmentalized metabolic network consisting of 272 metabolic reactions, 270 enzymes, and 461 encoding genes and simulated growth in different cultivation conditions with flux balance analysis. Phenotype-phase plane analysis shows the conditions achieving the theoretical maximum of the biomass and corresponding fatty acid-producing rate for phototrophic cells (the ratio of photon uptake rate to CO2 uptake rate equals 8.4) and heterotrophic ones (the glucose uptake rate to O2 consumption rate reaches 2.4), respectively. Isotope-assisted liquid chromatography-mass spectrometry/mass spectrometry reveals higher metabolite concentrations in the glycolytic pathway and the tricarboxylic acid cycle in heterotrophic cells compared with autotrophic cells. We also observed enhanced levels of ATP; nicotinamide adenine dinucleotide (phosphate), reduced; acetyl-Coenzyme A; and malonyl-Coenzyme A in heterotrophic cells, consistent with a strong activity of lipid synthesis. To profile the flux map under experimental conditions, we applied nonstationary 13C metabolic flux analysis as a complementary strategy to flux balance analysis. The result reveals negligible photorespiratory fluxes and a metabolically low-active tricarboxylic acid cycle in phototrophic C. protothecoides. In comparison, high throughput of amphibolic reactions and the tricarboxylic acid cycle with no glyoxylate shunt activity was measured for heterotrophic cells. Taken together, the metabolic network modeling assisted
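
    Flux balance analysis itself reduces to a linear program; the toy network below illustrates the mechanics (it is not the 272-reaction C. protothecoides model):

    ```python
    # Minimal FBA sketch: maximize a "biomass" flux subject to steady-state
    # mass balance S v = 0 and flux bounds (toy three-reaction network).
    import numpy as np
    from scipy.optimize import linprog

    # metabolites x reactions: uptake -> A, A -> B, B -> biomass
    S = np.array([
        [ 1, -1,  0],    # metabolite A
        [ 0,  1, -1],    # metabolite B
    ])
    c = np.array([0, 0, -1.0])                 # maximize v3 (minimize -v3)
    bounds = [(0, 10), (0, None), (0, None)]   # substrate uptake capped at 10

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal biomass flux:", res.x[2])   # hits the uptake limit, 10
    ```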

  1. Method of assessing a lipid-related health risk based on ion mobility analysis of lipoproteins

    DOE Patents [OSTI]

    Benner, W. Henry; Krauss, Ronald M.; Blanche, Patricia J.

    2010-12-14

    A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.

  2. DanteR: an extensible R-based tool for quantitative analysis of -omics data

    SciTech Connect (OSTI)

    Taverner, Thomas; Karpievitch, Yuliya; Polpitiya, Ashoka D.; Brown, Joseph N.; Dabney, Alan R.; Anderson, Gordon A.; Smith, Richard D.

    2012-09-15

    Motivation: The size and complex nature of LC-MS proteomics data sets motivate the development of specialized software for statistical data analysis and exploration. We present DanteR, a graphical R package that features extensive statistical and diagnostic functions for quantitative proteomics data analysis, including normalization, imputation, hypothesis testing, interactive visualization and peptide-to-protein rollup. More importantly, users can easily extend the existing functionality by including their own algorithms under the Add-On tab. Availability: DanteR and its associated user guide are available for download at http://omics.pnl.gov/software/. For Windows, a single click automatically installs DanteR along with the R programming environment. For Linux and Mac OS X, users must first install R and then follow instructions on the DanteR web site for package installation.

  3. Study of vaneless diffuser rotating stall based on two-dimensional inviscid flow analysis

    SciTech Connect (OSTI)

    Tsujimoto, Yoshinobu; Yoshida, Yoshiki [Osaka Univ., Toyonaka, Osaka (Japan); Mori, Yasumasa [Mitsubishi Motors Corp., Ohta, Tokyo (Japan)

    1996-03-01

    Rotating stall in vaneless diffusers is studied from the viewpoint that it is basically a two-dimensional inviscid flow instability under the boundary conditions of vanishing velocity disturbance at the diffuser inlet and vanishing pressure disturbance at the diffuser outlet. The linear analysis in the present report shows that the critical flow angle and the propagation velocity are functions of the diffuser radius ratio alone. It is shown that the present analysis can reproduce most of the general characteristics observed in experiments: the critical flow angle, the propagation velocity, and the velocity and pressure disturbance fields. It is shown that the vanishing velocity disturbance at the diffuser inlet is caused by the nature of the impeller as a flow resistance and an inertial resistance, which is generally strong enough to suppress the velocity disturbance at the diffuser inlet. This explains the general experimental observation that vaneless diffuser rotating stall is not greatly affected by the impeller.

  4. Cogeneration: Economic and technical analysis. (Latest citations from the NTIS data base). Published Search

    SciTech Connect (OSTI)

    Not Available

    1992-05-01

    The bibliography contains citations concerning economic and technical analysis of cogeneration systems. Topics include electric power and steam generation, dual-purpose and fuel cell power plants, and on-site power generation. Tower focus power plants, solar cogeneration, biomass conversion, coal liquefaction and gasification, and refuse derived fuels are discussed. References cite feasibility studies, performance and economic evaluation, environmental impacts, and institutional factors. (Contains 250 citations and includes a subject term index and title list.)

  5. Industrial applications of accelerator-based infrared sources: Analysis using infrared microspectroscopy

    SciTech Connect (OSTI)

    Bantignies, J.L.; Fuchs, G.; Wilhelm, C.; Carr, G.L.; Dumas, P.

    1997-09-01

    Infrared microspectroscopy using a globar source is now widely employed in the industrial environment for the analysis of various materials. Since synchrotron radiation is a much brighter source, an enhancement of an order of magnitude in lateral resolution can be achieved. Thus, the combination of IR microspectroscopy and synchrotron radiation provides a powerful tool enabling sample regions only a few microns in size to be studied. This opens up the potential for analyzing small particles. Examples involving hair, bitumen, and polymers are presented.

  6. Reservoir characterization based on tracer response and rank analysis of production and injection rates

    SciTech Connect (OSTI)

    Refunjol, B.T.; Lake, L.W.

    1997-08-01

    Quantification of the spatial distribution of properties is important for many reservoir-engineering applications. But before applying any reservoir-characterization technique, the type of problem to be tackled and the information available should be analyzed. This is important because difficulties arise in reservoirs where production records are the only information available for analysis. This paper presents the results of a practical technique to determine preferential flow trends in a reservoir. The technique is a combination of reservoir geology, tracer data, and Spearman rank correlation coefficient analysis. The Spearman analysis, in particular, proves important because it is insightful and uses injection/production data, which are prevalent in circumstances where other data are nonexistent. The technique is applied to the North Buck Draw field, Campbell County, Wyoming. This work provides guidelines to assess information about reservoir continuity in interwell regions from widely available measurements of production and injection rates at existing wells. The information gained from the application of this technique can contribute to both daily reservoir management and the future design, control, and interpretation of subsequent projects in the reservoir, without the need for additional data.
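
    As a concrete illustration of the rank-analysis step described above, the sketch below rank-correlates a synthetic injector rate series against two synthetic producer series using scipy's spearmanr. The well names and rates are invented, and the paper's actual workflow (time lags, tracer integration, geology) is richer.

    ```python
    # Hedged illustration of Spearman rank correlation between an injector's
    # rate history and each producer's rate history; a high positive rho
    # suggests a preferential flow path between the well pair.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    injection = rng.uniform(500, 1000, size=36)          # monthly injection rates, bbl/d
    producers = {
        "P-1": injection * 0.6 + rng.normal(0, 40, 36),  # well in communication
        "P-2": rng.uniform(100, 300, size=36),           # well not in communication
    }

    for well, production in producers.items():
        rho, p = spearmanr(injection, production)
        print(f"{well}: Spearman rho = {rho:+.2f} (p = {p:.3g})")
    ```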

  7. Analysis of ancient-river systems by 3D seismic time-slice technique: A case study in northeast Malay Basin, offshore Terengganu, Malaysia

    SciTech Connect (OSTI)

    Sulaiman, Noorzamzarina; Hamzah, Umar; Samsudin, Abdul Rahim

    2014-09-03

    Fluvial sandstones constitute one of the major clastic petroleum reservoir types in many sedimentary basins around the world. This study is based on the analysis of high-resolution, shallow (seabed to 500 m depth) 3D seismic data, from which three-dimensional (3D) time slices were generated that provide exceptional imaging of the geometry, dimensions, and temporal and spatial distribution of fluvial channels. The study area is in the northeast of the Malay Basin, about 280 km east of Terengganu offshore. The Malay Basin comprises a thick (> 8 km), rift to post-rift Oligo-Miocene to Pliocene basin-fill. The youngest (Miocene to Pliocene), post-rift succession is dominated by a thick (1–5 km), cyclic succession of coastal plain and coastal deposits, which accumulated in a humid-tropical climatic setting. This study focuses on the Pleistocene to Recent (500 m thick) succession, which comprises a range of seismic facies identified from analysis of the two-dimensional (2D) seismic sections, mainly reflecting changes in fluvial channel style and river architecture. The succession has been divided into four seismic units (Units S1-S4), bounded by basin-wide stratal surfaces. Two types of boundaries have been identified: 1) a boundary defined by a regionally-extensive erosion surface at the base of a prominent incised valley (S3 and S4); 2) a sequence boundary defined by more weakly-incised, straight and low-sinuosity channels, interpreted as low-stand alluvial bypass channel systems (S1 and S2). Each unit displays a predictable vertical change of channel pattern and scale, with wide low-sinuosity channels at the base passing gradationally upwards into narrow high-sinuosity channels at the top. The wide variation in channel style and size is interpreted to be controlled mainly by sea-level fluctuations on the broadly flat Sundaland Platform.

  8. GMR-based PhC biosensor: FOM analysis and experimental studies

    SciTech Connect (OSTI)

    Syamprasad, Jagadeesh; Narayanan, Roshni; Joseph, Joby; Takahashi, Hiroki; Sandhu, Adarsh; Jindal, Rajeev

    2014-02-20

    Guided-mode resonance (GMR) based photonic crystal biosensors have many potential applications. In this work, we aim to improve their figure of merit (FOM) toward an optimum level through design and fabrication techniques. A robust, low-cost alternative to current biosensors is also explored through this research.
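
    The abstract does not define the figure of merit; for resonance-based refractive-index sensors it is conventionally taken as the bulk sensitivity divided by the resonance linewidth, so the form below is an assumption rather than the authors' exact definition.

    ```latex
    % Conventional FOM for resonant refractive-index sensors (assumed form):
    % sensitivity S (resonance shift per refractive-index unit) over linewidth.
    \mathrm{FOM} = \frac{S}{\mathrm{FWHM}},
    \qquad
    S = \frac{\Delta\lambda_{\mathrm{res}}}{\Delta n}\ \ [\mathrm{nm/RIU}]
    ```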

  9. Structure-sequence based analysis for identification of conserved regions in proteins

    DOE Patents [OSTI]

    Zemla, Adam T; Zhou, Carol E; Lam, Marisa W; Smith, Jason R; Pardes, Elizabeth

    2013-05-28

    Disclosed are computational methods, and associated hardware and software products, for scoring conservation in a protein structure based on a computationally identified family or cluster of protein structures. A method of computationally identifying a family or cluster of protein structures is also disclosed herein.

  10. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    SciTech Connect (OSTI)

    Milani, Gabriele; Valente, Marco

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models, and two different horizontal load distributions; (b) FE kinematic limit analyses performed using non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with collapse is far lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial in considerably reducing the seismic vulnerability of this kind of historical structure.
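
    A sketch of the non-dimensionalization mentioned above, with illustrative symbols (the codes involved define the details differently): reading the collapse base shear as a fraction of the structure's weight gives a collapse multiplier, which approximates the horizontal peak ground acceleration at collapse in units of g.

    ```latex
    % Illustrative relation, not the Italian code's exact formula:
    % collapse multiplier from base shear V and weight W, read as PGA/g.
    \alpha_0 = \frac{V_{\mathrm{base}}}{W},
    \qquad
    a_{g,\mathrm{collapse}} \approx \alpha_0 \, g
    ```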

  11. Semantic Pattern Analysis for Verbal Fluency Based Assessment of Neurological Disorders

    SciTech Connect (OSTI)

    Sukumar, Sreenivas R; Ainsworth, Keela C; Brown, Tyler C

    2014-01-01

    In this paper, we present preliminary results of semantic pattern analysis of verbal fluency tests used for assessing cognitive psychological and neuropsychological disorders. We posit that recent advances in semantic reasoning and artificial intelligence can be combined to create a standardized computer-aided diagnosis tool to automatically evaluate and interpret verbal fluency tests. Towards that goal, we derive novel semantic similarity (phonetic, phonemic and conceptual) metrics and present the predictive capability of these metrics on a de-identified dataset of participants with and without neurological disorders.
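
    The abstract does not give the metric formulations, so the sketch below uses a generic normalized edit-distance similarity as a stand-in for the phonemic case; treat it as illustrative only, not the authors' derived metrics.

    ```python
    # Generic string-similarity stand-in for a phonemic similarity metric.
    def edit_distance(a: str, b: str) -> int:
        """Classic Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def similarity(a: str, b: str) -> float:
        """1.0 for identical strings, approaching 0.0 for very different ones."""
        if not a and not b:
            return 1.0
        return 1.0 - edit_distance(a, b) / max(len(a), len(b))

    # e.g., consecutive responses in a phonemic ("F-words") fluency test:
    print(similarity("fish", "fist"))     # high: likely same phonemic cluster
    print(similarity("fish", "fortune"))  # low: likely a cluster switch
    ```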

  12. Regression Models for Demand Reduction based on Cluster Analysis of Load Profiles

    SciTech Connect (OSTI)

    Yamaguchi, Nobuyuki; Han, Junqiao; Ghatikar, Girish; Piette, Mary Ann; Asano, Hiroshi; Kiliccote, Sila

    2009-06-28

    This paper provides new regression models for the demand reduction of Demand Response programs, for the purposes of ex ante evaluation of the programs and screening customers for enrollment into them. The proposed regression models employ load sensitivity to outside air temperature and a representative load pattern derived from cluster analysis of customer baseline load as explanatory variables. The performance of the proposed models is examined from the viewpoint of the validity of the explanatory variables and the goodness of fit of the regressions, using actual load profile data from Pacific Gas and Electric Company's commercial and industrial customers who participated in the 2008 Critical Peak Pricing program, including both Manual and Automated Demand Response.
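
    A minimal sketch of the stated regression form, on synthetic data: demand reduction regressed on an outside-air-temperature (OAT) sensitivity and a representative load-pattern value from clustering. The variable names, coefficients, and plain OLS estimator are illustrative assumptions, not the paper's exact specification.

    ```python
    # Toy ordinary-least-squares fit of demand reduction on the two stated
    # explanatory variables; all data below are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    oat_sensitivity = rng.uniform(0.0, 5.0, n)   # kW per degree C, per customer
    cluster_pattern = rng.uniform(0.2, 1.0, n)   # normalized representative load-shape value

    # Synthetic "true" relationship with noise
    reduction = 1.5 * oat_sensitivity + 8.0 * cluster_pattern + rng.normal(0, 1.0, n)

    # OLS via least squares on the design matrix [1, OAT sensitivity, pattern]
    X = np.column_stack([np.ones(n), oat_sensitivity, cluster_pattern])
    beta, *_ = np.linalg.lstsq(X, reduction, rcond=None)
    print("intercept, OAT-sensitivity coef, cluster-pattern coef:", beta)
    ```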

  13. Design and Quasi-Equilibrium Analysis of a Distributed Frequency-Restoration Controller for Inverter-Based Microgrids

    SciTech Connect (OSTI)

    Ainsworth, Nathan G; Grijalva, Prof. Santiago

    2013-01-01

    This paper discusses a proposed frequency restoration controller which operates as an outer loop to frequency droop for voltage-source inverters. By quasi-equilibrium analysis, we show that the proposed controller is able to provide arbitrarily small steady-state frequency error while maintaining power sharing between inverters without the need for communication or centralized control. We derive the rate of convergence, discuss design considerations (including a fundamental trade-off that must be made in design), present a design procedure to meet a maximum frequency error requirement, and show simulation results verifying our analysis and design method. The proposed controller will allow flexible plug-and-play inverter-based networks to meet a specified maximum frequency error requirement.
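
    The toy simulation below illustrates the two-time-scale idea the abstract describes: an inner frequency droop shared by two inverters plus a slow outer loop that integrates the local frequency error back toward nominal. It is a conceptual sketch under simplifying assumptions (quasi-equilibrium network, equal gains, invented numbers), not the paper's controller.

    ```python
    # Two droop-controlled inverters sharing a load, with a slow outer
    # restoration loop that uses only the locally measured frequency.
    import numpy as np

    f_nom = 60.0                    # nominal frequency, Hz
    m = np.array([0.01, 0.01])      # droop slopes, Hz per kW (equal -> equal sharing)
    f_set = np.array([60.0, 60.0])  # droop setpoints, adjusted by the outer loop
    k_restore = 0.05                # outer-loop gain (slow relative to droop dynamics)
    load = 100.0                    # total load, kW

    for step in range(200):
        # Quasi-equilibrium of the droop network: all units at a common f,
        # with powers summing to the load; solve sum((f_set_i - f)/m_i) = load.
        f = (np.sum(f_set / m) - load) / np.sum(1.0 / m)
        P = (f_set - f) / m
        # Outer restoration loop: integrate the local frequency error
        # (no inter-inverter communication needed).
        f_set += k_restore * (f_nom - f)

    print(f"frequency = {f:.4f} Hz, shares = {P.round(2)} kW")
    ```

    With equal restoration gains the setpoints move together, so the frequency converges to nominal while the 50/50 power split is preserved; heterogeneous gains would expose the sharing-versus-error trade-off the abstract mentions.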

  14. Gene identification and analysis: an application of neural network-based information fusion

    SciTech Connect (OSTI)

    Matis, S.; Xu, Y.; Shah, M.B.; Mural, R.J.; Einstein, J.R.; Uberbacher, E.C.

    1996-10-01

    Identifying genes within large regions of uncharacterized DNA is a difficult undertaking and is currently the focus of many research efforts. We describe a gene localization and modeling system called GRAIL. GRAIL is a multiple-sensor, neural network-based system. It localizes genes in anonymous DNA sequence by recognizing gene features related to protein coding and splice sites, and then combines the recognized features using a neural network system. Localized coding regions are then optimally parsed into a gene model. RNA polymerase II promoters can also be predicted. Through years of extensive testing, GRAIL consistently localizes about 90 percent of coding portions of test genes with a false positive rate of about 10 percent. A number of genes for major genetic diseases have been located through the use of GRAIL, and over 1000 research laboratories worldwide use GRAIL on a regular basis for localization of genes in their newly sequenced DNA.

  15. Moving beyond mass-based parameters for conductivity analysis of sulfonated polymers

    SciTech Connect (OSTI)

    Kim, Yu Seung; Pivovar, Bryan

    2009-01-01

    Proton conductivity of polymer electrolytes is critical for fuel cells and has therefore been studied in significant detail. The conductivity of sulfonated polymers has been linked to material characteristics in order to elucidate trends. Mass-based measurements, such as water uptake and ion exchange capacity, are two of the most common material characteristics used to make comparisons between polymer electrolytes, but they have significant limitations when correlated to proton conductivity. These limitations arise in part because different polymers can have significantly different densities, and conduction happens over length scales more appropriately represented by volume measurements rather than mass. Herein, we establish and review volume-related parameters that can be used to compare the proton conductivity of different polymer electrolytes. Morphological effects on proton conductivity are also considered. Finally, the impact of these phenomena on designing next-generation sulfonated polymers for polymer electrolyte membrane fuel cells is discussed.
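
    One common way to recast the mass-based parameters on a volume basis, shown here as an assumption rather than the authors' exact definitions: convert gravimetric ion-exchange capacity (IEC, mmol per g dry polymer) to a volumetric concentration using the dry polymer density and the water uptake (WU, g water per g dry polymer), assuming additive volumes.

    ```latex
    % Assumed volumetric re-parameterization (not necessarily the paper's):
    % dry and hydrated volumetric IEC from gravimetric IEC, dry density
    % \rho_{dry}, water uptake WU, and water density \rho_w.
    \mathrm{IEC}_V^{\mathrm{dry}} = \mathrm{IEC}\cdot\rho_{\mathrm{dry}},
    \qquad
    \mathrm{IEC}_V^{\mathrm{wet}} \approx
    \frac{\mathrm{IEC}\cdot\rho_{\mathrm{dry}}}
         {1 + \mathrm{WU}\cdot\rho_{\mathrm{dry}}/\rho_{\mathrm{w}}}
    ```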

  16. Extending PowerPack for Profiling and Analysis of High Performance Accelerator-Based Systems

    SciTech Connect (OSTI)

    Li, Bo; Chang, Hung-Ching; Song, Shuaiwen; Su, Chun-Yi; Meyer, Timmy; Mooring, John; Cameron, Kirk

    2014-12-01

    Accelerators offer a substantial increase in efficiency for high-performance systems, offering speedups for computational applications that leverage hardware support for highly parallel codes. However, the power use of some accelerators exceeds 200 watts at idle, which means use at exascale comes at a significant increase in power at a time when we face a power ceiling of about 20 megawatts. Despite the growing domination of accelerator-based systems in the Top500 and Green500 lists of the fastest and most efficient supercomputers, there are few detailed studies comparing the power and energy use of common accelerators. In this work, we conduct detailed experimental studies of the power usage and distribution of Xeon Phi-based systems in comparison to NVIDIA Tesla- and SandyBridge-based systems.

  17. Development of simplified design aids based on the results of simulation analysis

    SciTech Connect (OSTI)

    Balcomb, J.D.

    1980-01-01

    The Solar Load Ratio method for estimating the performance of passive solar heating systems is described. It is a simplified technique which is based on correlating the monthly solar savings fraction in terms of the ratio of monthly solar radiation absorbed by the building to total monthly building thermal load. The effect of differences between actual design parameters and those used to develop the correlations is estimated afterwards using sensitivity curves. The technique is fast and simple and sufficiently accurate for design purposes.
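
    A schematic of the monthly bookkeeping behind the Solar Load Ratio approach; the correlation function below is a generic saturating placeholder, not Balcomb's published curve fit, and all numbers are invented.

    ```python
    # Toy monthly SLR calculation: solar savings fraction (SSF) as a function
    # of absorbed solar over thermal load, then auxiliary heat by month.
    import numpy as np

    def solar_savings_fraction(absorbed_solar: float, thermal_load: float) -> float:
        """Placeholder correlation: SSF rises with SLR and saturates at 1."""
        slr = absorbed_solar / thermal_load
        return 1.0 - np.exp(-0.8 * slr)    # illustrative shape only

    monthly_solar = [3.2, 4.1, 5.5]        # GJ absorbed by the building, per month
    monthly_load  = [6.0, 5.0, 3.5]        # GJ heating load, per month

    aux = sum(load * (1 - solar_savings_fraction(s, load))
              for s, load in zip(monthly_solar, monthly_load))
    print(f"auxiliary heating required: {aux:.2f} GJ")
    ```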

  18. Exposure Based Health Issues Project Report: Phase I of High Level Tank Operations, Retrieval, Pretreatment, and Vitrification Exposure Based Health Issues Analysis

    SciTech Connect (OSTI)

    Stenner, Robert D.; Bowers, Harold N.; Kenoyer, Judson L.; Strenge, Dennis L.; Brady, William H.; Ladue, Buffi; Samuels, Joseph K.

    2001-11-30

    The Department of Energy (DOE) has the responsibility to understand the ''big picture'' of worker health and safety, which includes fully recognizing the vulnerabilities and associated programs necessary to protect workers at the various DOE sites across the complex. Exposure analysis and medical surveillance are key aspects of understanding this big picture, as is understanding current health and safety practices and how they may need to change to meet future health and safety management needs. The exposure-based health issues project was initiated to assemble the components necessary to understand potential exposure situations and their medical surveillance and clinical aspects. Phase I focused only on current Hanford tank farm operations and serves as a starting point for the overall project. It is also anticipated that once the pilot is fully developed for Hanford HLW (i.e., current operations, retrieval, pretreatment, vitrification, and disposal), the process and analysis methods developed will be available and applicable for other DOE operations and sites. The purpose of this Phase I project report is to present the health impact information collected regarding ongoing tank waste maintenance operations, to show the various aspects of health and safety involved in protecting workers, and to introduce the reader to the kinds of information that will need to be analyzed in order to effectively manage worker safety.

  19. Thermodynamic analysis of interactions between Ni-based solid oxide fuel cells (SOFC) anodes and trace species in a survey of coal syngas

    SciTech Connect (OSTI)

    Andrew Martinez; Kirk Gerdes; Randall Gemmen; James Postona

    2010-03-20

    A thermodynamic analysis was conducted to characterize the effects of trace contaminants in syngas derived from coal gasification on solid oxide fuel cell (SOFC) anode material. The effluents from 15 different gasification facilities were considered to assess the impact of fuel composition on anode susceptibility to contamination. For each syngas case, the study considers the magnitude of contaminant exposure resulting from operation of a warm gas cleanup unit at two different temperatures and operation of a nickel-based SOFC at three different temperatures. The contaminant elements arsenic (As), phosphorus (P), and antimony (Sb) are predicted to be present in warm gas cleanup effluent and will interact with the nickel (Ni) components of a SOFC anode. Phosphorus is the trace element found in the largest concentration of the three contaminants and is potentially the most detrimental. Poisoning was found to depend on the composition of the syngas as well as system operating conditions. Results for all trace elements tended to show invariance with cleanup operating temperature, but results were sensitive to syngas bulk composition. Synthesis gas with high steam content tended to resist poisoning.

  20. POD-based analysis of combustion images in optically accessible engines

    SciTech Connect (OSTI)

    Bizon, K.; Continillo, G.; Mancaruso, E.; Merola, S.S.; Vaglieco, B.M.

    2010-04-15

    This paper reports on 2D images of combustion-related luminosity taken in two optically accessible automobile engines of the most recent generation. The results are discussed to elucidate physical phenomena in the combustion chambers. Then, proper orthogonal decomposition (POD) is applied to the acquired images. The coefficients of the orthogonal modes are then used for the analysis of cycle variability, along with data of dynamic in-cylinder pressure and rate of heat release. The advantage is that statistical analysis can be run on a small number of scalar coefficients rather than on the full data set of pixel luminosity values. Statistics of the POD coefficients provide information on cycle variations of the luminosity field. POD modes are then discriminated by means of normality tests, to separate the mean from the coherent and the incoherent parts of the fluctuation of the luminosity field, in a non-truncated representation of the data. The morphology of the fluctuation components can finally be reconstructed by grouping coherent and incoherent modes. The structure of the incoherent component of the fluctuation is consistent with the underlying turbulent field. (author)
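
    A minimal SVD-based POD of an image ensemble, sketching the decomposition applied above; the data here are random stand-ins for the luminosity snapshots, and the image size and cycle count are arbitrary.

    ```python
    # POD of an image ensemble via the singular value decomposition:
    # each cycle's image becomes one column of the snapshot matrix.
    import numpy as np

    n_pixels, n_cycles = 64 * 64, 100
    snapshots = np.random.default_rng(2).random((n_pixels, n_cycles))  # stand-in data

    mean_field = snapshots.mean(axis=1, keepdims=True)
    fluctuation = snapshots - mean_field

    # Columns of U are the spatial POD modes; the rows of (s * Vt) are the
    # per-cycle coefficients used for cycle-variability statistics.
    U, s, Vt = np.linalg.svd(fluctuation, full_matrices=False)
    coeffs = s[:, None] * Vt

    energy = s**2 / np.sum(s**2)
    print("fraction of fluctuation energy in first 5 modes:", energy[:5].sum())
    ```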

  1. Femtosecond laser ablation-based mass spectrometry. An ideal tool for stoichiometric analysis of thin films

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    LaHaye, Nicole L.; Kurian, Jose; Diwakar, Prasoon K.; Alff, Lambert; Harilal, Sivanandan S.

    2015-08-19

    An accurate and routinely available method for stoichiometric analysis of thin films is a desideratum of modern materials science where a material’s properties depend sensitively on elemental composition. We thoroughly investigated femtosecond laser ablation-inductively coupled plasma-mass spectrometry (fs-LA-ICP-MS) as an analytical technique for determination of the stoichiometry of thin films down to the nanometer scale. The use of femtosecond laser ablation allows for precise removal of material with high spatial and depth resolution that can be coupled to an ICP-MS to obtain elemental and isotopic information. We used molecular beam epitaxy-grown thin films of LaPd(x)Sb2 and T′-La2CuO4 to demonstrate the capacity of fs-LA-ICP-MS for stoichiometric analysis and the spatial and depth resolution of the technique. Here we demonstrate that the stoichiometric information of thin films with a thickness of ~10 nm or lower can be determined. Furthermore, our results indicate that fs-LA-ICP-MS provides precise information on the thin film-substrate interface and is able to detect the interdiffusion of cations.

  2. Experimental and numerical analysis of metal leaching from fly ash-amended highway bases

    SciTech Connect (OSTI)

    Cetin, Bora; Aydilek, Ahmet H.; Li, Lin

    2012-05-15

    Highlights: (1) This study evaluates the leaching potential of fly ash-lime mixed soils. (2) This objective is met with experimental and numerical analyses. (3) Zn leaching decreases with an increase in fly ash content, while Ba, B, and Cu leaching increases. (4) A decrease in lime content promoted leaching of Ba, B, and Cu, while Zn leaching increased. (5) Numerical analysis predicted lower field metal concentrations. Abstract: A study was conducted to evaluate the leaching potential of unpaved road materials (URM) mixed with lime-activated high-carbon fly ashes and to evaluate groundwater impacts of barium, boron, copper, and zinc leaching. This objective was met by a combination of batch water leach tests, column leach tests, and computer modeling. The laboratory tests were conducted on soil alone, fly ash alone, and URM-fly ash-lime kiln dust mixtures. The results indicated that an increase in fly ash and lime content has significant effects on the leaching behavior of heavy metals from URM-fly ash mixtures. An increase in fly ash content and a decrease in lime content promoted leaching of Ba, B and Cu, whereas Zn leaching was primarily affected by the fly ash content. Numerically predicted field metal concentrations were significantly lower than the peak metal concentrations obtained in laboratory column leach tests, and field concentrations decreased with time and distance due to dispersion in the soil vadose zone.

  3. Analysis of In-Use Fuel Economy Shortfall Based on Voluntarily Reported MPG Estimates

    SciTech Connect (OSTI)

    Greene, David L; Goeltz, Rick; Hopson, Dr Janet L; Tworek, Elzbieta

    2007-01-01

    The usefulness of the Environmental Protection Agency's (EPA) passenger car and light truck fuel economy estimates has been the subject of debate for the past three decades. For the labels on new vehicles and the fuel economy information given to the public, the EPA adjusts dynamometer test results downward by 10% for the city cycle and 22% for the highway cycle to better reflect real world driving conditions. These adjustment factors were developed in 1984 and their continued validity has repeatedly been questioned. In March of 2005 the U.S. Department of Energy (DOE) and EPA's fuel economy information website, www.fueleconomy.gov, began allowing users to voluntarily share fuel economy estimates. This paper presents an initial statistical analysis of more than 3,000 estimates submitted by website users. The analysis suggests two potentially important results: (1) adjusted, combined EPA fuel economy estimates appear to be approximately unbiased estimators of the average fuel economy consumers will experience in actual driving, and (2) the EPA estimates are highly imprecise predictors of any given individual's in-use fuel economy, an approximate 95% confidence interval being +/-7 MPG. These results imply that what is needed is not less biased adjustment factors for the EPA estimates but rather more precise methods of predicting the fuel economy individual consumers will achieve in their own driving.
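
    A back-of-envelope reconstruction of the two findings, on synthetic data: the mean of (in-use minus EPA estimate) checks for bias, and the spread of that difference gives the individual-level interval. The noise level below is chosen only so the interval lands near the reported +/-7 MPG; the real analysis used the 3,000+ user-submitted estimates.

    ```python
    # Synthetic illustration of "unbiased on average, imprecise individually".
    import numpy as np

    rng = np.random.default_rng(3)
    epa_estimate = rng.uniform(18, 35, size=3000)              # adjusted combined MPG
    in_use = epa_estimate + rng.normal(0.0, 3.5, size=3000)    # unbiased but noisy

    diff = in_use - epa_estimate
    print(f"mean shortfall: {diff.mean():+.2f} MPG (near zero -> unbiased on average)")
    lo, hi = np.percentile(diff, [2.5, 97.5])
    print(f"95% interval for an individual driver: {lo:+.1f} to {hi:+.1f} MPG")
    ```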

  4. Femtosecond laser ablation-based mass spectrometry. An ideal tool for stoichiometric analysis of thin films

    SciTech Connect (OSTI)

    LaHaye, Nicole L.; Kurian, Jose; Diwakar, Prasoon K.; Alff, Lambert; Harilal, Sivanandan S.

    2015-08-19

    An accurate and routinely available method for stoichiometric analysis of thin films is a desideratum of modern materials science where a material’s properties depend sensitively on elemental composition. We thoroughly investigated femtosecond laser ablation-inductively coupled plasma-mass spectrometry (fs-LA-ICP-MS) as an analytical technique for determination of the stoichiometry of thin films down to the nanometer scale. The use of femtosecond laser ablation allows for precise removal of material with high spatial and depth resolution that can be coupled to an ICP-MS to obtain elemental and isotopic information. We used molecular beam epitaxy-grown thin films of LaPd(x)Sb2 and T′-La2CuO4 to demonstrate the capacity of fs-LA-ICP-MS for stoichiometric analysis and the spatial and depth resolution of the technique. Here we demonstrate that the stoichiometric information of thin films with a thickness of ~10 nm or lower can be determined. Furthermore, our results indicate that fs-LA-ICP-MS provides precise information on the thin film-substrate interface and is able to detect the interdiffusion of cations.

  5. Greenhouse gas mitigation options in the forestry sector of The Gambia: Analysis based on COMAP model

    SciTech Connect (OSTI)

    Jallow, B.P.

    1996-12-31

    Results of the 1993 Greenhouse Gas Emissions Inventory of The Gambia showed net CO{sub 2} emissions of over 1.66 x 10{sup 6} tons, with uptake by plantations (0.01 x 10{sup 6} tons) offsetting only about 1%. This is a clear indication of the need to identify changes in land-use policy, law, and tenure that discourage forest clearing while significantly influencing the sustainable distribution of land among forestry, rangeland and livestock, and agriculture. About 11% of the total area of The Gambia is either fallow or barren flats that once supported vegetation and hence are still capable of supporting vegetation. The US Country Study Programme has provided the Government of The Gambia, through the National Climate Committee, funds to conduct an Assessment of Mitigation Options to Reduce Greenhouse Gas Emissions. The Forestry Sector is one area for which the assessment is being conducted. The assessment is expected to end in September 1996. The Comprehensive Mitigation Analysis Process (COMAP) is one of the models supplied to the National Climate Committee by the Lawrence Berkeley Laboratory, on behalf of the US Country Study Programme, and is being used to conduct the analysis in The Gambia.

  6. First principles analysis of lattice dynamics for Fe-based superconductors and entropically-stabilized phases

    SciTech Connect (OSTI)

    Hahn, Steven

    2012-07-20

    Modern calculations are becoming an essential, complementary tool to inelastic x-ray scattering studies, where x-rays are scattered inelastically to resolve meV phonons. Calculations of the inelastic structure factor for any value of Q assist in both planning the experiment and analyzing the results. Moreover, differences between the measured data and theoretical calculations help identify important new physics driving the properties of novel correlated systems. We have used such calculations to better and more efficiently measure the phonon dispersion and elastic constants of several iron pnictide superconductors. This dissertation describes calculations and measurements at room temperature in the tetragonal phase of CaFe{sub 2}As{sub 2} and LaFeAsO. In both cases, spin-polarized calculations imposing the antiferromagnetic order present in the low-temperature orthorhombic phase dramatically improve the agreement between theory and experiment. This is discussed in terms of the strong antiferromagnetic correlations that are known to persist in the tetragonal phase. In addition, we discuss a relatively new approach called self-consistent ab initio lattice dynamics (SCAILD), which goes beyond the harmonic approximation to include phonon-phonon interactions and produce a temperature-dependent phonon dispersion. We used this technique to study the HCP to BCC transition in beryllium.

  7. P2P-based botnets: structural analysis, monitoring, and mitigation

    SciTech Connect (OSTI)

    Yan, Guanhua; Eidenbenz, Stephan; Ha, Duc T; Ngo, Hung Q

    2008-01-01

    Botnets, which are networks of compromised machines controlled by one or a group of attackers, have emerged as one of the most serious security threats on the Internet. With an army of bots at the scale of tens of thousands of hosts, or even as large as 1.5 million PCs, the computational power of botnets can be leveraged to launch large-scale DDoS (Distributed Denial of Service) attacks, send spam emails, steal identities and financial information, etc. As detection and mitigation techniques against botnets have been stepped up in recent years, attackers are also constantly improving their strategies for operating these botnets. The first generation of botnets typically employed IRC (Internet Relay Chat) channels as their command and control (C&C) centers. Though simple and easy to deploy, the centralized C&C mechanism of such botnets has made them prone to being detected and disabled. Against this backdrop, peer-to-peer (P2P) based botnets have emerged as a new generation of botnets which can conceal their C&C communication. As popular distributed systems, P2P networks allow bots to communicate easily while protecting the botmaster from being discovered. Existing work on P2P-based botnets mainly focuses on measurement of botnet sizes. In this work, through simulation, we study extensively the structure of P2P networks running Kademlia, one of the few P2P protocols widely used in practice. Our simulation testbed incorporates the actual code of a real Kademlia client to achieve great realism, and distributed event-driven simulation techniques to achieve high scalability. Using this testbed, we analyze the scaling, reachability, clustering, and centrality properties of P2P-based botnets from a graph-theoretical perspective. We further demonstrate, experimentally and theoretically, that monitoring bot activities in a P2P network is difficult.

  8. C COAST. A PC-based program for the analysis of coastal processes using NOAA coastwatch data

    SciTech Connect (OSTI)

    Miller, R.L.; Decampo, J.

    1994-02-01

    As part of the NOAA Coastal Ocean Program, the CoastWatch program was created to provide low-cost, near real-time remotely sensed data of the coasts and Great Lakes region of the United States to decision makers in the public and private sectors. This paper describes a PC-based program developed specifically for the display and analysis of NOAA's CoastWatch sea surface temperature (SST) processed imagery. This program, C COAST, provides an easy-to-use environment for users to incorporate SST images into their activities. 2 refs.

  9. BBRN Factsheet: Case Study: Community Engagement | Department...

    Office of Environmental Management (EM)

    Case Study: Community Engagement, on the Community Home Energy Retrofit Project (CHERP), based in Claremont, California.

  10. Large deformation analysis of laminated composite structures by a continuum-based shell element with transverse deformation

    SciTech Connect (OSTI)

    Wung, Pey Min.

    1989-01-01

    In this work, a finite element formulation and associated computer program is developed for the transient large deformation analysis of laminated composite plate/shell structures. In order to satisfy the plate/shell surface traction boundary conditions and to have an accurate stress description while maintaining the low cost of the analysis, a newly assumed displacement field theory is formulated by adding higher-order terms to the transverse displacement component of the first-order shear deformation theory. The laminated shell theory is formulated using the Updated Lagrangian description of a general continuum-based theory with assumptions on thickness deformation. The transverse deflection is approximated through the thickness by a quartic polynomial of the thickness coordinate. As a result, both the plate/shell surface tractions (including nonzero tangential tractions and nonzero normal pressure) and the interlaminar shear stress continuity conditions at interfaces are satisfied simultaneously. Furthermore, the rotational degrees of freedom become layer-dependent quantities, and the laminate possesses a transverse deformation capability (i.e., the normal strain is no longer zero). Analytical integration through the thickness direction is performed for both the linear analysis and the nonlinear analysis. Resultants of the stress integrations are expressed in terms of the laminate stacking sequence. Consequently, the laminate characteristics in the normal direction can be evaluated precisely, and the cost of the overall analysis is reduced. The standard Newmark method and the modified Newton-Raphson method are used for the solution of the nonlinear dynamic equilibrium equations. Finally, a variety of numerical examples are presented to demonstrate the validity and efficiency of the finite element program developed herein.

  11. Solar Reserve Methodology for Renewable Energy Integration Studies Based on Sub-Hourly Variability Analysis: Preprint

    SciTech Connect (OSTI)

    Ibanez, E.; Brinkman, G.; Hummon, M.; Lew, D.

    2012-08-01

    Increasing penetrations of wind and solar energy are raising concerns among electric system operators because of the variability and uncertainty associated with these power sources. Previous work focused on the quantification of reserves for systems with wind power. This paper presents a new methodology that allows the determination of the reserves necessary for high penetrations of photovoltaic (PV) power and compares it to the wind-based methodology. The solar reserve methodology is applied to Phase 2 of the Western Wind and Solar Integration Study. A summary of the results is included.

  12. An empirical analysis of exposure-based regulation to abate toxic air pollution

    SciTech Connect (OSTI)

    Marakovits, D.M.; Considine, T.J.

    1996-11-01

    Title III of the 1990 Clean Air Act Amendments requires the Environmental Protection Agency to regulate 189 air toxics, including emissions from by-product coke ovens. Economists criticize the inefficiency of uniform standards, but Title III makes no provision for flexible regulatory instruments. Environmental health scientists suggest that population exposure, not necessarily ambient air quality, should motivate environmental air pollution policies. Using an engineering-economic model of the United States steel industry, we estimate that an exposure-based policy can achieve the same level of public health as coke oven emissions standards and can reduce compliance costs by up to 60.0%. 18 refs., 3 figs., 1 tab.

  13. Model-Based Analysis of the Role of Biological, Hydrological and Geochemical Factors Affecting Uranium Bioremediation

    SciTech Connect (OSTI)

    Zhao, Jiao; Scheibe, Timothy D.; Mahadevan, Radhakrishnan

    2011-01-24

    Uranium contamination is a serious concern at several sites, motivating the development of novel treatment strategies such as Geobacter-mediated reductive immobilization of uranium. However, this bioremediation strategy has not yet been optimized for sustained uranium removal. While several reactive-transport models have been developed to represent Geobacter-mediated bioremediation of uranium, these models often lack a detailed quantitative description of the microbial processes (e.g., biomass build-up in both groundwater and sediments, the electron transport system, etc.) and of the interaction between biogeochemical and hydrological processes. In this study, a novel multi-scale model was developed by integrating our recent model of the electron capacitance of Geobacter (Zhao et al., 2010) with a comprehensive simulator of coupled fluid flow, hydrologic transport, heat transfer, and biogeochemical reactions. This mechanistic reactive-transport model accurately reproduces the experimental data for the bioremediation of uranium with acetate amendment. We subsequently performed a global sensitivity analysis with the reactive-transport model in order to identify the main sources of prediction uncertainty caused by synergistic effects of biological, geochemical, and hydrological processes. The proposed approach successfully captured significant contributing factors across time and space, thereby improving the structure and parameterization of the comprehensive reactive-transport model. The global sensitivity analysis also provides a potentially useful tool to evaluate uranium bioremediation strategies. The simulations suggest that under difficult environments (e.g., highly contaminated with U(VI) at a high migration rate of solutes), the efficiency of uranium removal can be improved by adding Geobacter species to the contaminated site (bioaugmentation) in conjunction with the addition of electron donor (biostimulation). The simulations also highlight the interactive effect of

  14. Synchrotron-based analysis of chromium distributions in multicrystalline silicon for solar cells

    SciTech Connect (OSTI)

    Jensen, Mallory Ann; Hofstetter, Jasmin; Morishige, Ashley E.; Coletti, Gianluca; Lai, Barry; Fenning, David P.; Buonassisi, Tonio

    2015-05-18

    Chromium (Cr) can degrade silicon wafer-based solar cell efficiencies at concentrations as low as 10{sup 10} cm{sup -3}. In this contribution, we employ synchrotron-based X-ray fluorescence microscopy to study chromium distributions in multicrystalline silicon in as-grown material and after phosphorus diffusion. We complement quantified precipitate size and spatial distribution with interstitial Cr concentration and minority carrier lifetime measurements to provide insight into chromium gettering kinetics and offer suggestions for minimizing the device impacts of chromium. We observe that Cr-rich precipitates in as-grown material are generally smaller than iron-rich precipitates and that Cr{sub i} point defects account for only one-half of the total Cr in the as-grown material. This observation is consistent with previous hypotheses that Cr transport and CrSi{sub 2} growth are more strongly diffusion-limited during ingot cooling. We apply two phosphorus diffusion gettering profiles that both increase minority carrier lifetime by two orders of magnitude and reduce [Cr{sub i}] by three orders of magnitude to approximately 10{sup 10} cm{sup -3}. Some Cr-rich precipitates persist after both processes, and locally high [Cr{sub i}] after the high-temperature process indicates that further optimization of the chromium gettering profile is possible.

  15. Method And Apparatus For Two Dimensional Surface Property Analysis Based On Boundary Measurement

    DOE Patents [OSTI]

    Richardson, John G.

    2005-11-15

    An apparatus and method for determining properties of a conductive film is disclosed. A plurality of probe locations selected around a periphery of the conductive film define a plurality of measurement lines between each probe location and all other probe locations. Electrical resistance may be measured along each of the measurement lines. A lumped parameter model may be developed based on the measured values of electrical resistance. The lumped parameter model may be used to estimate resistivity at one or more selected locations encompassed by the plurality of probe locations. The resistivity may be extrapolated to other physical properties if a correlation between resistivity and those properties is known for the conductive film. A profile of the conductive film may be developed by determining resistivity at a plurality of locations. The conductive film may be applied to a structure such that resistivity may be estimated and profiled for the structure's surface.

  16. Market power analysis in the EEX electricity market : an agent-based simulation approach.

    SciTech Connect (OSTI)

    Wang, J.; Botterud, A.; Conzelmann, G.; Koritarov, V.; Decision and Information Sciences

    2008-01-01

    In this paper, an agent-based modeling and simulation (ABMS) approach is used to model the German wholesale electricity market. The spot market prices in the European Energy Exchange (EEX) are studied as the wholesale market prices. Each participant in the market is modeled as an individual, rationality-bounded agent whose objective is to maximize its own profit. By simulating the market clearing process, the interaction among agents is captured. The market clearing price formed by the agents' production cost bidding is regarded as the reference marginal cost. The gap between the marginal cost and the real market price is measured as an indicator of possible market power exertion. Various bidding strategies, such as physical withholding and economic withholding, can be simulated to represent strategic bidding behaviors of the market participants. The preliminary simulation results show that some generation companies (GenCos) are in a position to exert market power through strategic bidding.
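
    A toy uniform-price clearing illustrating how such a simulation can measure market power: clear the market once with true marginal-cost bids and once with an economically withheld bid, and read the price gap as the indicator. All quantities are invented and the agents' learning dynamics are omitted.

    ```python
    # Minimal uniform-price auction: stack bids by price until demand is met.
    def clearing_price(bids, demand):
        """bids: list of (price $/MWh, quantity MW); returns the marginal bid price."""
        supplied = 0.0
        for price, qty in sorted(bids):
            supplied += qty
            if supplied >= demand:
                return price
        raise ValueError("demand exceeds total offered capacity")

    marginal_cost_bids = [(20, 400), (35, 300), (50, 300)]
    strategic_bids     = [(20, 400), (35, 300), (80, 300)]   # peaker marks up its bid

    demand = 900.0
    p_ref = clearing_price(marginal_cost_bids, demand)
    p_mkt = clearing_price(strategic_bids, demand)
    print(f"reference marginal cost: {p_ref} $/MWh; market price: {p_mkt} $/MWh; "
          f"markup (market-power indicator): {p_mkt - p_ref} $/MWh")
    ```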

  17. Review and model-based analysis of factors influencing soil carbon sequestration beneath switchgrass (Panicum virgatum)

    SciTech Connect (OSTI)

    Garten Jr, Charles T [ORNL]

    2012-01-01

    A simple, multi-compartment model was developed to predict soil carbon sequestration beneath switchgrass (Panicum virgatum) plantations in the southeastern United States. Soil carbon sequestration is an important component of sustainable switchgrass production for bioenergy because soil organic matter promotes water retention, nutrient supply, and soil properties that minimize erosion. A literature review was included for the purpose of model parameterization, and five model-based experiments were conducted to predict how changes in environment (temperature) or crop management (cultivar, fertilization, and harvest efficiency) might affect soil carbon storage and nitrogen losses. Predictions of soil carbon sequestration were most sensitive to changes in annual biomass production, the ratio of belowground to aboveground biomass production, and temperature. Predictions of ecosystem nitrogen loss were most sensitive to changes in annual biomass production, the soil C/N ratio, and nitrogen remobilization efficiency (i.e., nitrogen cycling within the plant). Model-based experiments indicated that 1) soil carbon sequestration can be highly site-specific depending on initial soil carbon stocks, temperature, and the amount of annual nitrogen fertilization, 2) response curves describing switchgrass yield as a function of annual nitrogen fertilization were important to model predictions, 3) plant improvements leading to greater belowground partitioning of biomass could increase soil carbon sequestration, 4) improvements in harvest efficiency have no indicated effect on soil carbon and nitrogen but improve cumulative biomass yield, and 5) plant improvements that reduce organic matter decomposition rates could also increase soil carbon sequestration, even though the latter may not be consistent with desired improvements in plant tissue chemistry to maximize yields of cellulosic ethanol.

  18. Analysis of the environmental impact of China based on STIRPAT model

    SciTech Connect (OSTI)

    Lin Shoufu; Zhao Dingtao; Marinova, Dora

    2009-11-15

    Assuming that energy consumption is the main source of GHG emissions in China, this paper analyses the effect of population, urbanisation level, GDP per capita, industrialisation level and energy intensity on the country's environmental impact using the STIRPAT model with data for 1978-2006. The analysis shows that population has the largest potential effect on environmental impact, followed by urbanisation level, industrialisation level, GDP per capita and energy intensity. Hence, China's One Child Policy, which restrains rapid population growth, has been an effective way of reducing the country's environmental impact. However, due to the difference in growth rates, GDP per capita had a higher effect on the environmental impact, contributing to 38% of its increase (while population's contribution was at 32%). The rapid decrease in energy intensity was the main factor restraining the increase in China's environmental impact but recently it has also been rising. Against this background, the future of the country looks bleak unless a change in human behaviour towards more ecologically sensitive economic choices occurs.
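
    For reference, the standard STIRPAT specification has impact I driven by population P, affluence A (GDP per capita), and technology T (here, energy intensity), with constant a, exponents b, c, d, and multiplicative error e; the paper's extension adds urbanisation and industrialisation terms in the same log-linear fashion.

    ```latex
    % Standard STIRPAT form and its estimable log-linear version.
    I = a\,P^{b}A^{c}T^{d}e
    \quad\Longrightarrow\quad
    \ln I = \ln a + b\ln P + c\ln A + d\ln T + \ln e
    ```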

  19. The analysis of normative requirements to materials of VVER components, basing on LBB concepts

    SciTech Connect (OSTI)

    Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T.

    1997-04-01

    The paper demonstrates the insufficiency of some requirements of the native Norms, when compared with foreign requirements, for the consideration of calculated situations: (1) leak before break (LBB); (2) short cracks; (3) preliminary loading (warm prestressing). In particular, the paper presents: (1) a comparison of native and foreign normative requirements (PNAE G-7-002-86, ASME Code, BS 1515, KTA) on permissible stress levels and specifically on the estimation of crack initiation and propagation; (2) a comparison of RF and USA norms for pressure vessel material acceptance, along with data from pressure vessel hydrotests; (3) a comparison of RF and USA norms on the presence of defects in NPP vessels, the development of defect schematization rules, and the substantiation of a calculated defect (semi-axis ratio a/b) for pressure vessel and piping components; (4) the sequence of defect estimation (growth of initial defects and critical crack sizes) proceeding from the LBB concept; (5) an analysis of crack initiation and propagation conditions according to the acting Norms (including crack jumps); (6) the necessity of correcting estimation methods for ultimate states of brittle and ductile fracture and the elastic-plastic region as applied to the calculated situations (a) LBB and (b) short cracks; (7) the necessity of correcting estimation methods for ultimate states with consideration of static and cyclic loading (warm prestressing effect) of the pressure vessel, and estimation of the stability of this effect; (8) proposals for corrections to the PNAE G-7-002-86 Norms.

  20. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura

    2016-06-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Lastly, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
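
    A minimal Metropolis sampler sketching the surrogate-based MCMC inversion idea: a cheap stand-in function plays the role of the CLM surrogate, and one hypothetical parameter is calibrated against an observed climatological latent heat flux. None of the numbers or functional forms come from the study.

    ```python
    # Toy Metropolis MCMC calibration of one parameter against one observation.
    import numpy as np

    rng = np.random.default_rng(4)

    def surrogate_latent_heat(theta):
        """Stand-in for a CLM surrogate (the study fits surrogates to CLM runs)."""
        return 60.0 + 25.0 * np.tanh(theta)

    obs, sigma = 75.0, 3.0            # observed flux (W/m^2) and its uncertainty

    def log_posterior(theta):
        if not -3.0 < theta < 3.0:    # uniform prior bounds (assumed)
            return -np.inf
        return -0.5 * ((surrogate_latent_heat(theta) - obs) / sigma) ** 2

    theta, lp = 0.0, log_posterior(0.0)
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(0, 0.3)       # random-walk proposal
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
            theta, lp = prop, lp_prop
        samples.append(theta)

    post = np.array(samples[5000:])   # discard burn-in
    print(f"posterior median: {np.median(post):.2f} "
          f"(95% CI {np.percentile(post, 2.5):.2f} to {np.percentile(post, 97.5):.2f})")
    ```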

  1. Review and comparison of web- and disk-based tools for residentialenergy analysis

    SciTech Connect (OSTI)

    Mills, Evan

    2002-08-25

    There exist hundreds of building energy software tools, both web- and disk-based. These tools exhibit considerable range in approach and creativity, with some being highly specialized and others able to consider the building as a whole. However, users are faced with a dizzying array of choices and, often, conflicting results. The fragmentation of development and deployment efforts has hampered tool quality and market penetration. The purpose of this review is to provide information for defining the desired characteristics of residential energy tools, and to encourage future tool development that improves on current practice. This project entails (1) creating a framework for describing possible technical and functional characteristics of such tools, (2) mapping existing tools onto this framework, (3) exploring issues of tool accuracy, and (4) identifying ''best practice'' and strategic opportunities for tool design. We evaluated 50 web-based residential calculators, 21 of which we regard as ''whole-house'' tools (i.e., covering a range of end uses). Of the whole-house tools, 13 provide open-ended energy calculations, 5 normalize the results to actual costs (a.k.a. ''bill-disaggregation tools''), and 3 provide both options. Across the whole-house tools, we found a range of 5 to 58 house-descriptive features (out of 68 identified in our framework) and 2 to 41 analytical and decision-support features (55 possible). We also evaluated 15 disk-based residential calculators, six of which are whole-house tools. Of these tools, 11 provide open-ended calculations, 1 normalizes the results to actual costs, and 3 provide both options. These tools offered ranges of 18 to 58 technical features (70 possible) and 10 to 40 user- and decision-support features (56 possible). The comparison shows that such tools can employ many approaches and levels of detail. Some tools require a relatively small number of well-considered inputs while others ask a myriad of questions and still miss key

  2. Techno-Economic Analysis of Scalable Coal-Based Fuel Cells

    SciTech Connect (OSTI)

    Chuang, Steven S. C.

    2014-08-31

    Researchers at The University of Akron (UA) have demonstrated the technical feasibility of a laboratory coal fuel cell that can economically convert high-sulfur coal into electricity with near-zero negative environmental impact. Scaling up this coal fuel cell technology to the megawatt scale for the nation's electric power supply requires two key elements: (i) developing the manufacturing technology for the components of the coal-based fuel cell, and (ii) long-term testing of a kW-scale fuel cell pilot plant. This project was expected to develop a scalable coal fuel cell manufacturing process through testing, demonstrating the feasibility of building a large-scale coal fuel cell power plant. We have developed a reproducible tape casting technique for the mass production of the planar fuel cells. Low-cost interconnect and cathode current collector materials were identified, and current collection was improved. In addition, this study has demonstrated that electrochemical oxidation of carbon can take place on the Ni anode surface and that the CO and CO2 products can further react with carbon to initiate secondary reactions. One important secondary reaction is the reaction of carbon with CO2 to produce CO. We found CO and carbon can be electrochemically oxidized simultaneously inside the porous structure of the anode and on the anode surface to produce electricity. Since CH4 produced from coal during high-temperature injection of coal into the anode chamber can cause severe deactivation of the Ni anode, we have studied how CH4 can interact with CO2 in the anode chamber. The CO produced was found to inhibit coking and decrease the rate of anode deactivation. An injection system was developed to inject the solid carbon and coal fuels without bringing air into the anode chamber. Five planar fuel cells were connected in a series configuration and tested. Extensive studies on the planar fuels

  3. Origin of the Diverse Behavior of Oxygen Vacancies in ABO3 Perovskites: A Symmetry Based Analysis

    SciTech Connect (OSTI)

    Yin, W. J.; Wei, S. H.; Al-Jassim, M. M.; Yan, Y. F.

    2012-05-15

    Using band symmetry analysis and density functional theory calculations, we reveal why oxygen vacancy (V{sub O}) energy levels are shallow in some ABO{sub 3} perovskites, such as SrTiO{sub 3}, but deep in others, such as LaAlO{sub 3}. We show that this diverse behavior can be explained by the symmetry of the perovskite structure and the location (A or B site) of the metal atoms with low d orbital energies, such as Ti and La atoms. When the conduction band minimum (CBM) is an antibonding {Gamma}12 state, which is usually associated with a metal atom with low d orbital energies at the A site (e.g., LaAlO{sub 3}), the V{sub O} energy levels are deep inside the gap. Otherwise, if the CBM is the nonbonding {Gamma}25{prime} state, which is usually associated with metal atoms with low d orbital energies at the B site (e.g., SrTiO{sub 3}), the V{sub O} energy levels are shallow and often above the CBM. The V{sub O} energy level is also deep for some uncommon ABO{sub 3} perovskite materials that possess low s orbital energies or large cations and an antibonding {Gamma}{sub 1} state CBM, such as ZnTiO{sub 3}. Our results therefore provide guidelines for designing ABO{sub 3} perovskite materials with desired functional behaviors.

  4. ANALYSIS OF QUIET-SUN INTERNETWORK MAGNETIC FIELDS BASED ON LINEAR POLARIZATION SIGNALS

    SciTech Connect (OSTI)

    Orozco Suarez, D.; Bellot Rubio, L. R.

    2012-05-20

    We present results from the analysis of Fe I 630 nm measurements of the quiet Sun taken with the spectropolarimeter of the Hinode satellite. Two data sets with noise levels of 1.2 x 10{sup -3} and 3 x 10{sup -4} are employed. We determine the distribution of field strengths and inclinations by inverting the two observations with a Milne-Eddington model atmosphere. The inversions show a predominance of weak, highly inclined fields. By means of several tests we conclude that these properties cannot be attributed to photon noise effects. To obtain the most accurate results, we focus on the 27.4% of the pixels in the second data set that have linear polarization amplitudes larger than 4.5 times the noise level. The vector magnetic field derived for these pixels is very precise because both circular and linear polarization signals are used simultaneously. The inferred field strength, inclination, and filling factor distributions agree with previous results, supporting the idea that internetwork (IN) fields are weak and very inclined, at least in about one quarter of the area occupied by the IN. These properties differ from those of network fields. The average magnetic flux density and the mean field strength derived from the 27.4% of the field of view with clear linear polarization signals are 16.3 Mx cm{sup -2} and 220 G, respectively. The ratio between the average horizontal and vertical components of the field is approximately 3.1. The IN fields do not follow an isotropic distribution of orientations.

  5. Integration of a constraint-based metabolic model of Brassica napus developing seeds with 13C-metabolic flux analysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Hay, Jordan O.; Shi, Hai; Heinzel, Nicolas; Hebbelmann, Inga; Rolletschek, Hardy; Schwender, Jorg

    2014-12-19

    The use of large-scale or genome-scale metabolic reconstructions for modeling and simulation of plant metabolism and integration of those models with large-scale omics and experimental flux data is becoming increasingly important in plant metabolic research. Here we report an updated version of bna572, a bottom-up reconstruction of oilseed rape (Brassica napus L.; Brassicaceae) developing seeds with emphasis on representation of biomass-component biosynthesis. New features include additional seed-relevant pathways for isoprenoid, sterol, phenylpropanoid, flavonoid, and choline biosynthesis. Being now based on standardized data formats and procedures for model reconstruction, bna572+ is available as a COBRA-compliant Systems Biology Markup Language (SBML) model and conforms to the Minimum Information Requested in the Annotation of Biochemical Models (MIRIAM) standards for annotation of external data resources. Bna572+ contains 966 genes, 671 reactions, and 666 metabolites distributed among 11 subcellular compartments. It is referenced to the Arabidopsis thaliana genome, with gene-protein-reaction (GPR) associations resolving subcellular localization. Detailed mass and charge balancing and confidence scoring were applied to all reactions. Using B. napus seed specific transcriptome data, expression was verified for 78% of bna572+ genes and 97% of reactions. Alongside bna572+ we also present a revised carbon centric model for 13C-Metabolic Flux Analysis (13C-MFA) with all its reactions being referenced to bna572+ based on linear projections. By integration of flux ratio constraints obtained from 13C-MFA and by elimination of infinite flux bounds around thermodynamically infeasible loops based on COBRA loopless methods, we demonstrate improvements in predictive power of Flux Variability Analysis (FVA). In conclusion, using this combined approach we characterize the difference in metabolic flux of developing seeds of two B. napus genotypes contrasting in starch

  6. Economic analysis of operating alternatives for the South Vandenberg Power Plant at Vandenberg Air Force Base, California

    SciTech Connect (OSTI)

    Daellenbach, K.K.; Dagle, J.E.; Reilly, R.W.; Shankle, S.A.

    1993-02-01

    Vandenberg Air Force Base (VAFB), located approximately 50 miles northwest of Santa Barbara, California, commissioned the Pacific Northwest Laboratory to conduct an economic analysis of operating alternatives for the South Vandenberg Power Plant (SVPP). Recent concern over SVPP operating and environmental costs prompted VAFB personnel to consider other means to support the Missile Operation Support Requirement (MOSR). The natural gas-fired SVPP was originally designed to support Space Transportation System launch activities. With cancellation of this mission, the SVPP has been used to provide primary and backup electric power to support MOSR activities for the Space Launch Complexes. This document provides an economic analysis in support of VAFB decisions about future operation of the SVPP. The analysis complied with the life-cycle cost (LCC) analytical approach detailed in 10 CFR 436, which is used in support of all Federal energy decisions. Many of the SVPP operational and environmental cost estimates were provided by VAFB staff, with additional information from vendors and engineering contractors. The LCC analysis compared three primary operating strategies, each providing a level of service equal to or better than the current status-quo operation. These scenarios are: status-quo operation, where the SVPP provides both primary and backup MOSR power; purchased utility power providing primary MOSR support, with backup power provided by an Uninterruptible Power Supply (UPS) system and the SVPP used to cover long-duration power outages; and purchased utility power providing primary MOSR support, with backup power provided by a UPS system and a new set of dedicated generators covering long-duration power outages.
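    The core of a 10 CFR 436-style comparison is a discounted life-cycle cost for each alternative. The sketch below shows the arithmetic only; the capital costs, annual costs, study period, and discount rate are illustrative placeholders, not figures from the VAFB analysis.

```python
def life_cycle_cost(capital, annual_cost, years, discount_rate):
    """Present value of up-front capital plus discounted annual costs."""
    pv_annual = sum(annual_cost / (1 + discount_rate) ** t
                    for t in range(1, years + 1))
    return capital + pv_annual

# Hypothetical inputs for the three operating strategies described above.
scenarios = {
    "status quo (SVPP primary and backup)": life_cycle_cost(0.0, 2.0e6, 25, 0.03),
    "utility + UPS, SVPP for long outages": life_cycle_cost(1.5e6, 0.8e6, 25, 0.03),
    "utility + UPS + dedicated generators": life_cycle_cost(2.5e6, 0.6e6, 25, 0.03),
}
for name, lcc in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${lcc / 1e6:.1f}M")
```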

  7. Prediction of global solar irradiance based on time series analysis: Application to solar thermal power plants energy production planning

    SciTech Connect (OSTI)

    Martin, Luis; Marchante, Ruth; Cony, Marco; Zarzalejo, Luis F.; Polo, Jesus; Navarro, Ana

    2010-10-15

    Due to the strong increase in solar power generation, predictions of incoming solar energy are becoming more important. Photovoltaic and solar thermal are the main sources of electricity generation from solar energy. In the case of solar thermal power plants with an energy storage system, management and operation require reliable predictions of solar irradiance with the same temporal resolution as the temporal capacity of the back-up system. Such plants can work like a conventional power plant and compete in the electricity market, avoiding intermittency in electricity production. This work presents a comparison of statistical models based on time series applied to predicting half-daily values of global solar irradiance with a temporal horizon of 3 days. Half-daily values consist of hourly global solar irradiance accumulated from sunrise to solar noon and from solar noon to sunset for each day. The ground solar radiation dataset used belongs to stations of the Spanish National Weather Service (AEMet). The models tested are autoregressive, neural network, and fuzzy logic models. Because the half-daily solar irradiance time series is non-stationary, it was necessary to transform it into two new stationary variables (clearness index and lost component), which are used as input to the predictive models. Improvement in terms of RMSD of the models tested is measured against a persistence-based model. The validation process shows that all models tested improve on persistence. The best approach for forecasting half-daily values of solar irradiance is the neural network model with the lost component as input, except at the Lerida station, where models based on the clearness index show less uncertainty because this magnitude behaves more linearly and is easier to model. (author)
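    The persistence baseline that all models must beat is simple to state: the next value is predicted to equal the current one. A minimal sketch of that baseline and its RMSD is shown below; the clearness-index series is synthetic, not AEMet data.

```python
import numpy as np

# Hypothetical half-daily clearness-index series (two values per day).
rng = np.random.default_rng(1)
kt = np.clip(0.6 + 0.15 * rng.standard_normal(730), 0.05, 1.0)

forecast = kt[:-1]   # persistence: next value predicted equal to the current one
observed = kt[1:]

rmsd = np.sqrt(np.mean((forecast - observed) ** 2))
print(f"persistence RMSD: {rmsd:.3f}")
```

    Any candidate model (autoregressive, neural network, or fuzzy logic) improves on persistence when its RMSD on the same series is lower.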

  8. Cost Analysis of Plug-In Hybrid Electric Vehicles Using GPS-Based Longitudinal Travel Data

    SciTech Connect (OSTI)

    Wu, Xing; Dong, Jing; Lin, Zhenhong

    2014-01-01

    Using spatial, longitudinal travel data of 415 vehicles collected over 3 to 18 months in the Seattle metropolitan area, this paper estimates the operating costs of plug-in hybrid electric vehicles (PHEVs) of various electric ranges (10, 20, 30, and 40 miles) for payback periods of 3, 5, and 10 years, considering different charging infrastructure deployment levels and gasoline prices. Several key findings emerged. (1) PHEVs could help save around 60% or 40% in energy costs, compared with conventional gasoline vehicles (CGVs) or hybrid electric vehicles (HEVs), respectively. However, for motorists whose daily vehicle miles traveled (DVMT) are significant, HEVs may be a better choice than PHEV40s, particularly in areas that lack a public charging infrastructure. (2) The incremental battery cost of large-battery PHEVs is difficult to justify based on the incremental savings in PHEV operating costs unless a subsidy is offered for large-battery PHEVs. (3) When the price of gasoline increases from $4/gallon to $5/gallon, the number of drivers who benefit from a larger battery increases significantly. (4) Although quick chargers can reduce charging time, they contribute little to energy cost savings for PHEVs, as opposed to Level-II chargers.
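    The energy-cost comparison in finding (1) reduces to cost per mile under assumed prices and efficiencies. The sketch below illustrates the arithmetic; every number is a placeholder rather than an input from the paper, and the utility factor (the electrified share of miles, which depends on DVMT and charger access) is the key assumption.

```python
gas_price = 4.0        # $/gallon (assumed)
elec_price = 0.12      # $/kWh (assumed)

mpg_cgv, mpg_hev = 25.0, 45.0   # assumed fuel economies
kwh_per_mile = 0.30             # assumed PHEV electric consumption
utility_factor = 0.6            # assumed share of miles driven on electricity

cost_cgv = gas_price / mpg_cgv
cost_hev = gas_price / mpg_hev
# Charge-sustaining PHEV operation is assumed HEV-like here.
cost_phev = (utility_factor * kwh_per_mile * elec_price
             + (1 - utility_factor) * gas_price / mpg_hev)
print(f"CGV {cost_cgv:.3f}, HEV {cost_hev:.3f}, PHEV {cost_phev:.3f} $/mile")
```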

  9. Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; Yang, Majntxov; Kao, Shih-Chieh; Smith, Brennan T.

    2016-03-30

    Hydraulic head and mean annual streamflow, the two main input parameters in hydropower resource assessment, are not measured at every point along a stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates those uncertainties and their effects on the model output parameters: the total potential power and the number of potential locations (stream-reaches). These parameters are quantified through Monte Carlo Simulation (MCS) linked with a geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. Output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains. Furthermore, output parameters are more sensitive to mean annual streamflow in flat terrain.
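    The MCS step can be sketched directly from the hydropower equation P = ρ g Q H η. Below, head and flow are drawn from distributions whose spreads echo the roughly 20% and 16% uncertainties quoted above; the nominal head, flow, and efficiency are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Standard deviations chosen so the 95% interval spans roughly +/-20% (head)
# and +/-16% (flow) of the nominal values; nominals are assumptions.
H = rng.normal(10.0, 0.20 * 10.0 / 1.96, n)   # hydraulic head [m]
Q = rng.normal(50.0, 0.16 * 50.0 / 1.96, n)   # mean annual streamflow [m^3/s]

P = 1000.0 * 9.81 * Q * H * 0.85 / 1e6        # potential power [MW]
lo, hi = np.percentile(P, [2.5, 97.5])
print(f"mean {P.mean():.1f} MW, 95% interval [{lo:.1f}, {hi:.1f}] MW")
```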

  10. A NMR-Based Carbon-Type Analysis of Diesel Fuel Blends From Various Sources

    SciTech Connect (OSTI)

    Bays, J. Timothy; King, David L.

    2013-05-10

    In collaboration with participants of the Coordinating Research Council (CRC) Advanced Vehicle/Fuels/Lubricants (AVFL) Committee, and project AVFL-19, the characteristics of fuels from advanced and renewable sources were compared to commercial diesel fuels. The main objective of this study was to highlight similarities and differences among the fuel types, i.e. ULSD, renewables, and alternative fuels, and among fuels within the different fuel types. This report summarizes the carbon-type analysis from 1H and 13C{1H} nuclear magnetic resonance spectroscopy (NMR) of 14 diesel fuel samples. The diesel fuel samples come from diverse sources and include four commercial ultra-low sulfur diesel fuels (ULSD), one gas-to-liquid diesel fuel (GTL), six renewable diesel fuels (RD), two shale oil-derived diesel fuels, and one oil sands-derived diesel fuel. Overall, the fuels examined fall into two groups. The two shale oil-derived samples and the oil-sand-derived sample closely resemble the four commercial ultra-low sulfur diesels, with SO1 and SO2 most closely matched with ULSD1, ULSD2, and ULSD4, and OS1 most closely matched with ULSD3. As might be expected, the renewable diesel fuels, with the exception of RD3, do not resemble the ULSD fuels because of their very low aromatic content, but more closely resemble the gas-to-liquid sample (GTL) in this respect. RD3 is significantly different from the other renewable diesel fuels in that the aromatic content more closely resembles the ULSD fuels. Fused-ring aromatics are readily observable in the ULSD, SO, and OS samples, as well as RD3, and are noticeably absent in the remaining RD and GTL fuels. Finally, ULSD3 differs from the other ULSD fuels by having a significantly lower aromatic carbon content and higher cycloparaffinic carbon content. In addition to providing important comparative compositional information regarding the various diesel fuels, this report also provides important information about the capabilities of NMR

  11. Analysis of Hanford-based Options for Sustainable DOE Facilities on the West Coast

    SciTech Connect (OSTI)

    Warwick, William M.

    2012-06-30

    Large-scale conventional energy projects result in lower costs of energy (COE). This is true for most renewable energy projects as well. The Office of Science is interested in its facilities meeting the renewable energy mandates set by Congress and the Administration. Those facilities on the west coast include a cluster in the Bay Area of California and at Hanford in central Washington State. Land constraints at the California facilities do not permit large-scale projects. The Hanford Reservation has land and solar insolation available for a large-scale solar project as well as access to a regional transmission system that can provide power to facilities in California. The premise of this study is that a large-scale solar project at Hanford may be able to provide renewable energy sufficient to meet the needs of select Office of Science facilities on the west coast at a COE that is competitive with costs in California despite the lower solar insolation values at Hanford. The study concludes that although the cost of solar projects continues to decline, estimated costs for a large-scale project at Hanford are still not competitive with avoided power costs for Office of Science facilities on the west coast. Further, although it is possible to transmit power from a solar project at Hanford to California facilities, transmission adds further cost. Consequently, development of a large-scale solar project at Hanford to meet the renewable goals of Office of Science facilities on the west coast is currently uneconomic. This may change as solar costs decrease and California-based facilities face increasing costs for conventional and renewable energy produced in the state. PNNL should monitor those cost trends.

  12. FY01 Supplemental Science and Performance Analysis: Volume 1, Scientific Bases and Analyses

    SciTech Connect (OSTI)

    Bodvarsson, G.S.; Dobson, David

    2001-05-30

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S&ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S&ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S&ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  13. Influence of composition gradients on weld metal creep behavior: An analysis based on laminate composites

    SciTech Connect (OSTI)

    Choi, I.

    1989-01-01

    The effects of weld metal microsegregation, as altered by post-weld heat treatments, on both low- and high-temperature tensile properties were investigated in Monel alloy 400. Flat, all-weld-metal tensile specimens were machined from single-pass GTA welds and heat treated in vacuum in the range of 600 C to 1000 C to produce samples with different composition gradients. Short-time tensile tests were run at room temperature and elevated temperature. Long-time constant-load creep tests were performed at 500 C. The room-temperature mechanical properties of the as-welded and heat-treated specimens were similar and thus unaffected by variations in composition gradients. In contrast, at high temperatures the steady-state creep rates decreased, rupture strains increased, and rupture lives decreased with increases in heat treatment temperature, that is, with decreases in the amplitudes of composition gradients. The deformation behavior of the solidified dendritic structure was modeled based on results obtained on laminate composites of nickel and copper. The laminates, prepared by roll bonding, were annealed to produce controlled composition gradients with dimensions equivalent to those observed in the weld metal. The steady-state creep rates of the laminate composites decreased with increases in heat treatment time, that is, with decreases in the amplitudes of composition gradients. To rationalize the creep properties of each component in the laminate composites, nickel-copper solid solutions having systematic compositional variations were prepared and tested under the same conditions as the laminate composites. The creep rates of the nickel-copper solid solutions showed a minimum as a function of nickel composition.

  14. An Analysis Technique for Active Neutron Multiplicity Measurements Based on First Principles

    SciTech Connect (OSTI)

    Evans, Louise G; Goddard, Braden; Charlton, William S; Peerani, Paolo

    2012-08-13

    Passive neutron multiplicity counting is commonly used to quantify the total mass of plutonium in a sample, without prior knowledge of the sample geometry. However, passive neutron counting is less applicable to uranium measurements due to the low spontaneous fission rates of uranium. Active neutron multiplicity measurements are therefore used to determine the {sup 235}U mass in a sample. Unfortunately, there are still additional challenges to overcome for uranium measurements, such as the coupling of the active source and the uranium sample. Techniques, such as the coupling method, have been developed to help reduce the dependence of active uranium measurements on calibration curves; however, they still require known standards of similar geometry. An advanced active neutron multiplicity measurement method is being developed by Texas A&M University, in collaboration with Los Alamos National Laboratory (LANL), in an attempt to overcome the calibration curve requirements. This method can be used to quantify the {sup 235}U mass in a sample containing uranium without using calibration curves. Furthermore, this method is based on existing detectors and nondestructive assay (NDA) systems, such as the LANL Epithermal Neutron Multiplicity Counter (ENMC). This method uses an inexpensive boron carbide liner to shield the uranium sample from thermal and epithermal neutrons while allowing fast neutrons to reach the sample. Due to the relatively low and constant energy-dependent fission and absorption cross-sections of uranium isotopes at high neutron energies, fast neutrons can penetrate the sample without significant attenuation. Fast neutron interrogation therefore creates a homogeneous fission rate in the sample, allowing first-principles methods to be used to determine the {sup 235}U mass in the sample. This paper discusses the measurement method concept and development, including measurements and simulations performed to date, as well as the potential

  15. Isotope Enrichment Detection by Laser Ablation - Laser Absorption Spectrometry: Automated Environmental Sampling and Laser-Based Analysis for HEU Detection

    SciTech Connect (OSTI)

    Anheier, Norman C.; Bushaw, Bruce A.

    2010-01-01

    The global expansion of nuclear power, and consequently the uranium enrichment industry, requires the development of new safeguards technology to mitigate proliferation risks. Current enrichment monitoring instruments exist that provide only yes/no detection of highly enriched uranium (HEU) production. More accurate accountancy measurements are typically restricted to gamma-ray and weight measurements taken in cylinder storage yards. Analysis of environmental and cylinder content samples has much higher effectiveness, but this approach requires onsite sampling, shipping, and time-consuming laboratory analysis and reporting. Given that large modern gaseous centrifuge enrichment plants (GCEPs) can quickly produce a significant quantity (SQ) of HEU, these limitations in verification suggest the need for more timely detection of potential facility misuse. The Pacific Northwest National Laboratory (PNNL) is developing an unattended safeguards instrument concept, combining continuous aerosol particulate collection with uranium isotope assay, to provide timely analysis of enrichment levels within low enriched uranium facilities. This approach is based on laser vaporization of aerosol particulate samples, followed by wavelength-tuned laser diode spectroscopy to characterize the uranium isotopic ratio through subtle differences in atomic absorption wavelengths. Environmental sampling (ES) media from an integrated aerosol collector is introduced into a small, reduced-pressure chamber, where a focused pulsed laser vaporizes material from a 10- to 20-ÎŒm diameter spot on the surface of the sampling media. The plume of ejected material begins as high-temperature plasma that yields ions and atoms, as well as molecules and molecular ions. We concentrate on the plume of atomic vapor that remains after the plasma has expanded and then cooled by the surrounding cover gas. Tunable diode lasers are directed through this plume and each isotope is detected by monitoring absorbance

  16. NREL: Energy Analysis: Geospatial Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis This study summarizes the ... (tools, maps, data): Dynamic Maps, GIS Data and Analysis Tools website provides ...

  17. Ultrasound-Guided Radiological Placement of Central Venous Port via the Subclavian Vein: A Retrospective Analysis of 500 Cases at a Single Institute

    SciTech Connect (OSTI)

    Sakamoto, Noriaki; Arai, Yasuaki; Takeuchi, Yoshito; Takahashi, Masahide; Tsurusaki, Masakatsu; Sugimura, Kazuro

    2010-10-15

    The purpose of this study was to assess the technical success rate and adverse events (AEs) associated with ultrasound (US)-guided radiological placement (RP) of a central venous port (CVP) via the subclavian vein (SCV). Between April 2006 and May 2007, a total of 500 US-guided RPs of a CVP via the SCV were scheduled in 486 cancer patients (mean age {+-} SD, 54.1 {+-} 18.1 years) at our institute. Referring to the interventional radiology report database and patients' records, technical success rate and AEs relevant to CVP placement were evaluated retrospectively. The technical success rate was 98.6% (493/500). AEs occurred in 26 cases (5.2%) during follow-up (range, 1-1080 days; mean {+-} SD, 304.0 {+-} 292.1 days). AEs within 24 h postprocedure occurred in five patients: pneumothorax (n = 2), arterial puncture (n = 1), hematoma formation at the pocket site (n = 2), and catheter tip migration into the internal mammary vein (n = 1). There were seven early AEs: hematoma formation at the pocket site (n = 2), fibrin sheath formation around the indwelling catheter (n = 2), and catheter-related infections (n = 3). There were 13 delayed AEs: catheter-related infections (n = 7), catheter detachments (n = 3), catheter occlusion (n = 1), symptomatic thrombus in the SCV (n = 1), and catheter migration (n = 1). No major AEs, such as procedure-related death, air embolism, or events requiring surgical intervention, were observed. In conclusion, US-guided RP of a CVP via the SCV is highly appropriate, based on its high technical success rate and the limited number of AEs.

  18. the-schedule-based-transit-model

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Schedule-Based transit model of the Chicago Metropolitan Area Vadim Sokolov Transportation Research and Analysis Computing Center Argonne National Laboratory List of Authors ================ Vadim Sokolov Transportation Research and Analysis Computing Center Argonne National Laboratory 277 International Drive West Chicago, IL 60185 Abstract ========= Public transit systems are usually modeled using a so-called frequency-based approach. In this case, transit route times are defined in terms of

  19. Economic feasibility analysis of distributed electric power generation based upon the natural gas-fired fuel cell. Final report

    SciTech Connect (OSTI)

    Not Available

    1994-03-01

    The final report provides a summary of results of the Cost of Ownership Model and the circumstances under which a distributed fuel cell is economically viable. The analysis is based on a series of microcomputer models that estimate the capital and operating costs of a fuel cell central utility plant configuration. Using a survey of thermal and electrical demand profiles, the study defines a series of energy user classes. The energy user class demand requirements are entered into the central utility plant model to define the required fuel cell capacity and all supporting equipment. The central plant model includes provisions that enable the analyst to select optional plant features that are most appropriate to a fuel cell application and that are cost effective. The model permits the choice of system features that would be suitable for a large condominium complex or a residential institution such as a hotel, boarding school, or prison. Other applications are also practical; however, such applications have a higher relative demand for thermal energy, a characteristic that is well suited to a fuel cell application with its free source of hot water or steam. The analysis combines the capital and operating costs from the preceding models into a Cost of Ownership Model to compute the plant capital and operating costs as a function of capacity and principal features and compares these estimates to the estimated operating cost of the same central plant configuration without a fuel cell.

  20. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    SciTech Connect (OSTI)

    Harding, R.; TrnkovĂĄ, P.; Lomax, A. J.; Weston, S. J.; Lilley, J.; Thompson, C. M.; Cosgrove, V. P.; Short, S. C.; Loughrey, C.; Thwaites, D. I.

    2014-11-01

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.

  1. Development and Deployment of a Short Rotation Woody Crops Harvesting System Based on a Case New Holland Forage Harvester and SRC Woody Crop Header

    SciTech Connect (OSTI)

    Eisenbies, Mark; Volk, Timothy

    2014-10-03

    Demand for bioenergy sourced from woody biomass is projected to increase; however, the expansion and rapid deployment of short rotation woody crop systems in the United States has been constrained by high production costs and sluggish market acceptance due to problems with quality and consistency from first-generation harvesting systems. The objective of this study was to evaluate the effect of crop conditions on the performance of a single-pass, cut-and-chip harvester based on a standard New Holland FR-9000 series forage harvester with a dedicated 130FB short rotation coppice header, and on the quality of the chipped material. A time-motion analysis was conducted to track the movement of the machine and chipped material through the system for 153 separate loads over 10 days on a 54-ha harvest. Harvester performance on these loads was regulated by either ground conditions or standing biomass. Material capacities increased linearly with standing biomass up to 40 Mgwet ha-1 and plateaued between 70 and 90 Mgwet hr-1. Moisture contents ranged from 39 to 51%, with the majority of samples between 43 and 45%. Loads produced in freezing weather (average temperature over the 10 hours preceding load production) had 4% more chips greater than 25.4 mm (P < 0.0119). Over 1.5 Mgdry ha-1 of potentially harvestable material (6-9% of a load) was left on site, of which half was commercially undesirable meristematic pieces. The New Holland harvesting system is a reliable and predictable platform for harvesting material over a wide range of standing biomass; performance was consistent overall across 14 willow cultivars.

  2. Uncertainty analysis of integrated gasification combined cycle systems based on Frame 7H versus 7F gas turbines

    SciTech Connect (OSTI)

    Yunhua Zhu; H. Christopher Frey

    2006-12-15

    Integrated gasification combined cycle (IGCC) technology is a promising alternative for clean generation of power and coproduction of chemicals from coal and other feedstocks. Advanced concepts for IGCC systems that incorporate state-of-the-art gas turbine systems, however, are not commercially demonstrated. Therefore, there is uncertainty regarding the future commercial-scale performance, emissions, and cost of such technologies. The Frame 7F gas turbine represents current state-of-practice, whereas the Frame 7H is the most recently introduced advanced commercial gas turbine. The objective of this study was to evaluate the risks and potential payoffs of IGCC technology based on different gas turbine combined cycle designs. Models of entrained-flow gasifier-based IGCC systems with Frame 7F (IGCC-7F) and 7H gas turbine combined cycles (IGCC-7H) were developed in ASPEN Plus. An uncertainty analysis was conducted. Gasifier carbon conversion and project cost uncertainty are identified as the most important uncertain inputs with respect to system performance and cost. The uncertainties in the difference of the efficiencies and costs for the two systems are characterized. Despite uncertainty, the IGCC-7H system is robustly preferred to the IGCC-7F system. Advances in gas turbine design will improve the performance, emissions, and cost of IGCC systems. The implications of this study for decision-making regarding technology selection, research planning, and plant operation are discussed. 38 refs., 11 figs., 5 tabs.

  3. Characterization of electrically-active defects in ultraviolet light-emitting diodes with laser-based failure analysis techniques

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Miller, Mary A.; Tangyunyong, Paiboon; Cole, Edward I., Jr.

    2016-01-12

    In this study, laser-based failure analysis techniques demonstrate the ability to quickly and non-intrusively screen deep ultraviolet light-emitting diodes (LEDs) for electrically-active defects. In particular, two laser-based techniques, light-induced voltage alteration and thermally-induced voltage alteration, generate applied voltage maps (AVMs) that provide information on electrically-active defect behavior including turn-on bias, density, and spatial location. Here, multiple commercial LEDs were examined and found to have dark defect signals in the AVM indicating a site of reduced resistance or leakage through the diode. The existence of the dark defect signals in the AVM correlates strongly with an increased forward-bias leakage current. This increased leakage is not present in devices without AVM signals. Transmission electron microscopy analysis of a dark defect signal site revealed a dislocation cluster through the pn junction. The cluster included an open core dislocation. Even though LEDs with few dark AVM defect signals did not correlate strongly with power loss, direct association between increased open core dislocation densities and reduced LED device performance has been presented elsewhere [M. W. Moseley et al., J. Appl. Phys. 117, 095301 (2015)].

  4. Application Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Application Case Studies NERSC staff along with engineers have worked with NESAP applications to prepare for the Cori Phase 2 system based on the Xeon Phi "Knights Landing" processor. We document several optimization case studies below. Our presentations at the ISC 16 IXPUG Workshop can all be found at: https://www.ixpug.org/events/ixpug-isc-2016 Other pages of interest for those wishing to learn optimization strategies for Cori Phase 2 (Knights Landing): Getting Started Measuring

  5. Well-to-Wheels analysis of landfill gas-based pathways and their addition to the GREET model.

    SciTech Connect (OSTI)

    Mintz, M.; Han, J.; Wang, M.; Saricks, C.; Energy Systems

    2010-06-30

    Today, approximately 300 million standard cubic ft/day (mmscfd) of natural gas and 1600 MW of electricity are produced from the decomposition of organic waste at 519 U.S. landfills (EPA 2010a). Since landfill gas (LFG) is a renewable resource, this energy is considered renewable. When used as a vehicle fuel, compressed natural gas (CNG) produced from LFG consumes up to 185,000 Btu of fossil fuel and generates from 1.5 to 18.4 kg of carbon dioxide-equivalent (CO{sub 2}e) emissions per million Btu of fuel on a 'well-to-wheel' (WTW) basis. This compares with approximately 1.1 million Btu and 78.2 kg of CO{sub 2}e per million Btu for CNG from fossil natural gas and 1.2 million Btu and 97.5 kg of CO{sub 2}e per million Btu for petroleum gasoline. Because of the additional energy required for liquefaction, LFG-based liquefied natural gas (LNG) requires more fossil fuel (222,000-227,000 Btu/million Btu WTW) and generates more GHG emissions (approximately 22 kg CO{sub 2}e /MM Btu WTW) if grid electricity is used for the liquefaction process. However, if some of the LFG is used to generate electricity for gas cleanup and liquefaction (or compression, in the case of CNG), vehicle fuel produced from LFG can have no fossil fuel input and only minimal GHG emissions (1.5-7.7 kg CO{sub 2}e /MM Btu) on a WTW basis. Thus, LFG-based natural gas can be one of the lowest GHG-emitting fuels for light- or heavy-duty vehicles. This report discusses the size and scope of biomethane resources from landfills and the pathways by which those resources can be turned into and utilized as vehicle fuel. It includes characterizations of the LFG stream and the processes used to convert low-Btu LFG into high-Btu renewable natural gas (RNG); documents the conversion efficiencies and losses of those processes, the choice of processes modeled in GREET, and other assumptions used to construct GREET pathways; and presents GREET results by pathway stage. GREET estimates of well-to-pump (WTP), pump
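    The WTW emission factors quoted above convert directly to per-mile terms once a vehicle energy intensity is assumed. The sketch below uses the abstract's kg CO2e per million Btu figures; the Btu-per-mile value is a hypothetical assumption, not a GREET output.

```python
# WTW emission factors from the abstract [kg CO2e per million Btu of fuel].
pathways = {
    "LFG CNG, grid-powered processing (upper bound)": 18.4,
    "LFG CNG, LFG-powered processing (lower bound)": 1.5,
    "fossil natural gas CNG": 78.2,
    "petroleum gasoline": 97.5,
}

btu_per_mile = 5500.0  # assumed light-duty vehicle energy intensity
for name, kg_per_mmbtu in pathways.items():
    g_per_mile = kg_per_mmbtu * 1000.0 * btu_per_mile / 1e6
    print(f"{name}: {g_per_mile:.0f} g CO2e/mile")
```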

  6. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    SciTech Connect (OSTI)

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    2014-06-15

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, ÎČ) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10{sup 4} to 10{sup 6} times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment
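    The variance-based ranking described in the Methods can be illustrated with a toy linear-quadratic effect model: sample the uncertain inputs, then freeze one input at a time and see how much output variance disappears. This is a crude one-at-a-time variance estimate under assumed independent inputs, not the authors' implementation, and all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Uncertain radiobiological inputs with ~30% relative uncertainty (assumed).
alpha = rng.normal(0.5, 0.5 * 0.3, n)
beta = rng.normal(0.05, 0.05 * 0.3, n)
dose = 2.0  # Gy per fraction, held fixed

def effect(a, b):
    return a * dose + b * dose ** 2  # linear-quadratic effect

total_var = effect(alpha, beta).var()
# Fixing an input at its mean removes that input's contribution to variance.
s_alpha = 1.0 - effect(alpha.mean(), beta).var() / total_var
s_beta = 1.0 - effect(alpha, beta.mean()).var() / total_var
print(f"S_alpha ~ {s_alpha:.2f}, S_beta ~ {s_beta:.2f}")
```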

  7. DOE BiomassDevelopment and Deployment of a Short Rotation Woody Crops Harvesting System Based on a Case New Holland Forage Harvester and SRC Woody Crop Header RDD Review Template

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Development and Deployment of a Short Rotation Woody Crops Harvesting System Based on a Case New Holland Forage Harvester and SRC Woody Crop Header March 25, 2015 Terrestrial Feedstocks Timothy A. Volk SUNY ESF This presentation does not contain any proprietary, confidential, or otherwise restricted information Goal Statement * Develop, test and deploy a single pass cut and chip harvester combined with a handling, transportation and storage system that is effective and efficient in a range of

  8. A solar thermal cooling and heating system for a building: Experimental and model based performance analysis and design

    SciTech Connect (OSTI)

    Qu, Ming; Yin, Hongxi; Archer, David H.

    2010-02-15

    A solar thermal cooling and heating system at Carnegie Mellon University was studied through its design, installation, modeling, and evaluation to address the question of how solar energy might most effectively be used in supplying energy for the operation of a building. This solar cooling and heating system incorporates 52 m{sup 2} of linear parabolic trough solar collectors; a 16 kW double-effect, water-lithium bromide (LiBr) absorption chiller; and a heat recovery heat exchanger, with their circulation pumps and control valves. It generates chilled and heated water, dependent on the season, for space cooling and heating. This system is the smallest high-temperature solar cooling system in the world, and to date it is the only system of its kind to have operated successfully for more than one year. Performance of the system has been tested, and the measured data were used to verify system performance models developed in the TRaNsient SYstem Simulation program (TRNSYS). On the basis of the installed solar system, base case performance models were programmed and then modified and extended to investigate measures for improving system performance. The measures included changes in the area and orientation of the solar collectors, the inclusion of thermal storage in the system, changes in the pipe diameter and length, and various system operational control strategies. It was found that this solar thermal system could potentially supply 39% of the cooling and 20% of the heating energy for this building space in Pittsburgh, PA, if it included a properly sized storage tank and short, small-diameter connecting pipes. Guidelines for the design and operation of an efficient and effective solar cooling and heating system for a given building space have been provided. (author)

  9. Evaluation of coal-mineral association and coal cleanability by using SEM-based automated image analysis

    SciTech Connect (OSTI)

    Straszheim, W.E.; Younkin, K.A.; Markuszewski, R. ); Smith, F.J. )

    1988-06-01

    A technique employing SEM-based automated image analysis (AIA) has been developed for assessing the association of mineral particles with coal, and thus the cleanability of that coal, when the characteristics of the separation process are known. Data resulting from AIA include the mineral distribution by particle size, mineral phase, and extent of association with coal. This AIA technique was applied to samples of -325 mesh (-44 ÎŒm) coal from the Indiana No. 3, Upper Freeport, and Sunnyside (UT) seams. The coals were subjected to cleaning by float-sink separations at 1.3, 1.4, 1.6, and 1.9 specific gravity and by froth flotation. For the three coals, the float-sink procedure at a given specific gravity produced different amounts of clean coal, but with similar ash content. Froth flotation removed much less ash, yielding a product ash content of ~8% for the Upper Freeport coal, regardless of recovery, while reducing the ash content to less than 5% for the other two coals. The AIA results documented significantly more association of minerals with the Upper Freeport coal, which thus led to the poor ash reduction.

  10. Data base for analysis of compositional characteristics of coal seams and macerals. Quarterly technical progress report, November-January 1981

    SciTech Connect (OSTI)

    Davis, A; Suhr, N H; Spackman, W; Painter, P C; Walker, P L; Given, P H

    1981-04-01

    The basic objectives of this program are, first, to understand the systematic relationships between the properties of coals, and, second, to determine the nature of the lateral and vertical variability in the properties of a single seam. Multivariate statistical analyses applied to the Coal Data Base confirm a number of known trends for coal properties. In addition, nitrogen and some components of the ash analysis bear interesting relationships to rank. The macroscopic petrography of column samples of the Lower Kittanning seam reveals a significant difference between the sample from a marine-influenced environment and those from toward the margins of the basin where conditions were non-marine. The various methods of determining the amount and mineralogy of the inorganic fraction of coals are reviewed. General trends in seam thickness, ash, sulfur, volatile matter yield, and vitrinite reflectance of the Lower Kittanning seam of western Pennsylvania are presented. Controls of sedimentation are discussed in relation to the areal variability which has been observed. Differential subsidence and paleotopography appear to have played a major role during the deposition of the coal. The same controls may have maintained some influence upon the coalification process after deposition, especially along the eastern margin of the Lower Kittanning basin.

  11. Life Cost Based FMEA Manual: A Step by Step Guide to Carrying Out a Cost-based Failure Modes and Effects Analysis

    SciTech Connect (OSTI)

    Rhee, Seung; Spencer, Cherrill; /Stanford U. /SLAC

    2009-01-23

    Failure occurs when one or more of the intended functions of a product are no longer fulfilled to the customer's satisfaction. The most critical product failures are those that escape design reviews and in-house quality inspection and are found by the customer. The product may work for a while until its performance degrades to an unacceptable level or it may have not worked even before customer took possession of the product. The end results of failures which may lead to unsafe conditions or major losses of the main function are rated high in severity. Failure Modes and Effects Analysis (FMEA) is a tool widely used in the automotive, aerospace, and electronics industries to identify, prioritize, and eliminate known potential failures, problems, and errors from systems under design, before the product is released (Stamatis, 1997). Several industrial FMEA standards such as those published by the Society of Automotive Engineers, US Department of Defense, and the Automotive Industry Action Group employ the Risk Priority Number (RPN) to measure risk and severity of failures. The Risk Priority Number (RPN) is a product of 3 indices: Occurrence (O), Severity (S), and Detection (D). In a traditional FMEA process design engineers typically analyze the 'root cause' and 'end-effects' of potential failures in a sub-system or component and assign penalty points through the O, S, D values to each failure. The analysis is organized around categories called failure modes, which link the causes and effects of failures. A few actions are taken upon completing the FMEA worksheet. The RPN column generally will identify the high-risk areas. The idea of performing FMEA is to eliminate or reduce known and potential failures before they reach the customers. Thus, a plan of action must be in place for the next task. Not all failures can be resolved during the product development cycle, thus prioritization of actions must be made within the design group. One definition of detection
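    The RPN bookkeeping described above is a three-factor product, typically with each index scored on a 1-10 scale. A minimal sketch follows; the failure modes and scores are invented examples, not entries from any industrial FMEA.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    occurrence: int  # O: likelihood the cause occurs
    severity: int    # S: seriousness of the end effect
    detection: int   # D: likelihood the failure escapes detection

    @property
    def rpn(self) -> int:
        # Risk Priority Number as defined in the text: RPN = O x S x D.
        return self.occurrence * self.severity * self.detection

modes = [
    FailureMode("connector corrosion", 4, 7, 6),
    FailureMode("solder joint crack", 2, 9, 8),
    FailureMode("seal degradation", 5, 5, 3),
]
# Sorting by RPN identifies the high-risk areas to address first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.name}: RPN = {fm.rpn}")
```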

  12. Accurate reservoir evaluation from borehole imaging techniques and thin bed log analysis: Case studies in shaly sands and complex lithologies in Lower Eocene Sands, Block III, Lake Maracaibo, Venezuela

    SciTech Connect (OSTI)

    Coll, C.; Rondon, L.

    1996-08-01

    Computer-aided signal processing in combination with different types of quantitative log evaluation techniques is very useful for predicting reservoir quality in complex lithologies and will help to increase the confidence level needed to complete and produce a reservoir. The Lower Eocene Sands are among the largest reservoirs in Block III and have produced light oil since 1960. Analysis of borehole images shows the reservoir heterogeneity through the presence of massive sands with very few shale laminations and thinly bedded sands with many laminations. The effect of these shales is a low resistivity that has in most cases been interpreted as water-bearing sands. A reduction of the porosity due to diagenetic processes has produced a high-resistivity behaviour. The presence of bed boundaries and shales is detected by the microconductivity curves of the borehole imaging tools, allowing the estimation of the percentage of shale in these sands. Interactive computer-aided analysis and various image processing techniques are used to aid in log interpretation for estimating formation properties. Integration of these results with core information and production data was used to evaluate the producibility of the reservoirs and to predict reservoir quality. A new estimation of the net pay thickness using this technique is presented, with the consequent improvement in the expectation of additional recovery. This methodology was successfully applied in a case-by-case study showing consistency in the area.

  13. SU-E-T-129: Dosimetric Evaluation of the Impact of Density Correction On Dose Calculation of Breast Cancer Treatment: A Study Based On RTOG 1005 Cases

    SciTech Connect (OSTI)

    Li, J; Yu, Y

    2014-06-01

    Purpose: RTOG 1005 requires density correction in the dose calculation of breast cancer radiation treatment. The aim of the study was to evaluate the impact of density correction on the dose calculation. Methods: Eight cases were studied, which were planned on an XiO treatment planning system with pixel-by-pixel density correction using a superposition algorithm, following RTOG 1005 protocol requirements. Four were protocol Arm 1 (standard whole breast irradiation with sequential boost) cases and four were Arm 2 (hypofractionated whole breast irradiation with concurrent boost) cases. The plans were recalculated with the same monitor units without density correction. Dose calculations with and without density correction were compared. Results: Results of Arm 1 and Arm 2 cases showed similar trends in the comparison. The average differences between the calculations with and without density correction (difference = Without - With) among all the cases were: -0.82 Gy (range: -2.65 to -0.18 Gy) in breast PTV Eval D95, -0.75 Gy (range: -1.23 to 0.26 Gy) in breast PTV Eval D90, -1.00 Gy (range: -2.46 to -0.29 Gy) in lumpectomy PTV Eval D95, -0.78 Gy (range: -1.30 to 0.11 Gy) in lumpectomy PTV Eval D90, -0.43% (range: -0.95 to -0.14%) in ipsilateral lung V20, -0.81% (range: -1.62 to -0.26%) in V16, -1.95% (range: -4.13 to -0.84%) in V10, -2.64% (range: -5.55 to -1.04%) in V8, -4.19% (range: -6.92 to -1.81%) in V5, and -4.95% (range: -7.49 to -2.01%) in V4, respectively. The differences in other normal tissues were minimal. Conclusion: The effect of density correction was observed in breast target doses (an average increase of ~1 Gy in D95 and D90, compared to the calculation without density correction) and in exposed ipsilateral lung volumes in the low dose region (average increases of ~4% and ~5% in V5 and V4, respectively)

  14. Technology Solutions Case Study: Cost Analysis of Roof-Only Air Sealing and Insulation Strategies on 1-1/2 Story Homes in Cold Climates, Minneapolis, MN

    SciTech Connect (OSTI)

    2014-12-01

    This case study describes the External Thermal and Moisture Management System developed by the NorthernSTAR Building America Partnership. This system is typically used in deep energy retrofits and is a valuable approach for the roof-only portions of existing homes, particularly the 1 1/2-story home. It is effective in reducing energy loss through the building envelope, improving building durability, reducing ice dams, and providing opportunities to improve occupant comfort and health.

  15. Building America Case Study: Field Trial of an Aerosol-Based Enclosure Sealing Technology, Clovis, California (Fact Sheet), NREL (National Renewable Energy Laboratory)

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Trial of an Aerosol-Based Enclosure Sealing Technology Clovis, California PROJECT INFORMATION Project Name: Field Trial of an Aerosol- Based Enclosure Sealing Technology Location: Clovis, CA Partners: De Young Properties deyoungproperties.com Building America Team: Alliance for Residential Building Innovation; Western Cooling Efficiency Center, University of California-Davis arbi.davisenergy.com wcec.ucdavis.edu Building Component: Building envelope Application: New, single-family Year Tested:

  16. NMR-based metabonomic analysis of the hepatotoxicity induced by combined exposure to PCBs and TCDD in rats

    SciTech Connect (OSTI)

    Lu Chunfeng; Wang Yimei; Sheng Zhiguo; Liu Gang; Fu Ze; Zhao Jing; Zhao Jun; Yan Xianzhong; Zhu Benzhan; Peng Shuangqing

    2010-11-01

    A metabonomic approach using {sup 1}H NMR spectroscopy was adopted to investigate the metabonomic pattern of rat urine after oral administration of the environmental endocrine disruptors (EDs) polychlorinated biphenyls (PCBs) and 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), alone or in combination, and to explore the possible hepatotoxic mechanisms of combined exposure to PCBs and TCDD. {sup 1}H NMR spectra of urines collected 24 h before and after exposure were analyzed via pattern recognition by using principal component analysis (PCA). Serum biochemistry and liver histopathology indicated significant hepatotoxicity in the rats of the combined group. The PCA scores plots of urinary {sup 1}H NMR data showed that all the treatment groups could be easily distinguished from the control group, as could the PCBs or TCDD group and the combined group. The loadings plots of the PCA revealed remarkable increases in the levels of lactate, glucose, taurine, creatine, and 2-hydroxy-isovaleric acid and reductions in the levels of 2-oxoglutarate, citrate, succinate, hippurate, and trimethylamine-N-oxide in rat urine after exposure. These changes were more striking in the combined group. The changed metabolites may be considered possible biomarkers for the hepatotoxicity. The present study demonstrates that combined exposure to PCBs and TCDD induced significant hepatotoxicity in rats, and mitochondrial dysfunction and fatty acid metabolism perturbations might contribute to the hepatotoxicity. There was good conformity between changes in the urine metabonomic pattern and those in serum biochemistry and liver histopathology. These results showed that the NMR-based metabonomic approach may provide a promising technique for the evaluation of the combined toxicity of EDs.
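    The PCA pattern-recognition step is standard: treat each urine spectrum as a row of binned intensities, mean-center, and project onto the leading components. The sketch below uses scikit-learn on a synthetic matrix; the sample count, bin count, and intensity distribution are placeholders, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in: 40 urine samples x 250 spectral bins.
rng = np.random.default_rng(4)
X = rng.lognormal(mean=0.0, sigma=0.3, size=(40, 250))

pca = PCA(n_components=2)  # PCA mean-centers the data internally
scores = pca.fit_transform(X)    # scores plot: used to separate treatment groups
loadings = pca.components_       # loadings plot: bins/metabolites driving separation
print("explained variance ratio:", pca.explained_variance_ratio_)
```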

  17. PanFunPro: Bacterial Pan-Genome Analysis Based on the Functional Profiles (Seventh Annual Sequencing, Finishing, Analysis in the Future (SFAF) Meeting 2012)

    ScienceCinema (OSTI)

    Lukjancenko, Oksana [Technical University of Denmark]

    2013-01-25

    Julien Tremblay from DOE JGI presents "Evaluation of Multiplexed 16S rRNA Microbial Population Surveys Using Illumina MiSeq Platorm" at the 7th Annual Sequencing, Finishing, Analysis in the Future (SFAF) Meeting held in June, 2012 in Santa Fe, NM.

  18. Analysis of CASES-99 Lidar and Turbulence Data in Support of Wind Turbine Effects: April 1, 2001 to January 31, 2003

    SciTech Connect (OSTI)

    Banta, R. M.

    2003-06-01

    The nocturnal low-level jet (LLJ) of the Great Plains of the central United States has been identified as a promising source of high-momentum wind flow for wind energy. The acceleration of the winds after sunset above the surface produces a jet profile in the wind velocity, with maximum speeds that often exceed 10 m s-1 at heights near 100 m or more. These high wind speeds are advantageous for wind energy generation. The high speeds aloft, however, also produce a region of high shear between the LLJ and the earth's surface, where the nocturnal flow is often calm or nearly so. This shear zone below the LLJ generates atmospheric waves and turbulence that can cause strong vibration in the turbine rotors. It has been suggested that these vibrations contribute to premature failures in large wind turbines, which, of course, would be a considerable disadvantage for wind energy applications. In October 1999, a field project called the Cooperative Atmosphere-Surface Exchange Study 1999 campaign, or CASES-99, was conducted in southeastern Kansas to study the nocturnal stable boundary layer. One of the instruments deployed during CASES-99 was the High-Resolution Doppler Lidar, a new scanning, remote-sensing, wind-mapping instrument.

  19. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes -- Update to Include Analyses of an Economizer Option and Alternative Winter Water Heating Control Option

    SciTech Connect (OSTI)

    Baxter, Van D

    2006-12-01

    Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected and their annual and peak demand performance estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft{sup 2} houses--a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress towards ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL). Results were summarized in a project report, HVAC Equipment Design options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of an integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of legally minimum efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs. Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006). The present report is an update to that document. Its primary purpose is to summarize results of an analysis of the potential of adding an outdoor air economizer operating mode to the IHPs to take advantage of free cooling (using outdoor air to cool the house) whenever possible. In addition it

  20. Building America Case Study: Multifamily Zero Energy Ready Home Analysis, Elmsford, New York (Fact Sheet), Technology Solutions for New and Existing Homes, Energy Efficiency & Renewable Energy (EERE)

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Multifamily Zero Energy Ready Home Analysis Elmsford, New York PROJECT INFORMATION Project Name: Avalon Green III Location: Elmsford, NY Partners: AvalonBay Communities avaloncommunities.com Advanced Residential Integrated Solutions Collaborative (ARIES) Building Components: Whole building Application: New construction, multifamily Year Tested: 2015 Applicable Climate Zone: 4 PERFORMANCE DATA Cost of energy-efficiency measure (including labor): $1,000-$1,300 per unit Projected source energy

  1. NORASCO Case Engineering Group JV | Open Energy Information

    Open Energy Info (EERE)

    Name: NORASCO & Case Engineering Group JV; Place: India; Sector: Solar; Product: India-based JV developer of small solar...

  2. A Ten Step Protocol and Plan for CCS Site Characterization, Based on an Analysis of the Rocky Mountain Region, USA

    SciTech Connect (OSTI)

    McPherson, Brian; Matthews, Vince

    2013-09-15

    This report expresses a Ten-Step Protocol for CO2 Storage Site Characterization, the final outcome of an extensive Site Characterization analysis of the Rocky Mountain region, USA. These ten steps include: (1) regional assessment and data gathering; (2) identification and analysis of appropriate local sites for characterization; (3) public engagement; (4) geologic and geophysical analysis of local site(s); (5) stratigraphic well drilling and coring; (6) core analysis and interpretation with other data; (7) database assembly and static model development; (8) storage capacity assessment; (9) simulation and uncertainty assessment; (10) risk assessment. While the results detailed here are primarily germane to the Rocky Mountain region, the intent of this protocol is to be portable or generally applicable for CO2 storage site characterization.

  3. Fuel Cell Power Model Version 2: Startup Guide, System Designs, and Case Studies. Modeling Electricity, Heat, and Hydrogen Generation from Fuel Cell-Based Distributed Energy Systems

    SciTech Connect (OSTI)

    Steward, D.; Penev, M.; Saur, G.; Becker, W.; Zuboy, J.

    2013-06-01

    This guide helps users get started with the U.S. Department of Energy/National Renewable Energy Laboratory Fuel Cell Power (FCPower) Model Version 2, which is a Microsoft Excel workbook that analyzes the technical and economic aspects of high-temperature fuel cell-based distributed energy systems with the aim of providing consistent, transparent, comparable results. This type of energy system would provide onsite-generated heat and electricity to large end users such as hospitals and office complexes. The hydrogen produced could be used for fueling vehicles or stored for later conversion to electricity.

  4. Updated laser safety&hazard analysis for the ARES laser system based on the 2007 ANSI Z136.1 standard.

    SciTech Connect (OSTI)

    Augustoni, Arnold L.

    2007-08-01

    A laser safety and hazard analysis was performed for the temperature-stabilized Big Sky Laser Technology (BSLT) laser central to the ARES system, based on the 2007 version of the American National Standards Institute's (ANSI) Standard Z136.1, Safe Use of Lasers, and the 2005 version of the ANSI Standard Z136.6, Safe Use of Lasers Outdoors. The ARES laser system is a van/truck-based mobile platform used to perform laser interaction experiments and tests at various national test sites.

  5. Recirculating industrial air: The impact on air compliance and workers. Safety case study: Hill Air Force Base C-130 painting operations

    SciTech Connect (OSTI)

    LaPuma, P.T.

    1998-06-29

    The 1990 Clean Air Act Amendment resulted in new environmental regulations called the National Emission Standards for Hazardous Air Pollutants (NESHAPs). Industries such as painting facilities may have to treat large volumes of air, which drives the cost of an air control system. Recirculating a portion of the air back into the facility is an option to reduce the amount of air to be treated. A guided computer model written in Microsoft Excel 97 is developed to analyze worker safety and compliance costs, with a focus on recirculation. The model has a chemical database containing over 1300 chemicals and requires inputs such as tasks performed, hazardous products used, and the chemical make-up of the products. The model predicts indoor air concentrations in relation to occupational exposure limits (OELs). A case study is performed on a C-130 aircraft painting facility at Hill AFB, UT. The Aerospace NESHAP requires air pollution reductions in aircraft painting operations. The model predicts that strontium chromate concentrations found in primer paints will reach 1000 times the OEL. Strontium chromate and other solid particulates are nearly unaffected by recirculation because the air is filtered prior to recirculation. The next highest chemical, hexamethylene diisocyanate (HDI), increases from 2.6 to 10.5 times the OEL at 0% and 75% recirculation, respectively. Due to the level of respiratory protection required for the strontium chromate, workers are well protected from the modest increases in concentrations caused by recirculating 75% of the air. The initial cost of a VOC control system is $4.5 million with no recirculation and $1.8 million at 75% recirculation. To decide the best operating conditions for a facility, all options such as product substitution, operational changes, and recirculation should be explored. The model is an excellent tool to evaluate these options.
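
    The dilution pattern in this case study -- vapors such as HDI rising roughly fourfold at 75% recirculation while filtered particulates stay nearly constant -- follows from a standard well-mixed steady-state mass balance. A minimal sketch of that balance, with hypothetical emission and airflow numbers rather than the report's Excel model, is:

```python
# Illustrative steady-state mass balance for a ventilated booth with
# recirculation; a generic textbook model, NOT the Excel tool from the report.
# A fraction R of exhaust air is filtered (efficiency eta) and returned, so
# at steady state C = G / (Q * (1 - R * (1 - eta))).

def steady_state_concentration(g_mg_min, q_m3_min, recirc_frac, filter_eff):
    """Contaminant concentration (mg/m^3) in a well-mixed space.

    g_mg_min    -- emission rate (mg/min), hypothetical
    q_m3_min    -- total airflow through the space (m^3/min), hypothetical
    recirc_frac -- fraction of exhaust recirculated (0..1)
    filter_eff  -- filter removal efficiency for this contaminant (0..1)
    """
    # Recirculated air returns carrying the fraction (1 - filter_eff)
    # of its contaminant load, reducing the effective dilution.
    return g_mg_min / (q_m3_min * (1.0 - recirc_frac * (1.0 - filter_eff)))

# A vapor (filter_eff ~ 0) concentrates as recirculation rises; a well-filtered
# particulate (filter_eff ~ 1) is nearly unaffected -- the pattern the study
# reports for HDI vapor versus strontium chromate particulate.
for r in (0.0, 0.75):
    print(r,
          steady_state_concentration(100.0, 500.0, r, 0.0),    # vapor-like
          steady_state_concentration(100.0, 500.0, r, 0.99))   # filtered particulate
```

    With these hypothetical numbers the unfiltered vapor concentration rises by a factor of four at 75% recirculation, matching the roughly fourfold HDI increase reported above.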

  6. How do A-train Sensors Intercompare in the Retrieval of Above-Cloud Aerosol Optical Depth? A Case Study-based Assessment

    SciTech Connect (OSTI)

    Jethva, Hiren T.; Torres, Omar; Waquet, Fabien; Chand, Duli; Hu, Yong X.

    2014-01-15

    We inter-compare the above-cloud aerosol optical depth (ACAOD) of biomass burning plumes retrieved from different A-train sensors, i.e., MODIS, CALIOP, POLDER, and OMI. These sensors have shown independent capabilities to detect and retrieve aerosol loading above marine boundary layer clouds--a situation often found over the Southeast Atlantic Ocean during the dry burning season. A systematic one-to-one comparison reveals that, in general, all passive sensors and CALIOP-based research methods derive comparable ACAOD, with differences mostly within 0.2 over homogeneous cloud fields. The 532-nm ACAOD retrieved by the CALIOP operational algorithm is largely underestimated; however, its 1064-nm AOD, when converted to 500 nm, shows closer agreement with the passive sensors. Given the different types of sensor measurements processed with different algorithms, the close agreement between them is encouraging. Due to the lack of adequate direct measurements above cloud, the validation of satellite-based ACAOD retrievals remains an open challenge. The inter-satellite comparison, however, can be useful for relative evaluation and consistency checks.

  7. Automated Voxel-Based Analysis of Volumetric Dynamic Contrast-Enhanced CT Data Improves Measurement of Serial Changes in Tumor Vascular Biomarkers

    SciTech Connect (OSTI)

    Coolens, Catherine; Driscoll, Brandon; Chung, Caroline; Shek, Tina; Gorjizadeh, Alborz; MĂ©nard, Cynthia; Jaffray, David

    2015-01-01

    Objectives: Development of perfusion imaging as a biomarker requires more robust methodologies for quantification of tumor physiology that allow assessment of volumetric tumor heterogeneity over time. This study proposes a parametric method for automatically analyzing perfused tissue from volumetric dynamic contrast-enhanced (DCE) computed tomography (CT) scans and assesses whether this 4-dimensional (4D) DCE approach is more robust and accurate than conventional, region-of-interest (ROI)-based CT methods in quantifying tumor perfusion with preliminary evaluation in metastatic brain cancer. Methods and Materials: Functional parameter reproducibility and analysis of sensitivity to imaging resolution and arterial input function were evaluated in image sets acquired from a 320-slice CT with a controlled flow phantom and patients with brain metastases, whose treatments were planned for stereotactic radiation surgery and who consented to a research ethics board-approved prospective imaging biomarker study. A voxel-based temporal dynamic analysis (TDA) methodology was used at baseline, at day 7, and at day 20 after treatment. The ability to detect changes in kinetic parameter maps in clinical data sets was investigated for both 4D TDA and conventional 2D ROI-based analysis methods. Results: A total of 7 brain metastases in 3 patients were evaluated over the 3 time points. The 4D TDA method showed improved spatial efficacy and accuracy of perfusion parameters compared to ROI-based DCE analysis (P<.005), with a reproducibility error of less than 2% when tested with DCE phantom data. Clinically, changes in transfer constant from the blood plasma into the extracellular extravascular space (K{sub trans}) were seen when using TDA, with substantially smaller errors than the 2D method on both day 7 post radiation surgery (±13%; P<.05) and by day 20 (±12%; P<.04). Standard methods showed a decrease in K{sub trans} but with large uncertainty (111.6 ± 150.5) %. Conclusions

  8. Comparing urban solid waste recycling from the viewpoint of urban metabolism based on physical input-output model: A case of Suzhou in China

    SciTech Connect (OSTI)

    Liang Sai; Zhang Tianzhu

    2012-01-15

    Highlights: ‱ Impacts of solid waste recycling on Suzhou's urban metabolism in 2015 are analyzed. ‱ Sludge recycling for biogas is regarded as an acceptable method. ‱ Technical levels of reusing scrap tires and food wastes should be improved. ‱ Other fly ash utilization methods should be exploited. ‱ Secondary wastes from reusing food wastes and sludge warrant attention. - Abstract: Investigating the impacts of urban solid waste recycling on urban metabolism contributes to sustainable urban solid waste management and urban sustainability. Using a physical input-output model and scenario analysis, the urban metabolism of Suzhou in 2015 is predicted and the impacts of four categories of solid waste recycling on urban metabolism are illustrated: scrap tire recycling, food waste recycling, fly ash recycling, and sludge recycling. Sludge recycling has positive effects on reducing all material flows; thus, sludge recycling for biogas is regarded as an acceptable method. Moreover, the technical levels of scrap tire recycling and food waste recycling should be improved to produce positive effects on reducing more material flows. Fly ash recycling for cement production has negative effects on reducing all material flows except solid wastes; thus, other fly ash utilization methods should be exploited. In addition, the utilization and treatment of secondary wastes from food waste recycling and sludge recycling warrant attention.
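
    A physical input-output model of the Leontief form x = (I - A){sup -1}y underlies this kind of urban-metabolism accounting: recycling lowers the physical input coefficients in A, and the change in total throughput x measures the metabolic effect. A minimal sketch with hypothetical coefficients (not the Suzhou data) is:

```python
import numpy as np

# Minimal Leontief-style physical input-output calculation -- a generic
# sketch of the model class named in the abstract, not the Suzhou dataset.
# x = (I - A)^-1 y : total material throughput x needed to deliver final
# demand y, given the physical input coefficient matrix A.

A = np.array([[0.10, 0.25],     # hypothetical coefficients: row i = input of
              [0.05, 0.15]])    # material i per unit output of sector j
y = np.array([100.0, 50.0])     # hypothetical final demand (kt/yr)

x_base = np.linalg.solve(np.eye(2) - A, y)

# A recycling scenario that returns part of a waste flow as an input lowers
# the relevant coefficient in A; comparing throughputs shows the effect.
A_recycle = A - np.array([[0.02, 0.0],
                          [0.0,  0.0]])
x_recycle = np.linalg.solve(np.eye(2) - A_recycle, y)
print(x_base, x_recycle, x_base - x_recycle)   # reduction in material flows
```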

  9. A Complexity Science-Based Framework for Global Joint Operations Analysis to Support Force Projection: LDRD Final Report.

    SciTech Connect (OSTI)

    Lawton, Craig R.

    2015-01-01

    The military is undergoing a significant transformation as it modernizes for the information age and adapts to address an emerging asymmetric threat beyond traditional cold-war-era adversaries. Techniques such as traditional large-scale, joint-services war gaming analysis are no longer adequate to support program evaluation activities and mission planning analysis at the enterprise level because the operating environment is evolving too quickly. New analytical capabilities are necessary to address modernization of the Department of Defense (DoD) enterprise. This presents a significant opportunity for Sandia to support the nation at this transformational enterprise scale. Although Sandia has significant experience with engineering systems of systems (SoS) and Complex Adaptive Systems of Systems (CASoS), significant fundamental research is required to develop modeling, simulation, and analysis capabilities at the enterprise scale. This report documents an enterprise modeling framework which will enable senior-level decision makers to better understand their enterprise and required future investments.

  10. Preliminary analysis of ground-based microwave and infrared radiance observations during the Pilot Radiation OBservation Experiment

    SciTech Connect (OSTI)

    Westwater, E.R.; Han, Y.; Churnside, J.H.; Snider, J.B.

    1995-04-01

    During Phase Two of the Pilot Radiation OBservation Experiment (PROBE), held in Kavieng, Papua New Guinea, the National Oceanic and Atmospheric Administration's Environmental Technology Laboratory (ETL) operated both microwave and infrared radiometers. Phase Two lasted from January 6 to February 28, 1993. The dramatic differences in the water vapor environment between the tropics and mid-latitudes were illustrated by Westwater et al. (1994), who presented PROBE data as well as additional data taken during the 1991 First ISCCP Regional Experiment (FIRE) II in Coffeyville, Kansas. We present an analysis of microwave data and a preliminary analysis of infrared data obtained during PROBE.

  11. The use of representative cases in hazard analysis of the tank waste remediation system at Hanford. The information in this document is a combination of HNF-SA-3168-A & HNF-SA-3169-A - The control identification process

    SciTech Connect (OSTI)

    Niemi, B.J.

    1997-04-24

    During calendar year 1996, Duke Engineering and Services Hanford, Inc. conducted a safety analysis in accordance with DOE-STD-3009-94 as part of the development of a Final Safety Analysis Report (FSAR) for the Tank Waste Remediation System (TWRS) at the DOE Hanford site. The scope of the safety analysis of TWRS primarily addressed 177 large underground liquid waste storage tanks and associated equipment for transferring waste to and from the tanks. The waste in the tanks was generated by the nuclear production and processing facilities at Hanford. The challenge facing the safety analysis team was to efficiently analyze the system within the time and budget allotted to provide the necessary and sufficient information for accident selection, control identification, and justification of the acceptability of the level of safety of TWRS. It was clear from the start that a hazard and accident analysis for each of the 177 similar tanks and supporting equipment was neither practical nor necessary. For example, many of the tanks were similar enough that the results of the analysis of one tank would apply to many tanks. This required the development and use of a tool called the "Hazard Topography". The use of the Hazard Topography assured that all tank operations and configurations were adequately assessed in the hazard analysis and that the results (e.g., hazard identification and control decisions) were appropriately applied to all tanks and associated systems. The TWRS Hazard Topography was a database of all the TWRS facilities (e.g., tanks, diversion boxes, transfer lines, and related facilities) along with data on their configuration, material at risk (MAR), hazards, and known safety-related phenomenological issues. Facilities were then classified into groups based on similar combinations of configuration, MAR, hazards, and phenomena. A hazard evaluation was performed for a tank or facility in each group. The results of these evaluations, also contained in a database, were

  12. Building America Case Study: Apartment Compartmentalization with an Aerosol-Based Sealing Process - Queens, NY; Technology Solutions for New and Existing Homes, Energy Efficiency & Renewable Energy (EERE)

    SciTech Connect (OSTI)

    2015-07-01

    Air sealing of building enclosures is a difficult and time-consuming process. Current methods in new construction require laborers to physically locate small and sometimes large holes in multiple assemblies and then manually seal each of them. The innovation demonstrated under this research study was the automated air sealing and compartmentalization of buildings through the use of an aerosolized sealant, developed by the Western Cooling Efficiency Center at the University of California, Davis.
    CARB (the Consortium for Advanced Residential Buildings) sought to demonstrate this new technology application in a multifamily building in Queens, NY. The effectiveness of the sealing process was evaluated by three methods: air leakage testing of the overall apartment before and after sealing, point-source testing of individual leaks, and pressure measurements in the walls of the target apartment during sealing. Aerosolized sealing was successful by several measures in this study. Many individual leaks that are labor-intensive to address separately were well sealed by the aerosol particles. In addition, many diffuse leaks that are difficult to identify and treat were also sealed. The aerosol-based sealing process resulted in an average reduction of 71% in air leakage across three apartments and an average apartment airtightness of 0.08 CFM50/SF of enclosure area.

  13. A GIS-based Adaptive Management Decision Support System to Develop a Multi-Objective Framework: A case study utilizing GIS technologies and physically-based models to achieve improved decision making for site management.

    SciTech Connect (OSTI)

    Coleman, Andre M.; Wigmosta, Mark S.; Lane, Leonard J.; Tagestad, Jerry D.; Roberts, Damon

    2008-06-26

    The notion of Adaptive Management (AM) allows for the realization and adjustment of management practices in response to elements of uncertainty. In terms of natural resource management, this typically integrates monitoring, databases, simulation modeling, decision theory, and expert judgment to evaluate management alternatives and adapt them as necessary to continually improve the natural resource condition as defined by the stakeholders. Natural resource management scenarios can often be expressed, viewed, and understood as a spatial and temporal problem. The integration of Geographic Information System (GIS) technologies and physically-based models provides an effective state-of-the-art solution for deriving, understanding, and applying AM scenarios for land use and remediation. A recently developed GIS-based adaptive management decision support system is presented for the U.S. Department of Defense Yakima Training Center near Yakima, Washington.

  14. Genome-Based Metabolic Mapping and 13C Flux Analysis Reveal Systematic Properties of an Oleaginous Microalga Chlorella protothecoides

    SciTech Connect (OSTI)

    Wu, Chao; Xiong, Wei; Dai, Junbiao; Wu, Qingyu

    2014-12-15

    We report an integrated, genome-based flux balance analysis, metabolomics, and 13C-label profiling of phototrophic and heterotrophic metabolism in Chlorella protothecoides, an oleaginous green alga for biofuel. The green alga Chlorella protothecoides, capable of autotrophic and heterotrophic growth with rapid lipid synthesis, is a promising candidate for biofuel production. Based on the newly available genome knowledge of the alga, we reconstructed the compartmentalized metabolic network consisting of 272 metabolic reactions, 270 enzymes, and 461 encoding genes and simulated growth in different cultivation conditions with flux balance analysis. Phenotype-phase plane analysis shows the conditions achieving the theoretical maximum of biomass and the corresponding fatty acid-producing rate for phototrophic cells (the ratio of photon uptake rate to CO2 uptake rate equals 8.4) and heterotrophic ones (the glucose uptake rate to O2 consumption rate reaches 2.4), respectively. Isotope-assisted liquid chromatography-mass spectrometry/mass spectrometry reveals higher metabolite concentrations in the glycolytic pathway and the tricarboxylic acid cycle in heterotrophic cells compared with autotrophic cells. We also observed enhanced levels of ATP, reduced nicotinamide adenine dinucleotide (phosphate), acetyl-Coenzyme A, and malonyl-Coenzyme A in heterotrophic cells, consistent with a strong activity of lipid synthesis. To profile the flux map under experimental conditions, we applied nonstationary 13C metabolic flux analysis as a complementary strategy to flux balance analysis. The results reveal negligible photorespiratory fluxes and a metabolically low-active tricarboxylic acid cycle in phototrophic C. protothecoides. In comparison, high throughput of amphibolic reactions and the tricarboxylic acid cycle with no glyoxylate shunt activity was measured for heterotrophic cells. Lastly, taken together, the

  15. Analysis of plug-in hybrid electric vehicles' utility factors using GPS-based longitudinal travel data

    SciTech Connect (OSTI)

    Wu, Xing; Aviquzzaman, Md.; Lin, Zhenhong

    2015-05-29

    The benefit of using a PHEV comes from its ability to substitute gasoline with electricity in operation. Defined as the proportion of distance traveled in the electric mode, the utility factor (UF) depends mostly on the battery capacity, but also on many other factors, such as travel pattern and recharging pattern. Conventionally, UFs are calculated based on the daily vehicle miles traveled (DVMT), assuming motorists leave home in the morning with a full battery and that no charging occurs before returning home in the evening. Such an assumption, however, ignores the impact of heterogeneity in both travel and charging behavior, such as returning home more than once in a day, the impact of available charging time, and the price of gasoline. In addition, the conventional UFs are based on the National Household Travel Survey (NHTS) data, which provide one day of travel data for each sample vehicle, so a motorist's day-to-day distance variation is ignored. This paper employs GPS-based longitudinal travel data (covering 3-18 months) collected from 403 vehicles in the Seattle metropolitan area to investigate how such travel and charging behavior affects UFs. To do this, for each vehicle, we organized trips into a series of home- and work-related tours. The UFs based on the DVMT are found to be close to those based on home-to-home tours. However, workplace charging opportunities significantly increase UFs if the CD range is no more than 40 miles.

  16. Analysis of plug-in hybrid electric vehicles' utility factors using GPS-based longitudinal travel data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wu, Xing; Aviquzzaman, Md.; Lin, Zhenhong

    2015-05-29

    The benefit of using a PHEV comes from its ability to substitute gasoline with electricity in operation. Defined as the proportion of distance traveled in the electric mode, the utility factor (UF) depends mostly on the battery capacity, but also on many other factors, such as travel pattern and recharging pattern. Conventionally, UFs are calculated based on the daily vehicle miles traveled (DVMT), assuming motorists leave home in the morning with a full battery and that no charging occurs before returning home in the evening. Such an assumption, however, ignores the impact of heterogeneity in both travel and charging behavior, such as returning home more than once in a day, the impact of available charging time, and the price of gasoline. In addition, the conventional UFs are based on the National Household Travel Survey (NHTS) data, which provide one day of travel data for each sample vehicle, so a motorist's day-to-day distance variation is ignored. This paper employs GPS-based longitudinal travel data (covering 3-18 months) collected from 403 vehicles in the Seattle metropolitan area to investigate how such travel and charging behavior affects UFs. To do this, for each vehicle, we organized trips into a series of home- and work-related tours. The UFs based on the DVMT are found to be close to those based on home-to-home tours. However, workplace charging opportunities significantly increase UFs if the CD range is no more than 40 miles.
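
    For readers who want the arithmetic, the conventional DVMT-based utility factor described above reduces to capping each day's distance at the charge-depleting (CD) range and dividing by the total distance. A minimal sketch with hypothetical daily mileages (not the Seattle GPS data used in the paper) is:

```python
# Sketch of the conventional utility-factor calculation described above,
# under its stated assumption: a full battery each morning and no charging
# before the evening return home. Inputs are hypothetical daily VMT values.

def utility_factor(daily_miles, cd_range):
    """Fraction of total distance driven in charge-depleting (electric) mode."""
    electric = sum(min(d, cd_range) for d in daily_miles)  # capped at CD range
    total = sum(daily_miles)
    return electric / total

days = [12, 35, 8, 60, 22, 15, 90, 10]    # hypothetical daily VMT (miles)
print(utility_factor(days, cd_range=40))  # ~0.72 for a 40-mile CD range
```

    Tour-based UFs, and the workplace-charging effect the paper reports, follow the same capping idea but reset the available electric range at each qualifying charging opportunity rather than once per day.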

  17. Sustainability & Strategic Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... Technologies Office * Design cases of biofuel pathways * Environmental sustainability metrics for conversion stage * GREET analysis of full pathway to identify drivers of GHG ...

  18. Comparative analysis of quantum cascade laser modeling based on density matrices and non-equilibrium Green's functions

    SciTech Connect (OSTI)

    Lindskog, M.; Wacker, A.; Wolf, J. M.; Liverini, V.; Faist, J.; Trinite, V.; Maisons, G.; Carras, M.; Aidam, R.; Ostendorf, R.

    2014-09-08

    We study the operation of an 8.5 ÎŒm quantum cascade laser based on GaInAs/AlInAs lattice matched to InP using three different simulation models based on density matrix (DM) and non-equilibrium Green's function (NEGF) formulations. The latter advanced scheme serves as a validation for the simpler DM schemes and, at the same time, provides additional insight, such as the temperatures of the sub-band carrier distributions. We find that for the particular quantum cascade laser studied here, the behavior is well described by simple quantum mechanical estimates based on Fermi's golden rule. As a consequence, the DM model, which includes second order currents, agrees well with the NEGF results. Both these simulations are in accordance with previously reported data and a second regrown device.

  19. Analysis of axial-induction-based wind plant control using an engineering and a high-order wind plant model

    SciTech Connect (OSTI)

    Annoni, Jennifer; Gebraad, Pieter M. O.; Scholbrock, Andrew K.; Fleming, Paul A.; Wingerden, Jan-Willem van

    2015-08-14

    Wind turbines are typically operated to maximize their performance without considering the impact of wake effects on nearby turbines. Wind plant control concepts aim to increase overall wind plant performance by coordinating the operation of the turbines. This paper focuses on axial-induction-based wind plant control techniques, in which the generator torque or blade pitch degrees of freedom of the wind turbines are adjusted. The paper addresses discrepancies between a high-order wind plant model and an engineering wind plant model. Changes in the engineering model are proposed to better capture the effects of axial-induction-based control shown in the high-order model.

  20. BPO crude oil analysis data base user's guide: Methods, publications, computer access, correlations, uses, availability

    SciTech Connect (OSTI)

    Sellers, C.; Fox, B.; Paulz, J.

    1996-03-01

    The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

  1. Multi-scaled normal mode analysis method for dynamics simulation of protein-membrane complexes: A case study of potassium channel gating motion correlations

    SciTech Connect (OSTI)

    Wu, Xiaokun; Han, Min; Ming, Dengming

    2015-10-07

    Membrane proteins play critically important roles in many cellular activities, such as ion and small-molecule transport, signal recognition, and transduction. In order to fulfill their functions, these proteins must be placed in different membrane environments, and a variety of protein-lipid interactions may affect their behavior. One of the key effects of protein-lipid interactions is their ability to change the dynamic status of membrane proteins, thus adjusting their functions. Here, we present a multi-scaled normal mode analysis (mNMA) method to study the dynamics perturbation imposed on membrane proteins by lipid bilayer membrane fluctuations. In mNMA, channel proteins are simulated at the all-atom level while the membrane is described with a coarse-grained model. mNMA calculations clearly show that channel gating motion can couple tightly with a variety of membrane deformations, including bending and twisting. We then examined bi-channel systems in which two channels were separated by different distances. From mNMA calculations, we observed both positive and negative gating correlations between two neighboring channels, and the correlation reaches a maximum when the channel center-to-center distance is close to 2.5 times their diameter. This distance is larger than the recently found maximum attraction distance between two proteins embedded in a membrane, which is 1.5 times the protein size, indicating that membrane fluctuations might impose collective motions among proteins within a larger area. The hybrid-resolution feature of mNMA provides atomic dynamics information for key components in the system without large computational cost. We expect it to become a routine simulation tool for ordinary laboratories studying the dynamics of very complicated biological assemblies. The source code is available upon request to the authors.
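
    The core operation inside any normal mode analysis, including mNMA, is diagonalizing a mass-weighted Hessian and reading off the low-frequency modes that dominate large-scale motion. A bare-bones sketch on a hypothetical two-particle system (far simpler than the paper's hybrid all-atom/coarse-grained treatment) is:

```python
import numpy as np

# Bare-bones normal mode analysis: diagonalize a mass-weighted Hessian and
# read off mode frequencies and displacement patterns. A generic NMA step,
# not the multi-scaled mNMA method of the paper.

def normal_modes(hessian, masses):
    """Return eigenvalues (frequency^2, ascending) and modes of a mass-weighted Hessian."""
    m = np.repeat(masses, 3)                # one mass per x, y, z coordinate
    mw = hessian / np.sqrt(np.outer(m, m))  # mass-weight: H_ij / sqrt(m_i m_j)
    w2, modes = np.linalg.eigh(mw)          # symmetric eigendecomposition
    return w2, modes

# Tiny hypothetical 2-particle system (6 coordinates) with a random
# symmetric positive semi-definite Hessian, just to exercise the routine.
rng = np.random.default_rng(0)
k = rng.normal(size=(6, 6))
hess = k @ k.T
w2, modes = normal_modes(hess, masses=np.array([12.0, 16.0]))
print(w2[:3])   # the softest (lowest-frequency) modes dominate large motions
```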

  2. FINAL SIMULATION RESULTS FOR DEMONSTRATION CASE 1 AND 2

    SciTech Connect (OSTI)

    David Sloan; Woodrow Fiveland

    2003-10-15

    The goal of this DOE Vision-21 project work scope was to develop an integrated suite of software tools that could be used to simulate and visualize advanced plant concepts. Existing process simulation software did not meet the DOE's objective of "virtual simulation", which was needed to evaluate complex cycles. The overall intent of the DOE was to improve predictive tools for cycle analysis and to improve the component models that are used in turn to simulate equipment in the cycle. Advanced component models are available; however, a generic coupling capability that would link the advanced component models to the cycle simulation software remained to be developed. In the current project, the coupling of the cycle analysis and cycle component simulation software was based on an existing suite of programs. The challenge was to develop a general-purpose software and communications link between the cycle analysis software Aspen Plus{reg_sign} (marketed by Aspen Technology, Inc.) and specialized component modeling packages, as exemplified by industrial proprietary codes (utilized by ALSTOM Power Inc.) and the FLUENT{reg_sign} computational fluid dynamics (CFD) code (provided by Fluent Inc.). A software interface and controller, based on the open CAPE-OPEN standard, has been developed and extensively tested. Various test runs and demonstration cases have been utilized to confirm the viability and reliability of the software. ALSTOM Power was tasked with the responsibility to select and run two demonstration cases to test the software--(1) a conventional steam cycle (designated as Demonstration Case 1), and (2) a combined cycle test case (designated as Demonstration Case 2). Demonstration Case 1 is a 30 MWe coal-fired power plant for municipal electricity generation, while Demonstration Case 2 is a 270 MWe, natural gas-fired, combined cycle power plant. Sufficient data was available from the operation of both power plants to complete the cycle configurations. Three runs
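
    The coupling pattern described here -- a cycle solver driving detailed component models through a neutral lifecycle interface, in the spirit of the CAPE-OPEN unit-operation standard -- can be sketched schematically as follows. The class and method names are illustrative only, not the project's actual API:

```python
# Schematic of the flowsheet/component coupling pattern described above.
# Hypothetical names; a sketch of the idea, not the project's interface.

from abc import ABC, abstractmethod

class UnitOperation(ABC):
    """Neutral interface a cycle solver can call without knowing the code behind it."""

    @abstractmethod
    def initialize(self, parameters: dict) -> None:
        ...

    @abstractmethod
    def calculate(self, inlet_streams: dict) -> dict:
        """Map inlet stream states to outlet stream states."""

    @abstractmethod
    def terminate(self) -> None:
        ...

class CfdBoilerProxy(UnitOperation):
    """Stand-in for a detailed component model (e.g., a CFD code) behind the interface."""

    def initialize(self, parameters):
        self.efficiency = parameters.get("efficiency", 0.88)

    def calculate(self, inlet_streams):
        fuel = inlet_streams["fuel_mw"]            # fuel heat input (MW)
        return {"steam_mw": self.efficiency * fuel}

    def terminate(self):
        pass

# The flowsheet solver iterates the cycle, calling each component in turn:
boiler = CfdBoilerProxy()
boiler.initialize({"efficiency": 0.90})
print(boiler.calculate({"fuel_mw": 100.0}))        # {'steam_mw': 90.0}
boiler.terminate()
```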

  3. Attack Methodology Analysis: Emerging Trends in Computer-Based Attack Methodologies and Their Applicability to Control System Networks

    SciTech Connect (OSTI)

    Bri Rolston

    2005-06-01

    Threat characterization is a key component in evaluating the threat faced by control systems. Without a thorough understanding of the threat faced by critical infrastructure networks, adequate resources cannot be allocated or directed effectively to the defense of these systems. Traditional methods of threat analysis focus on identifying the capabilities and motivations of a specific attacker, assessing the value the adversary would place on targeted systems, and deploying defenses according to the threat posed by the potential adversary. However, so many effective exploits and tools exist, easily accessible to anyone with an Internet connection, minimal technical skills, and a greatly reduced motivational threshold, that the field of potential adversaries can no longer be narrowed effectively. Understanding how hackers evaluate new IT security research and incorporate significant new ideas into their own tools provides a means of anticipating how IT systems are most likely to be attacked in the future. This research, Attack Methodology Analysis (AMA), could supply pertinent information on how to detect and stop new types of attacks. Since the exploit methodologies and attack vectors developed in the general Information Technology (IT) arena can be converted for use against control system environments, assessing areas in which cutting-edge exploit development and remediation techniques are occurring can provide significant intelligence for control system network exploitation and defense, and a means of assessing threat without identifying the specific capabilities of individual opponents. Attack Methodology Analysis begins with the study of the exploit technology and attack methodologies being developed in the Information Technology (IT) security research community, within the black-hat and white-hat communities. Once a solid understanding of the cutting-edge security research is established, emerging trends in attack methodology can be identified and the gap between

  4. An enhanced droplet-based liquid microjunction surface sampling system coupled with HPLC-ESI-MS/MS for spatially resolved analysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Van Berkel, Gary J.; Weiskittel, Taylor M.; Kertesz, Vilmos

    2014-11-07

    Droplet-based liquid microjunction surface sampling coupled with high-performance liquid chromatography (HPLC)-electrospray ionization (ESI)-tandem mass spectrometry (MS/MS) for spatially resolved analysis provides the possibility of effective analysis of complex matrix samples and can provide a greater degree of chemical information from a single spot sample than is typically possible with a direct analysis of an extract. Described here is the setup and enhanced capabilities of a discrete droplet liquid microjunction surface sampling system employing a commercially available CTC PAL autosampler. The system enhancements include incorporation of a laser distance sensor enabling unattended analysis of samples and sample locations of dramatically disparate height as well as reliably dispensing just 0.5 ÎŒL of extraction solvent to make the liquid junction to the surface, wherein the extraction spot size was confined to an area about 0.7 mm in diameter; software modifications improving the spatial resolution of sampling spot selection from 1.0 to 0.1 mm; use of an open bed tray system to accommodate samples as large as whole-body rat thin tissue sections; and custom sample/solvent holders that shorten sampling time to approximately 1 min per sample. Lastly, the merit of these new features was demonstrated by spatially resolved sampling, HPLC separation, and mass spectral detection of pharmaceuticals and metabolites from whole-body rat thin tissue sections and razor blade (“crude”) cut mouse tissue.

  5. Impact of Boost Radiation in the Treatment of Ductal Carcinoma In Situ: A Population-Based Analysis

    SciTech Connect (OSTI)

    Rakovitch, Eileen; Institute for Clinical Evaluative Sciences, Toronto, Ontario; University of Toronto, Toronto, Ontario ; Narod, Steven A.; Women’s College Research Institute, Toronto, Ontario ; Nofech-Moses, Sharon; Hanna, Wedad; University of Toronto, Toronto, Ontario ; Thiruchelvam, Deva; Saskin, Refik; Taylor, Carole; Tuck, Alan; Youngson, Bruce; Miller, Naomi; Done, Susan J.; Sengupta, Sandip; Elavathil, Leela; Henderson General Hospital, 711 Concession Street, Hamilton, Ontario ; Jani, Prashant A.; Regional Health Sciences Centre, Thunder Bay, Ontario ; Bonin, Michel; Metcalfe, Stephanie; Paszat, Lawrence; Institute for Clinical Evaluative Sciences, Toronto, Ontario; University of Toronto, Toronto, Ontario

    2013-07-01

    Purpose: To report the outcomes of a population of women with ductal carcinoma in situ (DCIS) treated with breast-conserving surgery and radiation and to evaluate the independent effect of boost radiation on the development of local recurrence. Methods and Materials: All women diagnosed with DCIS and treated with breast-conserving surgery and radiation therapy in Ontario from 1994 to 2003 were identified. Treatments and outcomes were identified through administrative databases and validated by chart review. The impact of boost radiation on the development of local recurrence was determined using survival analyses. Results: We identified 1895 cases of DCIS that were treated by breast-conserving surgery and radiation therapy; 561 patients received boost radiation. The cumulative 10-year rate of local recurrence was 13% for women who received boost radiation and 12% for those who did not (P=.3). The 10-year local recurrence-free survival (LRFS) rate among women who did and who did not receive boost radiation was 88% and 87%, respectively (P=.27), 94% and 93% for invasive LRFS (P=.58), and was 95% and 93% for DCIS LRFS (P=.31). On multivariable analyses, boost radiation was not associated with a lower risk of local recurrence (hazard ratio = 0.82, 95% confidence interval 0.59-1.15) (P=.25). Conclusions: Among a population of women treated with breast-conserving surgery and radiation for DCIS, additional (boost) radiation was not associated with a lower risk of local or invasive recurrence.

  6. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    SciTech Connect (OSTI)

    Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie; Mandelli, Diego; Smith, Curtis Lee

    2015-09-01

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

  7. Storm-water characterization and lagoon sediment analysis, Grand Forks Air Force Base, North Dakota. Final report

    SciTech Connect (OSTI)

    Garland, J.G.; Vaughn, R.W.; Scott, P.T.

    1990-08-01

    Sampling was conducted in the wastewater treatment lagoons and stormwater runoff at Grand Forks AFB. The base was concerned about whether the unlined lagoons were creating a potential groundwater contamination problem and whether their stormwater runoff met North Dakota state stream standards. Lagoon sediment did not contain Extraction Procedure hazardous chemicals. Stormwater runoff exceeded state standards for boron, phosphates, and phenols and contained trace levels of methylene chloride. Characterization of lagoon influent showed it to be generally representative of domestic sewage but also to contain trace levels of boron, phenols, toluene, cyanide, chloroform, methylene chloride, and ethyl benzene.

  8. Computational analysis of an autophagy/translation switch based on mutual inhibition of MTORC1 and ULK1

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    SzymaƄska, Paulina; Martin, Katie R.; MacKeigan, Jeffrey P.; Hlavacek, William S.; Lipniacki, Tomasz

    2015-03-11

    We constructed a mechanistic, computational model for regulation of (macro)autophagy and protein synthesis (at the level of translation). The model was formulated to study the system-level consequences of interactions among the following proteins: two key components of MTOR complex 1 (MTORC1), namely the protein kinase MTOR (mechanistic target of rapamycin) and the scaffold protein RPTOR; the autophagy-initiating protein kinase ULK1; and the multimeric energy-sensing AMP-activated protein kinase (AMPK). Inputs of the model include intrinsic AMPK kinase activity, which is taken as an adjustable surrogate parameter for cellular energy level or AMP:ATP ratio, and rapamycin dose, which controls MTORC1 activity. Outputs of the model include the phosphorylation level of the translational repressor EIF4EBP1, a substrate of MTORC1, and the phosphorylation level of AMBRA1 (activating molecule in BECN1-regulated autophagy), a substrate of ULK1 critical for autophagosome formation. The model incorporates reciprocal regulation of MTORC1 and ULK1 by AMPK, mutual inhibition of MTORC1 and ULK1, and ULK1-mediated negative feedback regulation of AMPK. Through analysis of the model, we find that these processes may be responsible, depending on conditions, for graded responses to stress inputs, for bistable switching between autophagy and protein synthesis, or for relaxation oscillations, comprising alternating periods of autophagy and protein synthesis. A sensitivity analysis indicates that the prediction of oscillatory behavior is robust to changes of the parameter values of the model. The model provides testable predictions about the behavior of the AMPK-MTORC1-ULK1 network, which plays a central role in maintaining cellular energy and nutrient homeostasis.

  9. Computational analysis of an autophagy/translation switch based on mutual inhibition of MTORC1 and ULK1

    SciTech Connect (OSTI)

    SzymaƄska, Paulina; Martin, Katie R.; MacKeigan, Jeffrey P.; Hlavacek, William S.; Lipniacki, Tomasz

    2015-03-11

    We constructed a mechanistic, computational model for regulation of (macro)autophagy and protein synthesis (at the level of translation). The model was formulated to study the system-level consequences of interactions among the following proteins: two key components of MTOR complex 1 (MTORC1), namely the protein kinase MTOR (mechanistic target of rapamycin) and the scaffold protein RPTOR; the autophagy-initiating protein kinase ULK1; and the multimeric energy-sensing AMP-activated protein kinase (AMPK). Inputs of the model include intrinsic AMPK kinase activity, which is taken as an adjustable surrogate parameter for cellular energy level or AMP:ATP ratio, and rapamycin dose, which controls MTORC1 activity. Outputs of the model include the phosphorylation level of the translational repressor EIF4EBP1, a substrate of MTORC1, and the phosphorylation level of AMBRA1 (activating molecule in BECN1-regulated autophagy), a substrate of ULK1 critical for autophagosome formation. The model incorporates reciprocal regulation of MTORC1 and ULK1 by AMPK, mutual inhibition of MTORC1 and ULK1, and ULK1-mediated negative feedback regulation of AMPK. Through analysis of the model, we find that these processes may be responsible, depending on conditions, for graded responses to stress inputs, for bistable switching between autophagy and protein synthesis, or for relaxation oscillations, comprising alternating periods of autophagy and protein synthesis. A sensitivity analysis indicates that the prediction of oscillatory behavior is robust to changes of the parameter values of the model. The model provides testable predictions about the behavior of the AMPK-MTORC1-ULK1 network, which plays a central role in maintaining cellular energy and nutrient homeostasis.
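
    The bistable switching attributed above to mutual inhibition can be reproduced with a generic two-component toggle motif. The toy model below is a textbook illustration of that mechanism only -- it is not the paper's MTORC1/ULK1 model, and its parameters are arbitrary:

```python
import numpy as np
from scipy.integrate import solve_ivp   # assumes SciPy is available

# Toy mutual-inhibition switch: two activities that repress each other via
# Hill functions. A generic motif illustrating bistability, NOT the paper's
# MTORC1/ULK1 model or its parameters.

def rhs(t, y, k=1.0, K=0.5, n=4, d=1.0):
    m, u = y                                # "MTORC1-like" and "ULK1-like" activities
    dm = k / (1 + (u / K) ** n) - d * m     # u represses production of m
    du = k / (1 + (m / K) ** n) - d * u     # m represses production of u
    return [dm, du]

# Two initial conditions settle into opposite stable states (bistability):
for y0 in ([0.9, 0.1], [0.1, 0.9]):
    sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-8)
    print(y0, "->", np.round(sol.y[:, -1], 3))   # one activity high, the other low
```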

  10. Quantifying the Impact of Immediate Reconstruction in Postmastectomy Radiation: A Large, Dose-Volume Histogram-Based Analysis

    SciTech Connect (OSTI)

    Ohri, Nisha; Cordeiro, Peter G.; Keam, Jennifer; Ballangrud, Ase; Shi Weiji; Zhang Zhigang; Nerbun, Claire T.; Woch, Katherine M.; Stein, Nicholas F.; Zhou Ying; McCormick, Beryl; Powell, Simon N.; Ho, Alice Y.

    2012-10-01

    Purpose: To assess the impact of immediate breast reconstruction on postmastectomy radiation (PMRT) using dose-volume histogram (DVH) data. Methods and Materials: Two hundred forty-seven women underwent PMRT at our center, 196 with implant reconstruction and 51 without reconstruction. Patients with reconstruction were treated with tangential photons, and patients without reconstruction were treated with en-face electron fields and customized bolus. Twenty percent of patients received internal mammary node (IMN) treatment. The DVH data were compared between groups. Ipsilateral lung parameters included V20 (% volume receiving 20 Gy), V40 (% volume receiving 40 Gy), mean dose, and maximum dose. Heart parameters included V25 (% volume receiving 25 Gy), mean dose, and maximum dose. IMN coverage was assessed when applicable. Chest wall coverage was assessed in patients with reconstruction. Propensity-matched analysis adjusted for potential confounders of laterality and IMN treatment. Results: Reconstruction was associated with lower lung V20, mean dose, and maximum dose compared with no reconstruction (all P<.0001). These associations persisted on propensity-matched analysis (all P<.0001). Heart doses were similar between groups (P=NS). Ninety percent of patients with reconstruction had excellent chest wall coverage (D95 >98%). IMN coverage was superior in patients with reconstruction (D95 >92.0 vs 75.7%, P<.001). IMN treatment significantly increased lung and heart parameters in patients with reconstruction (all P<.05) but minimally affected those without reconstruction (all P>.05). Among IMN-treated patients, only lower lung V20 in those without reconstruction persisted (P=.022), and mean and maximum heart doses were higher than in patients without reconstruction (P=.006, P=.015, respectively). Conclusions: Implant reconstruction does not compromise the technical quality of PMRT when the IMNs are untreated. Treatment technique, not reconstruction, is the primary
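
    The DVH metrics compared in this study are conceptually simple: Vx is the percentage of a structure's contoured voxels receiving at least x Gy. A minimal sketch with hypothetical per-voxel doses (not the study's planning data) is:

```python
import numpy as np

# How DVH metrics like V20 ("% volume receiving >= 20 Gy") are typically
# computed from a dose grid -- a generic sketch, not the study's software.

def v_dose(dose_gy, threshold_gy):
    """Percent of structure volume receiving at least threshold_gy."""
    return 100.0 * np.mean(dose_gy >= threshold_gy)

# Hypothetical per-voxel doses (Gy) for an ipsilateral-lung contour:
lung = np.array([5.0, 12.0, 22.0, 30.0, 41.0, 8.0, 19.0, 25.0])
print(v_dose(lung, 20.0))         # V20 (%)
print(v_dose(lung, 40.0))         # V40 (%)
print(lung.mean(), lung.max())    # mean and maximum dose (Gy)
```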

  11. Characterization of the Fracture Toughness of TRIP 800 Sheet Steels Using Microstructure-Based Finite Element Analysis

    SciTech Connect (OSTI)

    Soulami, Ayoub; Choi, Kyoo Sil; Liu, Wenning N.; Sun, Xin; Khaleel, Mohammad A.

    2009-04-01

    Recently, several studies conducted by the automotive industry revealed the tremendous advantages of Advanced High Strength Steels (AHSS). TRansformation Induced Plasticity (TRIP) steel is a typical representative of AHSS. This kind of material exhibits high strength as well as high formability. Analyzing the crack behaviour in TRIP steels is a challenging task due to the microstructure-level inhomogeneities between the different phases (ferrite, bainite, austenite, martensite) that constitute these materials. This paper aims at investigating the fracture resistance of TRIP steels. For this purpose, a micromechanical finite element model was developed based on the actual microstructure of a TRIP 800 steel. Uniaxial tensile tests on TRIP 800 sheet notched specimens were also conducted, and tensile properties and R-curves (resistance curves) were determined. The comparison between simulation and experimental results leads us to the conclusion that the method using a microstructure-based representative volume element (RVE) captures the complex behavior of TRIP steels well. The effect of the phase transformation that occurs during the deformation process on the toughness is observed and discussed.

  12. Controlling Wind Turbines for Secondary Frequency Regulation: An Analysis of AGC Capabilities Under New Performance Based Compensation Policy: Preprint

    SciTech Connect (OSTI)

    Aho, J.; Pao, L. Y.; Fleming, P.; Ela, E.

    2015-02-01

    As wind energy becomes a larger portion of the world's energy portfolio there has been an increased interest for wind turbines to control their active power output to provide ancillary services which support grid reliability. One of these ancillary services is the provision of frequency regulation, also referred to as secondary frequency control or automatic generation control (AGC), which is often procured through markets which recently adopted performance-based compensation. A wind turbine with a control system developed to provide active power ancillary services can be used to provide frequency regulation services. Simulations have been performed to determine the AGC tracking performance at various power schedule set-points, participation levels, and wind conditions. The performance metrics used in this study are based on those used by several system operators in the US. Another metric that is analyzed is the damage equivalent loads (DELs) on turbine structural components, though the impacts on the turbine electrical components are not considered. The results of these single-turbine simulations show that high performance scores can be achieved when there is sufficient wind resource available. The capability of a wind turbine to rapidly and accurately follow power commands allows for high performance even when tracking rapidly changing AGC signals. As the turbine de-rates to meet decreased power schedule set-points there is a reduction in the DELs, and the participation in frequency regulation has a negligible impact on these loads.

  13. Micromagnetic analysis of dynamical bubble-like solitons based on the time domain evolution of the topological density

    SciTech Connect (OSTI)

    Puliafito, Vito; Azzerboni, Bruno; Finocchio, Giovanni; Torres, Luis; Ozatay, Ozhan

    2014-05-07

    Dynamical bubble-like solitons have been recently investigated in nanocontact-based spin-torque oscillators with a perpendicular free layer. Such magnetic configurations can also be excited in different geometries, as long as they employ materials with perpendicular anisotropy. Thus, in this paper, a systematic study of the influence of both external field and high current on this kind of dynamics is performed for a spin-valve point-contact geometry in which both the free and fixed layers present strong perpendicular anisotropy. The use of the topological density tool highlights the excitation of complex bubble/antibubble configurations. In particular, at high currents, a deformation of the soliton and its simultaneous shift away from the contact area are observed and can be ascribed to the Oersted field. The results provide further detailed information on the excitation of solitons in perpendicular materials for applications in spintronics, magnonics, and domain wall logic.

  14. Analysis of the electrochemical characteristics of a direct methanol fuel cell based on a Pt-Ru/C anode catalyst

    SciTech Connect (OSTI)

    Arico, A.S.; Creti, P.; Mantegna, R.

    1996-12-31

    This paper deals with a vapour-feed direct methanol fuel cell (DMFC) based on a Nafion 117{reg_sign} solid polymer electrolyte. Pt-Ru/C and Pt/C catalysts were employed for methanol oxidation and oxygen reduction, respectively. Structure and surface chemistry of catalysts were investigated by X-ray powder diffraction (XRD) and X-ray photoelectron spectroscopy (XPS). Membrane/electrode assembly (M&E) was prepared by using a {open_quotes}paste process{close_quotes} method. Electrical power densities of about 150 mW cm{sup -2} were obtained at 95{degrees} C with Pt loadings of 0.8 and 0.5 mg cm{sup -2} at anode and cathode respectively.

  15. CORE-BASED INTEGRATED SEDIMENTOLOGIC, STRATIGRAPHIC, AND GEOCHEMICAL ANALYSIS OF THE OIL SHALE BEARING GREEN RIVER FORMATION, UINTA BASIN, UTAH

    SciTech Connect (OSTI)

    Lauren P. Birgenheier; Michael D. Vanden Berg,

    2011-04-11

    An integrated detailed sedimentologic, stratigraphic, and geochemical study of Utah's Green River Formation has found that Lake Uinta evolved in three phases: (1) a freshwater rising lake phase below the Mahogany zone, (2) an anoxic deep lake phase above the base of the Mahogany zone, and (3) a hypersaline lake phase within the middle and upper R-8. This long-term lake evolution was driven by tectonic basin development and the balance of sediment and water fill with the neighboring basins, as postulated by models developed for the Greater Green River Basin by Carroll and Bohacs (1999). Early Eocene abrupt global-warming events may have had significant control on deposition through the amount of sediment production and deposition rates, such that lean zones below the Mahogany zone record hyperthermal events and rich zones record periods between hyperthermals. This type of climatic control on short-term and long-term lake evolution and deposition has been previously overlooked. This geologic history contains key points relevant to oil shale development and engineering design, including: (1) Stratigraphic changes in oil shale quality and composition are systematic and can be related to spatial and temporal changes in the depositional environment and basin dynamics. (2) The inorganic mineral matrix of oil shale units changes significantly, from clay mineral/dolomite dominated to calcite dominated above the base of the Mahogany zone. This variation may result in significant differences in pyrolysis products and geomechanical properties relevant to development and should be incorporated into engineering experiments. (3) This study identifies a region in the Uinta Basin that would be highly prospective for application of in-situ production techniques. Stratigraphic targets for in-situ recovery techniques should extend above and below the Mahogany zone and include the upper R-6 and lower R-8.

  16. Ultrafast harmonic rf kicker design and beam dynamics analysis for an energy recovery linac based electron circulator cooler ring

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Huang, Yulu; Wang, Haipeng; Rimmer, Robert A.; Wang, Shaoheng; Guo, Jiquan

    2016-08-01

    An ultrafast kicker system is being developed for the energy recovery linac (ERL) based electron circulator cooler ring (CCR) in the proposed Jefferson Lab Electron Ion Collider (JLEIC, previously named MEIC). In the CCR, the injected electron bunches can be recirculated while performing ion cooling for 10–30 turns before extraction, thus reducing the recirculating beam current in the ERL to 1/10–1/30 (150 mA–50 mA) of the cooling beam current (up to 1.5 A). Assuming a bunch repetition rate of 476.3 MHz and a recirculating factor of 10 in the CCR, the kicker is required to operate at a pulse repetition rate of 47.63 MHz with a pulse width of around 2 ns, so that only every 10th bunch in the CCR will experience a transverse kick while the rest of the bunches will not be disturbed. Such a kicker pulse can be synthesized from ten harmonic modes of the 47.63 MHz kicker pulse repetition frequency, using up to four quarter-wavelength resonator (QWR) based deflecting cavities. In this paper, several methods to synthesize such a kicker waveform are discussed and a comparison of their beam dynamics performance is made using ELEGANT. Four QWR cavities are envisaged, with high transverse shunt impedance requiring less than 100 W of total rf power for a Flat-Top kick pulse. Multipole fields due to the asymmetry of this type of cavity are analyzed, and the transverse emittance growth due to the sextupole component is simulated in ELEGANT. Finally, off-axis injection and extraction issues and beam optics using a multicavity kick-drift scheme are also discussed.
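
    The harmonic-synthesis idea is easy to see numerically: summing the first ten harmonics of the 47.63 MHz repetition frequency yields a waveform that peaks on every 10th bunch of the 476.3 MHz train and passes exactly through zero on the other nine. Equal harmonic amplitudes are assumed below for illustration; the actual Flat-Top amplitudes and phases are design quantities from the paper:

```python
import numpy as np

# Sketch of the harmonic-synthesis idea described above: sum ten harmonics
# of the 47.63 MHz pulse repetition frequency so every 10th bunch of the
# 476.3 MHz train is kicked while the other nine see zero net deflection.
# Equal amplitudes are assumed here; the paper's cavities realize optimized
# amplitudes/phases (e.g., for a Flat-Top pulse).

f_rep = 47.63e6                        # kicker pulse repetition rate (Hz)
f_bunch = 476.3e6                      # bunch repetition rate (Hz), = 10 * f_rep

def kick(t):
    """Normalized kick from ten equal-amplitude harmonics of f_rep."""
    return sum(np.cos(2 * np.pi * n * f_rep * t) for n in range(1, 11)) / 10.0

bunches = np.arange(20) / f_bunch      # arrival times of 20 consecutive bunches
print(np.round(kick(bunches), 6))      # 1.0 at bunches 0 and 10, 0.0 elsewhere
```

    The exact nulls follow because the ten harmonics, evaluated at any non-kicked bunch time, sum over a complete set of tenth roots of unity.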

  17. Analysis of global radiation budgets and cloud forcing using three-dimensional cloud nephanalysis data base. Master's thesis

    SciTech Connect (OSTI)

    Mitchell, B.

    1990-12-01

    A one-dimensional radiative transfer model was used to compute the global radiative budget at the top of the atmosphere (TOA) and the surface for January and July 1979. The model was also used to determine the global cloud radiative forcing for all clouds and for high and low cloud layers. In the computations, the authors used the monthly cloud data derived from the Air Force Three-Dimensional Cloud Nephanalysis (3DNEPH). These data were used in conjunction with conventional temperature and humidity profiles analyzed during the 1979 First GARP (Global Atmospheric Research Program) Global Experiment (FGGE) year. Global surface albedos were computed from available data and were included in the radiative transfer analysis. Comparisons of the model-produced outgoing solar and infrared fluxes with those derived from Nimbus 7 Earth Radiation Budget (ERB) data were made to validate the radiative model and cloud cover. For reflected solar and emitted infrared (IR) flux, differences were within 20 W/sq m.

  18. Manufacturing Cost Analysis for YSZ-Based FlexCells at Pilot and Full Scale Production Scales

    SciTech Connect (OSTI)

    Scott Swartz; Lora Thrun; Robin Kimbrell; Kellie Chenault

    2011-05-01

    Significant reductions in cell costs must be achieved in order to realize the full commercial potential of megawatt-scale SOFC power systems. The FlexCell designed by NexTech Materials is a scalable SOFC technology that offers particular advantages over competitive technologies. In this updated topical report, NexTech analyzes its FlexCell design and fabrication process to establish manufacturing costs at both pilot-scale (10 MW/year) and full-scale (250 MW/year) production levels and benchmarks this against estimated anode-supported cell costs at the 250 MW scale. This analysis shows that even with conservative assumptions for yield, materials usage, and cell power density, a cost of $35 per kilowatt can be achieved at high volume. Through advancements in cell size and membrane thickness, NexTech has identified paths for achieving cell manufacturing costs as low as $27 per kilowatt for its FlexCell technology. Also in this report, NexTech analyzes the impact of raw material costs on cell cost, showing the significant increases that result if target raw material costs cannot be achieved at this volume.

  19. Reinterpretation of the 6300-{angstrom} airglow enhancements observed in ionosphere heating experiments based on analysis of Platteville, Colorado, data

    SciTech Connect (OSTI)

    Mantas, G.P.; Carlson, H.C.

    1996-01-01

    Airglow enhancement observations have been considered supporting evidence of electron acceleration in ionosphere heating experiments by high-power HF waves. Here the authors analyze some of the 6300-{angstrom} airglow data from the Platteville, Colorado, heating experiments of 1970, employing new electron impact excitation rates for the O({sup 1}D) state and empirical plasma heating rates consistent with experimental and theoretical constraints, and show that these airglow enhancements should be attributed to excitation by thermal electrons. An important aspect of the present analysis is the excellent agreement of the observed and the calculated airglow enhancements over several complete transmitter on/off cycles of several minutes duration and an increasing airglow trend of 1 hour duration. The fact that the OI red line may be thermally excited, together with the scarcity of observations of simultaneous OI red and green line enhancements, implies that electron acceleration, even to a few eV, may require very special experimental and ionospheric conditions that are not often realized. 50 refs., 13 figs., 2 tabs.

  20. Functional gene array-based analysis of microbial community structure in groundwaters with a gradient of contaminant levels

    SciTech Connect (OSTI)

    Waldron, P.J.; Wu, L.; Van Nostrand, J.D.; Schadt, C.W.; Watson, D.B.; Jardine, P.M.; Palumbo, A.V.; Hazen, T.C.; Zhou, J.

    2009-06-15

    To understand how contaminants affect microbial community diversity, heterogeneity, and functional structure, six groundwater monitoring wells from the Field Research Center of the U.S. Department of Energy Environmental Remediation Science Program (ERSP; Oak Ridge, TN), with a wide range of pH, nitrate, and heavy metal contamination were investigated. DNA from the groundwater community was analyzed with a functional gene array containing 2006 probes to detect genes involved in metal resistance, sulfate reduction, organic contaminant degradation, and carbon and nitrogen cycling. Microbial diversity decreased in relation to the contamination levels of the wells. Highly contaminated wells had lower gene diversity but greater signal intensity than the pristine well. The microbial composition was heterogeneous, with 17-70% overlap between different wells. Metal-resistant and metal-reducing microorganisms were detected in both contaminated and pristine wells, suggesting the potential for successful bioremediation of metal-contaminated groundwaters. In addition, results of Mantel tests and canonical correspondence analysis indicate that nitrate, sulfate, pH, uranium, and technetium have a significant (p < 0.05) effect on microbial community structure. This study provides an overall picture of microbial community structure in contaminated environments with functional gene arrays by showing that diversity and heterogeneity can vary greatly in relation to contamination.
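
    The diversity comparison described here reduces, in its simplest form, to computing a diversity index from relative probe signals. The sketch below is a generic illustration with made-up intensities, not the authors' actual pipeline.

        import numpy as np

        def shannon_diversity(signals):
            # Shannon index H' from functional-gene-array signal intensities,
            # treating each probe's relative signal as an abundance proxy.
            # Probes with zero signal are ignored.
            s = np.asarray(signals, dtype=float)
            p = s[s > 0] / s[s > 0].sum()
            return -np.sum(p * np.log(p))

        pristine = [5.0, 4.0, 6.0, 5.5, 4.5, 5.0]       # many genes, even signal
        contaminated = [40.0, 1.0, 0.0, 0.5, 0.0, 0.2]  # fewer genes, strong signal
        print(shannon_diversity(pristine), shannon_diversity(contaminated))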

  1. Functional gene array-based analysis of microbial community structure in groundwater with gradient of contaminant levels

    SciTech Connect (OSTI)

    Wu, Liyou; Van Nostrand, Joy; Schadt, Christopher Warren; Watson, David B; Jardine, Philip M; Palumbo, Anthony Vito; Hazen, Terry; Zhou, Jizhong

    2009-04-01

    To understand how contaminants affect microbial community diversity, heterogeneity, and functional structure, six groundwater monitoring wells from the Field Research Center of the U.S. Department of Energy Environmental Remediation Science Program (ERSP; Oak Ridge, TN), with a wide range of pH, nitrate, and heavy metal contamination were investigated. DNA from the groundwater community was analyzed with a functional gene array containing 2006 probes to detect genes involved in metal resistance, sulfate reduction, organic contaminant degradation, and carbon and nitrogen cycling. Microbial diversity decreased in relation to the contamination levels of the wells. Highly contaminated wells had lower gene diversity but greater signal intensity than the pristine well. The microbial composition was heterogeneous, with 17-70% overlap between different wells. Metal-resistant and metal-reducing microorganisms were detected in both contaminated and pristine wells, suggesting the potential for successful bioremediation of metal-contaminated groundwaters. In addition, results of Mantel tests and canonical correspondence analysis indicate that nitrate, sulfate, pH, uranium, and technetium have a significant (p < 0.05) effect on microbial community structure. This study provides an overall picture of microbial community structure in contaminated environments with functional gene arrays by showing that diversity and heterogeneity can vary greatly in relation to contamination.

  2. Integration of a constraint-based metabolic model of Brassica napus developing seeds with 13C-metabolic flux analysis

    SciTech Connect (OSTI)

    Hay, Jordan O.; Shi, Hai; Heinzel, Nicolas; Hebbelmann, Inga; Rolletschek, Hardy; Schwender, Jorg

    2014-12-19

    The use of large-scale or genome-scale metabolic reconstructions for modeling and simulation of plant metabolism and integration of those models with large-scale omics and experimental flux data is becoming increasingly important in plant metabolic research. Here we report an updated version of bna572, a bottom-up reconstruction of oilseed rape (Brassica napus L.; Brassicaceae) developing seeds with emphasis on representation of biomass-component biosynthesis. New features include additional seed-relevant pathways for isoprenoid, sterol, phenylpropanoid, flavonoid, and choline biosynthesis. Being now based on standardized data formats and procedures for model reconstruction, bna572+ is available as a COBRA-compliant Systems Biology Markup Language (SBML) model and conforms to the Minimum Information Requested in the Annotation of Biochemical Models (MIRIAM) standards for annotation of external data resources. Bna572+ contains 966 genes, 671 reactions, and 666 metabolites distributed among 11 subcellular compartments. It is referenced to the Arabidopsis thaliana genome, with gene-protein-reaction (GPR) associations resolving subcellular localization. Detailed mass and charge balancing and confidence scoring were applied to all reactions. Using B. napus seed-specific transcriptome data, expression was verified for 78% of bna572+ genes and 97% of reactions. Alongside bna572+ we also present a revised carbon-centric model for 13C-Metabolic Flux Analysis (13C-MFA) with all its reactions being referenced to bna572+ based on linear projections. By integration of flux ratio constraints obtained from 13C-MFA and by elimination of infinite flux bounds around thermodynamically infeasible loops based on COBRA loopless methods, we demonstrate improvements in predictive power of Flux Variability Analysis (FVA). In conclusion, using this combined approach we characterize the difference in metabolic flux of developing seeds of two B. napus
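
    Since bna572+ is distributed as a COBRA-compliant SBML model, the loopless FVA step described above can be reproduced with the open-source COBRApy toolbox. The sketch below is a minimal example; the file name is a hypothetical stand-in for the published model file.

        import cobra
        from cobra.flux_analysis import flux_variability_analysis

        # Load a COBRA-compliant SBML model; "bna572plus.xml" is a hypothetical
        # file name standing in for the published bna572+ reconstruction.
        model = cobra.io.read_sbml_model("bna572plus.xml")

        # Loopless FVA removes the infinite flux bounds that thermodynamically
        # infeasible cycles would otherwise produce, as described above.
        fva = flux_variability_analysis(model, loopless=True, fraction_of_optimum=0.95)
        print(fva.head())   # minimum/maximum feasible flux per reaction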

  3. Analysis of Geothermal Reservoir Stimulation using Geomechanics...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity ...

  4. Microcomputer-based instrument for the detection and analysis of precession motion in a gas centrifuge machine. Revision 1

    SciTech Connect (OSTI)

    Paulus, S.S.

    1986-03-01

    The Centrifuge Precession Analyzer (CPA) is a microcomputer-based instrument which detects precession motion in a gas centrifuge machine and calculates the amplitude and frequency of precession. The CPA consists of a printed circuit board which contains signal-conditioning circuitry and a 24-bit counter and an INTEL iSBC 80/24 single-board computer. Precession motion is detected by monitoring a signal generated by a variable reluctance pick-up coil in the top of the centrifuge machine. This signal is called a Fidler signal. The initial Fidler signal triggers a counter which is clocked by a high-precision, 20.000000-MHz, temperature-controlled, crystal oscillator. The contents of the counter are read by the computer and the counter reset after every ten Fidler signals. The speed of the centrifuge machine and the amplitude and frequency of precession are calculated and the results are displayed on a liquid crystal display on the front panel of the CPA. The report contains results from data generated by a Fidler signal simulator and data taken when the centrifuge was operated under three test conditions: (1) nitrogen gas during drive-up, steady state, and drive-down; (2) xenon gas during slip test, steady state, and the addition of gas; and (3) no gas during steady state. The qualitative results were consistent with experience with centrifuge machines using UF/sub 6/ in that the amplitude of precession increased and the frequency of precession decreased during drive-up, drive-down and the slip check. The magnitude of the amplitude and frequency of precession were proportional to the molecular weight of the gases in steady state.
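
    The speed calculation implied by this design is simple timing arithmetic: ten Fidler intervals gated against the 20 MHz clock. A minimal sketch, assuming one Fidler pulse per rotor revolution (an assumption; the abstract does not state the pulse-per-revolution count):

        CLOCK_HZ = 20_000_000          # 20.000000 MHz crystal oscillator
        FIDLERS_PER_READ = 10          # counter is read and reset every ten Fidler signals

        def rotor_speed_hz(counter_value):
            # Rotor rotation rate implied by one counter reading. The counter
            # accumulates clock ticks across ten revolutions (one Fidler pulse
            # per revolution assumed), so elapsed time = ticks / CLOCK_HZ.
            elapsed_s = counter_value / CLOCK_HZ
            return FIDLERS_PER_READ / elapsed_s

        # Example: 200,000 ticks over ten revolutions -> 0.01 s -> 1000 rev/s
        print(rotor_speed_hz(200_000))

        # Precession modulates the Fidler interval; its amplitude and frequency
        # follow from the variation of successive readings (not shown here).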

  5. Microcomputer-based instrument for the detection and analysis of precession motion in a gas centrifuge machine

    SciTech Connect (OSTI)

    Paulus, S.S.

    1986-03-01

    The Centrifuge Precession Analyzer (CPA) is a microcomputer-based instrument which detects precession motion in a gas centrifuge machine and calculates the amplitude and frequency of precession. The CPA consists of a printed circuit board which contains signal-conditioning circuitry and a 24-bit counter and an INTEL iSBC 80/24 single-board computer. Precession motion is detected by monitoring a signal generated by a variable reluctance pick-up coil in the top of the centrifuge machine. This signal is called a Fidler signal. The initial Fidler signal triggers a counter which is clocked by a high-precision, 20.000000-MHz, temperature-controlled, crystal oscillator. The contents of the counter are read by the computer, and the counter reset after every ten Fidler signals. The speed of the centrifuge machine and the amplitude and frequency of precession are calculated, and the results are displayed on a liquid crystal display on the front panel of the CPA. The thesis contains results from data generated by a Fidler signal simulator and data taken when the centrifuge was operated under three test conditions: (1) nitrogen gas during drive-up, steady state, and drive-down, (2) xenon gas during slip test, steady state, and the addition of gas, and (3) no gas during steady state. The qualitative results were consistent with experience with centrifuge machines using UF/sub 6/ in that the amplitude of precession increased and the frequency of precession decreased during drive-up, drive-down and the slip check. The magnitude of the amplitude and frequency of precession were proportional to the molecular weight of the gases in steady state.

  6. Integrating Nuclear Energy to Oilfield Operations – Two Case Studies

    SciTech Connect (OSTI)

    Eric P. Robertson; Lee O. Nelson; Michael G. McKellar; Anastasia M. Gandrik; Mike W. Patterson

    2011-11-01

    Fossil fuel resources that require large energy inputs for extraction, such as the Canadian oil sands and the Green River oil shale resource in the western USA, could benefit from the use of nuclear power instead of power generated by natural gas combustion. This paper discusses the technical and economic aspects of integrating nuclear energy with oil sands operations and the development of oil shale resources. A high temperature gas reactor (HTGR) that produces heat in the form of high pressure steam (no electricity production) was selected as the nuclear power source for both fossil fuel resources. Both cases were based on 50,000 bbl/day output. The oil sands case was a steam-assisted, gravity-drainage (SAGD) operation located in the Canadian oil sands belt. The oil shale development was an in-situ oil shale retorting operation located in western Colorado, USA. The technical feasibility of integrating nuclear power was assessed. The economic feasibility of each case was evaluated using a discounted cash flow, rate of return analysis. Integrating an HTGR with both the SAGD oil sands operation and the oil shale development was found to be technically feasible. In the oil sands case, integrating an HTGR eliminated natural gas combustion and associated CO2 emissions, although there were still some emissions associated with imported electrical power. In the in-situ oil shale case, integrating an HTGR reduced CO2 emissions by 88% and increased natural gas production by 100%. Economic viabilities of both nuclear-integrated cases were poorer than the non-nuclear-integrated cases when CO2 emissions were not taxed. However, taxing the CO2 emissions had a significant effect on the economics of the non-nuclear base cases, bringing them in line with the economics of the nuclear-integrated cases. As we move toward limiting CO2 emissions, integrating non-CO2-emitting energy sources to the development of energy-intense fossil fuel resources is becoming
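
    The discounted cash flow, rate-of-return analysis mentioned here is standard project economics. A minimal sketch with hypothetical cash flows (not figures from the study):

        def npv(rate, cashflows):
            # Net present value of yearly cash flows (year 0 first).
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        def irr(cashflows, lo=-0.9, hi=1.0, tol=1e-6):
            # Internal rate of return by bisection; assumes NPV is positive
            # at `lo`, negative at `hi`, and crosses zero once in between.
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if npv(mid, cashflows) > 0:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2

        # Hypothetical project: capital outlay, then 15 years of level net revenue.
        flows = [-1000.0] + [180.0] * 15
        print(f"IRR = {irr(flows):.1%}")   # about 16%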

  7. Technology Deployment Case Studies

    Broader source: Energy.gov [DOE]

    Find technology deployment case studies below. Click on each individual project link to see the full case study. You can also view a map of technology deployment case studies.

  8. Towards risk-based management of critical infrastructures : enabling insights and analysis methodologies from a focused study of the bulk power grid.

    SciTech Connect (OSTI)

    Richardson, Bryan T.; LaViolette, Randall A.; Cook, Benjamin Koger

    2008-02-01

    This report summarizes research on a holistic analysis framework to assess and manage risks in complex infrastructures, with a specific focus on the bulk electric power grid (grid). A comprehensive model of the grid is described that can approximate the coupled dynamics of its physical, control, and market components. New realism is achieved in a power simulator extended to include relevant control features such as relays. The simulator was applied to understand failure mechanisms in the grid. Results suggest that the implementation of simple controls might significantly alter the distribution of cascade failures in power systems. The absence of cascade failures in our results raises questions about the underlying failure mechanisms responsible for widespread outages, and specifically whether these outages are due to a system effect or large-scale component degradation. Finally, a new agent-based market model for bilateral trades in the short-term bulk power market is presented and compared against industry observations.

  9. Science DMZ Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  10. Shipping Cask Design Review Analysis.

    Energy Science and Technology Software Center (OSTI)

    1998-01-04

    Version 01 SCANS (Shipping Cask ANalysis System) is a microcomputer-based system of computer programs and databases for evaluating safety analysis reports on spent fuel shipping casks. SCANS calculates the global response to impact loads, pressure loads, and thermal conditions, providing reviewers with an independent check on analyses submitted by licensees. Analysis options are based on regulatory cases described in the Code of Federal Regulations (1983) and Regulatory Guides published by the NRC in 1977 and 1978. The system is composed of a series of menus and input entry, cask analysis, and output display programs. An analysis is performed by preparing the necessary input data and then selecting the appropriate analysis: impact, thermal (heat transfer), thermally-induced stress, or pressure-induced stress. All data are entered through input screens with descriptive data requests, and, where possible, default values are provided. Output (i.e., impact force, moment and shear time histories; impact animation; thermal/stress geometry and thermal/stress element outlines; temperature distributions as isocontours or profiles; and temperature time histories) is displayed graphically and can also be printed.

  11. BP-12 Rate Case

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  12. BP-16 Rate Case

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  13. Before a Rate Case

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  14. Rate Case Elements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BPA's rate cases are decided "on the record." That is, in making a decision...

  15. OSCARS Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  16. Decerns: A framework for multi-criteria decision analysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yatsalo, Boris; Didenko, Vladimir; Gritsyuk, Sergey; Sullivan, Terry

    2015-02-27

    A new framework, Decerns, for multicriteria decision analysis (MCDA) of a wide range of practical risk management problems is introduced. The Decerns framework contains a library of modules that are the basis for two scalable systems: DecernsMCDA for analysis of multicriteria problems, and DecernsSDSS for multicriteria analysis of spatial options. DecernsMCDA includes well-known MCDA methods and original methods for uncertainty treatment based on probabilistic approaches and fuzzy numbers. These MCDA methods are described along with a case study on the analysis of a multicriteria location problem.
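
    For orientation, the simplest of the well-known MCDA methods such a framework implements is a normalized weighted sum. The sketch below uses hypothetical criteria and weights, and omits the uncertainty treatment (probabilistic and fuzzy) that distinguishes Decerns.

        import numpy as np

        # Hypothetical siting problem: rows = options, columns = criteria.
        options = ["site A", "site B", "site C"]
        scores = np.array([[0.60, 120.0, 3.0],
                           [0.80,  90.0, 5.0],
                           [0.70, 150.0, 2.0]])
        weights = np.array([0.5, 0.3, 0.2])        # must sum to 1
        benefit = np.array([True, False, False])   # maximize col 0, minimize cols 1-2

        # Min-max normalize each criterion to [0, 1], flipping cost criteria.
        lo, hi = scores.min(axis=0), scores.max(axis=0)
        norm = (scores - lo) / (hi - lo)
        norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

        ranking = norm @ weights
        for name, r in sorted(zip(options, ranking), key=lambda x: -x[1]):
            print(f"{name}: {r:.3f}")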

  17. An economic feasibility analysis of distributed electric power generation based upon the natural gas-fired fuel cell: a model of a central utility plant.

    SciTech Connect (OSTI)

    Not Available

    1993-06-30

    This central utilities plant model details the major elements of a central utilities plant for several classes of users. The model enables the analyst to select optional, cost effective, plant features that are appropriate to a fuel cell application. These features permit the future plant owner to exploit all of the energy produced by the fuel cell, thereby reducing the total cost of ownership. The model further affords the analyst an opportunity to identify avoided costs of the fuel cell-based power plant. This definition establishes the performance and capacity information, appropriate to the class of user, to support the capital cost model and the feasibility analysis. It is detailed only to the depth required to identify the major elements of a fuel cell-based system. The model permits the choice of system features that would be suitable for a large condominium complex or a residential institution such as a hotel, boarding school or prison. The user may also select large office buildings that are characterized by 12 to 16 hours per day of operation or industrial users with a steady demand for thermal and electrical energy around the clock.

  18. BBRN Factsheet: Case Study: Community Engagement | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Case Study: Community Engagement, on the Community Home Energy Retrofit Project (CHERP), based in Claremont, California. (197.35 KB)

  19. Salish & Kootenai Holding Company - Biomass Feasibility Analysis...

    Office of Environmental Management (EM)

    Project overview covering resources, demand analysis, technology characterization, regulatory permitting, economics, and the business case. Potential thermal loads include a tribal forestry greenhouse and public ...

  20. Techno-Economic Analysis of Liquid Fuel Production from Woody Biomass via Hydrothermal Liquefaction (HTL) and Upgrading

    SciTech Connect (OSTI)

    Zhu, Yunhua; Biddy, Mary J.; Jones, Susanne B.; Elliott, Douglas C.; Schmidt, Andrew J.

    2014-09-15

    A series of experiments was conducted to convert woody biomass to gasoline- and diesel-range products via hydrothermal liquefaction (HTL) and catalytic hydroprocessing. Based on the best available test data, a techno-economic analysis (TEA) was developed for a large-scale woody biomass based HTL and upgrading system to evaluate the feasibility of this technology. In this system, 2,000 dry metric tons per day of woody biomass was assumed to be converted to bio-oil in hot compressed water, and the bio-oil was hydrotreated and/or hydrocracked to produce gasoline- and diesel-range liquid fuel. Two cases were evaluated: a state-of-technology (SOT) case based on the test results, and a goal case considering potential improvements beyond the SOT case. Process simulation models were developed and cost analysis was implemented based on the performance results. The major performance results included final product and co-product yields, raw material consumption, carbon efficiency, and energy efficiency. The overall efficiency (higher heating value basis) was 52% for the SOT case and 66% for the goal case. The production cost, with a 10% internal rate of return and in 2007 constant dollars, was estimated to be $1.29/L for the SOT case and $0.74/L for the goal case. The cost impacts of the major improvements in moving from the SOT to the goal case were evaluated, and the assumption of reducing the loss of organics to the water phase led to the biggest reduction in the production cost. Sensitivity analysis indicated that the final product yields had the largest impact on the production cost compared to other parameters. Plant size analysis demonstrated that the process was economically attractive if the woody biomass feed rate was over 1,500 dry tonne/day, at which point the production cost was competitive with the then-current petroleum-based gasoline price.
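
    The overall efficiency quoted (higher-heating-value basis) is the ratio of product fuel energy to total input energy. The placeholder numbers below are not from the report; they merely illustrate the arithmetic behind a figure like 52%.

        # Overall energy efficiency on an HHV basis: energy leaving in fuel
        # products divided by energy entering with the biomass and utilities.
        # All values below are illustrative placeholders.
        biomass_in_MW = 380.0   # HHV of the dry woody feed
        nat_gas_in_MW = 40.0    # supplemental fuel and hydrogen feedstock
        power_in_MW   = 20.0    # imported electricity
        fuel_out_MW   = 230.0   # gasoline + diesel range product, HHV

        efficiency = fuel_out_MW / (biomass_in_MW + nat_gas_in_MW + power_in_MW)
        print(f"overall HHV efficiency = {efficiency:.0%}")   # ~52%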

  1. Quantifying sources of black carbon in Western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    SciTech Connect (OSTI)

    Zhang, Rudong; Wang, Hailong; Hegg, D. A.; Qian, Yun; Doherty, Sarah J.; Dang, Cheng; Ma, Po-Lun; Rasch, Philip J.; Fu, Qiang

    2015-11-18

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over Western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the Northwest USA and West Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based Positive Matrix Factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.
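
    The PMF step described above factorizes a samples-by-species data matrix into non-negative source profiles and contributions. The sketch below uses scikit-learn's NMF as a stand-in for PMF (true PMF additionally weights each entry by its measurement uncertainty) on synthetic, not observed, snow-chemistry data.

        import numpy as np
        from sklearn.decomposition import NMF

        # Synthetic snow-impurity matrix: rows = samples, columns = species.
        rng = np.random.default_rng(1)
        true_profiles = np.array([[0.8, 0.1, 0.1],    # fossil-fuel-like factor
                                  [0.2, 0.6, 0.2]])   # biomass-burning-like factor
        contributions = rng.uniform(0, 1, size=(50, 2))
        X = contributions @ true_profiles + rng.uniform(0, 0.01, size=(50, 3))

        model = NMF(n_components=2, init="nndsvda", max_iter=1000)
        G = model.fit_transform(X)      # factor contributions per sample
        F = model.components_           # factor chemical profiles
        print(np.round(F / F.sum(axis=1, keepdims=True), 2))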

  2. Quantifying sources of black carbon in Western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhang, R.; Wang, H.; Hegg, D. A.; Qian, Y.; Doherty, S. J.; Dang, C.; Ma, P.-L.; Rasch, P. J.; Fu, Q.

    2015-05-04

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over Western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the Northwest USA and West Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based Positive Matrix Factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  3. Quantifying sources of black carbon in western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhang, R.; Wang, H.; Hegg, D. A.; Qian, Y.; Doherty, S. J.; Dang, C.; Ma, P.-L.; Rasch, P. J.; Fu, Q.

    2015-11-18

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source–receptor relationships for atmospheric BC and its deposition to snow over western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over northwestern USA and western Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based positive matrix factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  4. mu-Scale Variations Of Elemental Composition In Individual Atmospheric Particles By Means Of Synchrotron Radiation Based mu-XRF Analysis

    SciTech Connect (OSTI)

    Schleicher, N.; Kramar, U.; Norra, S.; Dietze, V.; Kaminski, U.; Cen, K.; Yu, Y.

    2010-04-06

    In this study, synchrotron radiation based mu-X-ray fluorescence analysis (mu-SXRF) proved to be an excellent tool for investigating mu-scale distributions of main and trace element concentrations within individual airborne particles.

  5. NREL: Energy Analysis - Elaine Hale

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Elaine Hale is a member of the Energy Forecasting and Modeling Group in the Strategic Energy Analysis Center. Senior ... in Production Cost Models: Methodology and a Case Study. ...

  6. A knowledge based system for economic analysis and risk assessment of subsea development scenarios for small oilfields in the North Sea

    SciTech Connect (OSTI)

    Dyer, N.J.; Ford, J.T.; Tweedie, A.

    1996-12-31

    The decision to develop a small (<100 mmbbls) oilfield in the UK sector of the Central/Northern North Sea requires a careful assessment of all aspects of the field development plan: the reservoir model (reserves and production profile); capital and operating costs; and the current economic climate (oil price, interest rates, tax regime, etc.). This paper describes the development of a knowledge-based software package that allows a quick-look assessment of the overall economics and risk profile associated with the development of small oilfields in this region. It is a modular system that uses a cost database, cost adjustment algorithms, a cash flow analysis engine, and simulation procedures to integrate and analyze the impact of reservoir and production characteristics, costs (capex and opex), and economic factors on the decision to develop such a field. The production system configurations considered by the system are: (1) an unmanned wellhead platform tied back to a third party platform for fluid processing/export, (2) an FPSO with oil export via a shuttle tanker and gas export via a tie-in to the existing North Sea gas pipeline infrastructure, and (3) a straight tie-back from a group of subsea wells to a third party platform for fluid processing/export.

  7. [Environmental investigation of ground water contamination at Wright-Patterson Air Force Base, Ohio]. Volume 3, Sampling and analysis plan (SAP): Phase 1, Task 4, Field Investigation: Draft

    SciTech Connect (OSTI)

    Not Available

    1991-10-01

    In April 1990, Wright-Patterson Air Force Base (WPAFB) initiated an investigation to evaluate a potential Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) removal action to prevent, to the extent practicable, the offsite migration of contaminated ground water from WPAFB. WPAFB retained the services of the Environmental Management Operations (EMO) and its principal subcontractor, International Technology Corporation (IT), to complete Phase 1 of the environmental investigation of ground-water contamination at WPAFB. Phase 1 of the investigation involves the short-term evaluation and potential design for a program to remove ground-water contamination that appears to be migrating across the western boundary of Area C, and across the northern boundary of Area B along Springfield Pike. Primarily, Task 4 of Phase 1 focuses on collection of information at the Area C and Springfield Pike boundaries of WPAFB. This Sampling and Analysis Plan (SAP) has been prepared to assist in completion of the Task 4 field investigation and is comprised of the Quality Assurance Project Plan (QAPP) and the Field Sampling Plan (FSP).

  8. National Geo-Database for Biofuel Simulations and Regional Analysis of Biorefinery Siting Based on Cellulosic Feedstock Grown on Marginal Lands

    SciTech Connect (OSTI)

    Izaurralde, Roberto C.; Zhang, Xuesong; Sahajpal, Ritvik; Manowitz, David H.

    2012-04-01

    SQL database hosting. The second resource was the DOE-JGCRI 'Evergreen' cluster, capable of executing millions of simulations in relatively short periods. ARRA funding also supported a PhD student from UMD who worked on creating the geodatabases and executing some of the simulations in this study. Using a physically based classification of marginal lands, we simulated production of cellulosic feedstocks from perennial mixtures grown on these lands in the US Midwest. Marginal lands in the western states of the US Midwest appear to have significant potential to supply feedstocks to a cellulosic biofuel industry. Similar results were obtained with simulations of N-fertilized perennial mixtures. A detailed spatial analysis allowed for the identification of possible locations for the establishment of 34 cellulosic ethanol biorefineries with an annual production capacity of 5.6 billion gallons. In summary, we have reported on the development of a spatially explicit national geodatabase to conduct biofuel simulation studies and provided simulation results on the potential of perennial cropping systems to serve as feedstocks for the production of cellulosic ethanol. To accomplish this, we have employed sophisticated spatial analysis methods in combination with the process-based biogeochemical model EPIC. The results of this study will be submitted to the USDOE Bioenergy Knowledge Discovery Framework as a way to contribute to the development of a sustainable bioenergy industry. This work provided the opportunity to test the hypothesis that marginal lands can serve as sources of cellulosic feedstocks and thus contribute to avoid potential conflicts between bioenergy and food production systems. This work, we believe, opens the door for further analysis on the characteristics of cellulosic feedstocks as major contributors to the development of a sustainable bioenergy economy.

  9. ENHANCED UNCERTAINTY ANALYSIS FOR SRS COMPOSITE ANALYSIS

    SciTech Connect (OSTI)

    Smith, F.; Phifer, M.

    2011-06-30

    The Composite Analysis (CA) performed for the Savannah River Site (SRS) in 2009 (SRS CA 2009) included a simplified uncertainty analysis. The uncertainty analysis in the CA (Smith et al. 2009b) was limited to considering at most five sources in a separate uncertainty calculation performed for each POA. To perform the uncertainty calculations in a reasonable amount of time, the analysis was limited to 400 realizations and 2,000 years of simulated transport time, and the time steps used for the uncertainty analysis were increased from what was used in the CA base case analysis. As part of the CA maintenance plan, the Savannah River National Laboratory (SRNL) committed to improving the CA uncertainty/sensitivity analysis. The previous uncertainty analysis was constrained by the standard GoldSim licensing, which limits the user to running at most four Monte Carlo uncertainty calculations (also called realizations) simultaneously. Some of the limitations on the number of realizations that could be practically run and on the simulation time steps were removed by building a cluster of three HP ProLiant Windows servers with a total of 36 64-bit processors and by licensing the GoldSim DP-Plus distributed processing software. This allowed running as many as 35 realizations simultaneously (one processor is reserved as a master process that controls running the realizations). These enhancements to SRNL computing capabilities made it feasible to run an uncertainty analysis using 1,000 realizations and the time steps employed in the base case CA calculations, with more sources, while simulating radionuclide transport for 10,000 years. In addition, an importance screening analysis was performed to identify the class of stochastic variables that have the most significant impact on model uncertainty. This analysis ran the uncertainty model separately, testing the response to variations in the following five sets of model parameters: (a) K{sub d} values (72 parameters for the 36 CA elements in
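
    The enhancement described is classic embarrassingly parallel Monte Carlo: realizations are independent, so throughput scales with processor count. A minimal sketch in Python, with a toy one-parameter dose response standing in for a GoldSim realization (the function and numbers are invented for illustration):

        import multiprocessing as mp
        import random

        def run_realization(seed):
            # One stochastic realization: sample an uncertain input, run a toy
            # transport/dose calculation. Stand-in for a GoldSim realization,
            # with a log-uniform sorption coefficient (Kd) controlling peak dose.
            rng = random.Random(seed)
            kd = 10 ** rng.uniform(-1, 2)        # sampled Kd, mL/g
            peak_dose = 25.0 / (1.0 + kd)        # toy response, mrem/yr
            return peak_dose

        if __name__ == "__main__":
            with mp.Pool(processes=35) as pool:  # mirrors the 35 concurrent realizations
                doses = pool.map(run_realization, range(1000))
            doses.sort()
            print(f"median = {doses[500]:.2f}, 95th pct = {doses[950]:.2f} mrem/yr")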

  10. Technical Comparative Analysis of "Best of Breed" Turnkey Si-Based Processes and Equipment, to be Used to Produce a Combined Multi-entity Research and Development Technology Roadmap for Thick and Thin Silicon PV

    SciTech Connect (OSTI)

    Hovel, Harold; Prettyman, Kevin

    2015-03-27

    A side-by-side analysis was done on then-current technology, along with roadmaps to push each particular option forward. Variations in turnkey line processes can and do result in variations in finished solar device performance. Together with variations in starting material quality, the result is a distribution of efficiencies. Forensic analysis and characterization of each crystalline Si-based technology will determine the most promising approach with respect to cost, efficiency, and reliability. Forensic analysis will also shed light on the causes of binning variations. Si solar cells from each turnkey supplier were forensically analyzed using a host of techniques.

  11. Analysis of the cracking behavior of Alloy 600 RVH penetrations. Part 1: Stress analysis and K computation

    SciTech Connect (OSTI)

    Bhandari, S.; Vagner, J.; Garriga-Majo, D.; Amzallag, C.; Faidy, C.

    1996-12-01

    The study presented here concerns the analysis of crack propagation behavior in the Alloy 600 RVH penetrations used in the French 900 and 1300 MWe PWR series. The damage mechanism identified is clearly SCC in a primary water environment. Consequently the analysis presented here is based on: (1) the stress analysis carried out on the RVH penetrations, (2) the SCC model developed for the primary water environment at the operating temperatures, and (3) fracture mechanics concepts. The different steps involved in the study are: (1) Evaluation of the stress state for the case of the peripheral configuration of RVH penetrations; the case retained here is that of a conic tube, with stress analysis conducted using multi-pass welding. (2) Computation of the influence functions (IF) for a polynomial stress distribution in the case of a tube of Ri/t ratio (internal radius/thickness) corresponding to that of an RVH penetration. (3) Establishment of a propagation law based on a study and review of data available in the literature. (4) Conduct of a parametric study of crack propagation using several initial defects. (5) Analysis of crack propagation of defects observed in various reactors and comparison with measured propagation rates. This paper (Part 1) deals with the first two steps, namely stress analysis and K computation.

  12. Characteristics and Outcomes of Patients With Nodular Lymphocyte-Predominant Hodgkin Lymphoma Versus Those With Classical Hodgkin Lymphoma: A Population-Based Analysis

    SciTech Connect (OSTI)

    Gerber, Naamit K.; Atoria, Coral L.; Elkin, Elena B.; Yahalom, Joachim

    2015-05-01

    Purpose: Nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) is rare, comprising approximately 5% of all Hodgkin lymphoma (HL) cases. Patients with NLPHL tend to have better prognoses than those with classical HL (CHL). Our goal was to assess differences in survival between NLPHL and CHL patients, controlling for differences in patient and disease characteristics. Methods and Materials: Using data from the population-based Surveillance, Epidemiology and End Results (SEER) cancer registry program, we identified patients diagnosed with pathologically confirmed HL between 1988 and 2010. Results: We identified 1,162 patients with NLPHL and 29,083 patients with CHL. With a median follow-up of 7 years, 5- and 10-year overall survival (OS) rates were 91% and 83% for NLPHL, respectively, and 81% and 74% for CHL, respectively. After adjusting for all available characteristics, NLPHL (vs CHL) was associated with higher OS (hazard ratio [HR]: 0.62, P<.01) and disease-specific survival (DSS; HR: 0.48, P<.01). The male predominance of NLPHL, compared to CHL, as well as the more favorable prognostic features in NLPHL patients are most pronounced in NLPHL patients <20 years old. Among all NLPHL patients, younger patients were less likely to receive radiation, and radiation use has declined by 40% for all patients from 1988 to 2010. Receipt of radiation was associated with better OS (HR: 0.64, P=.03) and DSS (HR: 0.45, P=.01) in NLPHL patients after controlling for available baseline characteristics. Other factors associated with OS and DSS in NLPHL patients are younger age and early stage. Conclusions: Our results in a large population dataset demonstrated that NLPHL patients have improved prognosis compared to CHL patients, even after accounting for stage and baseline characteristics. Use of radiation is declining among NLPHL patients despite an association in this series between radiation and better DSS and OS. Unique treatment strategies for NLPHL are warranted in both
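
    Hazard ratios like those quoted come from proportional-hazards regression. A hedged sketch of the mechanics using the lifelines package and synthetic data (not SEER data; the effect size is invented so that the NLPHL indicator comes out with HR < 1):

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        # Synthetic stand-in for the cohort: survival time in years, an event
        # flag, and a histology indicator (1 = NLPHL, 0 = CHL).
        rng = np.random.default_rng(0)
        n = 2000
        nlphl = rng.integers(0, 2, n)
        # NLPHL rows get a lower hazard, hence longer simulated survival times.
        time = rng.exponential(scale=np.where(nlphl == 1, 16.0, 10.0))
        event = (time < 12.0).astype(int)          # administrative censoring at 12 y
        df = pd.DataFrame({"time": np.minimum(time, 12.0),
                           "event": event, "nlphl": nlphl})

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="event")
        print(cph.hazard_ratios_)   # HR < 1 for 'nlphl' -> lower hazard, better survival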

  13. Utilizing the Inherent Electrolysis in a Chip-Based Nanoelectrospray Emitter System to Facilitate Selective Ionization and Mass Spectrometric Analysis of Metallo Alkylporphyrins

    SciTech Connect (OSTI)

    Van Berkel, Gary J; Kertesz, Vilmos

    2012-01-01

    A commercially available chip-based infusion nanoelectrospray ionization system was used to ionize metallo alkylporphyrins for mass spectrometric detection and structure elucidation by mass spectrometry. Different ionic forms of model compounds (nickel (II), vanadyl (II), copper (II) and cobalt (II) octaethylporphyrin) were created by using two different types of conductive pipette tips supplied with the device. These pipette tips provide the conductive contact to solution at which the electrolysis process inherent to electrospray takes place in the device. The original unmodified, bare carbon-impregnated plastic pipette tips were exploited to intentionally electrochemically oxidize (ionize) the porphyrins to form molecular radical cations for detection. Use of modified pipette tips, with a surface coating devised to inhibit analyte mass transport to the surface, was shown to limit the ionic species observed in the mass spectra of these porphyrins largely, but not exclusively, to the protonated molecule. Under the conditions of these experiments, the effective upper potential limit for oxidation with the uncoated pipette tip was 1.1 V or less and the coated pipette tips effectively prevented the oxidation of analytes with redox potentials greater than about 0.25 V. Product ion spectra of either molecular ionic species could be used to determine the alkyl chain length on the porphyrin macrocycle. The utility of this electrochemical ionization approach for the analysis of naturally occurring samples was demonstrated using nickel geoporphyrin fractions isolated from Gilsonite bitumen. Acquiring neutral loss spectra as a means to improve the specificity of detection in these complex natural samples was also illustrated.

  14. Geographically Based Hydrogen Demand and Infrastructure Rollout...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Geographically Based Hydrogen Demand and Infrastructure Rollout Scenario Analysis: presentation by Margo Melendez at the 2010-2025 Scenario Analysis for ...

  15. Business Case for CNG in Municipal Fleets (Presentation)

    SciTech Connect (OSTI)

    Johnson, C.

    2010-07-27

    Presentation about compressed natural gas in municipal fleets, assessing investment profitability, the VICE model, base-case scenarios, and pressing questions for fleet owners.

  16. No Sunset and Extended Policies Cases (released in AEO2010)

    Reports and Publications (EIA)

    2010-01-01

    The Annual Energy Outlook 2010 Reference case is best described as a current laws and regulations case, because it generally assumes that existing laws and fully promulgated regulations will remain unchanged throughout the projection period, unless the legislation establishing them specifically calls for them to end or change. The Reference case often serves as a starting point for the analysis of proposed legislative or regulatory changes, a task that would be difficult if the Reference case included projected legislative or regulatory changes.

  17. EMGEO Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    EMGEO Case Study, June 20, 2016. Background: EMGeo is composed of two geophysical imaging applications: one for subsurface imaging using electromagnetic data and another using seismic data. Although the applications model different physics (Maxwell's equations in one case, the elastic wave equation in the other), they have much in common. We focus on the more involved part, solving the forward pass of the inverse scattering for the seismic part. The code takes advantage of

  18. Fuel Cell Case Study

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Fuel Cell Case Study: Whole Foods Market, Inc. Presented by WFM's Global Leader, Sustainable Engineering, Maintenance & Energy Management; a holistic approach from development to operation. ...

  19. Appendix A: Reference case

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    Energy Information Administration, Annual Energy Outlook 2014, Appendix A Reference case, Table A17: Renewable energy consumption by sector and source (quadrillion Btu) ...

  20. Appendix A: Reference case

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    Energy Information Administration, Annual Energy Outlook 2014, Appendix A Reference case, Table A2: Energy consumption by sector and source (quadrillion Btu per year, unless otherwise noted) ...

  1. OSCARS Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  2. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    SciTech Connect (OSTI)

    Lee, Y; Fullerton, G; Goins, B

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using formula V = (π/6)*a*b*c where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
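
    The volume formula and regression comparison described above are easy to make concrete. The sketch below uses made-up paired volumes, with a zero-intercept slope as the agreement measure (an assumption for simplicity; the study's regression may include an intercept):

        import numpy as np

        def ellipsoid_volume(a_mm, b_mm, c_mm):
            # V = (pi/6) * a * b * c for three perpendicular maximum diameters.
            return (np.pi / 6.0) * a_mm * b_mm * c_mm

        # Hypothetical paired measurements (mm^3): image-based vs reference volume.
        reference = np.array([4.2, 33.5, 180.0, 520.0, 1430.0])
        mri       = np.array([4.5, 34.0, 185.0, 534.0, 1458.0])

        # Slope of the zero-intercept least-squares line: slope = sum(xy)/sum(x^2).
        # A slope near 1 means the modality neither over- nor under-estimates.
        slope = (reference @ mri) / (reference @ reference)
        print(f"MRI slope = {slope:.3f}")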

  3. COMBINING A NEW 3-D SEISMIC S-WAVE PROPAGATION ANALYSIS FOR REMOTE FRACTURE DETECTION WITH A ROBUST SUBSURFACE MICROFRACTURE-BASED VERIFICATION TECHNIQUE

    SciTech Connect (OSTI)

    Bob Hardage; M.M. Backus; M.V. DeAngelo; R.J. Graebner; S.E. Laubach; Paul Murray

    2004-02-01

    Fractures within the producing reservoirs at McElroy Field could not be studied with the industry-provided 3C3D seismic data used as a cost-sharing contribution in this study. The signal-to-noise character of the converted-SV data across the targeted reservoirs in these contributed data was not adequate for interpreting azimuth-dependent data effects. After illustrating the low signal quality of the converted-SV data at McElroy Field, the seismic portion of this report abandons the McElroy study site and defers to 3C3D seismic data acquired across a different fractured carbonate reservoir system to illustrate how 3C3D seismic data can provide useful information about fracture systems. Using these latter data, we illustrate how fast-S and slow-S data effects can be analyzed in the prestack domain to recognize fracture azimuth, and then demonstrate how fast-S and slow-S data volumes can be analyzed in the poststack domain to estimate fracture intensity. In the geologic portion of the report, we analyze published regional stress data near McElroy Field and numerous formation multi-imager (FMI) logs acquired across McElroy to develop possible fracture models for the McElroy system. Regional stress data imply a fracture orientation different from the orientations observed in most of the FMI logs. This report culminates Phase 2 of the study, "Combining a New 3-D Seismic S-Wave Propagation Analysis for Remote Fracture Detection with a Robust Subsurface Microfracture-Based Verification Technique". Phase 3 will not be initiated because wells were to be drilled in Phase 3 of the project to verify the validity of fracture-orientation maps and fracture-intensity maps produced in Phase 2. Such maps cannot be made across McElroy Field because of the limitations of the available 3C3D seismic data at the depth level of the reservoir target.

  4. Accident Tolerant Fuel Analysis

    SciTech Connect (OSTI)

    Curtis Smith; Heather Chichester; Jesse Johns; Melissa Teague; Michael Tonks; Robert Youngblood

    2014-09-01

    Safety is central to the design, licensing, operation, and economics of Nuclear Power Plants (NPPs). Consequently, the ability to better characterize and quantify safety margin holds the key to improved decision making about light water reactor design, operation, and plant life extension. A systematic approach to characterization of safety margins and the subsequent margins management options represents a vital input to the licensee and regulatory analysis and decision making that will be involved. The purpose of the Risk Informed Safety Margin Characterization (RISMC) Pathway research and development (R&D) is to support plant decisions for risk-informed margins management by improving economics and reliability, and sustaining safety, of current NPPs. Goals of the RISMC Pathway are twofold: (1) Develop and demonstrate a risk-assessment method coupled to safety margin quantification that can be used by NPP decision makers as part of their margin recovery strategies. (2) Create an advanced “RISMC toolkit” that enables more accurate representation of NPP safety margin. In order to carry out the R&D needed for the Pathway, the Idaho National Laboratory is performing a series of case studies that will explore methods- and tools-development issues, in addition to being of current interest in their own right. One such study is a comparative analysis of safety margins of plants using different fuel cladding types: specifically, a comparison between current-technology Zircaloy cladding and a notional “accident-tolerant” (e.g., SiC-based) cladding. The present report begins the process of applying capabilities that are still under development to the problem of assessing new fuel designs. The approach and lessons learned from this case study will be included in future Technical Basis Guides produced by the RISMC Pathway. These guides will be the mechanism for developing the specifications for RISMC tools and for defining how plant decision makers should propose and

  5. Accident tolerant fuel analysis

    SciTech Connect (OSTI)

    Smith, Curtis; Chichester, Heather; Johns, Jesse; Teague, Melissa; Tonks, Michael (Idaho National Laboratory); Youngblood, Robert

    2014-09-01

    Safety is central to the design, licensing, operation, and economics of Nuclear Power Plants (NPPs). Consequently, the ability to better characterize and quantify safety margin holds the key to improved decision making about light water reactor design, operation, and plant life extension. A systematic approach to characterization of safety margins and the subsequent margins management options represents a vital input to the licensee and regulatory analysis and decision making that will be involved. The purpose of the Risk Informed Safety Margin Characterization (RISMC) Pathway research and development (R&D) is to support plant decisions for risk-informed margins management by improving economics and reliability, and sustaining safety, of current NPPs. Goals of the RISMC Pathway are twofold: (1) Develop and demonstrate a risk-assessment method coupled to safety margin quantification that can be used by NPP decision makers as part of their margin recovery strategies. (2) Create an advanced "RISMC toolkit" that enables more accurate representation of NPP safety margin. In order to carry out the R&D needed for the Pathway, the Idaho National Laboratory is performing a series of case studies that will explore methods- and tools-development issues, in addition to being of current interest in their own right. One such study is a comparative analysis of safety margins of plants using different fuel cladding types: specifically, a comparison between current-technology Zircaloy cladding and a notional "accident-tolerant" (e.g., SiC-based) cladding. The present report begins the process of applying capabilities that are still under development to the problem of assessing new fuel designs. The approach and lessons learned from this case study will be included in future Technical Basis Guides produced by the RISMC Pathway. These guides will be the mechanism for developing the specifications for RISMC tools and for defining how plant decision makers should propose and

  6. Pilot Project Technology Business Case: Mobile Work Packages

    SciTech Connect (OSTI)

    Thomas, Ken; Lawrie, Sean; Niedermuller, Josef

    2015-05-01

    Performance advantages of the new pilot project technologies are widely acknowledged, but it has proven difficult for utilities to derive business cases for justifying investment in these new capabilities. Lack of a business case is often cited by utilities as a barrier to pursuing wide-scale application of digital technologies to nuclear plant work activities. The decision to move forward with funding usually hinges on demonstrating actual cost reductions that can be credited to budgets and thereby truly reduce O&M or capital costs. Technology enhancements, while improving work methods and making work more efficient, often fail to eliminate enough workload to change overall staffing and material cost requirements. It is critical to demonstrate cost reductions or impacts on non-cost performance objectives in order for the business case to justify investment by nuclear operators. The Business Case Methodology (BCM) was developed in September of 2015 to frame the “benefit” side of the analysis of II&C technologies, as opposed to the cost side and to how the organization evaluates discretionary projects (net present value (NPV), accounting effects of taxes, discount rates, etc.). The cost side of the analysis is not particularly difficult for the organization and can usually be determined with a fair amount of precision (notwithstanding implementation project cost overruns). It is in determining the “benefits” side of the analysis that utilities have more difficulty with technology projects, and that is the focus of this methodology. The methodology is presented in the context of the entire process, but the tool provided is limited to determining the organizational benefits only. This report describes the use of the BCM in building a business case for mobile work packages, which include computer-based procedures and other automated elements of a work package. Key to those impacts will be identifying where the savings are
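
    Since the BCM frames benefits against the organization's NPV-based evaluation of discretionary projects, the discounting arithmetic itself is simple; a minimal sketch with purely illustrative cash flows and discount rate:

    ```python
    def npv(rate, cash_flows):
        """Net present value of yearly cash flows, starting at year 0."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    # Hypothetical mobile work package project: an up-front implementation cost
    # followed by seven years of labor/material savings (illustrative numbers).
    flows = [-500_000] + [120_000] * 7
    print(f"NPV at an 8% discount rate: ${npv(0.08, flows):,.0f}")
    ```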

  7. Thermal analysis finds optimum FCCU revamp scheme

    SciTech Connect (OSTI)

    Aguilar-Rodriquez, E.; Ortiz-Estrada, C.; Aguilera-Lopez, M.

    1994-11-07

    The 25,000 b/d fluid catalytic cracking unit (FCCU) at Petroleos Mexicanos' idle Azcapotzalco refinery near Mexico City has been relocated to Pemex's 235,000 b/d Cadereyta refinery. The results of a thermal-integration analysis are being used to revamp the unit and optimize its vapor-recovery scheme. For the case of the Azcapotzalco FCCU, the old unit was designed in the 1950s, so modifications to the reactor/regenerator section incorporate many important changes, including a new riser, feed nozzles, cyclones, air distributor, and other internals. For the new scheme, the analysis was based on the following restrictions: (1) Two cases concerning gas oil feed conditions must be met. In the hot-feed case, feed is introduced from a processing unit outside battery limits (OSBL) at 188 C. For the cold-feed case, feed is introduced from OSBL from storage tanks at 70 C. (2) No new fire heaters are to be installed. (3) Existing equipment must be reused whenever possible. The paper describes and analyzes three alternative schemes.

  8. Decerns: A framework for multi-criteria decision analysis

    SciTech Connect (OSTI)

    Yatsalo, Boris; Didenko, Vladimir; Gritsyuk, Sergey; Sullivan, Terry

    2015-02-27

    A new framework, Decerns, for multicriteria decision analysis (MCDA) of a wide range of practical risk-management problems is introduced. The Decerns framework contains a library of modules that form the basis for two scalable systems: DecernsMCDA for analysis of multicriteria problems, and DecernsSDSS for multicriteria analysis of spatial options. DecernsMCDA includes well-known MCDA methods as well as original methods for uncertainty treatment based on probabilistic approaches and fuzzy numbers. These MCDA methods are described along with a case study on the analysis of a multicriteria location problem.
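
    As one illustration of the kind of computation an MCDA tool automates, here is a minimal weighted-sum scoring sketch; the options, criteria, scores, and weights are all hypothetical, and this is the simplest textbook MCDA method rather than the Decerns API.

    ```python
    import numpy as np

    # Rows: candidate options; columns: criteria (e.g., cost, risk, acceptance).
    # Scores are assumed to be pre-normalized to [0, 1], higher = better.
    scores = np.array([
        [0.8, 0.4, 0.6],   # option A
        [0.5, 0.9, 0.8],   # option B
        [0.6, 0.6, 0.9],   # option C
    ])
    weights = np.array([0.5, 0.3, 0.2])   # hypothetical weights, summing to 1

    totals = scores @ weights
    for name, total in zip("ABC", totals):
        print(f"option {name}: weighted score = {total:.2f}")
    print("preferred option:", "ABC"[int(totals.argmax())])
    ```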

  9. Stand-alone Renewable Energy-Economic and Financial Analysis...

    Open Energy Info (EERE)

    and Financial Analysis1 Background Economic Analysis of Solar Home Systems: A Case Study for the Philippines, Peter Meier, Prepared for The World Bank, Washington, D.C....

  10. Quantitative Analysis of Variability and Uncertainty in Environmental Data and Models. Volume 2. Performance, Emissions, and Cost of Combustion-Based NOx Controls for Wall and Tangential Furnace Coal-Fired Power Plants

    SciTech Connect (OSTI)

    Frey, H. Christopher; Tran, Loan K.

    1999-04-30

    This is Volume 2 of a two-volume set of reports describing work conducted at North Carolina State University under Grant Number DE-FG05-95ER30250 from the U.S. Department of Energy. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.

  11. EMGeo Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    methods (QMR in one case, and IDR in the other), both solvers are dominated by memory bandwidth intensive operations like sparse matrix-vector multiply (SpMV), dot...

  12. Better Buildings Case Competition

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... and Investment Authority, the nation's first green bank, where I'm helping apply insights from our team's case proposals." -John D'Agostino Yale Team, 2013 9 2014 Closing ...

  13. Early application case studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Babbage test system was used to study representative applications and kernels in various scientific fields to gain experience with the challenges and strategies needed to optimize code performance on the MIC architecture. Below we highlight a few examples: BerkeleyGW The BerkeleyGW package is a materials science application that calculates electronic and optical properties with quantitative accuracy, a critical need in materials

  14. VASP Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Code description and computational problem: The Vienna Ab-initio Simulation Package (VASP) [1-2] is a widely used materials science application for performing ab-initio electronic structure calculations and quantum-mechanical molecular dynamics (MD) simulations using pseudopotentials or the projector-augmented wave method and a plane wave basis set. VASP computes an approximate solution to the many-body Schrödinger equation, either within the Density Functional

  15. WARP Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Background: WARP is an accelerator code that is used to conduct detailed simulations of particle accelerators, among other high energy physics applications. It is a so-called Particle-In-Cell (PIC) code that solves for the motion of charged particles acted upon by electric and magnetic forces. The particle motion is computed in a Lagrangian sense, following individual particles. The electric and magnetic fields acting on the particle are considered to be Eulerian

  16. CESM Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CESM MG2 Kernel Code Description: The Community Earth System Model (CESM) is a coupled multi-physics code which consists of multiple model components: Atmosphere, Ocean, Sea-ice, Land-ice, Land, River Runoff, and Coupler. During the course of a CESM run, the model components integrate forward in time, periodically stopping to exchange information with the coupler. The active (dynamical) components are generally fully prognostic, and they are state-of-the-art

  17. Generic Argillite/Shale Disposal Reference Case

    SciTech Connect (OSTI)

    Zheng, Liange; Colon, Carlos Jové; Bianchi, Marco; Birkholzer, Jens

    2014-08-08

    Radioactive waste disposal in a deep subsurface repository hosted in clay/shale/argillite is a subject of widespread interest given the desirable isolation properties, geochemically reduced conditions, and widespread geologic occurrence of this rock type (Hansen 2010; Bianchi et al. 2013). Bianchi et al. (2013) provide a description of diffusion in a clay-hosted repository based on single-phase flow and full saturation using parametric data from documented studies in Europe (e.g., ANDRA 2005). The predominance of diffusive transport and sorption phenomena in these clay media is a key attribute impeding radionuclide mobility, making clay rock formations target sites for disposal of high-level radioactive waste. The reports by Hansen et al. (2010) and those from numerous studies in clay-hosted underground research laboratories (URLs) in Belgium, France, and Switzerland outline the extensive scientific knowledge obtained to assess long-term clay/shale/argillite repository isolation performance for nuclear waste. In the past several years under the UFDC, various kinds of models have been developed for the argillite repository to demonstrate model capabilities, understand the spatial and temporal alteration of the repository, and evaluate different scenarios. These models include the coupled Thermal-Hydrological-Mechanical (THM) and Thermal-Hydrological-Mechanical-Chemical (THMC) models (e.g., Liu et al. 2013; Rutqvist et al. 2014a; Zheng et al. 2014a) that focus on THMC processes in the Engineered Barrier System (EBS) bentonite and argillite host rock, the large-scale hydrogeologic model (Bianchi et al. 2014) that investigates the hydraulic connection between an emplacement drift and surrounding hydrogeological units, and the Disposal Systems Evaluation Framework (DSEF) models (Greenberg et al. 2013) that evaluate thermal evolution in the host rock, approximated as a thermal conduction process, to facilitate the analysis of design options. However, the assumptions and the

  18. Single casing reheat turbine

    SciTech Connect (OSTI)

    Matsushima, Tatsuro; Nishimura, Shigeo

    1999-07-01

    For conventional power plants, regenerative reheat steam turbines have been accepted as the most practical way to meet the demand for efficient and economical power generation. Recently, reheat steam turbines have begun to be applied to combined cycle power plants, following the development of large-capacity, high-temperature gas turbines. The two-casing, double-flow turbine has been the usual configuration for this size of reheat steam turbine. The single casing reheat turbine can offer an economical and compact power plant. Through development of an HP-LP combined rotor and a long LP blading series, Mitsubishi Heavy Industries, Ltd. developed a single casing reheat steam turbine series and began using it in actual plants. Six units are already in operation and another seven are being manufactured. The benefits of the single casing reheat turbine include smaller space requirements, a shorter construction and erection period, equally good performance, easier operation and maintenance, a shorter overhaul period, a smaller initial investment, and lower transportation expense. Furthermore, a single-exhaust steam turbine makes it possible to apply an axial exhaust type, which lowers the height of the T/G foundation and T/G housing. The single casing reheat turbine thus not only has a compact and economical configuration itself but also reduces the cost of civil construction. In this paper, major developments and design features of the single casing reheat turbine are briefly discussed, and operating experience, the product line-up, and technical considerations for performance improvement are presented.

  19. Comprehensive Report For Proposed Elevated Temperature Elastic Perfectly Plastic (EPP) Code Cases Representative Example Problems

    SciTech Connect (OSTI)

    Greg L. Hollinger

    2014-06-01

    Background: The current rules in the nuclear section of the ASME Boiler and Pressure Vessel (B&PV) Code, Section III, Subsection NH for the evaluation of strain limits and creep-fatigue damage using simplified methods based on elastic analysis have been deemed inappropriate for Alloy 617 at temperatures above 1200°F (650°C) [1]. To address this issue, proposed code rules have been developed which are based on the use of elastic-perfectly plastic (E-PP) analysis methods and which are expected to be applicable to very high temperatures. The proposed rules for strain limits and creep-fatigue evaluation were initially documented in the technical literature [2, 3], and have been recently revised to incorporate comments and simplify their application. The revised code cases have been developed. Task Objectives: The goal of the Sample Problem task is to exercise these code cases through example problems to demonstrate their feasibility and, also, to identify potential corrections and improvements should problems be encountered. This will provide input to the development of technical background documents for consideration by the applicable B&PV committees considering these code cases for approval. This task has been performed by Hollinger and Pease of Becht Engineering Co., Inc., Nuclear Services Division, and a report detailing the results of the E-PP analyses conducted on example problems per the procedures of the E-PP strain limits and creep-fatigue draft code cases is enclosed as Enclosure 1. Conclusions: The feasibility of the application of the E-PP code cases has been demonstrated through example problems that consist of realistic geometry (a nozzle attached to a semi-hemispheric shell with a circumferential weld), loads (pressure; pipe reaction load applied at the end of the nozzle, including axial and shear forces, bending and torsional moments; through-wall transient temperature gradient), and design and operating conditions (Levels A, B and C).

  20. Technology Solutions for New Homes Case Study: Multifamily Zero...

    Energy Savers [EERE]

    Case Study: Multifamily Zero Energy Ready Home Analysis AvalonBay Communities, which is a large multifamily ... planned to be certified to the ENERGY STAR Homes Version 3 program. ...

  1. A Case for Climate Neutrality: Case Studies on Moving Towards...

    Open Energy Info (EERE)

    TOOL Name: A Case for Climate Neutrality: Case Studies on Moving Towards a Low Carbon Economy Agency/Company Organization: United Nations Environment Programme (UNEP) Sector:...

  2. Annual Energy Outlook 2016 Early Release: Annotated Summary of Two Cases

    U.S. Energy Information Administration (EIA) Indexed Site

    Early Release: Annotated Summary of Two Cases May 17, 2016 The Annual Energy Outlook 2016 (AEO2016) Early Release features two cases: the Reference case and a case excluding implementation of the Clean Power Plan (CPP) Reference case: A business-as-usual trend estimate, given known technology and technological and demographic trends. The Reference case assumes CPP compliance through mass-based standards that establish caps on CO2 emissions from fossil-fired generators covered by the CPP. The

  3. Cogeneration: Economic and technical analysis. (Latest citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Published Search

    SciTech Connect (OSTI)

    Not Available

    1992-08-01

    The bibliography contains citations concerning economic and technical analyses of cogeneration systems. Topics include electric power generation, industrial cogeneration, use by utilities, and fuel cell cogeneration. The citations explore steam power station, gas turbine and steam turbine technology, district heating, refuse derived fuels, environmental effects and regulations, bioenergy and solar energy conversion, waste heat and waste product recycling, and performance analysis. (Contains a minimum of 89 citations and includes a subject term index and title list.)

  4. PEM Electrolysis H2A Production Case Study Documentation

    SciTech Connect (OSTI)

    James, Brian; Colella, Whitney; Moton, Jennie; Saur, G.; Ramsden, T.

    2013-12-31

    This report documents the development of four DOE Hydrogen Analysis (H2A) case studies for polymer electrolyte membrane (PEM) electrolysis. The four cases characterize PEM electrolyzer technology for two hydrogen production plant sizes (Forecourt and Central) and for two technology development time horizons (Current and Future).

  5. EVALUATION OF THE EFFECTIVENESS OF TRUCK EFFICIENCY TECHNOLOGIES IN CLASS 8 TRACTOR-TRAILERS BASED ON A TRACTIVE ENERGY ANALYSIS USING MEASURED DRIVE CYCLE DATA

    SciTech Connect (OSTI)

    LaClair, Tim J; Gao, Zhiming; Fu, Joshua S.; Calcagno, Jimmy; Yun, Jeongran

    2014-01-01

    Quantifying the fuel savings that can be achieved from different truck fuel efficiency technologies for a fleet's specific usage allows the fleet to select the combination of technologies that will yield the greatest operational efficiency and profitability. This paper presents an analysis of vehicle usage in a commercial vehicle fleet and an assessment of advanced efficiency technologies using an analysis of measured drive cycle data for a class 8 regional commercial shipping fleet. Drive cycle measurements during a period of a full year from six tractor-trailers in normal operations in a less-than-truckload (LTL) carrier were analyzed to develop a characteristic drive cycle that is highly representative of the fleet's usage. The vehicle mass was also estimated to account for the variation of loads that the fleet experienced. The drive cycle and mass data were analyzed using a tractive energy analysis to quantify the fuel efficiency and CO2 emissions benefits that can be achieved on class 8 tractor-trailers when using advanced efficiency technologies, either individually or in combination. Although differences exist among class 8 tractor-trailer fleets, this study provides valuable insight into the energy and emissions reduction potential that various technologies can bring in this important trucking application.
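
    A tractive energy analysis of this kind integrates positive road-load power over the measured speed trace. The sketch below uses the standard road-load equation with hypothetical class 8 parameters and a synthetic cycle; the fleet's measured drive cycles and estimated masses would replace these inputs.

    ```python
    import numpy as np

    def tractive_energy_kwh(speed, dt, mass, crr, cda, rho=1.2, g=9.81):
        """Positive tractive energy (kWh) for a speed trace (m/s) sampled every dt s."""
        accel = np.gradient(speed, dt)
        force = (mass * accel                   # inertial force
                 + crr * mass * g               # rolling resistance
                 + 0.5 * rho * cda * speed**2)  # aerodynamic drag
        power = force * speed                   # W; negative while braking
        return power[power > 0].sum() * dt / 3.6e6

    # Synthetic 10-minute cycle: accelerate to 25 m/s, cruise, decelerate.
    t = np.arange(0.0, 600.0, 1.0)
    speed = np.interp(t, [0, 60, 540, 600], [0.0, 25.0, 25.0, 0.0])

    # Hypothetical tractor-trailer: 36 t, Crr = 0.006, CdA = 5 m^2.
    print(f"tractive energy: {tractive_energy_kwh(speed, 1.0, 36_000, 0.006, 5.0):.1f} kWh")
    ```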

  6. The Science Manager's Guide to Case Studies

    SciTech Connect (OSTI)

    Branch, Kristi M.; Peffers, Melissa S.; Ruegg, Rosalie T.; Vallario, Robert W.

    2001-09-24

    This guide takes the science manager through the steps of planning, implementing, validating, communicating, and using case studies. It outlines the major methods of analysis, describing their relative merits and applicability while providing relevant examples and sources of additional information. Well-designed case studies can provide a combination of rich qualitative and quantitative information, offering valuable insights into the nature, outputs, and longer-term impacts of the research. An objective, systematic, and credible approach to the evaluation of U.S. Department of Energy Office of Science programs adds value to the research process and is the subject of this guide.

  7. In Case of Emergency

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Fire/Police Emergency: ext. 7911. Cell phone or off-site: 510-486-7911. When dialing from off-site, the following extensions must be preceded by 486-; the area code for LBNL is (510). Fire Department (non-emergency): ext. 6015. Police Department (non-emergency): ext. 5472. Non-Emergency Reporting: ext. 6999. Additional information about emergency procedures at Berkeley Lab can be found on the red Emergency Response Guides posted around the lab and

  8. MFDn Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Description of MFDn: Many-Fermion Dynamics---nuclear, or MFDn, is a configuration interaction (CI) code for nuclear structure calculations. It is a platform-independent Fortran 90 code using a hybrid MPI/OpenMP programming model, and is being used on current supercomputers, such as Edison at NERSC, for ab initio calculations of atomic nuclei using realistic nucleon-nucleon and three-nucleon forces. A calculation consists of generating a many-body basis space,

  9. SU-F-18C-01: Minimum Detectability Analysis for Comprehensive Sized Based Optimization of Image Quality and Radiation Dose Across CT Protocols

    SciTech Connect (OSTI)

    Smitherman, C; Chen, B; Samei, E

    2014-06-15

    Purpose: This work involved a comprehensive modeling of task-based performance of CT across a wide range of protocols. The approach was used for optimization and consistency of dose and image quality within a large multi-vendor clinical facility. Methods: 150 adult protocols from the Duke University Medical Center were grouped into sub-protocols with similar acquisition characteristics. A size based image quality phantom (Duke Mercury Phantom) was imaged using these sub-protocols for a range of clinically relevant doses on two CT manufacturer platforms (Siemens, GE). The images were analyzed to extract task-based image quality metrics such as the Task Transfer Function (TTF), Noise Power Spectrum, and Az based on designer nodule task functions. The data were analyzed in terms of the detectability of a lesion size/contrast as a function of dose, patient size, and protocol. A graphical user interface (GUI) was developed to predict image quality and dose to achieve a minimum level of detectability. Results: Image quality trends with variations in dose, patient size, and lesion contrast/size were evaluated and calculated data behaved as predicted. The GUI proved effective to predict the Az values representing radiologist confidence for a targeted lesion, patient size, and dose. As an example, an abdomen pelvis exam for the GE scanner, with a task size/contrast of 5-mm/50-HU, and an Az of 0.9 requires a dose of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm, the minimum detected lesion size at those dose levels would be 8.4, 5, and 3.9 mm, respectively. Conclusion: The designed CT protocol optimization platform can be used to evaluate minimum detectability across dose levels and patient diameters. The method can be used to improve individual protocols as well as to improve protocol consistency across CT scanners.
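
    A minimal sketch of the lookup such a tool performs, hard-coding only the example values quoted above (GE scanner, abdomen-pelvis, 5-mm/50-HU task, Az = 0.9) and interpolating log-linearly in dose between patient diameters:

    ```python
    import numpy as np

    # Example values from the abstract: dose (mGy) needed for Az = 0.9 with a
    # 5-mm/50-HU task on the GE scanner, versus patient diameter (cm).
    diameters_cm = np.array([25.0, 30.0, 35.0])
    doses_mgy = np.array([4.0, 8.9, 16.9])

    def required_dose_mgy(diameter_cm):
        """Log-linear interpolation; dose grows roughly exponentially with size."""
        return float(np.exp(np.interp(diameter_cm, diameters_cm, np.log(doses_mgy))))

    print(f"estimated dose for a 28 cm patient: {required_dose_mgy(28.0):.1f} mGy")
    ```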

  10. Geothermal Case Studies

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Young, Katherine

    2014-09-30

    database.) In fiscal year 2015, NREL is working with universities to populate additional case studies on OpenEI. The goal is to provide a large enough dataset to start conducting analyses of exploration programs to identify correlations between successful exploration plans for areas with similar geologic occurrence models.

  11. Geothermal Case Studies

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Young, Katherine

    database.) In fiscal year 2015, NREL is working with universities to populate additional case studies on OpenEI. The goal is to provide a large enough dataset to start conducting analyses of exploration programs to identify correlations between successful exploration plans for areas with similar geologic occurrence models.

  12. Stress analysis of closure bolts for shipping casks

    SciTech Connect (OSTI)

    Mok, G.C.; Fischer, L.E.; Hsu, S.T.

    1993-01-01

    This report specifies the requirements and criteria for stress analysis of closure bolts for shipping casks containing spent nuclear fuel or high-level radioactive materials. The specification is based on existing information concerning the structural behavior, analysis, and design of bolted joints. The approach taken was to extend the ASME Boiler and Pressure Vessel Code requirements and criteria for bolting analysis of nuclear piping and pressure vessels to include the appropriate design and load characteristics of the shipping cask. The characteristics considered are large, flat closure lids with metal-to-metal contact within the bolted joint; significant temperature and impact loads; and possible prying and bending effects. Specific formulas and procedures developed apply to the bolt stress analysis of a circular, flat, bolted closure. The report also includes critical load cases and desirable design practices for the bolted closure, an in-depth review of the structural behavior of bolted joints, and a comprehensive bibliography of current information on bolted joints.
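
    As a flavor of the closure-bolt checks such a procedure formalizes, here is a minimal textbook-style sketch of average bolt stress from preload plus internal pressure on a circular flat lid. The geometry and loads are invented, and the report's actual formulas additionally treat prying, bending, temperature, and impact effects.

    ```python
    import math

    def bolt_axial_stress_mpa(preload_n, pressure_pa, lid_radius_m, n_bolts, area_m2):
        """Average bolt stress from preload plus the pressure load on a flat lid.
        Adding the full pressure share to the preload is a conservative
        simplification; a real joint analysis apportions it by joint stiffness."""
        pressure_force = pressure_pa * math.pi * lid_radius_m ** 2
        axial = preload_n + pressure_force / n_bolts
        return axial / area_m2 / 1.0e6

    # Hypothetical closure: 24 bolts of 3 cm^2 stress area, 0.5 m lid radius,
    # 2 MPa internal pressure, 50 kN preload per bolt (illustrative values).
    print(f"bolt stress: {bolt_axial_stress_mpa(5.0e4, 2.0e6, 0.5, 24, 3.0e-4):.0f} MPa")
    ```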

  13. Modelling renewable electric resources: A case study of wind

    SciTech Connect (OSTI)

    Bernow, S.; Biewald, B.; Hall, J.; Singh, D.

    1994-07-01

    The central issue facing renewables in the integrated resource planning process is the appropriate assessment of the value of renewables to utility systems. This includes their impact on both energy and capacity costs (avoided costs), and on emissions and environmental impacts, taking account of the reliability, system characteristics, interactions (in dispatch), seasonality, and other characteristics and costs of the technologies. These are system-specific considerations whose relationships may have some generic implications. In this report, we focus on the reliability contribution of wind electric generating systems, measured as the amount of fossil capacity they can displace while meeting the system reliability criterion. We examine this issue for a case study system at different wind characteristics and penetration, for different years, with different system characteristics, and with different modelling techniques. In an accompanying analysis we also examine the economics of wind electric generation, as well as its emissions and social costs, for the case study system. This report was undertaken for the “Innovative IRP” program of the U.S. Department of Energy, and is based on work by both the Union of Concerned Scientists (UCS) and the Tellus Institute, including America's Energy Choices and the UCS Midwest Renewables Project.
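
    The reliability contribution described here is commonly computed as an effective load-carrying capability: shrink the fossil capacity of the with-wind system until its loss-of-load probability matches the no-wind system. A coarse Monte Carlo sketch follows, with invented load, wind, and outage assumptions standing in for the case-study data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H = 8760
    load = 800.0 + 150.0 * rng.random(H)     # hypothetical hourly load (MW)
    wind = 300.0 * rng.beta(2, 5, size=H)    # hypothetical hourly wind output (MW)

    def lolp(fossil_mw, wind_mw, n_units=20, outage_rate=0.05, trials=20):
        """Loss-of-load probability with randomly failing fossil units plus wind."""
        unit = fossil_mw / n_units
        short = 0
        for _ in range(trials):
            up = rng.random((H, n_units)) > outage_rate
            short += np.sum(up.sum(axis=1) * unit + wind_mw < load)
        return short / (H * trials)

    target = lolp(1100.0, 0.0)               # reliability of the no-wind system
    for fossil in np.arange(1100.0, 899.0, -10.0):
        if lolp(fossil, wind) > target:      # with-wind system just misses target
            print(f"wind displaces roughly {1100.0 - fossil:.0f} MW of fossil capacity")
            break
    ```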

  14. Performance-based ratemaking for electric utilities: Review of plans and analysis of economic and resource-planning issues. Volume 2, Appendices

    SciTech Connect (OSTI)

    Comnes, G.A.; Stoft, S.; Greene, N.; Hill, L.J.

    1995-11-01

    This document contains summaries of the electric utilities performance-based rate plans for the following companies: Alabama Power Company; Central Maine Power Company; Consolidated Edison of New York; Mississippi Power Company; New York State Electric and Gas Corporation; Niagara Mohawk Power Corporation; PacifiCorp; Pacific Gas and Electric; Southern California Edison; San Diego Gas & Electric; and Tucson Electric Power. In addition, this document also contains information about LBNL's Power Index and Incentive Properties of a Hybrid Cap and Long-Run Demand Elasticity.

  15. An Analysis Of The Impact Of Selected Carbon Capture And Storage Policy Scenarios On The US Fossil-Based Electric Power Sector

    SciTech Connect (OSTI)

    Davidson, Casie L.; Dooley, James J.; Dahowski, Robert T.; Mahasenan, N Maha

    2003-09-13

    CO2 capture and storage (CCS) is rapidly emerging as a potential key climate change mitigation option. However, as policymakers and industrial stakeholders begin the process of formulating new policy for implementing CCS technologies, participants require a tool to assess large-scale CCS deployment over a number of different possible future scenarios. This paper will analyze several scenarios using two state-of-the-art Battelle-developed models, the MiniCAM and the CO2-GIS, for examining CCS deployment. Outputs include the total amount of CO2 captured, total annual emissions, and fossil-based generating capacity.

  16. Preliminary hazards analysis -- vitrification process

    SciTech Connect (OSTI)

    Coordes, D.; Ruggieri, M.; Russell, J.; TenBrook, W.; Yimbo, P.

    1994-06-01

    This paper presents a Preliminary Hazards Analysis (PHA) for mixed waste vitrification by joule heating. The purpose of performing a PHA is to establish an initial hazard categorization for a DOE nuclear facility and to identify those processes and structures which may have an impact on or be important to safety. The PHA is typically performed during and provides input to project conceptual design. The PHA is then followed by a Preliminary Safety Analysis Report (PSAR) performed during Title 1 and 2 design. The PSAR then leads to performance of the Final Safety Analysis Report performed during the facility's construction and testing. It should be completed before routine operation of the facility commences. This PHA addresses the first four chapters of the safety analysis process, in accordance with the requirements of DOE Safety Guidelines in SG 830.110. The hazards associated with vitrification processes are evaluated using standard safety analysis methods which include: identification of credible potential hazardous energy sources; identification of preventative features of the facility or system; identification of mitigative features; and analyses of credible hazards. Maximal facility inventories of radioactive and hazardous materials are postulated to evaluate worst-case accident consequences. These inventories were based on DOE-STD-1027-92 guidance and the surrogate waste streams defined by Mayberry, et al. Radiological assessments indicate that a facility, depending on the radioactive material inventory, may be an exempt, Category 3, or Category 2 facility. The calculated impacts would result in no significant impact to offsite personnel or the environment. Hazardous materials assessment indicates that a Mixed Waste Vitrification facility will be a Low Hazard facility having minimal impacts to offsite personnel and the environment.

  17. Wind to Hydrogen in California: Case Study

    SciTech Connect (OSTI)

    Antonia, O.; Saur, G.

    2012-08-01

    This analysis presents a case study in California for a large scale, standalone wind electrolysis site. This is a techno-economic analysis of the 40,000 kg/day renewable production of hydrogen and subsequent delivery by truck to a fueling station in the Los Angeles area. This quantity of hydrogen represents about 1% vehicle market penetration for a city such as Los Angeles (assuming 0.62 kg/day/vehicle and 0.69 vehicles/person) [8]. A wind site near the Mojave Desert was selected for proximity to the LA area where hydrogen refueling stations are already built.

  18. Risk Informed Safety Margin Characterization Case Study: Selection of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Electrical Equipment To Be Subjected to Environmental Qualification. Reference 1 discussed key elements of the process for developing a margins-based “safety case” to support safe and efficient operation for an extended period. The

  19. FES Case Study Worksheets

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This workshop is closed, and the worksheets can no longer be edited. If you have questions, please report any problems or suggestions for improvement to Richard Gerber (ragerber@lbl.gov). Worksheet templates: Lee Berry, Paul Bonoli, David Green; Jeff Candy; CS Chang; Stephane Ethier; Alex Friedman; Kai Germaschewski; Martin Greenwald; Stephen Jardin; Charlson Kim; Scott Kruger

  20. Kinetics of Cold-Cap Reactions for Vitrification of Nuclear Waste Glass Based on Simultaneous Differential Scanning Calorimetry - Thermogravimetry (DSC-TGA) and Evolved Gas Analysis (EGA)

    SciTech Connect (OSTI)

    Rodriguez, Carmen P.; Pierce, David A.; Schweiger, Michael J.; Kruger, Albert A.; Chun, Jaehun; Hrma, Pavel R.

    2013-12-03

    For vitrifying nuclear waste glass, the feed, a mixture of waste with glass-forming and modifying additives, is charged onto the cold cap that covers 90-100% of the melt surface. The cold cap consists of a layer of reacting molten glass floating on the surface of the melt in an all-electric, continuous glass melter. As the feed moves through the cold cap, it undergoes chemical reactions and phase transitions through which it is converted to molten glass that moves from the cold cap into the melt pool. The process involves a series of reactions that generate multiple gases; the subsequent mass loss and foaming significantly influence the mass and heat transfer. The rate of glass melting, which is greatly influenced by mass and heat transfer, affects the vitrification process and the efficiency of the immobilization of nuclear waste. We studied the cold-cap reactions of a representative waste glass feed using both simultaneous differential scanning calorimetry-thermogravimetry (DSC-TGA) and thermogravimetry coupled with gas chromatography-mass spectrometry (TGA-GC-MS) as complementary tools to perform evolved gas analysis (EGA). Analyses from DSC-TGA and EGA of the cold-cap reactions provide a key element for the development of an advanced cold-cap model. They also help in formulating melter feeds for higher production rates.
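
    For the kinetics themselves, one standard route from DSC data at several heating rates is the Kissinger method, which extracts an apparent activation energy from the shift of the reaction peak temperature; the sketch below uses invented peak temperatures, not the paper's measurements.

    ```python
    import numpy as np

    # Kissinger relation: ln(beta / Tp^2) = -Ea/(R*Tp) + const, where beta is
    # the heating rate and Tp the DSC peak temperature. Illustrative data only.
    beta = np.array([5.0, 10.0, 15.0, 20.0])     # heating rates (K/min)
    tp = np.array([950.0, 965.0, 975.0, 982.0])  # peak temperatures (K)

    R = 8.314  # J/(mol K)
    slope, _ = np.polyfit(1.0 / tp, np.log(beta / tp**2), 1)
    print(f"apparent activation energy: {-slope * R / 1000.0:.0f} kJ/mol")
    ```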

  1. Analysis of Geothermal Reservoir Stimulation Using Geomechanics...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based Stochastic Analysis of Injection-induced Seismicity; 2010 Geothermal Technology Program Peer Review Report ...

  2. DOE Zero Energy Ready Home Case Study: Palo Duro Homes, Albuquerque...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE Zero Energy Ready Home Case Study: Palo Duro Homes, Albuquerque, NM Case study of a New Mexico-based home builder who has built more DOE Zero Energy Ready certified homes than ...

  3. DOE Zero Energy Ready Home Case Study: Palo Duro Homes, Albuquerque...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Albuquerque, NM DOE Zero Energy Ready Home Case Study: Palo Duro Homes, Albuquerque, NM Case study of a New Mexico-based home builder who has built more DOE Zero Energy Ready ...

  4. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    SciTech Connect (OSTI)

    Driscoll, Donald D. [Case Western Reserve U.]

    2004-01-01

    The Cryogenic Dark Matter Search (CDMS) uses cryogenically-cooled detectors made of germanium and silicon in an attempt to detect dark matter in the form of Weakly-Interacting Massive Particles (WIMPs). The expected interaction rate of these particles is on the order of 1/kg/day, far below the 200/kg/day expected rate of background interactions after passive shielding and an active cosmic ray muon veto. Our detectors are instrumented to make a simultaneous measurement of both the ionization energy and thermal energy deposited by the interaction of a particle with the crystal substrate. A comparison of these two quantities allows for the rejection of a background of electromagnetically-interacting particles at a level of better than 99.9%. The dominant remaining background at a depth of approximately 11 m below the surface comes from fast neutrons produced by cosmic ray muons interacting in the rock surrounding the experiment. Contamination of our detectors by a beta emitter can add an unknown source of unrejected background. In the energy range of interest for a WIMP study, electrons will have a short penetration depth and preferentially interact near the surface. Some of the ionization signal can be lost to the charge contacts there, and a decreased ionization signal relative to the thermal signal will cause a background event which interacts at the surface to be misidentified as a signal event. We can use information about the shape of the thermal signal pulse to discriminate against these surface events. Using a subset of our calibration set which contains a large fraction of electron events, we can characterize the expected behavior of surface events and construct a cut to remove them from our candidate signal events. This thesis describes the development of the 6 detectors (4 x 250 g Ge and 2 x 100 g Si) used in the 2001-2002 CDMS data run at the Stanford Underground Facility with a total of 119 live days of data. The preliminary results presented are based on the
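
    The ionization-to-thermal comparison described above amounts to a cut on the ionization yield (ionization energy divided by recoil energy). A minimal sketch with assumed Gaussian bands follows; the band means and widths are invented for illustration, not CDMS calibration values.

    ```python
    from statistics import NormalDist

    # Electron recoils (background) cluster near a yield of 1; nuclear recoils
    # (signal-like) near ~0.3. These means and widths are assumptions.
    electron_band = NormalDist(mu=1.0, sigma=0.08)
    nuclear_band = NormalDist(mu=0.3, sigma=0.05)

    cut = 0.6  # accept only events with ionization yield below this value
    print(f"background leakage past the cut: {electron_band.cdf(cut):.1e}")
    print(f"signal acceptance: {nuclear_band.cdf(cut):.1%}")
    ```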

  5. Stereotactic Body Radiotherapy Versus Surgery for Medically Operable Stage I Non-Small-Cell Lung Cancer: A Markov Model-Based Decision Analysis

    SciTech Connect (OSTI)

    Louie, Alexander V.; Rodrigues, George; Palma, David A.; Cao, Jeffrey Q.; Yaremko, Brian P.; Malthaner, Richard; Mocanu, Joseph D.

    2011-11-15

    Purpose: To compare the quality-adjusted life expectancy and overall survival in patients with Stage I non-small-cell lung cancer (NSCLC) treated with either stereotactic body radiation therapy (SBRT) or surgery. Methods and Materials: We constructed a Markov model to describe health states after either SBRT or lobectomy for Stage I NSCLC for a 5-year time frame. We report various treatment strategy survival outcomes stratified by age, sex, and pack-year history of smoking, and compared these with an external outcome prediction tool (Adjuvant! Online). Results: Overall survival, cancer-specific survival, and other causes of death as predicted by our model correlated closely with those predicted by the external prediction tool. Overall survival at 5 years as predicted by baseline analysis of our model is in favor of surgery, with a benefit ranging from 2.2% to 3.0% for all cohorts. Mean quality-adjusted life expectancy ranged from 3.28 to 3.78 years after surgery and from 3.35 to 3.87 years for SBRT. The utility threshold for preferring SBRT over surgery was 0.90. Outcomes were sensitive to quality of life, the proportion of local and regional recurrences treated with standard vs. palliative treatments, and the surgery- and SBRT-related mortalities. Conclusions: The role of SBRT in the medically operable patient is yet to be defined. Our model indicates that SBRT may offer comparable overall survival and quality-adjusted life expectancy as compared with surgical resection. Well-powered prospective studies comparing surgery vs. SBRT in early-stage lung cancer are warranted to further investigate the relative survival, quality of life, and cost characteristics of both treatment paradigms.
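
    A minimal sketch of the kind of Markov cohort calculation described, with hypothetical annual transition probabilities and utilities rather than the authors' fitted values:

    ```python
    import numpy as np

    # States: no recurrence, recurrence, dead. Hypothetical annual transition
    # matrix for one treatment arm (each row sums to 1).
    P = np.array([
        [0.90, 0.06, 0.04],
        [0.00, 0.70, 0.30],
        [0.00, 0.00, 1.00],   # death is absorbing
    ])
    utility = np.array([0.85, 0.60, 0.0])  # assumed quality-of-life weights

    state = np.array([1.0, 0.0, 0.0])      # cohort starts disease-free
    qalys = 0.0
    for year in range(5):                  # 5-year horizon, as in the study
        qalys += state @ utility           # QALYs accrued this cycle (no
        state = state @ P                  # half-cycle correction, for brevity)
    print(f"5-year overall survival: {1.0 - state[2]:.1%}")
    print(f"quality-adjusted life expectancy: {qalys:.2f} QALYs")
    ```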

  6. Coal supply/demand, 1980 to 2000. Task 3. Resource applications industrialization system data base. Final review draft. [USA; forecasting 1980 to 2000; sector and regional analysis]

    SciTech Connect (OSTI)

    Fournier, W.M.; Hasson, V.

    1980-10-10

    This report is a compilation of data and forecasts resulting from an analysis of the coal market and the factors influencing supply and demand. The analyses performed for the forecasts were made on an end-use-sector basis. The sectors analyzed are electric utility, industry demand for steam coal, industry demand for metallurgical coal, residential/commercial, coal demand for synfuel production, and exports. The purpose is to provide coal production and consumption forecasts that can be used to perform detailed, railroad company-specific coal transportation analyses. To make the data applicable for the subsequent transportation analyses, the forecasts have been made for each end-use sector on a regional basis. The supply regions are: Appalachia, East Interior, West Interior and Gulf, Northern Great Plains, and Mountain. The demand regions are the same as the nine Census Bureau regions. Coal production and consumption in the United States are projected to increase dramatically in the next 20 years due to increasing requirements for energy and the unavailability of other sources of energy to supply a substantial portion of this increase. Coal comprises 85 percent of the US recoverable fossil energy reserves and could be mined to supply the increasing energy demands of the US. The NTPSC study found that the additional traffic demands by 1985 may be met by the railways by way of improved signalization, shorter block sections, centralized traffic control, and other modernization methods without providing for heavy line capacity works. But by 2000 the incremental traffic on some of the major corridors was projected to increase very significantly and is likely to call for special line capacity works involving heavy investment.

  7. Identification of Patient Benefit From Proton Therapy for Advanced Head and Neck Cancer Patients Based on Individual and Subgroup Normal Tissue Complication Probability Analysis

    SciTech Connect (OSTI)

    Jakobi, Annika; Bandurska-Luque, Anna; Stützer, Kristin; Haase, Robert; Löck, Steffen; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniela; and others

    2015-08-01

    Purpose: The purpose of this study was to determine, by treatment plan comparison along with normal tissue complication probability (NTCP) modeling, whether a subpopulation of patients with head and neck squamous cell carcinoma (HNSCC) could be identified that would gain substantial benefit from proton therapy in terms of NTCP. Methods and Materials: For 45 HNSCC patients, intensity modulated radiation therapy (IMRT) was compared to intensity modulated proton therapy (IMPT). Physical dose distributions were evaluated as well as the resulting NTCP values, using modern models for acute mucositis, xerostomia, aspiration, dysphagia, laryngeal edema, and trismus. Patient subgroups were defined based on primary tumor location. Results: Generally, IMPT reduced the NTCP values while keeping similar target coverage for all patients. Subgroup analyses revealed a higher individual reduction of swallowing-related side effects by IMPT for patients with tumors in the upper head and neck area, whereas the risk reduction of acute mucositis was more pronounced in patients with tumors in the larynx region. More patients with tumors in the upper head and neck area had a reduction in NTCP of more than 10%. Conclusions: Subgrouping can help to identify patients who may benefit more than others from the use of IMPT and, thus, can be a useful tool for a preselection of patients in the clinic where there are limited PT resources. Because the individual benefit differs within a subgroup, the relative merits should additionally be evaluated by individual treatment plan comparisons.

  8. An economic feasibility analysis of distributed electric power generation based upon the Natural Gas-Fired Fuel Cell: a model of the operations cost.

    SciTech Connect (OSTI)

    Not Available

    1993-06-30

    This model description establishes the revenues, expenses, incentives, and avoided costs of operation of a natural gas-fired fuel cell-based power generation system. Fuel is the major element of the cost of operation of a natural gas-fired fuel cell. Forecasts of the change in the price of this commodity are an important consideration in the ownership of an energy conversion system. Differences between forecasts, the interests of the forecaster, or geographical areas can all have significant effects on imputed fuel costs. There is less effect on judgments made about the feasibility of an energy conversion system, since changes in fuel price can affect the cost of operation of the alternatives to the fuel cell in a similar fashion. The forecasts used in this model are only intended to provide the potential owner or operator with the means to examine alternate future scenarios. The operations model computes operating costs of a system suitable for a large condominium complex or a residential institution such as a hotel, boarding school, or prison. The user may also select large office buildings, which are characterized by 12 to 16 hours per day of operation, or industrial users with a steady demand for thermal and electrical energy around the clock.
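
    Because fuel dominates the operating cost, the core arithmetic of such a model is a heat-rate calculation; a minimal sketch with illustrative inputs (none of them taken from the model itself):

    ```python
    def annual_fuel_cost(elec_kw, hours, elec_efficiency, gas_price_per_mmbtu):
        """Annual natural gas cost to generate elec_kw at the given efficiency."""
        fuel_mmbtu = elec_kw * hours * 3412.0 / elec_efficiency / 1.0e6  # kWh->MMBtu
        return fuel_mmbtu * gas_price_per_mmbtu

    # Hypothetical 200 kW fuel cell serving a hotel around the clock at 40%
    # electrical efficiency with $4.50/MMBtu gas (all values illustrative).
    print(f"annual fuel cost: ${annual_fuel_cost(200.0, 8760, 0.40, 4.50):,.0f}")
    ```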

  9. Performance-based ratemaking for electric utilities: Review of plans and analysis of economic and resource-planning issues. Volume 1

    SciTech Connect (OSTI)

    Comnes, G.A.; Stoft, S.; Greene, N.; Hill, L.J. |

    1995-11-01

    Performance-Based Ratemaking (PBR) is a form of utility regulation that strengthens the financial incentives to lower rates, lower costs, or improve nonprice performance relative to traditional regulation, which the authors call cost-of-service/rate-of-return (COS/ROR) regulation. Although the electric utility industry has considerable experience with incentive mechanisms that target specific areas of performance, implementation of mechanisms that cover a comprehensive set of utility costs or services is relatively rare. In recent years, interest in PBR has increased as a result of growing dissatisfaction with COS/ROR and as a result of economic and technological trends that are leading to more competition in certain segments of the electricity industry. In addition, incentive regulation has been used with some success in other public utility industries, most notably telecommunications in the US and telecommunications, energy, and water in the United Kingdom. In this report, the authors analyze comprehensive PBR mechanisms for electric utilities in four ways: (1) they describe different types of PBR mechanisms, (2) they review a sample of actual PBR plans, (3) they consider the interaction of PBR and utility-funded energy efficiency programs, and (4) they examine how PBR interacts with electric utility resource planning and industry restructuring. The report should be of interest to technical staff of utilities and regulatory commissions that are actively considering or designing PBR mechanisms. 16 figs., 17 tabs.

  10. Technology Solutions for New Homes Case Study: Multifamily Zero Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Ready Home Analysis. AvalonBay Communities, which is a large multifamily developer, was developing a three-building complex in Elmsford, New York. The buildings were planned to be certified to the ENERGY STAR® Homes Version 3 program. This plan led to AvalonBay partnering with the Advanced Residential Integrated Solutions (ARIES)

  11. Structure-Based Analysis of Toxoplasma gondii Profilin: A Parasite-Specific Motif Is Required for Recognition by Toll-Like Receptor 11

    SciTech Connect (OSTI)

    K Kucera; A Koblansky; L Saunders; K Frederick; E De La Cruz; S Ghosh; Y Modis

    2011-12-31

    Profilins promote actin polymerization by exchanging ADP for ATP on monomeric actin and delivering ATP-actin to growing filament barbed ends. Apicomplexan protozoa such as Toxoplasma gondii invade host cells using an actin-dependent gliding motility. Toll-like receptor (TLR) 11 generates an innate immune response upon sensing T. gondii profilin (TgPRF). The crystal structure of TgPRF reveals a parasite-specific surface motif consisting of an acidic loop, followed by a long β-hairpin. A series of structure-based profilin mutants show that TLR11 recognition of the acidic loop is responsible for most of the interleukin (IL)-12 secretion response to TgPRF in peritoneal macrophages. Deletion of both the acidic loop and the β-hairpin completely abrogates IL-12 secretion. Insertion of the T. gondii acidic loop and β-hairpin into yeast profilin is sufficient to generate TLR11-dependent signaling. Substitution of the acidic loop in TgPRF with the homologous loop from the apicomplexan parasite Cryptosporidium parvum does not affect TLR11-dependent IL-12 secretion, while substitution with the acidic loop from Plasmodium falciparum results in reduced but significant IL-12 secretion. We conclude that the parasite-specific motif in TgPRF is the key molecular pattern recognized by TLR11. Unlike other profilins, TgPRF slows nucleotide exchange on monomeric rabbit actin and binds rabbit actin weakly. The putative TgPRF actin-binding surface includes the β-hairpin and diverges widely from the actin-binding surfaces of vertebrate profilins.

  12. Advancing the surgical implantation of electronic tags in fish: a gap analysis and research agenda based on a review of trends in intracoelomic tagging effects studies

    SciTech Connect (OSTI)

    Cooke, Steven J.; Woodley, Christa M.; Eppard, M. B.; Brown, Richard S.; Nielsen, Jennifer L.

    2011-03-08

    Early approaches to surgical implantation of electronic tags in fish were often through trial and error; however, in recent years there has been an interest in using scientific research to identify techniques and procedures that improve the outcome of surgical procedures and determine the effects of tagging on individuals. Here we summarize the trends in 108 peer-reviewed electronic tagging effects studies focused on intracoelomic implantation to determine opportunities for future research. To date, almost all of the studies have been conducted in freshwater, typically in laboratory environments, and have focused on biotelemetry devices. The majority of studies have focused on salmonids, cyprinids, ictalurids and centrarchids, with a regional bias towards North America, Europe and Australia. Most studies have focused on determining whether there is a negative effect of tagging relative to control fish, with proportionally fewer that have contrasted different aspects of the surgical procedure (e.g., methods of sterilization, incision location, wound closure material) that could advance the discipline. Many of these studies included routine endpoints such as mortality, growth, healing and tag retention, with fewer addressing sublethal measures such as swimming ability, predator avoidance, physiological costs, or fitness. Continued research is needed to further elevate the practice of electronic tag implantation in fish in order to ensure that the data generated are relevant to untagged conspecifics (i.e., no long-term behavioural or physiological consequences) and the surgical procedure does not impair the health and welfare status of the tagged fish. To that end, we advocate for i) rigorous controlled manipulations based on statistical designs that have adequate power, account for inter-individual variation, and include controls and shams, ii) studies that transcend the laboratory and the field with more studies in marine waters, iii) incorporation of knowledge and

  13. Agent-based Infrastructure Interdependency Model

    Energy Science and Technology Software Center (OSTI)

    2003-10-01

    The software is used to analyze infrastructure interdependencies. Agent-based modeling is used for the analysis.

  14. SEP CASE STUDY WEBINAR: MEDIMMUNE

    Office of Energy Efficiency and Renewable Energy (EERE)

    This Measurement and Verification Case Study webinar is the first in a series of case study webinars to highlight the successes of facilities that have achieved Superior Energy Performance (SEP)...

  15. Modeling & Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Facilities, Modeling, Modeling & Analysis, Renewable Energy, Research & Capabilities, Wind Energy, Wind News: Virtual LIDAR Model Helps Researchers ...

  16. RESULTS OF THE TECHNICAL AND ECONOMIC FEASIBILITY ANALYSIS FOR A NOVEL BIOMASS GASIFICATION-BASED POWER GENERATION SYSTEM FOR THE FOREST PRODUCTS INDUSTRY

    SciTech Connect (OSTI)

    Bruce Bryan; Joseph Rabovitser; Sunil Ghose; Jim Patel

    2003-11-01

    In 2001, the Gas Technology Institute (GTI) entered into Cooperative Agreement DE-FC26-01NT41108 with the U.S. Department of Energy (DOE) for an Agenda 2020 project to develop an advanced biomass gasification-based power generation system for near-term deployment in the Forest Products Industry (FPI). The advanced power system combines three advanced components, including biomass gasification, 3-stage stoker-fired combustion for biomass conversion, and externally recuperated gas turbines (ERGTs) for power generation. The primary performance goals for the advanced power system are to provide increased self-generated power production for the mill and to increase wastewood utilization while decreasing fossil fuel use. Additional goals are to reduce boiler NOx and CO2 emissions. The current study was conducted to determine the technical and economic feasibility of an Advanced Power Generation System capable of meeting these goals so that a capital investment decision can be made regarding its implementation at a paper mill demonstration site in DeRidder, LA. Preliminary designs and cost estimates were developed for all major equipment, boiler modifications and balance of plant requirements including all utilities required for the project. A three-step implementation plan was developed to reduce technology risk. The plant design was found to meet the primary objectives of the project for increased bark utilization, decreased fossil fuel use, and increased self-generated power in the mill. Bark utilization for the modified plant is significantly higher (90-130%) than current operation compared to the 50% design goal. For equivalent steam production, the total gas usage for the fully implemented plant is 29% lower than current operation. While the current average steam production from No. 2 Boiler is about 213,000 lb/h, the total steam production from the modified plant is 379,000 lb/h. This steam production increase will be accomplished at a grate heat release rate

  17. NREL: Energy Analysis: Resource Assessment

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    geographic distribution, using geographic information systems (GIS) and other techniques. ... U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis. (2012). Milbrandt, A. ...

  18. Feasibility analysis of geothermal district heating for Lakeview, Oregon

    SciTech Connect (OSTI)

    Not Available

    1980-12-23

    An analysis of the geothermal resource at Lakeview, Oregon, indicates that a substantial resource exists in the area capable of supporting extensive residential, commercial and industrial heat loads. Good resource productivity is expected with water temperatures of 200°F at depths of 600 to 3000 feet in the immediate vicinity of the town. Preliminary district heating system designs were developed for a Base Case serving 1170 homes, 119 commercial and municipal buildings, and a new alcohol fuel production facility; a second design was prepared for a downtown Mini-district case with 50 commercial users and the alcohol plant. Capital and operating costs were determined for both cases. Initial development of the Lakeview system has involved conducting user surveys, well tests, determinations of institutional requirements, system designs, and project feasibility analyses. A preferred approach for development will be to establish the downtown Mini-district and, as experience and acceptance are obtained, to expand the system to other areas of town. Projected energy costs for the Mini-district are $10.30 per million Btu while those for the larger Base Case design are $8.20 per million Btu. These costs are competitive with costs for existing sources of energy in the Lakeview area.

  19. The Business Case for Fuel Cells 2014: Powering the Bottom Line...

    Office of Environmental Management (EM)

    These include wastewater treatment plants, government buildings, universities, military bases, hospitals, and other sites. The Business Case for Fuel Cells 2014: Powering the ...

  20. Appendix A: Reference case

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    12.92 12.90 13.09 -0.2% 1 Commercial trucks 8,501 to 10,000 pounds gross vehicle weight rating. 2 CAFE standard based on projected new vehicle sales. 3 Includes CAFE credits for...

  1. Restricted Natural Gas Supply Case (released in AEO2005)

    Reports and Publications (EIA)

    2005-01-01

    The restricted natural gas supply case provides an analysis of the energy-economic implications of a scenario in which future gas supply is significantly more constrained than assumed in the reference case. Future natural gas supply conditions could be constrained because of problems with the construction and operation of large new energy projects, and because the future rate of technological progress could be significantly lower than the historical rate. Although the restricted natural gas supply case represents a plausible set of constraints on future natural gas supply, it is not intended to represent what is likely to happen in the future.

  2. Technology Deployment Case Studies | Department of Energy

    Office of Environmental Management (EM)

    Deployment Technology Deployment Case Studies Technology Deployment Case Studies These case studies describe evaluations of energy-efficient technologies being used in federal...

  3. Patrick Case | Y-12 National Security Complex

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Blake Case Larry Case Patrick Case Dorothy Coker Gordon Fee Linda Fellers Louis Freels Marie Guy Nathan Henry Agnes Houser John Rice Irwin Harvey Kite Charlie Manning Alice...

  4. Larry Case | Y-12 National Security Complex

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Blake Case Larry Case Patrick Case Dorothy Coker Gordon Fee Linda Fellers Louis Freels Marie Guy Nathan Henry Agnes Houser John Rice Irwin Harvey Kite Charlie Manning Alice...

  5. Blake Case | Y-12 National Security Complex

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Blake Case Larry Case Patrick Case Dorothy Coker Gordon Fee Linda Fellers Louis Freels Marie Guy Nathan Henry Agnes Houser John Rice Irwin Harvey Kite Charlie Manning Alice...

  6. Water Efficiency Case Studies | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Water Efficiency Case Studies Water Efficiency Case Studies These case studies offer examples of water efficiency projects implemented by federal agencies. They are organized by ...

  7. Proteomics based compositional analysis of complex cellulase...

    Office of Scientific and Technical Information (OSTI)

    VIRIDE cellulase; hemicellulase; glycosyl hydrolases; spectral counting; cellulosic ethanol; enzymatic hydrolysis; lignocellulose; mass spectrometry; LC-MSMS; Environmental ...

  8. Geographically Based Hydrogen Demand and Infrastructure Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This work has been authored by Midwest Research Institute (MRI) under Contract No. ... Neither MRI, the DOE, the Government, nor any other agency thereof, nor any of their ...

  9. Explosively separable casing

    DOE Patents [OSTI]

    Jacobson, Albin K. (Albuquerque, NM); Rychnovsky, Raymond E. (Livermore, CA); Visbeck, Cornelius N. (Livermore, CA)

    1985-01-01

    An explosively separable casing including a cylindrical afterbody and a circular cover for one end of the afterbody is disclosed. The afterbody has a cylindrical tongue extending longitudinally from one end which is matingly received in a corresponding groove in the cover. The groove is sized to provide a pocket between the end of the tongue and the remainder of the groove so that an explosive can be located therein. A seal is also provided between the tongue and the groove for sealing the pocket from the atmosphere. A frangible holding device is utilized to hold the cover to the afterbody. When the explosive is ignited, the increase in pressure in the pocket causes the cover to be accelerated away from the afterbody. Preferably, the inner wall of the afterbody is in the same plane as the inner wall of the tongue to provide a maximum space for storage in the afterbody and the side wall of the cover is thicker than the side wall of the afterbody so as to provide a sufficiently strong surrounding portion for the pocket in which the explosion takes place. The detonator for the explosive is also located on the cover and is carried away with the cover during separation. The seal is preferably located at the longitudinal end of the tongue and has a chevron cross section.

  10. A Business Case for Home Performance Contracting

    SciTech Connect (OSTI)

    Baechler, Michael C.; Antonopoulos, Chrissi A.; Sevigny, Maureen; Gilbride, Theresa L.; Hefty, Marye G.

    2012-10-01

    This report was prepared by PNNL for the DOE Building America program. The report provides information for businesses considering entering the home performance contracting industry. Metrics discussed include industry trends and drivers, specific points of entry, business models, startup costs, and marketing strategies. The report includes detailed analysis of eight businesses around the country that have successfully entered the home performance contracting industry. Data is provided on their financial structures, program participation, marketing efforts, and staff training. This report will be distributed via the DOE Building America website, www.buildingamerica.gov. Individual case studies will also be cleared separately.

  11. Multi-scale statistical analysis of coronal solar activity

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.

    2016-07-08

    Multi-filter images from the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport is also to be extracted from the analysis.
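
    Proper orthogonal decomposition is essentially a singular value decomposition of the mean-removed snapshot matrix. The sketch below extracts dominant modes and their energy fractions from a stack of maps; the synthetic data stand in for the coronal temperature maps, and all names are illustrative rather than from the paper.

    ```python
    # Minimal POD sketch: leading modes and energies of a snapshot stack.
    import numpy as np

    def pod_modes(snapshots, n_modes=3):
        """snapshots: array (n_times, ny, nx); returns modes and energies."""
        n_t, ny, nx = snapshots.shape
        X = snapshots.reshape(n_t, ny * nx)   # one flattened map per row
        X = X - X.mean(axis=0)                # remove the mean field
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        energy = s**2 / np.sum(s**2)          # variance fraction per mode
        modes = Vt[:n_modes].reshape(n_modes, ny, nx)
        return modes, energy[:n_modes]

    # Synthetic stand-in for a sequence of temperature maps.
    rng = np.random.default_rng(0)
    maps = rng.normal(size=(40, 64, 64))
    modes, energy = pod_modes(maps)
    print("energy captured by leading modes:", energy)
    ```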

  12. Hydrogen Analysis

    Broader source: Energy.gov [DOE]

    Presentation on Hydrogen Analysis to the DOE Systems Analysis Workshop held in Washington, D.C. July 28-29, 2004 to discuss and define role of systems analysis in DOE Hydrogen Program.

  13. FAQ for Case Study Authors

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Reviews » FAQ for Case Study Authors Science Engagement Move your data Programs & Workshops Science Requirements Reviews Network Requirements Reviews Documents and Background Materials FAQ for Case Study Authors BER Requirements Review 2015 ASCR Requirements Review 2015 Previous Reviews Requirements Review Reports Case Studies Contact Us Technical Assistance: 1 800-33-ESnet (Inside US) 1 800-333-7638 (Inside US) 1 510-486-7600 (Globally) 1 510-486-7607 (Globally) Report Network Problems:

  14. Non-ferromagnetic overburden casing

    DOE Patents [OSTI]

    Vinegar, Harold J.; Harris, Christopher Kelvin; Mason, Stanley Leroy

    2010-09-14

    Systems, methods, and heaters for treating a subsurface formation are described herein. At least one system for electrically insulating an overburden portion of a heater wellbore is described. The system may include a heater wellbore located in a subsurface formation and an electrically insulating casing located in the overburden portion of the heater wellbore. The casing may include at least one non-ferromagnetic material such that ferromagnetic effects are inhibited in the casing.

  15. FAQ for Case Study Authors

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Reviews FAQ for Case Study Authors Science Engagement Move your data Programs & Workshops Science Requirements Reviews Network Requirements Reviews Documents and Background...

  16. Evaluation of energy system analysis techniques for identifying underground facilities

    SciTech Connect (OSTI)

    VanKuiken, J.C.; Kavicky, J.A.; Portante, E.C.

    1996-03-01

    This report describes the results of a study to determine the feasibility and potential usefulness of applying energy system analysis techniques to help detect and characterize underground facilities that could be used for clandestine activities. Four off-the-shelf energy system modeling tools were considered: (1) ENPEP (Energy and Power Evaluation Program) - a total energy system supply/demand model, (2) ICARUS (Investigation of Costs and Reliability in Utility Systems) - an electric utility system dispatching (or production cost and reliability) model, (3) SMN (Spot Market Network) - an aggregate electric power transmission network model, and (4) PECO/LF (Philadelphia Electric Company/Load Flow) - a detailed electricity load flow model. For the purposes of most of this work, underground facilities were assumed to consume about 500 kW to 3 MW of electricity. For some of the work, facilities as large as 10-20 MW were considered. The analysis of each model was conducted in three stages: data evaluation, base-case analysis, and comparative case analysis. For ENPEP and ICARUS, open source data from Pakistan were used for the evaluations. For SMN and PECO/LF, the country data were not readily available, so data for the state of Arizona were used to test the general concept.

  17. Protein Structure Recognition: From Eigenvector Analysis to Structural...

    Office of Scientific and Technical Information (OSTI)

    ThesisDissertation: Protein Structure Recognition: From Eigenvector Analysis to ... The sensitivity and specificity of this method is discussed, along with a case of blind ...

  18. Geoscience/engineering characterization of the interwell environment in carbonate reservoirs based on outcrop analogs, Permian Basin, West Texas and New Mexico--waterflood performance analysis for the South Cowden Grayburg Reservoir, Ector County, Texas. Final report

    SciTech Connect (OSTI)

    Jennings, J.W. Jr.

    1997-05-01

    A reservoir engineering study was conducted of waterflood performance in the South Cowden field, an Upper Permian Grayburg reservoir on the Central Basin Platform in West Texas. The study was undertaken to understand the historically poor waterflood performance, evaluate three techniques for incorporating petrophysical measurements and geological interpretation into heterogeneous reservoir models, and identify issues in heterogeneity modeling and fluid-flow scaleup that require further research. The approach included analysis of relative permeability data, analysis of injection and production data, heterogeneity modeling, and waterflood simulation. The poor South Cowden waterflood recovery is due, in part, to completion of wells in only the top half of the formation. Recompletion of wells through the entire formation is estimated to improve recovery in ten years by 6 percent of the original oil in place in some areas of the field. A direct three-dimensional stochastic approach to heterogeneity modeling produced the best fit to waterflood performance and injectivity, but a more conventional model based on smooth mapping of layer-averaged properties was almost as good. The results reaffirm the importance of large-scale heterogeneities in waterflood modeling but demonstrate only a slight advantage for stochastic modeling at this scale. All the flow simulations required a reduction to the measured whole-core k{sub v}/k{sub h} to explain waterflood behavior, suggesting the presence of barriers to vertical flow not explicitly accounted for in any of the heterogeneity models. They also required modifications to the measured steady-state relative permeabilities, suggesting the importance of small-scale heterogeneities and scaleup. Vertical flow barriers, small-scale heterogeneity modeling, and relative permeability scaleup require additional research for waterflood performance prediction in reservoirs like South Cowden.

  19. Renewable Fuels Legislation Impact Analysis

    Reports and Publications (EIA)

    2005-01-01

    This analysis is based on an extension of the ethanol supply curve in our model to allow for enough ethanol production to meet the requirements of S. 650. It provides an update of the May 23, 2005 analysis, with revised ethanol production and cost assumptions.

  20. Documentation and control over economic regulatory administration field cases

    SciTech Connect (OSTI)

    Not Available

    1988-08-01

    This review was performed to evaluate the Economic Regulatory Administration's (ERA) documentation of and control over cases involving alleged petroleum pricing violations. In response to the oil embargo and price increase, the Congress passed the Emergency Petroleum Allocation Act of 1973 (Act). The Government assured compliance by investigating petroleum pricing violations, recovering overcharges, and making restitution to injured parties. Between August 1973 and January 1981, ERA and predecessor Federal agencies established and enforced regulations controlling the allocation and pricing of crude oil and refined petroleum products. The purpose of this review was to determine whether adequate internal controls were in place to assure that overcharge cases were being resolved in accordance with established guidelines. Specific objectives were to determine whether ERA's internal controls assured that (1) the bases for resolving cases were documented, (2) case settlements were approved by more than one person, and (3) cases were tracked until all overcharge issues were resolved.

  1. Thermal initiation caused by fragment impact on cased explosives

    SciTech Connect (OSTI)

    Schnurr, N.M.

    1989-01-01

    Numerical calculations have been used to predict the velocity threshold for thermal initiation of a cased explosive caused by fragment impact. A structural analysis code was used to determine temperature profiles and a thermal analysis code was used to calculate reaction rates. Results generated for the United States Air Force MK 82 bomb indicate that the velocity threshold for thermal initiation is slightly higher than that for the shock-to-detonation process. 8 refs., 5 figs., 2 tabs.

  2. Case Study for the ARRA-funded GSHP Demonstration at University at Albany

    SciTech Connect (OSTI)

    Liu, Xiaobing; Malhotra, Mini; Xiong, Zeyu

    2015-03-01

    High initial costs and lack of public awareness of ground-source heat pump (GSHP) technology are the two major barriers preventing rapid deployment of this energy-saving technology in the United States. Under the American Recovery and Reinvestment Act (ARRA), 26 GSHP projects have been competitively selected and carried out to demonstrate the benefits of GSHP systems and innovative technologies for cost reduction and/or performance improvement. This report highlights the findings of a case study of one of the ARRA-funded GSHP demonstration projects: a distributed GSHP system at a new 500-bed apartment-style student residence hall at the University at Albany. This case study is based on the analysis of detailed design documents, measured performance data, published catalog data of heat pump equipment, and actual construction costs. Simulations with a calibrated computer model are performed for both the demonstrated GSHP system and a baseline heating, ventilation, and air-conditioning (HVAC) system to determine the energy savings and other related benefits achieved by the GSHP system. The evaluated performance metrics include the energy efficiency of the heat pump equipment and the overall GSHP system, as well as the pumping performance, energy savings, carbon emission reductions, and cost-effectiveness of the demonstrated GSHP system compared with the baseline HVAC system. This case study also identifies opportunities for improving the operational efficiency of the demonstrated GSHP system.
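
    The savings and payback metrics evaluated in such case studies reduce to simple arithmetic on measured or simulated consumption. A hedged sketch follows; every number is a hypothetical placeholder, not a measurement from the Albany demonstration.

    ```python
    # Sketch of GSHP-vs-baseline savings and simple payback metrics.
    # All values below are assumed placeholders.
    baseline_kwh = 1_500_000   # annual HVAC electricity, baseline system
    gshp_kwh = 1_050_000       # annual HVAC electricity, GSHP system
    rate = 0.11                # $/kWh electricity price (assumed)
    extra_capital = 900_000    # GSHP cost premium over baseline HVAC

    savings_kwh = baseline_kwh - gshp_kwh
    savings_usd = savings_kwh * rate
    print(f"energy savings: {savings_kwh / baseline_kwh:.0%}")
    print(f"simple payback: {extra_capital / savings_usd:.1f} years")
    ```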

  3. Case study for ARRA-funded ground-source heat pump (GSHP) demonstration at Oakland University

    SciTech Connect (OSTI)

    Im, Piljae; Liu, Xiaobing

    2015-09-01

    High initial costs and lack of public awareness of ground-source heat pump (GSHP) technology are the two major barriers preventing rapid deployment of this energy-saving technology in the United States. Under the American Recovery and Reinvestment Act (ARRA), 26 GSHP projects have been competitively selected and carried out to demonstrate the benefits of GSHP systems and innovative technologies for cost reduction and/or performance improvement. This paper highlights the findings of a case study of one of the ARRA-funded GSHP demonstration projects, a ground-source variable refrigerant flow (GS-VRF) system installed at the Human Health Building at Oakland University in Rochester, Michigan. This case study is based on the analysis of measured performance data, maintenance records, construction costs, and simulations of the energy consumption of conventional central heating, ventilation, and air-conditioning (HVAC) systems providing the same level of space conditioning as the demonstrated GS-VRF system. The evaluated performance metrics include the energy efficiency of the heat pump equipment and the overall GS-VRF system, pumping performance, energy savings, carbon emission reductions, and cost-effectiveness of the GS-VRF system compared with conventional HVAC systems. This case study also identified opportunities for reducing uncertainties in the performance evaluation, improving the operational efficiency, and reducing the installed cost of similar GSHP systems in the future.

  4. NREL: Energy Analysis - Sustainability Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Life Cycle Assessment Harmonization Sustainable Biomass Resource Development and Use Renewable Energy on Contaminated Lands Technology Systems Analysis Geospatial Analysis Key ...

  5. Elizabeth Case | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Case About Us Elizabeth Case - Guest Blogger, Cycle for Science Most Recent Rain or Shine: We Cycle for Science July 2 Mountains, and Teachers, and a Bear, Oh My! June 2 Sol-Cycle: Biking Across America for Science Education May 1

  6. Federal Utility Energy Service Contract Case Studies | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Federal Utility Energy Service Contract Case Studies Federal Utility Energy Service Contract Case Studies These case studies feature examples of federal projects made possible by the use of utility energy service contracts (UESCs). Photo of the Coast Guard's Multi-Site UESC project. U.S. Coast Guard: This 12-site project with 21 energy-conservation measures reduced electricity consumption by 19.1%, water consumption by 64.2%, and natural gas consumption by 21.1%. Photo of Patrick Air Force Base.

  7. Kinetics of Mn-based sorbents for hot coal gas desulfurization. Task 2 -- Exploratory experimental studies: Single pellet tests; rate mechanism analysis. Quarterly report, December 15, 1995--March 15, 1996

    SciTech Connect (OSTI)

    Hepworth, M.T.; Berns, J.

    1996-03-15

    Currently, the Morgantown Energy Technology Center is actively investigating alternative hot fuel gas desulfurization sorbents for application to the Integrated Gasification Combined Cycle (IGCC). A sorbent must be highly active towards sulfur at high temperatures and pressures, and under varying degrees of reducing atmospheres; thus, high conversion of the metal oxide and low hydrogen sulfide exit partial pressures are required. Also, it must regenerate nearly ideally to maintain activity over numerous cycles. Furthermore, regeneration must yield a sulfur product which is economically recoverable directly or indirectly. This cyclic process requires a holistic approach, as any one criterion may eliminate a candidate sorbent from further consideration. Over fifty induration campaigns have been conducted among the fifteen Mn-based sorbent formulations. All indurated sorbents have been tested for crush strength and chemical analysis. Also, fifteen sorbent formulations have been tested in a TGA for at least one induration condition. The three main groups of formulations tested are described below. They are the MnCO{sub 3} supported with TiO{sub 2} (with or without bentonite), MnCO{sub 3} supported with Al{sub 2}O{sub 3} (with or without porosity enhancers), and MnO{sub 2} ore supported with alundum (with and without bentonite).

  8. Research and evaluation of biomass resources/conversion/utilization systems (market/experimental analysis for development of a data base for a fuels from biomass model). Quarterly technical progress report, February 1, 1980-April 30, 1980

    SciTech Connect (OSTI)

    Ahn, Y.K.; Chen, Y.C.; Chen, H.T.; Helm, R.W.; Nelson, E.T.; Shields, K.J.

    1980-01-01

    The project will result in two distinct products: (1) a biomass allocation model that will serve as a tool for the energy planner, and (2) experimental data being generated to help compare and contrast the behavior of a large number of biomass materials in thermochemical environments. Based on information in the literature, values have been developed for regional biomass costs and availabilities and for fuel costs and demands. These data are now stored in data banks and may be updated as better data become available. Seventeen biomass materials have been run on the small TGA and the results partially analyzed. Ash analysis has been performed on 60 biomass materials. The Effluent Gas Analyzer with its associated gas chromatographs has been made operational and some runs have been carried out. Using a computerized program for developing product costs, parametric studies on all but 1 of the 14 process configurations being considered have been performed. Background economic data for all the configurations have been developed. Models to simulate biomass gasification in entrained and fixed beds have been developed using models previously used for coal gasification. Runs have been carried out in the fluidized and fixed bed reactor modes using a variety of biomass materials in atmospheres of steam, O/sub 2/, and air. Checkout of the system continues using fabricated manufacturing cost and efficiency data. A user's manual has been written.

  9. Interactive savings calculations for RCS measures, six case studies

    SciTech Connect (OSTI)

    Stovall, T.K.

    1983-11-01

    Many Residential Conservation Service (RCS) audits are based, in whole or in part, on the RCS Model Audit. This audit calculates the savings for each measure independently, that is, as if no other conservation actions were taken. This method overestimates the total savings due to a group of measures, and an explanatory warning is given to the customer. Presenting interactive results to consumers would increase the perceived credibility of the audit results by eliminating the need for the warning about uncalculated interactive effects. An increased level of credibility would hopefully lead to an increased level of conservation actions based on the audit results. Because many of the existing RCS audits are based on the RCS Model Audit, six case studies were produced to show that the Model Audit algorithms can be used to produce interactive savings estimates. These six Model Audit case studies, as well as two Computerized Instrumented Residential Audit cases, are presented along with a discussion of the calculation methods used.
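
    The overestimate from independent (non-interactive) savings calculations is easy to see in a toy computation: crediting each measure against the full original load always exceeds applying the measures sequentially to the remaining load. A sketch with hypothetical savings fractions:

    ```python
    # Why independently computed savings overestimate a package of measures.
    # The load and fractions are hypothetical illustrations.
    heating_load = 100e6                     # Btu/yr, pre-retrofit (assumed)
    measure_fractions = [0.20, 0.15, 0.10]   # fractional savings per measure

    # Independent estimate: each measure credited against the full load.
    independent = heating_load * sum(measure_fractions)

    # Interactive estimate: each measure applied to the remaining load.
    remaining = heating_load
    for f in measure_fractions:
        remaining *= (1.0 - f)
    interactive = heating_load - remaining

    print(f"independent: {independent:.3e} Btu/yr")   # 4.50e7
    print(f"interactive: {interactive:.3e} Btu/yr")   # ~3.88e7, always lower
    ```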

  10. Multi-criteria analysis on how to select solar radiation hydrogen production system

    SciTech Connect (OSTI)

    Badea, G.; Naghiu, G. S.; Felseghi, R.-A.; Giurca, I.; Răboacă, S.; Aşchilean, I.

    2015-12-23

    The purpose of this article is to present a method for selecting hydrogen production systems that use electric power obtained from photovoltaic systems; as the selection method, we suggest the Advanced Multi-Criteria Analysis based on the FRISCO formula. According to the case study on selecting a solar radiation hydrogen production system, the most convenient alternative is alternative A4, namely the technical solution involving a hydrogen production system based on the electrolysis of water vapor obtained with concentrated solar thermal systems and electrical power obtained using concentrating photovoltaic systems.
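
    The abstract does not reproduce the FRISCO formula itself, so the following is only a generic weighted-scoring sketch of how a multi-criteria ranking selects an alternative; the criteria, weights, scores, and the contrasting alternative A1 are all invented for illustration.

    ```python
    # Generic weighted-sum multi-criteria ranking sketch (not the FRISCO
    # formula). All criteria, weights, and scores are hypothetical.
    criteria_weights = {"efficiency": 0.4, "cost": 0.35, "maturity": 0.25}

    alternatives = {  # scores on a 1-10 scale (hypothetical)
        "A1 PV + alkaline electrolysis": {"efficiency": 6, "cost": 7, "maturity": 9},
        "A4 CPV + steam electrolysis":   {"efficiency": 9, "cost": 6, "maturity": 6},
    }

    ranked = sorted(
        alternatives.items(),
        key=lambda kv: sum(criteria_weights[c] * s for c, s in kv[1].items()),
        reverse=True,
    )
    for name, scores in ranked:
        total = sum(criteria_weights[c] * s for c, s in scores.items())
        print(f"{name}: {total:.2f}")
    ```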

  11. Steel catenary risers for semisubmersible based floating production systems

    SciTech Connect (OSTI)

    Hays, P.R.

    1996-12-31

    The DeepStar production riser committee has investigated the feasibility of using steel catenary risers (SCRs) in water depths of 3,000--6,000 ft. Using Sonat's George Richardson as the base semisubmersible, DeepStar has examined both extreme event response and fatigue life of an SCR made of pipe sections welded end-to-end. Concepts using alternative materials were investigated. This included steel, steel with titanium, and titanium catenary risers. The pros and cons of frequency domain versus time domain analysis were investigated with a commercially available analysis package. A second study outlined a definitive analysis procedure which optimized the analysis time requirements. Analyses showed that steel catenary risers are feasible for semisubmersible-based floating production systems. For the DeepStar Gulf of Mexico design criteria, alternative materials are not required. The greatest fatigue damage occurs in the touchdown region of the riser. Mild sea states contribute most to fatigue damage near riser touchdown. Wave drift and wind forces provide a significant contribution to touchdown area fatigue damage. Estimated fatigue lives are unacceptable. Although the rotations of the upper end of the riser are large relative to an SCR attached to a TLP, the required rotation can probably be accommodated with existing technology. For the case of product export, steel catenary risers provide very cost-effective and readily installable deep water riser alternatives.

  12. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    SciTech Connect (OSTI)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
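
    Fitting a lognormal mode to a measured number size distribution, as described above, is a standard least-squares problem. A sketch follows using synthetic data (not RACORO values), assuming scipy is available.

    ```python
    # Sketch: fit a single lognormal mode to an aerosol number size
    # distribution. The sample data are synthetic placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def dN_dlnD(D, N_tot, D_g, sigma_g):
        """Single lognormal mode: dN/dlnD at diameter D."""
        return (N_tot / (np.sqrt(2 * np.pi) * np.log(sigma_g))
                * np.exp(-0.5 * (np.log(D / D_g) / np.log(sigma_g)) ** 2))

    D = np.logspace(-2, 0, 30)   # diameters, micrometers
    true = dN_dlnD(D, N_tot=800.0, D_g=0.12, sigma_g=1.6)
    obs = true * (1 + 0.05 * np.random.default_rng(1).normal(size=D.size))

    popt, _ = curve_fit(dN_dlnD, D, obs, p0=(500.0, 0.1, 1.5))
    print("N_tot, D_g, sigma_g =", popt)
    ```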

  13. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; et al

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  14. Spatial data analysis and environmental justice

    SciTech Connect (OSTI)

    Bahadur, R.; Samuels, W.B.; Williams, J.W.; Zeitoun, A.H.

    1997-08-01

    Evaluations of environmental justice for government actions concerned with the transportation of hazardous materials over cross country routes presents a significant challenge in spatial data analysis. The sheer volume of data required for accurate identification of minority and low-income populations along the routes and at the endpoints can be formidable. Managing and integrating large volumes of information with state-of-the-art tools is essential in the analysis of environmental justice and equity concerns surrounding transportation of hazardous materials. This paper discusses the role and limitations of geographical information systems in the analysis and visualization of populations potentially affected by the transportation of hazardous materials over transcontinental ground and water routes. Case studies are used to demonstrate the types of data and analyses needed for evaluations of environmental justice for cross country routes and end points. Inherent capabilities and limitations in spatial resolution are evaluated for environmental assessments in which potentially affected areas are quantified based on the physical characteristics of the hazardous cargo.
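
    The core GIS operation described, identifying populations near a transport route, amounts to buffering the route and apportioning polygon populations by overlap area. A minimal sketch with hypothetical geometries and populations follows; a real analysis would use projected census data in a full GIS.

    ```python
    # Sketch of corridor-exposure estimation by areal apportionment.
    # Geometries and populations are hypothetical placeholders.
    from shapely.geometry import LineString, Polygon

    route = LineString([(0, 0), (10, 0)])  # transport corridor centerline
    corridor = route.buffer(1.0)           # 1-unit impact buffer

    census_blocks = [  # (geometry, population), all hypothetical
        (Polygon([(0, -2), (4, -2), (4, 2), (0, 2)]), 1200),
        (Polygon([(4, -2), (8, -2), (8, 2), (4, 2)]), 800),
        (Polygon([(20, 0), (24, 0), (24, 4), (20, 4)]), 500),  # far away
    ]

    exposed = 0.0
    for block, pop in census_blocks:
        overlap = block.intersection(corridor)
        exposed += pop * overlap.area / block.area  # areal apportionment
    print(f"estimated population within corridor: {exposed:.0f}")
    ```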

  15. Incorporating Experience Curves in Appliance Standards Analysis

    SciTech Connect (OSTI)

    Garbesi, Karina; Chan, Peter; Greenblatt, Jeffery; Kantner, Colleen; Lekov, Alex; Meyers, Stephen; Rosenquist, Gregory; Buskirk, Robert Van; Yang, Hung-Chia; Desroches, Louis-Benoit

    2011-10-31

    The technical analyses in support of U.S. energy conservation standards for residential appliances and commercial equipment have typically assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. There is, however, considerable evidence that this assumption does not reflect real market prices. Costs and prices generally fall in relation to cumulative production, a phenomenon known as experience and modeled by a fairly robust empirical experience curve. Using price data from the Bureau of Labor Statistics, and shipment data obtained as part of the standards analysis process, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These allow us to develop more representative appliance price projections than the assumption-based approach of constant prices. These experience curves were incorporated into recent energy conservation standards for these products. The impact on the national modeling can be significant, often increasing the net present value of potential standard levels in the analysis. In some cases a previously cost-negative potential standard level demonstrates a benefit when incorporating experience. These results imply that past energy conservation standards analyses may have undervalued the economic benefits of potential standard levels.
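
    The experience curve referred to here is the standard relation in which price falls by a fixed fraction (the learning rate) with each doubling of cumulative production. A worked sketch with illustrative parameters:

    ```python
    # Standard experience-curve relation: price = p0 * (q/q0)^(-b), where
    # b = -log2(1 - learning_rate). Parameter values are illustrative only.
    import math

    def experience_price(p0, q0, q, learning_rate):
        """Price after cumulative production grows from q0 to q."""
        b = -math.log2(1.0 - learning_rate)   # experience exponent
        return p0 * (q / q0) ** (-b)

    p0, q0 = 1000.0, 1e6   # initial price ($) and cumulative units (assumed)
    for doublings in range(4):
        q = q0 * 2 ** doublings
        print(doublings, round(experience_price(p0, q0, q, 0.15), 2))
    # Each doubling cuts the price by 15% under these assumptions.
    ```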

  16. Industrial process heat case studies. [PROSYS/ECONMAT code]

    SciTech Connect (OSTI)

    Hooker, D.W.; May, E.K.; West, R.E.

    1980-05-01

    Commercially available solar collectors have the potential to provide a large fraction of the energy consumed for industrial process heat (IPH). Detailed case studies of individual industrial plants are required in order to make an accurate assessment of the technical and economic feasibility of applications. This report documents the results of seven such case studies. The objectives of the case study program are to determine the near-term feasibility of solar IPH in selected industries, identify energy conservation measures, identify conditions of IPH systems that affect solar applications, test SERI's IPH analysis software (PROSYS/ECONOMAT), disseminate information to the industrial community, and provide inputs to the SERI research program. The detailed results from the case studies are presented. Although few near-term, economical solar applications were found, the conditions that would enhance the opportunities for solar IPH applications are identified.

  17. Appendix A: Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    Appendix C: Low Economic Growth case projections (U.S. Energy Information Administration, International Energy Outlook 2016). Table C1. World total primary energy consumption by region, Low Economic Growth case, 2011-40 (quadrillion Btu); columns are history (2011, 2012), projections (2020, 2025, 2030, 2035, 2040), and average annual percent change 2012-40. OECD Americas: 120.6, 118.1, 123.3, 123.9, 124.7, 126.3, 128.8, 0.3. United States: 96.8, 94.4, 98.7

  18. Appendix A: Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    Appendix D: High Oil Price case projections (U.S. Energy Information Administration, International Energy Outlook 2016). Table D1. World total primary energy consumption by region, High Oil Price case, 2011-40 (quadrillion Btu); columns are history (2011, 2012), projections (2020, 2025, 2030, 2035, 2040), and average annual percent change 2012-40. OECD Americas: 120.6, 118.1, 125.3, 127.9, 130.8, 135.5, 142.1, 0.7. United States: 96.8, 94.4, 100.8, 102.2, 103.3

  19. Appendix A: Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    Appendix I: Reference case projections for natural gas production (U.S. Energy Information Administration, International Energy Outlook 2016). Table I1. World total natural gas production by region, Reference case, 2012-40 (trillion cubic feet); columns are 2012, 2020, 2025, 2030, 2035, 2040, and average annual percent change 2012-40. OECD Americas: 31.8, 35.7, 38.6, 42.1, 44.6, 47.3, 1.4. United States: 24.0, 28.7

  1. New pulsating casing collar to improve cementing quality

    SciTech Connect (OSTI)

    Chen, P.; He, K.; Wu, J.

    1998-12-31

    This paper presents the design and test results of a new pulsating casing collar which improves cementing quality. The new pulsating casing collar (PCC) is designed according to the Helmholtz oscillator to generate a pulsating jet flow by self-excitation in the cementing process. By placing this new pulsating casing collar at the bottom of the casing string, the generated pulsating jet flow transmits vibrating pressure waves up through the annulus and helps remove drilling mud in the annulus. It can therefore improve cementing quality, especially when an eccentric annulus exists due to casing eccentricity and the mud is difficult to remove. The new pulsating casing collar consists of a top nozzle, a resonant chamber, and a bottom nozzle. It can be manufactured easily and is easy to use in the field. It has been tested in the Jianghan oil field, P.R. China. The field-test results support the theoretical analysis and laboratory tests, and the cementing quality is shown to be greatly improved by using the new pulsating casing collar.
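
    For orientation only: the classic Helmholtz resonance frequency is f = (c / 2π) sqrt(A / (V L_eff)). The abstract does not give the collar's design values, so every number below is an assumption, and downhole fluid acoustics are far more complex in practice.

    ```python
    # Rough Helmholtz-resonator frequency estimate; all inputs are assumed
    # placeholders, not values from the paper.
    import math

    c = 1500.0    # m/s, speed of sound in the working fluid (assumed)
    A = 3.0e-4    # m^2, nozzle cross-sectional area (assumed)
    V = 2.0e-3    # m^3, resonant chamber volume (assumed)
    L_eff = 0.05  # m, effective nozzle length (assumed)

    f = (c / (2 * math.pi)) * math.sqrt(A / (V * L_eff))
    print(f"approximate resonance frequency: {f:.0f} Hz")
    ```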

  2. Alternative Fuels Data Center: Case Studies

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]

  3. An inquiry into the potential of scenario analysis for dealing with uncertainty in strategic environmental assessment in China

    SciTech Connect (OSTI)

    Zhu, Zhixi; Bai, Hongtao; Xu, He; Zhu, Tan

    2011-11-15

    Strategic environmental assessment (SEA) inherently needs to address greater levels of uncertainty in the formulation and implementation processes of strategic decisions, compared with project environmental impact assessment. The range of uncertainties includes internal and external factors of the complex system that is concerned in the strategy. Scenario analysis is increasingly being used to cope with uncertainty in SEA. Following a brief introduction of scenarios and scenario analysis, this paper examines the rationale for scenario analysis in SEA in the context of China. The state of the art associated with scenario analysis applied to SEA in China was reviewed through four SEA case analyses. Lessons learned from these cases indicated the word 'scenario' appears to be abused and the scenario-based methods appear to be misused due to the lack of understanding of an uncertain future and scenario analysis. However, good experiences were also drawn on, regarding how to integrate scenario analysis into the SEA process in China, how to cope with driving forces including uncertainties, how to combine qualitative scenario storylines with quantitative impact predictions, and how to conduct assessments and propose recommendations based on scenarios. Additionally, the ways to improve the application of this tool in SEA were suggested. We concluded by calling for further methodological research on this issue and more practices.

  4. OHA Misc Cases Archive File

    Office of Energy Efficiency and Renewable Energy (EERE)

    This is an archive file of our Misc decisions. Please download this file to your local computer and use the built-in Adobe search feature. Individual cases are listed in the bookmark section of the...

  5. OHA Whistleblower Cases Archive File

    Office of Energy Efficiency and Renewable Energy (EERE)

    This is an archive file of our Whistleblower decisions. Please download this file to your local computer and use the built-in Adobe search feature. Individual cases are listed in the bookmark...

  6. OHA Security Cases Archive File

    Office of Energy Efficiency and Renewable Energy (EERE)

    This is an archive file of our Security decisions. Please download this file to your local computer and use the built-in Adobe search feature. Individual cases are listed in the bookmark section of...

  7. OHA EIA CASES ARCHIVE FILE

    Office of Energy Efficiency and Renewable Energy (EERE)

    This is an archive file of our EIA decisions. Please download this file to your local computer and use the built-in Adobe search feature. Individual cases are listed in the bookmark section of the...

  8. OHA FOIA Cases Archive File

    Office of Energy Efficiency and Renewable Energy (EERE)

    This is an archive file of our FOIA decisions. Please download this file to your local computer and use the built-in Adobe search feature. Individual cases are listed in the bookmark section of the...

  9. Appendix A. Reference case projections

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... by region and end-use sector, High Oil Price case, 2010-40 (quadrillion Btu); columns are history (2010), projections (2020, 2025, 2030, 2035, 2040), and average annual percent change 2010-40. OECD...

  10. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    ... by region and country, Low Oil Price case, 2009-40 (million barrels per day); columns are history (2009, 2010, 2011), projections (2020, 2025, 2030...), and average annual percent change 2010-40.

  11. EIA Cases | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    EIA Cases EIA Cases RSS February 14, 2011 TEE-0073 - In the Matter of Cole Distributing, Inc. On December 13, 2010, Cole Distributing, Inc. (Cole) filed an Application for Exception with the Office of Hearings and Appeals (OHA) of the Department of Energy (DOE). The firm requests that it be permanently relieved of the requirement to prepare and file the Energy Information Administration (EIA) Form EIA-782B, entitled "Resellers'/Retailers' Monthly Petroleum Product Sales Report." As

  12. BerkeleyGW Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BerkeleyGW Case Study. Code Description and Science Problem: BerkeleyGW is a Materials Science application for calculating the excited state properties of materials such as band gaps, band structures, absorption spectroscopy, photoemission spectroscopy and more. It requires as input the Kohn-Sham orbitals and energies from a DFT code like Quantum ESPRESSO, PARATEC, PARSEC, etc. Like such DFT codes, it is heavily dependent on FFTs, dense linear algebra and tensor contraction

  13. Computer aided cogeneration feasibility analysis

    SciTech Connect (OSTI)

    Anaya, D.A.; Caltenco, E.J.L.; Robles, L.F.

    1996-12-31

    A successful cogeneration system design depends on several factors, and the optimal configuration can be found using steam and power simulation software. The key characteristics of one such software package are described below, and its application to a cogeneration feasibility analysis for a process plant is shown in this paper. Finally, a case study is illustrated. 4 refs., 2 figs.

  14. Transmittal of the Calculation Package that Supports the Analysis of Performance of the Environmental Management Waste Management Facility Oak Ridge, Tennessee (Based 5-Cell Design Issued 8/14/09)

    SciTech Connect (OSTI)

    Williams M.J.

    2009-09-14

    This document presents the results of an assessment of the performance of a build-out of the Environmental Management Waste Management Facility (EMWMF). The EMWMF configuration that was assessed includes the as-constructed Cells 1 through 4, with a groundwater underdrain that was installed beneath Cell 3 during the winter of 2003-2004, and Cell 5, whose proposed design is given in an Addendum to Remedial Design Report for the Disposal of Oak Ridge Reservation Comprehensive Environmental Response, Compensation, and Liability Act of 1980 Waste, Oak Ridge, Tennessee, DOE/OR/01-1873&D2/A5/R1. The total capacity of the EMWMF with 5 cells is about 1.7 million cubic yards. This assessment was conducted to determine the conditions under which the approved Waste Acceptance Criteria (WAC) for the EMWMF found in the Attainment Plan for Risk/Toxicity-Based Waste Acceptance Criteria at the Oak Ridge Reservation, Oak Ridge, Tennessee [U.S. Department of Energy (DOE) 2001a], as revised for constituents added up to October 2008, would remain protective of public health and safety for a five-cell disposal facility. For consistency, the methods of analysis and the exposure scenario used to predict the performance of a five-cell disposal facility were identical to those used in the Remedial Investigation and Feasibility Study (RI/FS) and its addendum (DOE 1998a, DOE 1998b) to develop the approved WAC. To take advantage of new information and design changes departing from the conceptual design, the modeling domain and model calibration were updated from those used in the RI/FS and its addendum. It should be noted that this analysis is not intended to justify or propose a change in the approved WAC.

  15. Methods for spectral image analysis by exploiting spatial simplicity

    DOE Patents [OSTI]

    Keenan, Michael R.

    2010-05-25

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.
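
    The factor-analysis step this patent builds on can be illustrated with a plain SVD (the core of Principal Component Analysis) applied to a spectral image cube; the patent's spatial-simplicity constraints are not reproduced in this sketch, and the data are synthetic.

    ```python
    # Minimal factor-analysis sketch for a spectral image cube: reduce to a
    # few abundance maps and component spectra via SVD. Synthetic data only.
    import numpy as np

    def spectral_factors(cube, n_components=3):
        """cube: (ny, nx, n_channels); returns abundance maps and spectra."""
        ny, nx, nch = cube.shape
        D = cube.reshape(ny * nx, nch)   # one spectrum per pixel
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        abundances = (U[:, :n_components] * s[:n_components]).reshape(
            ny, nx, n_components)
        spectra = Vt[:n_components]      # component spectra
        return abundances, spectra

    cube = np.random.default_rng(2).random((32, 32, 100))  # stand-in data
    maps, spectra = spectral_factors(cube)
    print(maps.shape, spectra.shape)
    ```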

  16. Methods for spectral image analysis by exploiting spatial simplicity

    DOE Patents [OSTI]

    Keenan, Michael R.

    2010-11-23

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.

  17. EPICS BASE

    Energy Science and Technology Software Center (OSTI)

    002230MLTPL00; Experimental Physics and Industrial Control System BASE; http://www.aps.anl.gov/epics

  18. Comprehensive Energy Program at Patrick Air Force Base Set to...

    Office of Environmental Management (EM)

    Download the Patrick Air Force Base case study. (709.94 KB) More Documents & Publications FPL Energy Services ESCO Qualification Sheet UESC Project Overview: NASA Ames Research ...

  19. Infrastructure Analysis Tools: A Focus on Cash Flow Analysis (Presentation)

    SciTech Connect (OSTI)

    Melaina, M.; Penev, M.

    2012-09-01

    NREL has developed and maintains a variety of infrastructure analysis models for the U.S. Department of Energy. Business case analysis has recently been added to this tool set. This presentation focuses on cash flow analysis. Cash flows depend upon infrastructure costs, optimized spatially and temporally, and assumptions about financing and revenue. NREL has incorporated detailed metrics on financing and incentives into the models. Next steps in modeling include continuing to collect feedback on regional/local infrastructure development activities and 'roadmap' dynamics, and incorporating consumer preference assumptions on infrastructure to provide direct feedback between vehicles and station rollout.
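
    Cash flow analysis of station infrastructure ultimately reduces to discounting a stream of costs and revenues. A minimal net-present-value sketch follows, with hypothetical inputs rather than NREL model values.

    ```python
    # Basic discounted-cash-flow (NPV) sketch; all inputs are hypothetical.
    def npv(rate, cash_flows):
        """Net present value of cash flows indexed by year (year 0 first)."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    # Capital outlay in year 0, then 15 years of annual net revenue (assumed).
    station = [-1_200_000] + [180_000] * 15
    print(f"NPV at 8%:  ${npv(0.08, station):,.0f}")
    print(f"NPV at 12%: ${npv(0.12, station):,.0f}")
    ```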

  20. Analysis of surface integrity of grinded gears using Barkhausen noise analysis and x-ray diffraction

    SciTech Connect (OSTI)

    Vrkoslavová, Lucie; Louda, Petr; Malec, Jiří

    2014-02-18

    This contribution presents the results of a study of ground gears made of 18CrNiMo7-6 steel, used in wind power plants for service purposes. The gears were case-hardened to form the standard hard case and soft core; this heat treatment increases the wear resistance and fatigue strength of machine parts. During serial production, some problems with surface integrity occurred, and many samples were prepared while solving them. The gears were ground with different cutting speeds and amounts of material removal, using lots from different subsuppliers. Material characterization was carried out using a Barkhausen noise analysis (BNA) device, and X-ray diffraction (XRD) measurement of surface residual stresses was performed as well. Depth profiles of the measured characteristics, e.g. the magnetoelastic parameter and residual stress, were obtained by step-by-step layer removal using electrolytic etching. The BNA software Viewscan was used to measure the magnetizing frequency sweep (MFS) and magnetizing voltage sweep (MVS). Scans of the magnetoelastic parameter (MP) along individual teeth were also carried out with Viewscan. These measurements were done to find problematic surface areas after grinding, such as thermally damaged locations. Profiles of the hardness and thickness of the case-hardened layer were measured on cross sections. The structure of the subsurface case-hardened layer and the core was evaluated on etched metallographic sections. The aim of the measurements was to find correlations between grinding conditions, residual stresses, and structural and magnetoelastic parameters; based on the correlation of measured values and technological parameters, the production of gears will be optimized.