National Library of Energy BETA

Sample records for analysis base cases

  1. 1980 Base case and feasibility analysis

    SciTech Connect (OSTI)

    1993-03-01

This report describes a task of documenting a "base case" and performing a feasibility analysis for a national residential energy efficiency program for new homes. The principal objective of the task was to estimate the energy consumption of typical homes built in 1980 and then to identify and assess the feasibility of methods to reduce that consumption by 50%. The program goal is to reduce heating and cooling energy use in new homes built under the program by the year 2000 to one-half of the energy use in typical new homes built in 1980. The task also calls for determining whether the program goal should be revised, based on the analysis.

  3. Load flow analysis: Base cases, data, diagrams, and results (Technical Report)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  4. Definition of the base analysis case of the interim performance assessment

    SciTech Connect (OSTI)

    Mann, F.M.

    1995-12-01

The base analysis case for the "Hanford Low-Level Tank Waste Interim Performance Assessment" is defined. Brief descriptions of the sensitivity cases are also given.

  5. Fuel Cycle Analysis Framework Base Cases for the IAEA/INPRO GAINS Collaborative Project

    SciTech Connect (OSTI)

    Brent Dixon

    2012-09-01

Thirteen countries participated in the Collaborative Project GAINS “Global Architecture of Innovative Nuclear Energy Systems Based on Thermal and Fast Reactors Including a Closed Fuel Cycle”, which was the primary activity within IAEA/INPRO Program Area B, “Global Vision on Sustainable Nuclear Energy”, for the last three years. The overall objective of GAINS was to develop a standard framework for assessing future nuclear energy systems, taking into account sustainable development, and to validate results through sample analyses. This paper details the eight scenarios that constitute the GAINS framework base cases for analysis of the transition to future innovative nuclear energy systems. The framework base cases provide a reference for users of the framework to start from in developing and assessing their own alternate systems. Each base case is described along with performance results against the GAINS sustainability evaluation metrics. The eight cases include four using a moderate growth projection and four using a high growth projection for global nuclear electricity generation through 2100. The cases are divided into two sets, addressing homogeneous and heterogeneous scenarios developed by GAINS to model global fuel cycle strategies. The heterogeneous world scenario considers three separate nuclear groups based on their fuel cycle strategies, with non-synergistic and synergistic cases. The framework base case analysis results show the impact of these different fuel cycle strategies while providing references for future users of the GAINS framework. A large number of scenario alterations are possible and can be used to assess different strategies, different technologies, and different assumptions about possible futures of nuclear power. Results can be compared to the framework base cases to assess where these alternate cases perform differently against the sustainability indicators.

  6. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha

    2015-01-01

Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.

  8. Final base case community analysis: Indian Springs, Nevada for the Clark County socioeconomic impact assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada

    SciTech Connect (OSTI)

    1992-06-18

This document provides a base case description of the rural Clark County community of Indian Springs in anticipation of change associated with the proposed high-level nuclear waste repository at Yucca Mountain. As the community closest to the proposed site, Indian Springs may be seen by site characterization workers, as well as workers associated with later repository phases, as a logical place to live. This report develops and updates information relating to a broad spectrum of socioeconomic variables, thereby providing a "snapshot" or "base case" look at Indian Springs in early 1992. With this as a background, future repository-related developments may be analytically separated from changes brought about by other factors, thus allowing for the assessment of the magnitude of local changes associated with the proposed repository. Given the size of the community, changes that may be considered small in an absolute sense may have relatively large impacts at the local level. Indian Springs is, in many respects, a unique community and a community of contrasts. An unincorporated town, it is a small yet important enclave of workers on large federal projects and home to employees of small-scale businesses and services. It is a rural community, but it is also close to the urbanized Las Vegas Valley. It is a desert community, but has good water resources. It is on flat terrain, but it is located within 20 miles of the tallest mountains in Nevada. It is a town in which various interest groups diverge on issues of local importance, but a sense of community remains an important feature of life. Finally, it has a sociodemographic history of both surface transience and underlying stability. If local land becomes available, Indian Springs has some room for growth but must first consider the historical effects of growth on the town and its desired direction for the future.

  9. Analysis of Restricted Natural Gas Supply Cases

    Reports and Publications (EIA)

    2004-01-01

    The four cases examined in this study have progressively greater impacts on overall natural gas consumption, prices, and supply. Compared to the Annual Energy Outlook 2004 reference case, the no Alaska pipeline case has the least impact; the low liquefied natural gas case has more impact; the low unconventional gas recovery case has even more impact; and the combined case has the most impact.

  10. Proteomics based compositional analysis of complex cellulase...

    Office of Scientific and Technical Information (OSTI)

Proteomics based compositional analysis of complex cellulase-hemicellulase mixtures

  11. 20th International Conference on Case Based Reasoning | GE Global...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Cheetham on "Case-Based Reasoning for Turbine Trip Diagnostics". ICCBR was hosted in Lyon, France September 3-6, 2012. Case-Based Reasoning (CBR) is a lazy learning algorithm...

  12. Analysis of design tradeoffs for display case evaporators

    SciTech Connect (OSTI)

Bullard, Clark

    2004-08-11

A model for simulating a display case evaporator under frosting conditions has been developed, using a quasi-steady and finite-volume approach and a Newton-Raphson based solution algorithm. It is capable of simulating evaporators with multiple modules having different geometries, e.g., tube and fin thicknesses and pitch. The model was validated against data taken at two-minute intervals from a well-instrumented medium-temperature vertical display case, for two evaporators having very different configurations. The data from these experiments provided both the input data for the model and the data against which model results were compared. The validated model has been used to generate some general guidelines for coil design. Effects of various geometrical parameters were quantified, and compressor performance data were used to express the results in terms of total power consumption. Using these general guidelines, a new prototype evaporator was designed for the subject display case, keeping in mind the current packaging restrictions and tube and fin availabilities. It is an optimum coil for the given external load conditions. Subsequently, the validated model was used in a more extensive analysis to design prototype coils with some of the current tube and fin spacing restrictions removed. A new microchannel based suction line heat exchanger was installed in the display case system. The performance of this suction line heat exchanger is reported.
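The Newton-Raphson solution algorithm named in this record can be sketched generically. The following is an illustrative multivariate Newton solver with a finite-difference Jacobian, not the authors' evaporator model; the two-equation residual at the bottom is a made-up example.

```python
# Illustrative Newton-Raphson solver for a 2-variable nonlinear system,
# with a forward-difference Jacobian. Generic sketch only -- the actual
# evaporator model in the record is far more elaborate.

def jacobian(f, x, h=1e-7):
    """Approximate the Jacobian of f at x by forward differences."""
    fx = f(x)
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x)
        xp[j] += h
        fxp = f(xp)
        for i in range(n):
            J[i][j] = (fxp[i] - fx[i]) / h
    return J

def solve_2x2(J, b):
    """Solve the 2x2 linear system J * d = b by Cramer's rule."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(b[0] * J[1][1] - b[1] * J[0][1]) / det,
            (J[0][0] * b[1] - J[1][0] * b[0]) / det]

def newton(f, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x <- x - J(x)^-1 f(x) until the residual is small."""
    x = list(x0)
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            break
        d = solve_2x2(jacobian(f, x), fx)
        x = [xi - di for xi, di in zip(x, d)]
    return x

# Toy residual: intersect the circle x^2 + y^2 = 4 with the line y = x.
def residual(x):
    return [x[0] ** 2 + x[1] ** 2 - 4.0, x[1] - x[0]]

root = newton(residual, [1.0, 2.0])  # converges near (sqrt(2), sqrt(2))
```

In a real evaporator model the residual vector would contain the energy and mass balances of each finite volume, and the linear solve would use a general factorization rather than Cramer's rule.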

  13. Economic Analysis Case Studies of Battery Energy Storage with...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Economic Analysis Case Studies of Battery Energy Storage with SAM Nicholas DiOrio, Aron Dobos, and Steven Janzou National Renewable Energy Laboratory Technical Report NREL...

  14. Chapter 11. Community analysis-based methods

    SciTech Connect (OSTI)

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  15. Geographically-Based Infrastructure Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Previous and Ongoing * HYDS ME - Evaluates best infrastructure options * Interstate Infrastructure Analysis - Minimal infrastructure to facilitate interstate travel during ...

  16. Byfl: Compiler-based Application Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Byfl is a productivity tool that helps computational scientists analyze their code for...

  17. Integrated fire analysis: Application to offshore cases

    SciTech Connect (OSTI)

    Saubestre, V.; Khalfi, J.P.; Paygnard, J.C.

    1995-12-31

Evaluating thermal loads from different fire scenarios, and then the response of the structure to these loads, spans several fields. It is also difficult and time consuming to implement: interfaces are necessary between the heat calculation, transient propagation, and structural analysis software packages. Nevertheless, it is necessary to design structures to accommodate heat loads in order to meet safety requirements or functional specifications. Elf, along with several operators and organizations, has sponsored a research project on this topic. The project, managed by SINTEF NBL (Norwegian Fire Research Laboratory), has delivered an integrated fire analysis software package which can be used to address design-to-fire-related issues in various contexts. The core modules of the integrated package are robust, well-validated analysis tools. This paper describes some benefits (technical or cost related) of using an integrated approach to assess the response of a structure to thermal loads. Three examples are described: the consequences of an accidental scenario for the living quarters in an offshore complex, the necessity of reinforcing a flareboom following a change in process, and the evaluation of the amount of insulation needed for a topside process primary structure. The paper focuses on the importance for the operator of having a practical tool which can lead to substantial cost savings while reducing the uncertainty linked to safety issues.

  18. Geographically Based Hydrogen Demand and Infrastructure Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Presentation by NREL's Margo Melendez at the 2010 - 2025 Scenario Analysis for Hydrogen Fuel Cell Vehicles and Infrastructure Meeting on August 9 - 10, 2006 in Washington, D.C.

  19. Network-based Analysis and Insights | NISAC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

NISAC has developed a range of capabilities for analyzing the consequences of disruptions to the chemical manufacturing industry. Each capability provides a different but complementary perspective on the questions of interest, such as: given an event, will the entire chemical sector be impacted or just parts? Which

  20. Well casing-based geophysical sensor apparatus, system and method

    DOE Patents [OSTI]

    Daily, William D.

    2010-03-09

A geophysical sensor apparatus, system, and method for use in, for example, oil well operations, in particular using a network of sensors emplaced along and outside oil well casings to monitor critical parameters in an oil reservoir and provide geophysical data remote from the wells. Centralizers are affixed to the well casings, and the sensors are located in the protective spheres afforded by the centralizers to keep them from being damaged during casing emplacement. In this manner, geophysical data may be collected for a sub-surface volume, e.g. an oil reservoir, and transmitted for analysis. Preferably, data from multiple sensor types, such as ERT and seismic data, are combined to provide real-time knowledge of the reservoir and of processes such as primary and secondary oil recovery.

  1. Chiller condition monitoring using topological case-based modeling

    SciTech Connect (OSTI)

    Tsutsui, Hiroaki; Kamimura, Kazuyuki

    1996-11-01

    To increase energy efficiency and economy, commercial building projects now often utilize centralized, shared sources of heat such as district heating and cooling (DHC) systems. To maintain efficiency, precise monitoring and scheduling of maintenance for chillers and heat pumps is essential. Low-performance operation results in energy loss, while unnecessary maintenance is expensive and wasteful. Plant supervisors are responsible for scheduling and supervising maintenance. Modeling systems that assist in analyzing system deterioration are of great benefit for these tasks. Topological case-based modeling (TCBM) (Tsutsui et al. 1993; Tsutsui 1995) is an effective tool for chiller performance deterioration monitoring. This paper describes TCBM and its application to this task using recorded historical performance data.

  2. Topology-based Feature Definition and Analysis

    SciTech Connect (OSTI)

    Weber, Gunther H.; Bremer, Peer-Timo; Gyulassy, Attila; Pascucci, Valerio

    2010-12-10

    Defining high-level features, detecting them, tracking them and deriving quantities based on them is an integral aspect of modern data analysis and visualization. In combustion simulations, for example, burning regions, which are characterized by high fuel-consumption, are a possible feature of interest. Detecting these regions makes it possible to derive statistics about their size and track them over time. However, features of interest in scientific simulations are extremely varied, making it challenging to develop cross-domain feature definitions. Topology-based techniques offer an extremely flexible means for general feature definitions and have proven useful in a variety of scientific domains. This paper will provide a brief introduction into topological structures like the contour tree and Morse-Smale complex and show how to apply them to define features in different science domains such as combustion. The overall goal is to provide an overview of these powerful techniques and start a discussion how these techniques can aid in the analysis of astrophysical simulations.
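As a minimal illustration of the feature-definition idea in this record, a "burning region" (a connected region of high fuel consumption) can be reduced to a connected component of a superlevel set, found here with union-find. The grid, threshold, and field values are synthetic; real contour-tree and Morse-Smale computations are considerably more involved.

```python
# Sketch: connected components of the superlevel set {x : f(x) >= t}
# on a 2D grid, via union-find. Each component is one "feature".

def superlevel_components(field, threshold):
    rows, cols = len(field), len(field[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Register every cell above the threshold.
    for i in range(rows):
        for j in range(cols):
            if field[i][j] >= threshold:
                parent[(i, j)] = (i, j)
    # Merge 4-connected neighbors that are both above the threshold.
    for (i, j) in list(parent):
        for nb in ((i + 1, j), (i, j + 1)):
            if nb in parent:
                union((i, j), nb)
    # Group cells by root: one entry per connected feature.
    comps = {}
    for cell in parent:
        comps.setdefault(find(cell), []).append(cell)
    return list(comps.values())

# Synthetic "fuel consumption" field with two separate hot regions.
field = [
    [0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0],
    [0, 7, 0, 0, 6],
    [0, 0, 0, 5, 7],
]
features = superlevel_components(field, 5)  # two features of 3 cells each
```

Tracking such components as the threshold sweeps from high to low values is essentially how a merge tree is built, which is one of the topological structures the paper introduces.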

  3. Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation at the April 2013 peer review meeting held in Denver.

  4. A review of recent NEPA alternatives analysis case law

    SciTech Connect (OSTI)

Smith, Michael D. E-mail: michael.smith@humboldt.edu

    2007-03-15

    According to the Council on Environmental Quality (CEQ) Regulations for implementing the National Environmental Policy Act (NEPA), the analysis and comparison of alternatives is considered the 'heart' of the NEPA process. Although over 20 years have passed since the original mandate appeared to construct and assess a 'reasonable range' of alternatives contained in the CEQ Regulations, there is a perception that there is still a significant amount of confusion about what exactly constitutes a legally-compliant alternatives analysis. One manifestation of this confusion is the increasing amount of litigation over the alternatives analysis in NEPA documents. This study examined decisions on challenges to alternative analyses contained in federal agency NEPA documents in federal Courts of Appeals for the ten-year period 1996-2005. The results show that federal agencies are overwhelmingly successful against such challenges - winning 30 of the 37 cases. The most common challenge was that federal agencies had not included a full reasonable range of alternatives, while the second most frequent was that agencies had improperly constructed their purpose and need for their projects. Brief descriptions of several of the key court decisions are provided that illustrate the main factors that led to agencies being successful, as well as being unsuccessful, in their court challenges. The results provide little support for recent calls to amend the NEPA Statute and the CEQ Regulations to better clarify the requirements for alternatives analysis. The conclusion to the study focuses on practical steps NEPA practitioners can take to prepare their alternatives analyses in a manner that fulfills the requirements of the NEPA Statute and Council on Environmental Quality (CEQ) Regulations and makes them less vulnerable to an unfavorable court decision if legally challenged.

  5. Economic Analysis Case Studies of Battery Energy Storage with SAM

    SciTech Connect (OSTI)

    DiOrio, Nicholas; Dobos, Aron; Janzou, Steven

    2015-11-01

Interest in energy storage has continued to increase as states like California have introduced mandates and subsidies to spur adoption. This energy storage includes customer-sited behind-the-meter storage coupled with photovoltaics (PV). This paper presents case study results from California and Tennessee, which were performed to assess the economic benefit of customer-installed systems. Different dispatch strategies, including manual scheduling and automated peak-shaving, were explored to determine ideal ways to use the storage system to increase the system value and mitigate demand charges. Incentives, complex electric tariffs, and site-specific load and PV data were used to perform detailed analysis. The analysis was performed using the free, publicly available System Advisor Model (SAM) tool. We find that installation of photovoltaics with a lithium-ion battery system priced at $300/kWh in Los Angeles under a high demand charge utility rate structure and dispatched using perfect day-ahead forecasting yields a positive net present value, while all other scenarios cost the customer more than the savings accrued.
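The automated peak-shaving dispatch mentioned in this record can be sketched as a simple greedy rule: discharge the battery when load exceeds a demand target, recharge when load is below it. This is a hypothetical illustration, not SAM's actual dispatch logic; the load profile, target, and battery sizes are invented.

```python
# Toy greedy peak-shaving dispatch for demand-charge mitigation.
# All numbers are hypothetical; SAM's dispatch options are more elaborate.

def peak_shave(load_kw, target_kw, capacity_kwh, power_kw, soc_kwh=0.0):
    """Return grid demand per hour after battery dispatch (1-hour steps)."""
    grid = []
    for load in load_kw:
        if load > target_kw:
            # Discharge to pull demand down toward the target.
            discharge = min(load - target_kw, power_kw, soc_kwh)
            soc_kwh -= discharge
            grid.append(load - discharge)
        else:
            # Use headroom below the target to recharge.
            charge = min(target_kw - load, power_kw, capacity_kwh - soc_kwh)
            soc_kwh += charge
            grid.append(load + charge)
    return grid

# Hypothetical daily load (kW, one value per hour) with an evening peak.
load = [30, 28, 27, 27, 30, 35, 45, 55, 60, 58, 55, 50,
        48, 47, 50, 55, 65, 80, 85, 75, 60, 50, 40, 35]
shaved = peak_shave(load, target_kw=60, capacity_kwh=50, power_kw=25)
# The 85 kW peak is clipped, though the battery runs empty before hour 19,
# so the shaved profile still exceeds the target there.
```

Note how the battery depletes mid-peak and demand pops back above the target; this is exactly why the record finds that dispatch strategy and forecasting quality drive the economics.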

  6. Bismuth-based electrochemical stripping analysis

    DOE Patents [OSTI]

    Wang, Joseph

    2004-01-27

    Method and apparatus for trace metal detection and analysis using bismuth-coated electrodes and electrochemical stripping analysis. Both anodic stripping voltammetry and adsorptive stripping analysis may be employed.

  7. Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based Stochastic Analysis of Injection-induced Seismicity (2010 Geothermal Technology Program Peer Review Report)

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

  8. Geographically-Based Infrastructure Analysis for California

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation by Joan Ogden of the University of California at the 2010 - 2025 Scenario Analysis for Hydrogen Fuel Cell Vehicles and Infrastructure Meeting on August 9 - 10, 2006 in Washington, D.C.

  9. Prior-knowledge-based spectral mixture analysis for impervious surface mapping

    Office of Scientific and Technical Information (OSTI)

    In this study, we developed a prior-knowledge-based spectral mixture analysis (PKSMA) to map impervious surfaces by using endmembers derived separately for high- and low-density urban regions.

  10. Agent-Based Modeling and Simulation for Hydrogen Transition Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Agent-Based Modeling and Simulation (ABMS) for Hydrogen Transition Analysis. Marianne Mintz, Hydrogen Transition Analysis Workshop, US Department of Energy, January 26, 2006. Objectives and scope for Phase 1: analyze hydrogen infrastructure development as a complex adaptive system using an agent-based modeling and simulation (ABMS)

  11. Geographically Based Hydrogen Demand and Infrastructure Analysis...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation by NREL's Margo Melendez at the 2010 - 2025 Scenario Analysis for Hydrogen Fuel Cell Vehicles and Infrastructure Meeting on August 9 - 10, 2006 in Washington, D.C....

  12. Cluster Analysis-Based Approaches for Geospatiotemporal Data...

    Office of Scientific and Technical Information (OSTI)

Cluster Analysis-Based Approaches for Geospatiotemporal Data Mining of Massive Data Sets for Identification of Forest Threats. Mills, Richard T (ORNL); Hoffman, Forrest M...

  13. NETL - Petroleum-Based Fuels Life Cycle Greenhouse Gas Analysis...

    Open Energy Info (EERE)

Tool Summary. Name: NETL - Petroleum-Based Fuels Life Cycle Greenhouse Gas Analysis 2005 Baseline Model. Agency/Company/Organization: National Energy Technology...

  14. MEMS-based chemical analysis systems development at Sandia National...

    Office of Scientific and Technical Information (OSTI)

MEMS-based chemical analysis systems development at Sandia National Labs. Sponsoring Org: USDOE. Country of Publication: United States. Language: English. Subject: 77 ...

  15. Physics-Based Constraints in the Forward Modeling Analysis of...

    Office of Scientific and Technical Information (OSTI)

Technical Report: Physics-Based Constraints in the Forward Modeling Analysis of Time-Correlated Image Data (Long Version)

  16. Physics-based constraints in the forward modeling analysis of...

    Office of Scientific and Technical Information (OSTI)

Physics-based constraints in the forward modeling analysis of time-correlated image data

  17. Physics-based constraints in the forward modeling analysis of...

    Office of Scientific and Technical Information (OSTI)

Conference: Physics-based constraints in the forward modeling analysis of time-correlated image data

  19. Sandia National Laboratories analysis code data base

    SciTech Connect (OSTI)

    Peterson, C.W.

    1994-11-01

Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code "ownership" and release status, and references describing the physical models and numerical implementation.

  20. Overview of New Tools to Perform Safety Analysis: BWR Station Black Out Test Case

    SciTech Connect (OSTI)

    D. Mandelli; C. Smith; T. Riley; J. Nielsen; J. Schroeder; C. Rabiti; A. Alfonsi; Cogliati; R. Kinoshita; V. Pasucci; B. Wang; D. Maljovec

    2014-06-01

    Dynamic Probabilistic Risk Assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP, MELCOR) with simulation controller codes (e.g., RAVEN, ADAPT). While system simulator codes accurately model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic, operating procedures) and stochastic (e.g., component failures, parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by: 1) sampling values of a set of parameters from the uncertainty space of interest (using the simulation controller codes), and 2) simulating the system behavior for that specific set of parameter values (using the system simulator codes). For complex systems, one of the major challenges in using DPRA methodologies is to analyze the large amount of information (i.e., the large number of scenarios) generated, where clustering techniques are typically employed to allow users to better organize and interpret the data. In this paper, we focus on the analysis of a nuclear simulation dataset that is part of the Risk Informed Safety Margin Characterization (RISMC) Boiling Water Reactor (BWR) station blackout (SBO) case study. We apply a software tool that provides the domain experts with an interactive analysis and visualization environment for understanding the structures of such high-dimensional nuclear simulation datasets. Our tool encodes traditional and topology-based clustering techniques, where the latter partitions the data points into clusters based on their uniform gradient flow behavior. We demonstrate through our case study that both types of clustering techniques complement each other in bringing enhanced structural understanding of the data.
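The two-step DPRA workflow described above (sample the uncertainty space, then run the simulator for each sampled parameter set) can be sketched in a few lines. The "simulator" here is a hypothetical smooth response surface, not RELAP or MELCOR, and the parameter ranges and outcome limit are invented for illustration:

```python
import random

def simulate_peak_temp(failure_time_h, power_frac):
    # Hypothetical stand-in for a system simulator code: a smooth response
    # surface, NOT a real thermal-hydraulics model.
    return 600.0 + 80.0 * failure_time_h * power_frac

def dpra_sample(n, seed=0):
    # Step 1: sample uncertain parameters (the simulation controller's job).
    # Step 2: run the deterministic simulator once per sampled parameter set.
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        failure_time_h = rng.uniform(0.0, 8.0)  # stochastic: component failure time
        power_frac = rng.uniform(0.8, 1.2)      # stochastic: decay-heat uncertainty
        results.append(simulate_peak_temp(failure_time_h, power_frac))
    return results

runs = dpra_sample(1000)
# Organize the many scenarios by outcome, a crude stand-in for clustering:
clusters = {"below_limit": [t for t in runs if t < 1000.0],
            "above_limit": [t for t in runs if t >= 1000.0]}
```

Real DPRA tools replace the response surface with a full plant model and the outcome split with clustering over high-dimensional scenario data.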

  1. 2007 Wholesale Power Rate Case Final Proposal : Risk Analysis Study.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2006-07-01

    BPA's operating environment is filled with numerous uncertainties, and thus the rate-setting process must take into account a wide spectrum of risks. The objective of the Risk Analysis is to identify, model, and analyze the impacts that key risks have on BPA's net revenue (total revenues less total expenses). This is carried out in two distinct steps: a risk analysis step, in which the distributions, or profiles, of operating and non-operating risks are defined, and a risk mitigation step, in which different rate tools are tested to assess their ability to recover BPA's costs in the face of this uncertainty. Two statistical models are used in the risk analysis step for this rate proposal, the Risk Analysis Model (RiskMod) and the Non-Operating Risk Model (NORM), while a third model, the ToolKit, is used to test the effectiveness of rate tool options in the risk mitigation step. RiskMod is discussed in Sections 2.1 through 2.4, the NORM is discussed in Section 2.5, and the ToolKit is discussed in Section 3. The models function together so that BPA can develop rates that cover all of its costs and provide a high probability of making its Treasury payments on time and in full during the rate period. By law, BPA's payments to Treasury are the lowest priority for revenue application, meaning that payments to Treasury are the first to be missed if financial reserves are insufficient to pay all bills on time. For this reason, BPA measures its potential for recovering costs in terms of the probability of being able to make Treasury payments on time (also known as Treasury Payment Probability or TPP).
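A TPP-style metric lends itself to a compact Monte Carlo sketch: simulate net revenue under uncertainty and count the fraction of trials in which reserves plus net revenue cover the Treasury payment. The distributions and dollar figures below are invented for illustration and are not BPA's actual risk profiles:

```python
import random

def treasury_payment_probability(n_trials=20000, seed=1):
    # Monte Carlo sketch of a TPP-style metric. All distributions and dollar
    # amounts are hypothetical, not BPA figures.
    rng = random.Random(seed)
    treasury_payment = 700.0  # $M due in the year (hypothetical)
    reserves = 400.0          # $M starting financial reserves (hypothetical)
    made = 0
    for _ in range(n_trials):
        revenues = rng.gauss(3300.0, 400.0)  # operating risks: hydro, market prices
        expenses = rng.gauss(2800.0, 150.0)  # non-operating risks
        net_revenue = revenues - expenses
        if reserves + net_revenue >= treasury_payment:  # Treasury is paid last
            made += 1
    return made / n_trials

tpp = treasury_payment_probability()
```

Rate tools that add revenue or adjust rates annually shift these distributions upward, raising the estimated TPP.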

  2. Building America Special Research Project: High-R Walls Case Study Analysis

    Office of Environmental Management (EM)

    This report considers a number of promising wall systems with improved thermal control to improve plant-wide performance. Unlike previous studies, it considers performance in a more realistic manner, including some true three-dimensional heat flow and the relative risk of moisture damage.

  3. Analysis of Energy Efficiency Program Impacts Based on Program Spending

    Gasoline and Diesel Fuel Update (EIA)

    This report, Analysis of Energy Efficiency Program Impacts Based on Program Spending (May 2015), was prepared by the U.S. Energy Information Administration (EIA), the statistical and analytical agency within the U.S. Department of Energy. By law, EIA's data, analyses, and forecasts are independent of approval by any other officer or employee of the United States Government.

  4. Physics-Based Constraints in the Forward Modeling Analysis of Time-Correlated Image Data (Long Version)

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Physics-Based Constraints in the Forward Modeling Analysis of Time-Correlated Image Data (Long Version). Authors: Carroll, James L.; Tomkins, Christopher D. (Los Alamos National Laboratory).

  5. Physics-based constraints in the forward modeling analysis of time-correlated image data

    Office of Scientific and Technical Information (OSTI)

    Conference: Physics-based constraints in the forward modeling analysis of time-correlated image data. Authors: Carroll, James; Tomkins, Chris (Los Alamos National Laboratory). Publication Date: 2012-03-15. OSTI Identifier: 1209307. Report Number(s): LA-UR-12-01365.

  6. Proteomics based compositional analysis of complex cellulase-hemicellulase mixtures

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Proteomics based compositional analysis of complex cellulase-hemicellulase mixtures. Efficient deconstruction of cellulosic biomass to fermentable sugars for fuel and chemical production is accomplished by a complex mixture of cellulases, hemicellulases and accessory enzymes (e.g., >50 extracellular proteins).

  7. Copula-Based Flood Frequency Analysis at Ungauged Basin Confluences: Nashville, Tennessee

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Copula-Based Flood Frequency Analysis at Ungauged Basin Confluences: Nashville, Tennessee. Many cities are located at or near the confluence of streams, where availability of water resources may be enhanced to sustain user needs while also posing an increased …

  8. Analysis of Web Based Solar Photovoltaic Mapping Tools

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    A PV mapping tool visually represents a specific site and calculates PV system size and projected electricity production. This report identifies the commercially available solar mapping tools and thoroughly summarizes the source data type and resolution, the visualization software program being used, user inputs, calculation methodology and algorithms, …

  9. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    SciTech Connect (OSTI)

    Noonan, Nicholas James

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  10. 2007 Wholesale Power Rate Case Initial Proposal : Risk Analysis Study.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2005-11-01

    The Federal Columbia River Power System (FCRPS), operated on behalf of the ratepayers of the PNW by BPA and other Federal agencies, faces many uncertainties during the FY 2007-2009 rate period. Among these uncertainties, the largest revolve around hydro conditions, market prices and river operations for fish recovery. In order to provide a high probability of making its U.S. Treasury payments, BPA performs a Risk Analysis as part of its rate-making process. In this Risk Analysis, BPA identifies key risks, models their relationships, and then analyzes their impacts on net revenues (total revenues less expenses). BPA subsequently evaluates in the ToolKit Model the Treasury Payment Probability (TPP) resulting from the rates, risks, and risk mitigation measures described here and in the Wholesale Power Rate Development Study (WPRDS). If the TPP falls short of BPA's standard, additional risk mitigation revenues, such as PNRR and CRAC revenues, are incorporated in the modeling in ToolKit until the TPP standard is met. Increased wholesale market price volatility and six years of drought have significantly changed the profile of risk and uncertainty facing BPA and its stakeholders. These present new challenges for BPA in its effort to keep its power rates as low as possible while fully meeting its obligations to the U.S. Treasury. As a result, the risk BPA faces in not receiving the level of secondary revenues that have been credited to power rates before receiving those funds is greater. In addition to market price volatility, BPA also faces uncertainty around the financial impacts of operations for fish programs in FY 2006 and in the FY 2007-2009 rate period. A new Biological Opinion or possible court-ordered change to river operations in FY 2006 through FY 2009 may reduce BPA's net revenues included in the Initial Proposal. Finally, the FY 2007-2009 risk analysis includes new operational risks as well as a more comprehensive analysis of non-operating risks.
Both the operational and non-operational risks will be described in Section 2.0 of this study. Given these risks, if rates are designed using BPA's traditional approach of only adding Planned Net Revenues for Risk (PNRR), power rates would need to recover a much larger ''risk premium'' to meet BPA's TPP standard. As an alternative to high fixed risk premiums, BPA is proposing a risk mitigation package that combines PNRR with a variable rate mechanism similar to the cost recovery adjustment mechanisms used in the FY 2002-2006 rate period. The proposed risk mitigation package is less expensive on a forecasted basis because the rates can be adjusted on an annual basis to respond to uncertain financial outcomes. BPA is also proposing a Dividend Distribution Clause (DDC) to refund reserves in excess of $800M to customers in the event net revenues in the next rate period exceed current financial forecasts.

  11. An approach to model validation and model-based prediction -- polyurethane foam case study.

    SciTech Connect (OSTI)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the ''philosophy'' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). 
(3) We perform statistical analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a ''model supplement term'' when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. In the validation analysis it is indicated that the model tends to ''exaggerate'' the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems.
For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model-based predictions. Several hypothetical prediction problems are created and addressed. Hypothetical problems are used because no guidance was provided.
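The ''model supplement term'' idea, a bias correction fitted to experiment-minus-model residuals, can be illustrated with ordinary least squares on a linear-in-temperature correction. The temperatures and residual values below are hypothetical, not the study's data:

```python
# Fit an additive "model supplement" (bias) term delta(t) = a + b*t so that
# model(t) + delta(t) better matches experiments.
def fit_linear_bias(xs, residuals):
    # Ordinary least squares for residual = a + b*x,
    # where residual = experiment - model prediction.
    n = len(xs)
    mx = sum(xs) / n
    mr = sum(residuals) / n
    b = (sum((x - mx) * (r - mr) for x, r in zip(xs, residuals))
         / sum((x - mx) ** 2 for x in xs))
    a = mr - b * mx
    return a, b

# Hypothetical temperatures (C) and experiment-minus-model residuals; the
# negative trend mimics a model that "exaggerates" temperature effects:
temps = [250, 300, 350, 400]
resid = [0.5, 0.2, -0.1, -0.4]
a, b = fit_linear_bias(temps, resid)

def corrected(model_pred, t):
    # Model-based prediction with the supplement term applied.
    return model_pred + a + b * t
```

In practice the supplement term would also carry its own uncertainty, which propagates into the prediction intervals.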

  12. Klonos: A Similarity Analysis Based Tool for Software Porting

    Energy Science and Technology Software Center (OSTI)

    2014-07-30

    Klonos is a compiler-based tool that can help users port scientific applications. The tool is based on similarity analysis performed with the help of the OpenUH compiler (a branch of the Open64 compiler). It combines syntactic metrics with cost-model-provided metrics to cluster similar subroutines that can be ported similarly. The generated porting plan allows programmers and compilers to reuse porting experience as much as possible during the porting process.
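A similarity-based grouping of subroutines, the core idea behind a porting plan like Klonos's, might look like the following sketch. The per-subroutine metric vectors, names, and threshold are invented, and this is not the OpenUH implementation:

```python
import math

def cosine(u, v):
    # Cosine similarity between two metric vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def cluster_by_similarity(features, threshold=0.95):
    # Greedy clustering: a subroutine joins the first cluster whose
    # representative it matches above the threshold, else starts a new cluster.
    clusters = []  # list of (representative_vector, member_names)
    for name, vec in features.items():
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((vec, [name]))
    return [members for _, members in clusters]

# Hypothetical per-subroutine metrics, e.g. (loop count, call count, flop estimate):
features = {
    "stencil_a": [8, 2, 900],
    "stencil_b": [9, 2, 1000],   # near-duplicate of stencil_a
    "io_dump":   [1, 40, 10],
}
groups = cluster_by_similarity(features)
```

Subroutines in the same group can then share one porting strategy, which is what lets porting experience be reused.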

  13. Tariff-based analysis of commercial building electricity prices

    SciTech Connect (OSTI)

    Coughlin, Katie M.; Bolduc, Chris A.; Rosenquist, Greg J.; VanBuskirk, Robert D.; McMahon, James E.

    2008-03-28

    This paper presents the results of a survey and analysis of electricity tariffs and marginal electricity prices for commercial buildings. The tariff data come from a survey of 90 utilities and 250 tariffs for non-residential customers collected in 2004 as part of the Tariff Analysis Project at LBNL. The goals of this analysis are to provide useful summary data on the marginal electricity prices commercial customers actually see, and insight into the factors that are most important in determining prices under different circumstances. We provide a new, empirically-based definition of several marginal prices: the effective marginal price and energy-only and demand-only prices, and derive a simple formula that expresses the dependence of the effective marginal price on the marginal load factor. The latter is a variable that can be used to characterize the load impacts of a particular end-use or efficiency measure. We calculate all these prices for eleven regions within the continental U.S.
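The dependence of an effective marginal price on the marginal load factor can be written down directly: the marginal cost of a load change is the energy price plus the demand charge spread over the marginal energy. This is a sketch of that relationship, not the paper's exact formula, and the tariff numbers are hypothetical:

```python
def effective_marginal_price(d_energy_kwh, d_demand_kw, energy_price,
                             demand_charge, hours=730):
    # Effective marginal price ($/kWh) of a load change under a two-part tariff.
    # mlf is the marginal load factor: marginal energy over marginal demand
    # times hours in the billing period.
    mlf = d_energy_kwh / (d_demand_kw * hours)
    return energy_price + demand_charge / (mlf * hours)

# A measure that saves 500 kWh/month and 1 kW of billing demand, under a
# hypothetical $0.08/kWh energy rate and $10/kW-month demand charge:
p = effective_marginal_price(500, 1.0, 0.08, 10.0)
```

A high marginal load factor (energy-heavy changes) pushes the effective price toward the energy-only price; a low one makes the demand charge dominate.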

  14. Financial Analysis of Incentive Mechanisms to Promote Energy Efficiency: Case Study of a Prototypical Southwest Utility

    SciTech Connect (OSTI)

    Cappers, Peter; Goldman, Charles; Chait, Michele; Edgar, George; Schlegel, Jeff; Shirley, Wayne

    2009-03-04

    Many state regulatory commissions and policymakers want utilities to aggressively pursue energy efficiency as a strategy to mitigate demand and energy growth, diversify the resource mix, and provide an alternative to building new, costly generation. However, as the National Action Plan for Energy Efficiency (NAPEE 2007) points out, many utilities continue to shy away from aggressively expanding their energy efficiency efforts when their shareholders' fundamental financial interests are placed at risk by doing so. Thus, there is increased interest in developing effective ratemaking and policy approaches that address utility disincentives to pursue energy efficiency or lack of incentives for more aggressive energy efficiency efforts. New regulatory initiatives to promote increased utility energy efficiency efforts also affect the interests of consumers. Ratepayers and their advocates are concerned with issues of fairness, impacts on rates, and total consumer costs. From the perspective of energy efficiency advocates, the quid pro quo for utility shareholder incentives is the obligation to acquire all, or nearly all, achievable cost-effective energy efficiency. A key issue for state regulators and policymakers is how to maximize the cost-effective energy efficiency savings attained while achieving an equitable sharing of benefits, costs and risks among the various stakeholders. In this study, we modeled a prototypical vertically-integrated electric investor-owned utility in the southwestern US that is considering implementing several energy efficiency portfolios. We analyze the impact of these energy efficiency portfolios on utility shareholders and ratepayers as well as the incremental effect on each party when lost fixed cost recovery and/or utility shareholder incentive mechanisms are implemented.
A primary goal of our quantitative modeling is to provide regulators and policymakers with an analytic framework and tools that assess the financial impacts of alternative incentive approaches on utility shareholders and customers if energy efficiency is implemented under various utility operating, cost, and supply conditions. We used and adapted a spreadsheet-based financial model (the Benefits Calculator) which was developed originally as a tool to support the National Action Plan for Energy Efficiency (NAPEE). The major steps in our analysis are displayed graphically in Figure ES-1. Two main inputs are required: (1) characterization of the utility, which includes its initial financial and physical market position, a forecast of the utility's future sales, peak demand, and resource strategy to meet projected growth; and (2) characterization of the Demand-Side Resource (DSR) portfolio: projected electricity and demand savings, costs and economic lifetime of a portfolio of energy efficiency (and/or demand response) programs that the utility is planning or considering implementing during the analysis period. The Benefits Calculator also estimates total resource costs and benefits of the DSR portfolio using a forecast of avoided capacity and energy costs. The Benefits Calculator then uses inputs provided in the Utility Characterization to produce a 'business-as-usual' base case as well as alternative scenarios that include energy efficiency resources, including the corresponding utility financial budgets required in each case. If a decoupling and/or a shareholder incentive mechanism is instituted, the Benefits Calculator model readjusts the utility's revenue requirement and retail rates accordingly. Finally, for each scenario, the Benefits Calculator produces several metrics that provide insights on how energy efficiency resources, decoupling and/or a shareholder incentive mechanism impact utility shareholders (e.g., overall earnings, return on equity), ratepayers (e.g., average customer bills and rates) and society (e.g., net resource benefits).
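The decoupling true-up described above can be illustrated with a toy revenue-requirement calculation comparing a business-as-usual case against an energy-efficiency scenario. The sales and rate figures are hypothetical, and this is a sketch of the mechanism, not the Benefits Calculator itself:

```python
def scenario_metrics(base_sales_mwh, retail_rate, ee_savings_mwh, decoupling=False):
    # Sketch of a business-as-usual vs. energy-efficiency comparison.
    # With decoupling, the retail rate trues up so the original revenue
    # requirement is still recovered despite lower sales.
    bau_revenue = base_sales_mwh * retail_rate
    ee_sales = base_sales_mwh - ee_savings_mwh
    if decoupling:
        rate = bau_revenue / ee_sales   # rate adjusts upward on lower sales
    else:
        rate = retail_rate              # utility under-recovers fixed costs
    revenue = rate * ee_sales
    return {"rate": rate, "revenue": revenue,
            "lost_recovery": bau_revenue - revenue}

# Hypothetical utility: 1 TWh sales, $100/MWh rate, 5% EE savings.
base = scenario_metrics(1_000_000, 100.0, 50_000, decoupling=False)
dec = scenario_metrics(1_000_000, 100.0, 50_000, decoupling=True)
```

Without decoupling the utility loses fixed-cost recovery in proportion to savings; with decoupling the rate rises slightly and revenue is held whole, which is the disincentive-removal effect the study quantifies.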

  15. Building America Special Research Project: High-R Walls Case Study Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building Science Press, © 2009. Research Report 0903, March 11, 2009 (Rev. June 8, 2011). John Straube and Jonathan Smegal. Abstract: Many concerns, including the rising cost of energy, climate change, and demands for increased comfort, have led to the desire for increased insulation levels in many new and existing buildings. …

  16. Techno-Economic Analysis of Biofuels Production Based on Gasification

    SciTech Connect (OSTI)

    Swanson, R. M.; Platon, A.; Satrio, J. A.; Brown, R. C.; Hsu, D. D.

    2010-11-01

    This study compares capital and production costs of two biomass-to-liquid production plants based on gasification. The first biorefinery scenario is an oxygen-fed, low-temperature (870°C), non-slagging, fluidized bed gasifier. The second scenario is an oxygen-fed, high-temperature (1,300°C), slagging, entrained flow gasifier. Both are followed by catalytic Fischer-Tropsch synthesis and hydroprocessing to naphtha-range (gasoline blend stock) and distillate-range (diesel blend stock) liquid fractions. Process modeling software (Aspen Plus) is utilized to organize the mass and energy streams and cost estimation software is used to generate equipment costs. Economic analysis is performed to estimate the capital investment and operating costs. Results show that the total capital investment required for nth plant scenarios is $610 million and $500 million for high-temperature and low-temperature scenarios, respectively. Product value (PV) for the high-temperature and low-temperature scenarios is estimated to be $4.30 and $4.80 per gallon of gasoline equivalent (GGE), respectively, based on a feedstock cost of $75 per dry short ton. Sensitivity analysis is also performed on process and economic parameters. This analysis shows that total capital investment and feedstock cost are among the most influential parameters affecting the PV.
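At its core, a product-value calculation is annualized cost divided by annual fuel output. The sketch below uses the study's $610 million capital figure and $75/dry-short-ton feedstock price, but the capital charge rate, operating cost, feedstock throughput, and plant output are assumptions for illustration, so the result should not be read as the study's number:

```python
def product_value_per_gge(total_capital, capital_charge, annual_opex,
                          feedstock_tons, feedstock_price, annual_gge):
    # Minimal nth-plant sketch: the product value is the fuel price that
    # recovers annualized capital plus operating and feedstock costs.
    annual_cost = (total_capital * capital_charge
                   + annual_opex
                   + feedstock_tons * feedstock_price)
    return annual_cost / annual_gge

pv = product_value_per_gge(
    total_capital=610e6,      # study's high-temperature capital investment
    capital_charge=0.13,      # assumed annual capital charge rate
    annual_opex=60e6,         # assumed non-feedstock operating cost, $/yr
    feedstock_tons=700_000,   # assumed dry short tons per year
    feedstock_price=75.0,     # study's feedstock cost, $/dry short ton
    annual_gge=45e6,          # assumed gallons of gasoline equivalent per year
)
```

The sensitivity result in the abstract follows directly: the two largest terms in the numerator are the capital charge and the feedstock bill.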

  17. Lossless droplet transfer of droplet-based microfluidic analysis

    DOE Patents [OSTI]

    Kelly, Ryan T (West Richland, WA); Tang, Keqi (Richland, WA); Page, Jason S (Kennewick, WA); Smith, Richard D (Richland, WA)

    2011-11-22

    A transfer structure for droplet-based microfluidic analysis is characterized by a first conduit containing a first stream having at least one immiscible droplet of aqueous material and a second conduit containing a second stream comprising an aqueous fluid. The interface between the first conduit and the second conduit can define a plurality of apertures, wherein the apertures are sized to prevent exchange of the first and second streams between conduits while allowing lossless transfer of droplets from the first conduit to the second conduit through contact between the first and second streams.

  18. Identification and Prioritization of Analysis Cases for Marine and Hydrokinetic Energy Risk Screening

    SciTech Connect (OSTI)

    Anderson, Richard M.; Unwin, Stephen D.; Van Cleve, Frances B.

    2010-06-16

    In this report we describe the development of the Environmental Risk Evaluation System (ERES), a risk-informed analytical process for estimating the environmental risks associated with the construction and operation of marine and hydrokinetic energy generation projects. The development process consists of two main phases of analysis. In the first phase, preliminary risk analyses will take the form of screening studies in which key environmental impacts and the uncertainties that create risk are identified, leading to a better-focused characterization of the relevant environmental effects. Existence of critical data gaps will suggest areas in which specific modeling and/or data collection activities should take place. In the second phase, more detailed quantitative risk analyses will be conducted, with residual uncertainties providing the basis for recommending risk mitigation and monitoring activities. We also describe the process used for selecting three cases for fiscal year 2010 risk screening analysis using the ERES. A case is defined as a specific technology deployed in a particular location involving certain environmental receptors specific to that location. The three cases selected satisfy a number of desirable criteria: 1) they correspond to real projects whose deployment is likely to take place in the foreseeable future; 2) the technology developers are willing to share technology and project-related data; 3) the projects represent a diversity of technology-site-receptor characteristics; 4) the projects are of national interest, and 5) environmental effects data may be available for the projects.

  19. Feature-Based Statistical Analysis of Combustion Simulation Data

    SciTech Connect (OSTI)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. 
We highlight the utility of this new framework for combustion science; however, it is applicable to many other science domains.
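In one dimension, the idea of segmenting a field into features by threshold and attaching per-feature statistics (which merge trees generalize to arbitrary thresholds) can be sketched as follows; the field values are invented:

```python
import statistics

def extract_features(field, threshold):
    # 1-D sketch of feature-based analysis: contiguous runs above a threshold
    # are "features"; per-feature statistics stand in for merge-tree attributes.
    features, current = [], []
    for v in field:
        if v >= threshold:
            current.append(v)
        elif current:
            features.append(current)
            current = []
    if current:
        features.append(current)
    return [{"size": len(f), "mean": statistics.mean(f), "max": max(f)}
            for f in features]

field = [0.1, 0.9, 1.2, 0.2, 1.5, 1.4, 1.6, 0.0]
stats = extract_features(field, threshold=0.8)
```

Precomputing such attributes over all thresholds is what lets the meta-data stay small while still supporting interactive histograms, CDFs, and time series per feature.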

  20. A Research Roadmap for Computation-Based Human Reliability Analysis

    SciTech Connect (OSTI)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  1. Business Case Analysis of Prototype Fabrication Division Recapitalization Plan—Summary

    SciTech Connect (OSTI)

    Booth, Steven Richard; Benson, Faith Ann; Dinehart, Timothy Grant

    2015-04-30

    Business case studies were completed to support procurement of new machines and capital equipment in the Prototype Fabrication (PF) Division SM-39 and TA-03-0102 machine shops. Economic analysis was conducted for replacing the Mazak 30Y Mill-Turn Machine in SM-39, the Haas Vertical CNC Mill in Building 102, and the Hardinge Q10/65-SP Lathe in SM-39. Analysis was also conducted for adding a NanoTech Lathe in Building 102 and a new electrical discharge machine (EDM) in SM-39 to augment current capabilities. To determine the value of switching machinery, a baseline scenario was compared with a future scenario where new machinery was purchased and installed. Costs and benefits were defined via interviews with subject matter experts.
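The baseline-versus-replacement comparison in such business cases typically reduces to a discounted cash-flow test on the incremental costs and benefits. A minimal sketch with hypothetical machine costs and savings (none of these figures come from the PF Division study):

```python
def npv(cashflows, rate):
    # Net present value; cashflows[t] is the net benefit in year t (t=0 is today).
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical: buy a machine for $800k now, save $150k/yr for 10 years,
# discounted at 7%. A positive NPV favors replacement over the baseline.
delta = [-800_000] + [150_000] * 10
decision = npv(delta, 0.07)
```

The "baseline scenario vs. future scenario" framing in the abstract corresponds to taking the difference of the two scenarios' cash flows, as `delta` does here.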

  2. Discrete Mathematical Approaches to Graph-Based Traffic Analysis

    SciTech Connect (OSTI)

    Joslyn, Cliff A.; Cowley, Wendy E.; Hogan, Emilie A.; Olsen, Bryan K.

    2014-04-01

    Modern cyber defense and analytics require general, formal models of cyber systems. Multi-scale network models are prime candidates for such formalisms, using discrete mathematical methods based in hierarchically structured directed multigraphs which also include rich sets of labels. An exemplar application of such an approach is traffic analysis: observing and analyzing connections between clients, servers, hosts, and actors within IP networks, over time, to identify characteristic or suspicious patterns. Toward that end, NetFlow (or, more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. In this paper, we consider traffic analysis of NetFlow using both basic graph statistics and two new mathematical measures involving labeled degree distributions and time interval overlap measures. We do all of this over the VAST test data set of 96M synthetic NetFlow graph edges, against which we can identify characteristic patterns of simulated ground-truth network attacks.
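    Two of the measures named in this abstract, labeled degree distributions and time-interval overlaps, can be sketched over toy flow records. This is an illustrative sketch, not the paper's implementation; the flow tuples, labels, and times below are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical flow records: (src, dst, protocol_label, start_time, end_time)
flows = [
    ("10.0.0.1", "10.0.0.9", "tcp", 0, 10),
    ("10.0.0.1", "10.0.0.7", "tcp", 5, 12),
    ("10.0.0.2", "10.0.0.9", "udp", 8, 9),
]

# Labeled out-degree distribution: per label, how many hosts have out-degree k.
out_deg = defaultdict(Counter)            # label -> host -> out-degree
for src, dst, label, *_ in flows:
    out_deg[label][src] += 1
deg_dist = {label: Counter(c.values()) for label, c in out_deg.items()}
print(deg_dist)   # -> {'tcp': Counter({2: 1}), 'udp': Counter({1: 1})}

def overlap(a, b):
    """Length of temporal overlap between intervals a=(s1,e1) and b=(s2,e2)."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

print(overlap((0, 10), (5, 12)))   # -> 5
```

On real NetFlow data the same two passes run over millions of edges, with the label set drawn from protocol, port, or AS attributes rather than a single protocol string.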

  3. Landslide hazard analysis for pipelines: The case of the Simonette river crossing

    SciTech Connect (OSTI)

    Grivas, D.A.; Schultz, B.C.; O'Neil, G.; Rizkalla, M.; McGuffey, V.C.

    1995-12-31

    The overall objective of this study is to develop a probabilistic methodology to analyze landslide hazards and their effects on the safety of buried pipelines. The methodology incorporates a range of models that can accommodate differences in the ground movement modes and the amount and type of information available at various site locations. Two movement modes are considered, namely (a) instantaneous (catastrophic) slides, and (b) gradual ground movement, which may result in cumulative displacements over the pipeline design life (30-40 years) that are in excess of allowable values. Probabilistic analysis is applied in each case to address the uncertainties associated with important factors that control slope stability. Availability of information ranges from relatively well studied, instrumented installations to cases where data is limited to what can be derived from topographic and geologic maps. The methodology distinguishes between procedures applied where there is little information and those that can be used when relatively extensive data is available. Important aspects of the methodology are illustrated in a case study involving a pipeline located in Northern Alberta, Canada, in the Simonette river valley.

  4. SYSTEM DESIGN AND ANALYSIS FOR CONCEPTUAL DESIGN OF OXYGEN-BASED PC BOILER

    SciTech Connect (OSTI)

    Zhen Fan; Andrew Seltzer

    2003-11-01

    The objective of the system design and analysis task of the Conceptual Design of Oxygen-Based PC Boiler study is to optimize the PC boiler plant by maximizing system efficiency. Simulations of the oxygen-fired plant with CO₂ sequestration were conducted using Aspen Plus and were compared to a reference air-fired 460 MW plant. Flue gas recycle is used in the O₂-fired PC to control the flame temperature. Parametric runs were made to determine the effect of flame temperature on system efficiency and on required waterwall material and thickness. The degree of improvement in system efficiency from various modifications, including hot gas recycle, purge gas recycle, flue gas feedwater recuperation, and recycle purge gas expansion, was investigated. The selected O₂-fired design case has a system efficiency of 30.1% compared to the air-fired system efficiency of 36.7%. The design O₂-fired case requires T91 waterwall material and has a waterwall surface area of only 44% of the air-fired reference case. Compared to other CO₂ sequestration technologies, the O₂-fired PC is substantially better than both natural gas combined cycles and post-CO₂-removal PCs, and is slightly better than integrated gasification combined cycles.

  5. Copula-Based Flood Frequency Analysis at Ungauged Basin Confluences...

    Office of Scientific and Technical Information (OSTI)

    This case study may help researchers and practitioners develop a better understanding of joint flood frequency with consideration of upstream dam regulation among several ...

  6. Feature-based Analysis of Plasma-based Particle Acceleration Data

    SciTech Connect (OSTI)

    Ruebel, Oliver; Geddes, Cameron G.R.; Chen, Min; Cormier-Michel, Estelle; Bethel, E. Wes

    2013-07-05

    Plasma-based particle accelerators can produce and sustain acceleration fields thousands of times stronger than conventional particle accelerators, providing a potential solution to the problem of the growing size and cost of conventional particle accelerators. To facilitate scientific knowledge discovery from the ever-growing collections of accelerator simulation data generated by accelerator physicists to investigate next-generation plasma-based particle accelerator designs, we describe a novel approach for automatic detection and classification of particle beams and beam substructures due to temporal differences in the acceleration process, here called acceleration features. The automatic feature detection, in combination with a novel visualization tool for fast, intuitive, query-based exploration of acceleration features, enables an effective top-down data exploration process, starting from a high-level, feature-based view down to the level of individual particles. We describe the application of our analysis in practice to analyze simulations of single-pulse and dual and triple colliding-pulse accelerator designs, to study the formation and evolution of particle beams, to compare substructures of a beam, and to investigate transverse particle loss.

  7. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    SciTech Connect (OSTI)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B.

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) generation of samples from uncertain analysis inputs, (3) propagation of sampled inputs through an analysis, (4) presentation of uncertainty analysis results, and (5) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition.
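    Steps (2), (3), and (5) of the pipeline surveyed here, sampling uncertain inputs, propagating them through an analysis, and computing a rank-based sensitivity measure, can be sketched on a toy model. The model, input distributions, and sample size below are illustrative assumptions, not drawn from the survey.

```python
import random

# Sample two uncertain inputs, propagate through a toy model, then use the
# Spearman rank correlation of each input with the output as a sensitivity measure.
random.seed(0)
N = 200
x1 = [random.uniform(0, 1) for _ in range(N)]   # epistemically uncertain inputs
x2 = [random.uniform(0, 1) for _ in range(N)]

# Toy model: y is dominated by x1, weakly affected by x2, plus small noise.
y = [3 * a + 0.1 * b + random.gauss(0, 0.05) for a, b in zip(x1, x2)]

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(u, v):
    """Pearson correlation of the rank vectors (no tie correction)."""
    ru, rv = ranks(u), ranks(v)
    mu = (len(u) - 1) / 2
    cov = sum((a - mu) * (b - mu) for a, b in zip(ru, rv))
    var = sum((a - mu) ** 2 for a in ru)
    return cov / var

print(f"x1 vs y: {spearman(x1, y):.2f}")   # near 1: dominant input
print(f"x2 vs y: {spearman(x2, y):.2f}")   # near 0: weak input
```

The rank transformation is what makes the measure robust to monotone nonlinearity, which is why it features prominently in the procedures listed above.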

  8. Geography-based structural analysis of the Internet

    SciTech Connect (OSTI)

    Kasiviswanathan, Shiva; Eidenbenz, Stephan; Yan, Guanhua

    2010-01-01

    In this paper, we study some geographic aspects of the Internet. We base our analysis on a large set of geolocated IP hop-level session data (including about 300,000 backbone routers, 150 million end hosts, and 1 billion sessions) that we synthesized from a variety of different input sources such as US census data, computer usage statistics, Internet market share data, IP geolocation data sets, CAIDA's Skitter data set for backbone connectivity, and BGP routing tables. We use this model to perform a nationwide and statewide geographic analysis of the Internet. Our main observations are: (1) There is a dominant coast-to-coast pattern in the US Internet traffic. In fact, in many instances even if the end-devices are not near either coast, the traffic between them still takes a long detour through the coasts. (2) More than half of the Internet paths are inflated by 100% or more compared to their corresponding geometric straight-line distance. This circuitousness makes the average ratio between the routing distance and geometric distance large (around 10). (3) The weighted mean hop count is around 5, but the hop counts are very loosely correlated with the distances. The weighted mean AS count (number of ASes traversed) is around 3. (4) The AS size and the AS location number distributions are heavy-tailed and strongly correlated. Most of the ASes are medium sized and there is a wide variability in the geographic dispersion size (measured in terms of the convex hull area) of these ASes.
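    The circuitousness measure in observation (2), routing distance divided by straight-line geometric distance, can be sketched with the haversine formula. The hop coordinates below are illustrative, not taken from the paper's data set.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Hypothetical hop path: New York -> Los Angeles -> Chicago (a coastal detour).
path = [(40.7, -74.0), (34.1, -118.2), (41.9, -87.6)]
routed = sum(haversine_km(a, b) for a, b in zip(path, path[1:]))
direct = haversine_km(path[0], path[-1])
print(f"circuitousness ratio: {routed / direct:.2f}")   # > 1 means an inflated path
```

Averaging this ratio over all geolocated paths yields the paper's aggregate figure of roughly 10.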

  9. Statistical Analysis of Microarray Data with Replicated Spots: A Case Study with Synechococcus WH8102

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Thomas, E. V.; Phillippy, K. H.; Brahamsha, B.; Haaland, D. M.; Timlin, J. A.; Elbourne, L. D. H.; Palenik, B.; Paulsen, I. T.

    2009-01-01

    Until recently, microarray experiments often involved relatively few arrays with only a single representation of each gene on each array. A complete genome microarray with multiple spots per gene (spread out spatially across the array) was developed in order to compare the gene expression of a marine cyanobacterium and a knockout mutant strain in a defined artificial seawater medium. Statistical methods were developed for analysis in the special situation of this case study, where there is gene replication within an array and where relatively few arrays are used, which can be the case with current array technology. Due in part to the replication within an array, it was possible to detect very small changes in the levels of expression between the wild type and mutant strains. One interesting biological outcome of this experiment is the indication of the extent to which the phosphorus regulatory system of this cyanobacterium affects the expression of multiple genes beyond those strictly involved in phosphorus acquisition.
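    The within-array replication idea can be sketched as follows: summarize a gene's replicated spots on each array, then compare strains across arrays with a two-sample t statistic. The log-intensity values and the pooled-t choice are illustrative assumptions, not the paper's actual method or data.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical log-intensities for one gene: arrays x replicated spots per array.
wild_type = [[7.1, 7.3, 7.2, 7.0], [7.2, 7.4, 7.1, 7.3]]
mutant    = [[7.9, 8.1, 8.0, 7.8], [8.0, 8.2, 7.9, 8.1]]

# Averaging the replicated spots gives one summary value per array,
# which reduces spot-level noise before the between-strain comparison.
wt = [mean(a) for a in wild_type]
mu = [mean(a) for a in mutant]

# Pooled two-sample t statistic across arrays (n is small, as in the study).
n1, n2 = len(wt), len(mu)
sp = sqrt(((n1 - 1) * stdev(wt) ** 2 + (n2 - 1) * stdev(mu) ** 2) / (n1 + n2 - 2))
t = (mean(mu) - mean(wt)) / (sp * sqrt(1 / n1 + 1 / n2))
print(f"t = {t:.2f}")   # large |t| suggests differential expression
```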

  11. FAQS Gap Analysis Qualification Card – General Technical Base

    Broader source: Energy.gov [DOE]

    Functional Area Qualification Standard Gap Analysis Qualification Cards outline the differences between the last and latest version of the FAQ Standard.

  12. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT), and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA), and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them. (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify the parameter main effect. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
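    Morris One-At-a-Time (MOAT) screening, the most sample-efficient method in this comparison, can be sketched on a toy model. The three-parameter function, step size, and trajectory count below are illustrative; the study itself used PSUADE with the thirteen-parameter SAC-SMA model.

```python
import random

# Morris elementary-effects screening on a toy model: perturb one input at a
# time and average the absolute effects to rank parameter importance.
random.seed(1)

def model(x):                      # toy stand-in for the hydrological model
    return 4 * x[0] + x[1] ** 2 + 0.01 * x[2]

k, r, delta = 3, 20, 0.2           # parameters, repetitions, step size
effects = [[] for _ in range(k)]

for _ in range(r):                 # one elementary effect per parameter per repetition
    x = [random.uniform(0, 1 - delta) for _ in range(k)]
    y0 = model(x)
    for i in range(k):
        xp = list(x)
        xp[i] += delta             # perturb one input at a time
        effects[i].append((model(xp) - y0) / delta)

for i, e in enumerate(effects):    # mean |EE| (mu*) ranks parameter importance
    mu_star = sum(map(abs, e)) / r
    print(f"x{i}: mu* = {mu_star:.3f}")
```

Here x0 screens as dominant and x2 as negligible, which is exactly the kind of qualitative ranking MOAT delivers with few samples.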

  13. Habitat-Lite: A GSC case study based on free text terms for environmental metadata

    SciTech Connect (OSTI)

    Kyrpides, Nikos; Hirschman, Lynette; Clark, Cheryl; Cohen, K. Bretonnel; Mardis, Scott; Luciano, Joanne; Kottmann, Renzo; Cole, James; Markowitz, Victor; Kyrpides, Nikos; Field, Dawn

    2008-04-01

    There is an urgent need to capture metadata on the rapidly growing number of genomic, metagenomic and related sequences, such as 16S ribosomal genes. This need is a major focus within the Genomic Standards Consortium (GSC), and Habitat is a key metadata descriptor in the proposed 'Minimum Information about a Genome Sequence' (MIGS) specification. The goal of the work described here is to provide a light-weight, easy-to-use (small) set of terms ('Habitat-Lite') that captures high-level information about habitat while preserving a mapping to the recently launched Environment Ontology (EnvO). Our motivation for building Habitat-Lite is to meet the needs of multiple users, such as annotators curating these data, database providers hosting the data, and biologists and bioinformaticians alike who need to search and employ such data in comparative analyses. Here, we report a case study based on semi-automated identification of terms from GenBank and GOLD. We estimate that the terms in the initial version of Habitat-Lite would provide useful labels for over 60% of the kinds of information found in the GenBank isolation-source field, and around 85% of the terms in the GOLD habitat field. We present a revised version of Habitat-Lite and invite the community's feedback on its further development in order to provide a minimum list of terms to capture high-level habitat information and to provide classification bins needed for future studies.

  14. Analysis of Energy Efficiency Program Impacts Based on Program...

    U.S. Energy Information Administration (EIA) Indexed Site

    ... electricity sales but no program spending, they were put in a "no spend" category. 2Analysis by U.S. Energy Information ... Standards (EERS) (April 2015), accessed May 15, 2015; ...

  15. System planning analysis applied to OTEC: initial cases by Florida Power Corporation. Task II report No. FC-5237-2

    SciTech Connect (OSTI)

    1980-03-01

    The objective of the task was to exercise the FPC system planning methodology on: (1) a Base Case, a 10-year generation expansion plan with coal plants providing base load expansion, and (2) the same plan, but with 400 MW of OTEC substituting for coal burning units with equal resultant system reliability. OTEC inputs were based on reasonable economic projections of direct capital cost and O and M costs for first-generation large commercial plants; they are discussed in Section 2. The Base Case conditions for the FPC system planning methodology involved base load coal fueled additions during the 1980s and early 1990s. The first trial runs of the PROMOD system planning model substituted OTEC for 400 MW purchases of coal generated power during 1988-1989 and then 400 MW coal capacity thereafter. Results showed higher system reliability than Base Case runs. Reruns with greater coal fueled capacity displacement showed that OTEC could substitute for 400 MW purchases in 1988-1989 and replace the 800 MW coal unit scheduled for 1990 to yield equivalent system reliability. However, a 1995 unit would need to be moved to 1994. Production costing computer model runs were used as input to the Corporate Model to examine corporate financial impact. Present value of total revenue requirements was the primary indication of relative competitiveness between the Base Case and OTEC. Results show the present value of total revenue requirements unfavorable to OTEC as compared to coal units. The disparity was in excess of the allowable range for possible consideration.

  16. Global Trade Analysis Project (GTAP) Data Base | Open Energy...

    Open Energy Info (EERE)

    TOOL Name: GTAP 6 Data Base; Agency/Company/Organization: Purdue University; Sector: Energy; Topics: Policies/deployment programs, Co-benefits assessment, - Macroeconomic,...

  17. Algorithms and tools for high-throughput geometry-based analysis...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Algorithms and tools for high-throughput geometry-based analysis of crystalline porous materials Citation Details In-Document Search Title: Algorithms and tools ...

  18. The IAEA Coordinated Research Program on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis: Description of the Benchmark Test Cases and Phases

    SciTech Connect (OSTI)

    Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov

    2012-10-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTGR analysis tools are today typically assessed with sensitivity analysis, and then a few important input uncertainties (typically based on a PIRT process) are varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracies of coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is a broader acceptance of the use of uncertainty analysis, even in safety studies, and it has been accepted by regulators in some cases to replace the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project is built on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined: the prismatic type design is represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. The paper presents more detail on the benchmark cases, the different specific phases and tasks, and the latest status and plans.

  19. CHAPTER 7. BERYLLIUM ANALYSIS BY NON-PLASMA BASED METHODS

    SciTech Connect (OSTI)

    Ekechukwu, A

    2009-04-20

    The most common method of analysis for beryllium is inductively coupled plasma atomic emission spectrometry (ICP-AES). This method, along with inductively coupled plasma mass spectrometry (ICP-MS), is discussed in Chapter 6. However, other methods exist and have been used for different applications, including spectroscopic, chromatographic, colorimetric, and electrochemical techniques. This chapter provides an overview of beryllium analysis methods other than plasma spectrometry (inductively coupled plasma atomic emission spectrometry or mass spectrometry). The basic methods, detection limits, and interferences are described. Specific applications from the literature are also presented.

  20. Strategic backdrop analysis for fossil fuel planning. Task 1. Default Case. Report 468-117-07/02

    SciTech Connect (OSTI)

    Not Available

    1980-06-01

    This report presents data describing a default case analysis performed using the strategic backdrop analytical framework developed to facilitate fossil fuel planning within the DOE. Target years are 1985, 2000, and 2025. Residential, commercial, and industrial energy demands and impacts of energy technology implementation and market penetration are forecast using a set of energy technology assumptions. (DMC)

  1. Strategic backdrop analysis for fossil fuel planning. Task 1. Default Case. Report 468-117-07/03

    SciTech Connect (OSTI)

    Not Available

    1980-06-01

    This report presents data describing a default case analysis performed using the strategic backdrop analytical framework developed to facilitate fossil fuel planning within the DOE. Target years are 1985, 2000, and 2025. Residential, commercial, and industrial energy demands and impacts of energy technology implementation and market penetration are forecast using a set of energy technology assumptions.

  2. Strategic backdrop analysis for fossil fuel planning. Task 1. Default Case. Report 468-117-07/01

    SciTech Connect (OSTI)

    Not Available

    1980-06-01

    This report presents data describing a default case analysis performed using the strategic backdrop analytical framework developed to facilitate fossil fuel planning within the DOE. Target years are 1985 and 2025. Residential, commercial, and industrial energy demands are forecast as well as the impacts of energy technology implementation and market penetration using a set of energy technology assumptions. (DMC)

  3. Session Papers Preliminary Analysis of Ground-Based Microwave...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Shaw, J. A., J. H. Churnside, and E. R. Westwater. 1991. An Infrared Spectrometer for Ground-Based Profiling of Atmospheric Temperature and Humidity. Proc. SPIE Int'l. Symp. on...

  4. Posters Preliminary Analysis of Ground-Based Microwave and Infrared...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Shaw, J. A., J. H. Churnside, and E. R. Westwater. 1991. An Infrared Spectrometer for Ground-Based Profiling of Atmospheric Temperature and Humidity. Proc. SPIE Int'l. Symp. on...

  5. Prevalence and contribution of BRCA1 mutations in breast cancer and ovarian cancer: Results from three US population-based case-control studies of ovarian cancer

    SciTech Connect (OSTI)

    Whittemore, A.S.; Gong, G.; Itnyre, J.

    1997-03-01

    We investigate the familial risks of cancers of the breast and ovary, using data pooled from three population-based case-control studies of ovarian cancer that were conducted in the United States. We base estimates of the frequency of mutations of BRCA1 (and possibly other genes) on the reported occurrence of breast cancer and ovarian cancer in the mothers and sisters of 922 women with incident ovarian cancer (cases) and in 922 women with no history of ovarian cancer (controls). Segregation analysis and goodness-of-fit testing of genetic models suggest that rare mutations (frequency .0014; 95% confidence interval .0002-.011) account for all the observed aggregation of breast cancer and ovarian cancer in these families. The estimated risk of breast cancer by age 80 years is 73.5% in mutation carriers and 6.8% in noncarriers. The corresponding estimates for ovarian cancer are 27.8% in carriers and 1.8% in noncarriers. For cancer risk in carriers, these estimates are lower than those obtained from families selected for high cancer prevalence. The estimated proportion of all U.S. cancer diagnoses, by age 80 years, that are due to germ-line BRCA1 mutations is 3.0% for breast cancer and 4.4% for ovarian cancer. Aggregation of breast cancer and ovarian cancer was less evident in the families of 169 cases with borderline ovarian cancers than in the families of cases with invasive cancers. Familial aggregation did not differ by the ethnicity of the probands, although the number of non-White and Hispanic cases (N = 99) was sparse. 14 refs., 3 figs., 6 tabs.

  6. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    SciTech Connect (OSTI)

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  7. Temperature-based Instanton Analysis: Identifying Vulnerability in Transmission Networks

    SciTech Connect (OSTI)

    Kersulis, Jonas; Hiskens, Ian; Chertkov, Michael; Backhaus, Scott N.; Bienstock, Daniel

    2015-04-08

    A time-coupled instanton method for characterizing transmission network vulnerability to wind generation fluctuation is presented. To extend prior instanton work to multiple-time-step analysis, line constraints are specified in terms of temperature rather than current. An optimization formulation is developed to express the minimum wind forecast deviation such that at least one line is driven to its thermal limit. Results are shown for an IEEE RTS-96 system with several wind farms.

  8. Intelligent Control in Automation Based on Wireless Traffic Analysis

    SciTech Connect (OSTI)

    Kurt Derr; Milos Manic

    2007-09-01

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical both in maintaining the integrity of computer systems and in increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented, along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as making its use more secure.

  9. Intelligent Control in Automation Based on Wireless Traffic Analysis

    SciTech Connect (OSTI)

    Kurt Derr; Milos Manic

    2007-08-01

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical both in maintaining the integrity of computer systems and in increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented, along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as making its use more secure.

  10. Digital Instrumentation and Control Failure Events Derivation and Analysis by Frame-Based Technique

    SciTech Connect (OSTI)

    Hui-Wen Huang; Chunkuan Shih [National Tsing Hua University, 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan 30013 (China); Swu Yih [DML International, 18F-1 295, Section 2 Kuang Fu Road, Hsinchu, Taiwan (China); Yen-Chang Tzeng; Ming-Huei Chen [Institute of Nuclear Energy Research, No. 1000, Wunhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan (China)

    2006-07-01

    A frame-based technique comprising physical, logical, and cognitive frames was adopted to derive and analyze digital I&C failure events for the generic ABWR. The physical frame was built on a modified PCTran-ABWR plant simulation code, which was extended and enhanced for the feedwater, recirculation, and steam line systems. The logical frame was structured in MATLAB and incorporated into PCTran-ABWR to improve the pressure control, feedwater control, recirculation control, and automated power regulation control systems. As a result, software failures of these digital control systems can be properly simulated and analyzed. The cognitive frame was simulated by the operator's awareness status in the scenarios. Moreover, via an internal characteristics tuning technique, the modified PCTran-ABWR can precisely reflect the characteristics of the power-core flow; hence, in addition to transient plots, the analysis results can be demonstrated on the power-core flow map. A number of postulated I&C system software failure events were derived for the dynamic analyses. The basis for event derivation includes the published classification of software anomalies, the digital I&C design data for the ABWR, the Chapter 15 accident analysis of the generic SAR, and reported NPP I&C software failure events. The case study of this research includes (1) software CMF analysis for the major digital control systems; and (2) ABWR digital I&C software failure events postulated from actual non-ABWR digital I&C software failure events reported to the LER of the USNRC or the IRS of the IAEA. These events were analyzed with PCTran-ABWR. Conflicts among plant status, computer status, and human cognitive status were successfully identified. The operator might not easily recognize the abnormal condition, because the computer status seems to progress normally. However, a well-trained operator can become aware of the abnormal condition from the inconsistent physical parameters, and can then take early corrective actions to avoid the system hazard. This paper also discusses the advantage of the simulation-based method, which can investigate the dynamic behavior of digital I&C systems in more depth than other approaches; some unanticipated interactions can be observed by this method. (authors)
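    The plant/computer/cognitive conflict identified above reduces to a simple consistency check; the sketch below is a toy illustration only (the function, setpoints, and tolerances are invented, not the PCTran-ABWR/MATLAB implementation):

```python
# Toy check for the conflict described above: the digital controller reports
# "normal" while the physical parameters drift away from expectation.
# All names and numbers here are hypothetical.

def status_conflict(computer_status, observed, expected, tolerance):
    """True when the computer claims 'normal' but the physics disagrees."""
    return computer_status == "normal" and abs(observed - expected) > tolerance

# A drifting (hypothetical) feedwater flow the failed controller still reports as normal:
alarm = status_conflict("normal", observed=72.5, expected=70.0, tolerance=1.0)
```

A trained operator effectively performs this check mentally, cross-validating the computer's status against independent physical parameters.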

  11. Microsoft PowerPoint - Microbial Genome and Metagenome Analysis Case Study (NERSC Workshop - May 7-8, 2009).ppt [Compatibility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Microbial Genome & Metagenome Analysis: Computational Challenges. Natalia N. Ivanova,* Nikos C. Kyrpides,* Victor M. Markowitz** (*Genome Biology Program, Joint Genome Institute; **Lawrence Berkeley National Lab). Microbial genome & metagenome analysis. General aims: understand microbial life; apply to agriculture, bioremediation, biofuels, human health. Specific aims include: predict biochemistry & physiology of organisms based on genome sequence; explain known

  12. Code cases for implementing risk-based inservice testing in the ASME OM code

    SciTech Connect (OSTI)

    Rowley, C.W.

    1996-12-01

    Historically, inservice testing has been reasonably effective but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that, of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly conservative scope for IST components is to use the PRA and plant expert panels to create a two-tier IST component categorization scheme. The PRA provides the quantitative risk information, and the plant expert panel blends the quantitative and deterministic information to place each IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in the MSSC or LSSC category, two different testing strategies will be applied. The testing strategies will be unique to the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases is being developed to capture this process for plant use. One Code Case will be for component importance ranking; the remaining Code Cases will develop the MSSC and LSSC testing strategies by type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices.
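    The two-tier categorization can be sketched as a simple decision rule; the importance measure, threshold, and component data below are hypothetical illustrations, not values from the Code Cases:

```python
# Illustrative sketch (not the ASME OM Code procedure): categorize IST
# components as MSSC or LSSC from a PRA importance measure, with an
# expert-panel override for deterministic insights. The Fussell-Vesely
# threshold and the component data are invented for illustration.

def categorize(fussell_vesely, expert_panel_override=None):
    """Return 'MSSC' or 'LSSC' for an IST component."""
    if expert_panel_override in ("MSSC", "LSSC"):
        return expert_panel_override  # panel judgment takes precedence
    return "MSSC" if fussell_vesely > 0.005 else "LSSC"

components = {
    "RHR pump A":      {"fv": 0.02,   "panel": None},
    "CST check valve": {"fv": 0.0001, "panel": "MSSC"},  # panel upgrade
    "vent MOV":        {"fv": 0.0002, "panel": None},
}
tiers = {name: categorize(c["fv"], c["panel"]) for name, c in components.items()}
```

The override branch mirrors the blending step described above: the panel can promote a component the quantitative measure alone would leave in the LSSC tier.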

  13. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    SciTech Connect (OSTI)

    Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego

    2014-12-01

    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  14. Proteomics based compositional analysis of complex cellulase-hemicellulase mixtures

    SciTech Connect (OSTI)

    Chundawat, Shishir P.; Lipton, Mary S.; Purvine, Samuel O.; Uppugundla, Nirmal; Gao, Dahai; Balan, Venkatesh; Dale, Bruce E.

    2011-10-07

    Efficient deconstruction of cellulosic biomass to fermentable sugars for fuel and chemical production is accomplished by a complex mixture of cellulases, hemicellulases and accessory enzymes (e.g., >50 extracellular proteins). Cellulolytic enzyme mixtures, produced industrially mostly using fungi like Trichoderma reesei, are poorly characterized in terms of their protein composition and its correlation to hydrolytic activity on cellulosic biomass. The secretomes of commercial glycosyl hydrolase-producing microbes were explored using a proteomics approach with high-throughput quantification by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Here, we show that a proteomics-based spectral counting approach is a reasonably accurate and rapid analytical technique for determining the protein composition of complex glycosyl hydrolase mixtures, and that the composition correlates with the specific activity of individual enzymes present within the mixture. For example, a strong linear correlation was seen between Avicelase activity and total cellobiohydrolase content. Reliable, quantitative and cheaper analytical methods that provide insight into the cellulosic-biomass-degrading fungal and bacterial secretomes would lead to further improvements towards commercialization of plant-biomass-derived fuels and chemicals.
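    The spectral-counting idea can be sketched in a few lines: normalize counts by protein length, then correlate an enzyme class's abundance with activity. The counts, lengths, and activity values below are invented, and this is not the authors' exact pipeline:

```python
# Hedged sketch of spectral-counting quantification: normalized spectral
# abundance factor (NSAF) per protein, and a Pearson correlation between a
# (hypothetical) cellobiohydrolase fraction and Avicelase activity.

def nsaf(counts, lengths):
    """NSAF_i = (count_i / length_i) / sum_j (count_j / length_j)."""
    raw = {p: counts[p] / lengths[p] for p in counts}
    total = sum(raw.values())
    return {p: v / total for p, v in raw.items()}

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented spectral counts and residue lengths for four proteins:
counts = {"CBH1": 520, "CBH2": 310, "EG1": 140, "XYN2": 95}
lengths = {"CBH1": 497, "CBH2": 447, "EG1": 437, "XYN2": 222}
abundance = nsaf(counts, lengths)

# Across four invented mixtures: total CBH fraction vs. Avicelase activity.
cbh_fraction = [0.35, 0.48, 0.61, 0.72]
avicelase = [1.1, 1.6, 2.0, 2.4]
r = pearson_r(cbh_fraction, avicelase)
```

A high r on data like this is what the abstract's "strong linear correlation" refers to; real analyses would use many more proteins and replicate runs.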

  15. U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis...

    Open Energy Info (EERE)

    Renewable Energy Technical Potentials: A GIS-Based Analysis Jump to: navigation, search OpenEI Reference LibraryAdd to library Report: U.S. Renewable Energy Technical Potentials: A...

  16. Analysis of Customer Enrollment Patterns in Time-Based Rate Programs...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Analysis of Customer Enrollment Patterns in Time-Based Rate Programs: Initial Results from the SGIG Consumer Behavior Studies (July 2013) The U.S. Department of Energy is ...

  17. MEMS-based chemical analysis systems development at Sandia National Labs.

    Office of Scientific and Technical Information (OSTI)

    (Conference) | SciTech Connect. Title: MEMS-based chemical analysis systems development at Sandia National Labs. No abstract prepared. Authors: Simonson, Robert Joseph; Manginell, Ronald Paul; Staton, Alan W.; Porter, Daniel Allen; Whiting, Joshua J.; Moorman, Matthew Wallace; Wheeler, David Roger. Publication Date: 2010-08-01. OSTI Identifier: 1024439. Report Number(s):

  18. Algorithms and tools for high-throughput geometry-based analysis of

    Office of Scientific and Technical Information (OSTI)

    crystalline porous materials (Journal Article) | SciTech Connect. Title: Algorithms and tools for high-throughput geometry-based analysis of crystalline porous materials. Authors: Willems, Thomas F.; Rycroft, Chris; Kazi, Michael; Meza, Juan Colin; Haranczyk, Maciej. Publication Date: 2012-01-01. OSTI Identifier: 1065948. DOE Contract Number:

  19. Diagnostic and Prognostic Analysis of Battery Performance & Aging based on

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Kinetic and Thermodynamic Principles | Department of Energy. Diagnostic and Prognostic Analysis of Battery Performance & Aging based on Kinetic and Thermodynamic Principles. 2012 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting. PDF: es124_gering_2012_o.pdf

  20. Topology-based Visualization and Analysis of High-dimensional Data and

    Office of Scientific and Technical Information (OSTI)

    Time-varying Data at the Extreme Scale (Conference) | SciTech Connect. Title: Topology-based Visualization and Analysis of High-dimensional Data and Time-varying Data at the Extreme Scale. Authors: Weber, Gunther H.; Morozov, Dmitriy; Beketayev, Kenes; Bell, John; Bremer, Peer-Timo; Day, Marc; Hamann, Bernd; Heine, Christian; Haranczyk,

  1. ANL/DIS/TM-40 Load Flow Analysis: Base Cases, Data, Diagrams,...

    Office of Scientific and Technical Information (OSTI)

    To simplify data collection requirements and illustrate key concepts, a representative power system in the United States - the Commonwealth Edison Company (ComEd) in northern...

  2. Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity

    Broader source: Energy.gov [DOE]

    This project will develop a model for seismicity-based reservoir characterization (SBRC) by combining rock mechanics, finite element modeling, and geo-statistical concepts to establish relationships among micro-seismicity, reservoir flow, and geomechanical characteristics.

  3. Macroalgae Analysis A National GIS-based Analysis of Macroalgae Production Potential Summary Report and Project Plan

    SciTech Connect (OSTI)

    Roesijadi, Guritno; Coleman, Andre M.; Judd, Chaeli; Van Cleve, Frances B.; Thom, Ronald M.; Buenau, Kate E.; Tagestad, Jerry D.; Wigmosta, Mark S.; Ward, Jeffrey A.

    2011-12-01

    The overall project objective is to conduct a strategic analysis to assess the state of macroalgae as a feedstock for biofuels production. The objective in FY11 is to develop a multi-year systematic national assessment to evaluate the U.S. potential for macroalgae production using a GIS-based assessment tool and a biophysical growth model developed as part of these activities. The initial model development for both resource assessment and constraints was completed and applied to the demonstration areas. The model for macroalgal growth was extended to the EEZ off the East and West Coasts of the United States, and a plan to merge the findings for an initial composite assessment was developed. In parallel, an assessment of land-based, port, and offshore infrastructure needs based on published and grey literature was conducted. Major information gaps and challenges encountered during this analysis were identified. Also conducted was an analysis of the types of local, state, and federal requirements that pertain to permitting land-based facilities and nearshore/offshore culture operations.

  4. Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Case Studies. The following case studies will be included in the HEP report. Final case studies are due January 7, 2013. Lattice Gauge Theories (Lead: Doug Toussaint); Simulations for Cosmic Frontier Experiments (Leads: Peter Nugent & Andrew Connelly); Cosmic Microwave Background Data Analysis (Lead: Julian Borrill); Cosmological Simulations (Lead: Salman Habib); Plasma Accelerator Simulation Using Laser and Particle Beam Drivers (Leads: Cameron Geddes & Frank Tsung); Community

  5. EVENT TREE ANALYSIS AT THE SAVANNAH RIVER SITE: A CASE HISTORY

    SciTech Connect (OSTI)

    Williams, R

    2009-05-25

    At the Savannah River Site (SRS), a Department of Energy (DOE) installation in west-central South Carolina, there is a unique geologic stratum at depth that has the potential to cause surface settlement resulting from a seismic event. In the past, the stratum in question has been remediated via pressure grouting; however, the benefits of remediation have always been debatable. Recently, the SRS has attempted to frame the issue in terms of risk via an event tree (or logic tree) analysis. This paper describes that analysis, including the input data required.
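    The arithmetic at the heart of an event-tree analysis is a product chain down each branch; the sketch below is a generic illustration with hypothetical numbers, not the SRS input data:

```python
# Minimal event-tree sketch: an initiating-event frequency propagated through
# conditional branch probabilities to an end-state frequency. The initiator
# frequency and branch probabilities below are invented.

def end_state_frequency(initiator_per_yr, branch_probs):
    """Frequency of one end state = initiator x product of branch probabilities."""
    f = initiator_per_yr
    for p in branch_probs:
        f *= p
    return f

# Hypothetical path: design-basis earthquake occurs (1e-4/yr), the stratum
# settles given the earthquake (0.3), and settlement damages the facility (0.5).
settlement_freq = end_state_frequency(1.0e-4, [0.3, 0.5])  # per year
```

Summing such end-state frequencies over all damage paths gives the total risk figure a logic-tree analysis reports.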

  6. Energy-water analysis of the 10-year WECC transmission planning study cases.

    SciTech Connect (OSTI)

    Tidwell, Vincent Carroll; Passell, Howard David; Castillo, Cesar; Moreland, Barbara

    2011-11-01

    In 2011 the Department of Energy's Office of Electricity embarked on a comprehensive program to assist our Nation's three primary electric interconnections with long term transmission planning. Given the growing concern over water resources in the western U.S. the Western Electricity Coordinating Council (WECC) requested assistance with integrating water resource considerations into their broader electric transmission planning. The result is a project with three overarching objectives: (1) Develop an integrated Energy-Water Decision Support System (DSS) that will enable planners in the Western Interconnection to analyze the potential implications of water stress for transmission and resource planning. (2) Pursue the formulation and development of the Energy-Water DSS through a strongly collaborative process between the Western Electricity Coordinating Council (WECC), Western Governors Association (WGA), the Western States Water Council (WSWC) and their associated stakeholder teams. (3) Exercise the Energy-Water DSS to investigate water stress implications of the transmission planning scenarios put forward by WECC, WGA, and WSWC. The foundation for the Energy-Water DSS is Sandia National Laboratories Energy-Power-Water Simulation (EPWSim) model (Tidwell et al. 2009). The modeling framework targets the shared needs of energy and water producers, resource managers, regulators, and decision makers at the federal, state and local levels. This framework provides an interactive environment to explore trade-offs, and 'best' alternatives among a broad list of energy/water options and objectives. The decision support framework is formulated in a modular architecture, facilitating tailored analyses over different geographical regions and scales (e.g., state, county, watershed, interconnection). An interactive interface allows direct control of the model and access to real-time results displayed as charts, graphs and maps. 
The framework currently supports modules for calculating water withdrawal and consumption for current and planned electric power generation; projected water demand from competing use sectors; and surface water and groundwater availability. WECC's long-range planning is organized around two target planning horizons, a 10-year and a 20-year horizon. This study supports WECC in the 10-year planning endeavor: the water implications associated with four of WECC's alternative future study cases (described below) are calculated and reported. In future phases of planning, we will work with WECC to craft study cases that aim to reduce the thermoelectric footprint of the interconnection and/or limit production in the most water-stressed regions of the West.
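    The withdrawal/consumption accounting such a module performs can be sketched as a factor table applied to a generation mix. The factors below are round illustrative numbers (gal/MWh), not EPWSim or WECC data:

```python
# Sketch of the water-accounting step: total withdrawal and consumption for a
# {(technology, cooling): MWh} generation mix. Factors are invented round
# numbers for illustration only.

WATER_FACTORS = {  # (withdrawal, consumption) in gal/MWh
    ("coal", "once-through"):  (27000.0, 300.0),
    ("coal", "recirculating"): (600.0, 500.0),
    ("ngcc", "recirculating"): (250.0, 200.0),
    ("wind", "none"):          (0.0, 0.0),
}

def portfolio_water_use(generation_mwh):
    """Sum withdrawal and consumption (gallons) over the generation mix."""
    withdrawal = consumption = 0.0
    for key, mwh in generation_mwh.items():
        w, c = WATER_FACTORS[key]
        withdrawal += w * mwh
        consumption += c * mwh
    return withdrawal, consumption

mix = {("coal", "recirculating"): 1000.0, ("wind", "none"): 500.0}
w, c = portfolio_water_use(mix)
```

Once-through cooling withdraws far more but consumes less than recirculating cooling, which is why the withdrawal/consumption distinction matters in water-stress planning.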

  7. Aminoindazole PDK1 Inhibitors: A Case Study in Fragment-Based Drug Discovery

    SciTech Connect (OSTI)

    Medina, Jesus R.; Blackledge, Charles W.; Heerding, Dirk A.; Campobasso, Nino; Ward, Paris; Briand, Jacques; Wright, Lois; Axten, Jeffrey M.

    2012-05-29

    Fragment screening of phosphoinositide-dependent kinase-1 (PDK1) in a biochemical kinase assay afforded hits that were characterized and prioritized based on ligand efficiency and binding interactions with PDK1 as determined by NMR. Subsequent crystallography and follow-up screening led to the discovery of aminoindazole 19, a potent leadlike PDK1 inhibitor with high ligand efficiency. Well-defined structure-activity relationships and protein crystallography provide a basis for further elaboration and optimization of 19 as a PDK1 inhibitor.
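    Ligand efficiency, the metric used above to prioritize fragment hits, is commonly taken as the binding free energy per heavy atom. The sketch below uses the standard approximation with IC50 standing in for Kd; the potency and atom count are illustrative, not compound 19's data:

```python
import math

# Hedged sketch of the ligand-efficiency (LE) calculation used in
# fragment-based drug discovery: LE = -dG / N_heavy, with
# dG ~ RT*ln(IC50) (IC50 treated as an approximate Kd).

def ligand_efficiency(ic50_molar, heavy_atoms, temp_k=300.0):
    R = 1.987e-3  # kcal/(mol*K)
    delta_g = R * temp_k * math.log(ic50_molar)  # negative for sub-molar IC50
    return -delta_g / heavy_atoms                # kcal/mol per heavy atom

# Hypothetical example: a 100 nM inhibitor with 20 heavy atoms.
le = ligand_efficiency(100e-9, 20)
```

Values around 0.3 kcal/mol per heavy atom or better are conventionally considered efficient, which is why a small, potent fragment-derived lead can score highly.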

  8. Eigenmode analysis of a high-gain free-electron laser based on a transverse

    Office of Scientific and Technical Information (OSTI)

    gradient undulator (Journal Article) | SciTech Connect. Title: Eigenmode analysis of a high-gain free-electron laser based on a transverse gradient undulator. Authors: Baxevanis, Panagiotis; Huang, Zhirong; Ruth, Ronald; Schroeder, Carl B. Publication Date: 2015-01-27. OSTI Identifier: 1181185. Grant/Contract Number: AC02-05CH11231; AC02-76SF00515

  9. Topology-based Visualization and Analysis of High-dimensional Data and

    Office of Scientific and Technical Information (OSTI)

    Time-varying Data at the Extreme Scale (Conference) | SciTech Connect. Title: Topology-based Visualization and Analysis of High-dimensional Data and Time-varying Data at the Extreme Scale.

  10. Appendix E: Other NEMS-MP results for the base case and scenarios.

    SciTech Connect (OSTI)

    Plotkin, S. E.; Singh, M. K.; Energy Systems

    2009-12-03

    The NEMS-MP model generates numerous results for each run of a scenario. (This model is the integrated National Energy Modeling System [NEMS] version used for the Multi-Path Transportation Futures Study [MP].) This appendix examines additional findings beyond the primary results reported in the Multi-Path Transportation Futures Study: Vehicle Characterization and Scenario Analyses (Reference 1). These additional results are provided in order to help further illuminate some of the primary results. Specifically discussed in this appendix are: (1) Energy use results for light vehicles (LVs), including details about the underlying total vehicle miles traveled (VMT), the average vehicle fuel economy, and the volumes of the different fuels used; (2) Resource fuels and their use in the production of ethanol, hydrogen (H2), and electricity; (3) Ethanol use in the scenarios (i.e., the ethanol consumption in E85 vs. other blends, the percent of travel by flex fuel vehicles on E85, etc.); (4) Relative availability of E85 and H2 stations; (5) Fuel prices; (6) Vehicle prices; and (7) Consumer savings. These results are discussed as follows: (1) The three scenarios (Mixed, (P)HEV & Ethanol, and H2 Success) when assuming vehicle prices developed through literature review; (2) The three scenarios with vehicle prices that incorporate the achievement of the U.S. Department of Energy (DOE) program vehicle cost goals; (3) The three scenarios with 'literature review' vehicle prices, plus vehicle subsidies; and (4) The three scenarios with 'program goals' vehicle prices, plus vehicle subsidies. The four versions or cases of each scenario are referred to as: Literature Review No Subsidies, Program Goals No Subsidies, Literature Review with Subsidies, and Program Goals with Subsidies. Two additional points must be made here. First, none of the results presented for LVs in this section include Class 2B trucks. Results for this class are included occasionally in Reference 1.
They represent a small, though noticeable, segment of the 'LV plus 2B' market (e.g., a little more than 3% of today's energy use in that market). We generally do not include them in this discussion, simply because it requires additional effort to combine the NEMS-MP results for them with the results for the other LVs. (Where there is an exception, we will indicate so.) Second, where reference is made to E85, the ethanol content is actually 74%. The Energy Information Administration (EIA) assumes that, to address cold-starting issues, the percent of ethanol in E85 will vary seasonally. The EIA uses an annual average ethanol content of 74% in its forecasts. That assumption is maintained in the NEMS-MP scenario runs.

  11. Framework for the Economic Analysis of Hybrid Systems Based on Exergy Consumption

    SciTech Connect (OSTI)

    Cristian Rabiti; Robert S. Cherry; Wesley R. Deason; Piyush Sabharwall; Shannon M. Bragg-Sitton; Richard D. Boardman

    2014-08-01

    Starting from an overview of the dynamic behavior of the electricity market, the need to introduce energy users that provide a damping capability to the system is derived, and a qualitative analysis of the impact of uncertainty on both the demand and supply sides is performed. An introduction to investment analysis methodologies based on discounted cash flow follows, and the work concludes with the illustration and application of exergonomic principles to provide a sound methodology for the cost accounting of plant components to be used in the cash flow analysis.
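    The discounted-cash-flow step referenced above is simple enough to state in code; the discount rate and cash flows below are illustrative, not the report's figures:

```python
# Minimal discounted-cash-flow sketch: net present value (NPV) of a stream of
# yearly net cash flows, with the time-0 capital outlay as a negative entry.
# Rate and amounts are invented for illustration.

def npv(rate, cash_flows):
    """cash_flows[t] is the net cash flow at the end of year t (t=0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical hybrid-plant investment: $1000 outlay, $180/yr for 10 years.
project = [-1000.0] + [180.0] * 10
value = npv(0.07, project)
```

A positive NPV at the chosen discount rate is the usual acceptance criterion; exergonomic cost accounting feeds the per-component costs into exactly this kind of cash-flow evaluation.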

  12. Dynamic Slope Stability Analysis of Mine Tailing Deposits: the Case of Raibl Mine

    SciTech Connect (OSTI)

    Roberto, Meriggi; Marco, Del Fabbro; Erica, Blasone; Erica, Zilli

    2008-07-08

    Over the last few years, many embankments and levees have collapsed during strong earthquakes or floods. In the Friuli Venezia Giulia Region (North-Eastern Italy), the main source of this type of risk is a slag deposit of about 2x10^6 m^3 deriving from galena and lead mining activity until 1991 in the village of Raibl. For the final remedial action plan, several in situ tests were performed: five boreholes equipped with piezometers, four CPTE tests, and some geophysical tests with different approaches (refraction, ReMi and HVSR). Laboratory tests were conducted on the collected samples: geotechnical classification, triaxial compression tests, and constant-head permeability tests in a triaxial cell. Pressure plate tests were also done on unsaturated slag to evaluate the characteristic soil-water curve, useful for transient seepage analysis. A seepage analysis was performed in order to obtain the maximum pore water pressures during the intense rainfall event which hit the area on 29th August 2003. The results highlight that the slag's low permeability prevents the infiltration of rainwater, which instead seeps easily through the boundary levees built with coarse materials. For this reason, pore water pressures inside the deposit are not particularly influenced by rainfall intensity and frequency. Seismic stability analysis was performed with both the pseudo-static method, coupled with Newmark's method, and dynamic methods, using as the design earthquake the one registered in Tolmezzo (Udine) on 6th May 1976. The small reduction of safety factors and the development of very small cumulative displacements show that the stability of the embankments is assured even if an earthquake of magnitude 6.4 and a daily rainfall of 141.6 mm occur at the same time.
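    The pseudo-static method mentioned above can be sketched for the simplest geometry, a planar sliding block; the slope angle, friction angle, and seismic coefficient below are illustrative, not the Raibl deposit parameters:

```python
import math

# Pseudo-static sliding-block sketch: a horizontal seismic coefficient k_h
# reduces the effective normal force and adds a driving force, lowering the
# factor of safety (FS). All input values are invented for illustration.

def pseudo_static_fs(cohesion, weight, slope_deg, phi_deg, kh):
    """FS = (c + N*tan(phi)) / T for a planar block; forces per unit width."""
    b, p = math.radians(slope_deg), math.radians(phi_deg)
    normal = weight * math.cos(b) - kh * weight * math.sin(b)
    driving = weight * math.sin(b) + kh * weight * math.cos(b)
    return (cohesion + normal * math.tan(p)) / driving

fs_static = pseudo_static_fs(0.0, 100.0, 20.0, 30.0, kh=0.0)   # = tan(phi)/tan(beta)
fs_seismic = pseudo_static_fs(0.0, 100.0, 20.0, 30.0, kh=0.15)
```

When the seismic FS drops below 1, Newmark's method integrates the excess acceleration over time to estimate the cumulative displacement, which is the quantity the study above found to be very small.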

  13. Station Blackout: A case study in the interaction of mechanistic and probabilistic safety analysis

    SciTech Connect (OSTI)

    Curtis Smith; Diego Mandelli; Cristian Rabiti

    2013-11-01

    The ability to better characterize and quantify safety margins is important to improved decision making about nuclear power plant design, operation, and plant life extension. As research and development (R&D) in the Light Water Reactor Sustainability (LWRS) Program and other collaborative efforts yield new data, sensors, and improved scientific understanding of the physical processes that govern the aging and degradation of plant SSCs, needs and opportunities to better optimize plant safety and performance will become known. The purpose of the Risk Informed Safety Margin Characterization (RISMC) Pathway R&D is to support plant decisions for risk-informed margin management, with the aim of improving the economics and reliability, and sustaining the safety, of current NPPs. In this paper, we describe the RISMC analysis process, illustrating how mechanistic and probabilistic approaches are combined in order to estimate a safety margin. We use the scenario of a “station blackout,” wherein offsite and onsite power are lost, thereby challenging plant safety systems. We describe the RISMC approach, illustrate the station blackout modeling, and contrast this with traditional risk analysis modeling for this type of accident scenario.
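    Stripped to its core, the margin estimate pairs many mechanistic "load" simulations against a fixed "capacity" limit; the distribution and limit below are illustrative placeholders, not the paper's station-blackout results:

```python
import random

# Minimal sketch of the RISMC safety-margin idea: sample a physics-driven
# load (e.g., a peak temperature from many simulated scenarios), compare it
# to a capacity (the safety limit), and report the fraction of safe runs.
# The Gaussian stand-in and the limit value are invented.

def safety_margin_probability(n=50_000, limit=1478.0, seed=1):
    rng = random.Random(seed)
    safe = 0
    for _ in range(n):
        peak_temp = rng.gauss(1000.0, 150.0)  # stand-in for a coupled physics run
        if peak_temp < limit:
            safe += 1
    return safe / n

p_safe = safety_margin_probability()
```

In a real RISMC study each sample would be a full deterministic plant simulation driven by probabilistically sampled scenario parameters (e.g., time to power recovery in a station blackout).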

  14. An analysis of uranium dispersal and health effects using a Gulf War case study.

    SciTech Connect (OSTI)

    Marshall, Albert Christian

    2005-07-01

    The study described in this report used mathematical modeling to estimate health risks from exposure to depleted uranium (DU) during the 1991 Gulf War for both U.S. troops and nearby Iraqi civilians. The analysis found that the risks of DU-induced leukemia or birth defects are far too small to result in an observable increase in these health effects among exposed veterans or Iraqi civilians. Only a few veterans in vehicles accidentally struck by U.S. DU munitions are predicted to have inhaled sufficient quantities of DU particulate to incur any significant health risk (i.e., the possibility of temporary kidney damage from the chemical toxicity of uranium and about a 1% chance of fatal lung cancer). The health risk to all downwind civilians is predicted to be extremely small. Recommendations for monitoring are made for certain exposed groups. Although the study found fairly large calculational uncertainties, the models developed and used are generally valid. The analysis was also used to assess potential uranium health hazards for workers in the weapons complex. No illnesses are projected for uranium workers following standard guidelines; nonetheless, some research suggests that more conservative guidelines should be considered.

  15. Lipid-Based Nanodiscs as Models for Studying Mesoscale Coalescence A Transport Limited Case

    SciTech Connect (OSTI)

    Hu, Andrew; Fan, Tai-Hsi; Katsaras, John; Xia, Yan; Li, Ming; Nieh, Mu-Ping

    2014-01-01

    Lipid-based nanodiscs (bicelles) are able to form in mixtures of long- and short-chain lipids. Initially, they are of uniform size but grow upon dilution. Previously, nanodisc growth kinetics have been studied using time-resolved small angle neutron scattering (SANS), a technique which is not well suited for probing their change in size immediately after dilution. To address this, we have used dynamic light scattering (DLS), a technique which permits the collection of useful data in a short span of time after dilution of the system. The DLS data indicate that the negatively charged lipids in nanodiscs play a significant role in disc stability and growth. Specifically, the charged lipids are most likely drawn out from the nanodiscs into solution, thereby reducing interparticle repulsion and enabling the discs to grow. We describe a population balance model, which takes into account Coulombic interactions and adequately predicts the initial growth of nanodiscs with a single parameter, i.e., the surface potential. The results presented here strongly support the notion that the disc coalescence rate depends on nanoparticle charge density. The present system containing low-polydispersity lipid nanodiscs serves as a good model for understanding how charged discoidal micelles coalesce.
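    The charge-damped coalescence picture can be caricatured in a few lines; this is a toy version, not the authors' population balance model, and every constant below is invented:

```python
import math

# Toy sketch: a Smoluchowski-like coalescence rate constant reduced by an
# electrostatic barrier that grows with surface potential, plus one Euler
# step of dn/dt = -k*n^2 for the disc number density.

def coalescence_rate(k0, surface_potential_mv, thermal_mv=25.7):
    """Higher |psi| -> stronger repulsion -> slower coalescence."""
    return k0 * math.exp(-abs(surface_potential_mv) / thermal_mv)

def euler_step(n, k, dt):
    """Advance the number density n by one explicit Euler step."""
    return n - k * n * n * dt

k_charged = coalescence_rate(1.0e-18, 40.0)  # charged lipid retained in the disc
k_diluted = coalescence_rate(1.0e-18, 5.0)   # charge depleted upon dilution
n_next = euler_step(1.0e17, k_diluted, 1.0)
```

The ordering k_diluted > k_charged captures the abstract's mechanism: losing charged lipid to solution lowers the barrier and lets discs grow faster.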

  16. Technology Solutions Case Study: Apartment Compartmentalization with an Aerosol-Based Sealing Process

    SciTech Connect (OSTI)

    2015-07-01

    Air sealing of building enclosures is a difficult and time-consuming process. Current methods in new construction require laborers to physically locate small and sometimes large holes in multiple assemblies and then manually seal each of them. This research study by the Building America team Consortium for Advanced Residential Buildings (CARB) demonstrated the automated air sealing and compartmentalization of buildings through the use of an aerosolized sealant developed by the Western Cooling Efficiency Center at the University of California, Davis. CARB demonstrated this new technology application in a multifamily building in Queens, NY. The effectiveness of the sealing process was evaluated by three methods: air leakage testing of the overall apartment before and after sealing, point-source testing of individual leaks, and pressure measurements in the walls of the target apartment during sealing. Aerosolized sealing was successful by several measures in this study. Many individual leaks that are labor-intensive to address separately were well sealed by the aerosol particles. In addition, many diffuse leaks that are difficult to identify and treat were also sealed. The aerosol-based sealing process resulted in an average reduction of 71% in air leakage across three apartments and an average apartment airtightness of 0.08 CFM50/SF of enclosure area.

  17. IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases

    SciTech Connect (OSTI)

    Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

    2012-11-01

    Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Gas-cooled Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.
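    The forward propagation such exercises require can be sketched with a toy surrogate in place of a reactor physics code; the linear "model," its coefficients, and the input uncertainties below are all invented:

```python
import random
import statistics

# Minimal sketch of sampling-based uncertainty propagation: draw uncertain
# inputs (a cross section and a manufacturing tolerance), evaluate a model,
# and report output statistics. The linear surrogate is a stand-in only.

def surrogate_keff(sigma_a, enrichment):
    return 1.0 + 0.5 * (enrichment - 0.05) - 2.0 * (sigma_a - 0.01)

def propagate(n=20_000, seed=7):
    rng = random.Random(seed)
    samples = [surrogate_keff(rng.gauss(0.01, 0.0005), rng.gauss(0.05, 0.001))
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_k, sd_k = propagate()
```

For this linear toy, the output standard deviation should match the quadrature sum of the scaled input uncertainties (about 1.1e-3 here), which is a useful sanity check for any propagation machinery.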

  18. 2007 Wholesale Power Rate Case Initial Proposal : Risk Analysis Study Documentation.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2005-11-01

    The RiskMod Model is comprised of a set of risk simulation models, collectively referred to as RiskSim; a set of computer programs that manages data referred to as Data Management Procedures; and RevSim, a model that calculates net revenues. RiskMod interacts with the AURORA Model, the RAM2007, and the ToolKit Model during the process of performing the Risk Analysis Study. AURORA is the computer model being used to perform the Market Price Forecast Study (see Market Price Forecast Study, WP-07-E-BPA-03); the RAM2007 is the computer model being used to calculate rates (see Wholesale Power Rate Development Study, WP-07-E-BPA-05); and the ToolKit is the computer model being used to develop the risk mitigation package that achieves BPA's 92.6 percent TPP standard (see Section 3 in the Risk Analysis Study, WP-07-E-BPA-04). Variations in monthly loads, resources, natural gas prices, forward market electricity prices, transmission expenses, and aluminum smelter benefit payments are simulated in RiskSim. Monthly spot market electricity prices for the simulated loads, resources, and natural gas prices are estimated by the AURORA Model. Data Management Procedures facilitate the format and movement of data that flow to and/or from RiskSim, AURORA, and RevSim. RevSim estimates net revenues using risk data from RiskSim, spot market electricity prices from AURORA, loads and resources data from the Load Resource Study, WP-07-E-BPA-01, various revenues from the Revenue Forecast component of the Wholesale Power Rate Development Study, WP-07-E-BPA-05, and rates and expenses from the RAM2007. Annual average surplus energy revenues, purchased power expenses, and section 4(h)(10)(C) credits calculated by RevSim are used in the Revenue Forecast and the RAM2007. Heavy Load Hour (HLH) and Light Load Hour (LLH) surplus and deficit energy values from RevSim are used in the Transmission Expense Risk Model. 
Net revenues estimated for each simulation by RevSim are input into the ToolKit Model to develop the risk mitigation package that achieves BPA's 92.6 percent TPP standard. The processes and interaction between each of the models and studies are depicted in Graph 1.

  19. 2007 Wholesale Power Rate Case Final Proposal : Risk Analysis Study Documentation.

    SciTech Connect (OSTI)

    United States. Bonneville Power Administration.

    2006-07-01

    The RiskMod Model is comprised of a set of risk simulation models, collectively referred to as RiskSim; a set of computer programs that manages data referred to as Data Management Procedures; and RevSim, a model that calculates net revenues. RiskMod interacts with the AURORA Model, the RAM2007, and the ToolKit Model during the process of performing the Risk Analysis Study. AURORA is the computer model being used to perform the Market Price Forecast Study (see Market Price Forecast Study, WP-07-FS-BPA-03); the RAM2007 is the computer model being used to calculate rates (see Wholesale Power Rate Development Study, WP-07-FS-BPA-05); and the ToolKit is the computer model being used to develop the risk mitigation package that achieves BPA's 92.6 percent TPP standard (see Section 3 in the Risk Analysis Study, WP-07-FS-BPA-04). Variations in monthly loads, resources, natural gas prices, forward market electricity prices, transmission expenses, and aluminum smelter benefit payments are simulated in RiskSim. Monthly spot market electricity prices for the simulated loads, resources, and natural gas prices are estimated by the AURORA Model. Data Management Procedures facilitate the format and movement of data that flow to and/or from RiskSim, AURORA, and RevSim. RevSim estimates net revenues using risk data from RiskSim, spot market electricity prices from AURORA, loads and resources data from the Load Resource Study, WP-07-FS-BPA-01, various revenues from the Revenue Forecast component of the Wholesale Power Rate Development Study, WP-07-FS-BPA-05, and rates and expenses from the RAM2007. Annual average surplus energy revenues, purchased power expenses, and section 4(h)(10)(C) credits calculated by RevSim are used in the Revenue Forecast and the RAM2007. Heavy Load Hour (HLH) and Light Load Hour (LLH) surplus and deficit energy values from RevSim are used in the Transmission Expense Risk Model.
Net revenues estimated for each simulation by RevSim are input into the ToolKit Model to develop the risk mitigation package that achieves BPA's 92.6 percent TPP standard. The processes and interaction between each of the models and studies are depicted in Graph 1.
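The RiskSim/RevSim/ToolKit chain described above amounts to Monte Carlo simulation of net revenues followed by a check against a Treasury Payment Probability (TPP) target. A minimal sketch of that idea, with entirely hypothetical revenue and expense distributions and reserve levels (the actual BPA models derive these from loads, resources, and market prices):

```python
import random

random.seed(42)

N_SIMS = 3000  # number of simulated rate-period outcomes (hypothetical)

def simulate_net_revenue():
    """Toy stand-in for one RevSim draw: surplus revenues minus expenses.

    The distributions and $M figures are invented for illustration only.
    """
    surplus_revenue = random.gauss(250.0, 80.0)
    power_expense = random.gauss(180.0, 60.0)
    return surplus_revenue - power_expense

def treasury_payment_probability(reserves, net_revenues):
    """Fraction of simulations in which reserves plus net revenue stay non-negative."""
    ok = sum(1 for nr in net_revenues if reserves + nr >= 0.0)
    return ok / len(net_revenues)

net_revenues = [simulate_net_revenue() for _ in range(N_SIMS)]
tpp = treasury_payment_probability(reserves=50.0, net_revenues=net_revenues)
print(f"Estimated TPP: {tpp:.3f}")
```

A ToolKit-style risk mitigation step would then adjust rates or reserves until the estimated TPP meets the 92.6 percent standard.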

  20. Distributed energy resources in practice: A case study analysis and validation of LBNL's customer adoption model

    SciTech Connect (OSTI)

    Bailey, Owen; Creighton, Charles; Firestone, Ryan; Marnay, Chris; Stadler, Michael

    2003-02-01

    This report describes a Berkeley Lab effort to model the economics and operation of small-scale (<500 kW) on-site electricity generators based on real-world installations at several example customer sites. This work builds upon the previous development of the Distributed Energy Resource Customer Adoption Model (DER-CAM), a tool designed to find the optimal combination of installed equipment and idealized operating schedule that would minimize the site's energy bills, given performance and cost data on available DER technologies, utility tariffs, and site electrical and thermal loads over a historic test period, usually a recent year. This study offered the first opportunity to apply DER-CAM in a real-world setting and evaluate its modeling results. DER-CAM has three possible applications: first, it can be used to guide choices of equipment at specific sites, or provide general solutions for example sites and propose good choices for sites with similar circumstances; second, it can additionally provide the basis for the operations of installed on-site generation; and third, it can be used to assess the market potential of technologies by anticipating which kinds of customers might find various technologies attractive. A list of approximately 90 DER candidate sites was compiled, and each site's DER characteristics and willingness to volunteer information were assessed, producing detailed information on about 15 sites, of which five were analyzed in depth. The five sites were not intended to provide a random sample; rather, they were chosen to provide some diversity of business activity, geography, and technology. More importantly, they were chosen in the hope of finding examples of true business decisions made based on somewhat sophisticated analyses, and pilot or demonstration projects were avoided.
Information on the benefits and pitfalls of implementing a DER system was also presented from an additional ten sites including agriculture, education, health care, airport, and manufacturing facilities.

  1. Natural time analysis of critical phenomena: The case of pre-fracture electromagnetic emissions

    SciTech Connect (OSTI)

    Potirakis, S. M.; Karadimitrakis, A.; Eftaxias, K.

    2013-06-15

    Criticality of complex systems reveals itself in various ways. One way to monitor a system at critical state is to analyze its observable manifestations using the recently introduced method of natural time. Pre-fracture electromagnetic (EM) emissions, in agreement with laboratory experiments, have been consistently detected in the MHz band prior to significant earthquakes. It has been proposed that these emissions stem from the fracture of the heterogeneous materials surrounding the strong entities (asperities) distributed along the fault, preventing the relative slipping. It has also been proposed that the fracture of heterogeneous material could be described in analogy to the critical phase transitions in statistical physics. In this work, natural time analysis is for the first time applied to pre-fracture MHz EM signals, revealing their critical nature. Seismicity and pre-fracture EM emissions should be two sides of the same coin concerning the earthquake generation process. Therefore, we also examine the corresponding foreshock seismic activity, as another manifestation of the same complex system at critical state. We conclude that the foreshock seismicity data present criticality features as well.
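The core quantity in natural time analysis is the variance kappa_1 of the "natural time" chi_k = k/N, weighted by each event's normalized energy. A minimal sketch (the choice of event energies, e.g. squared EM amplitudes, and the criticality threshold kappa_1 ≈ 0.070 come from the natural time literature, not from this abstract):

```python
def kappa1(energies):
    """kappa_1 = <chi^2> - <chi>^2 in natural time chi_k = k/N.

    Events are weighted by p_k = Q_k / sum(Q); values of kappa_1
    approaching ~0.070 are taken as a signature of criticality.
    """
    n = len(energies)
    total = float(sum(energies))
    p = [q / total for q in energies]
    chi = [(k + 1) / n for k in range(n)]
    mean = sum(pk * ck for pk, ck in zip(p, chi))
    mean_sq = sum(pk * ck * ck for pk, ck in zip(p, chi))
    return mean_sq - mean * mean

# Sanity check: uniform event energies give the variance of a uniform
# grid on (0, 1], which tends to 1/12 ~ 0.0833 for large N.
uniform = kappa1([1.0] * 1000)
print(f"kappa_1 for uniform events: {uniform:.4f}")
```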

  2. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    SciTech Connect (OSTI)

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    2014-01-01

    Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. We characterized these five scenarios into three specific threat categories affecting confidentiality, integrity, and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  3. U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis

    Broader source: Energy.gov [DOE]

    The National Renewable Energy Laboratory (NREL) routinely estimates the technical potential of specific renewable electricity generation technologies. These are technology-specific estimates of energy generation potential based on renewable resource availability and quality, technical system performance, topographic limitations, and environmental and land-use constraints only. The estimates do not consider (in most cases) economic or market constraints, and therefore do not represent a level of renewable generation that might actually be deployed. Technical potential estimates for six different renewable energy technologies were calculated by NREL, and methods and results for several other renewable technologies from previously published reports are also presented.

  4. Waste-to-wheel analysis of anaerobic-digestion-based renewable natural gas pathways with the GREET model.

    SciTech Connect (OSTI)

    Han, J.; Mintz, M.; Wang, M.

    2011-12-14

    In 2009, manure management accounted for 2,356 Gg or 107 billion standard cubic ft of methane (CH₄) emissions in the United States, equivalent to 0.5% of U.S. natural gas (NG) consumption. Owing to the high global warming potential of methane, capturing and utilizing this methane source could reduce greenhouse gas (GHG) emissions. The extent of that reduction depends on several factors - most notably, how much of this manure-based methane can be captured, how much GHG is produced in the course of converting it to vehicular fuel, and how much GHG was produced by the fossil fuel it might displace. A life-cycle analysis was conducted to quantify these factors and, in so doing, assess the impact of converting methane from animal manure into renewable NG (RNG) and utilizing the gas in vehicles. Several manure-based RNG pathways were characterized in the GREET (Greenhouse gases, Regulated Emissions, and Energy use in Transportation) model, and their fuel-cycle energy use and GHG emissions were compared to petroleum-based pathways as well as to conventional fossil NG pathways. Results show that despite increased total energy use, both fossil fuel use and GHG emissions decline for most RNG pathways as compared with fossil NG and petroleum. However, GHG emissions for RNG pathways are highly dependent on the specifics of the reference case, as well as on the process energy emissions and methane conversion factors assumed for the RNG pathways. The most critical factors are the share of flared controllable CH₄ and the quantity of CH₄ lost during NG extraction in the reference case, the magnitude of N₂O lost in the anaerobic digestion (AD) process and in AD residue, and the amount of carbon sequestered in AD residue. In many cases, data for these parameters are limited and uncertain. Therefore, more research is needed to gain a better understanding of the range and magnitude of environmental benefits from converting animal manure to RNG via AD.
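The pathway comparison described above reduces to stage-by-stage GHG accounting with a credit for methane that the reference case would otherwise have released. A sketch with invented g CO2e/MJ figures (GREET supplies the real stage intensities and credits):

```python
def wtw_ghg(production, transport, combustion, avoided_ch4_credit=0.0):
    """Well-to-wheel GHG intensity in g CO2e/MJ (illustrative accounting).

    The RNG pathway earns a credit for methane that the reference manure
    management case would otherwise have vented or flared.
    """
    return production + transport + combustion - avoided_ch4_credit

# All numbers below are hypothetical, for illustration only.
fossil_ng = wtw_ghg(production=10.0, transport=3.0, combustion=56.0)
rng = wtw_ghg(production=18.0, transport=4.0, combustion=56.0,
              avoided_ch4_credit=45.0)
print(f"fossil NG: {fossil_ng} g/MJ, RNG: {rng} g/MJ")
```

Note how the conclusion flips with the credit: RNG has higher process emissions, yet a lower net intensity once avoided methane is counted, which is why the reference-case assumptions dominate the result.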

  5. Approach to proliferation risk assessment based on multiple objective analysis framework

    SciTech Connect (OSTI)

    Andrianov, A.; Kuptsov, I.

    2013-07-01

    The approach to the assessment of proliferation risk using the methods of multi-criteria decision making and multi-objective optimization is presented. The approach allows taking into account the specific features of the national nuclear infrastructure and possible proliferation strategies (motivations, intentions, and capabilities). Three examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (highly enriched uranium) production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Secondly, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materials circulating in nuclear fuel cycle facilities and thermal reactors. Thirdly, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.
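The simplest multi-criteria aggregation behind such comparisons is a weighted sum of normalized indicators. The indicator names, values, and weights below are hypothetical; the paper's framework is a full multi-objective optimization, not this one-liner:

```python
def weighted_score(indicators, weights):
    """Additive multi-criteria aggregation; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * indicators[k] for k in weights)

# Hypothetical normalized attractiveness indicators in [0, 1]
scenario_a = {"material_quality": 0.9, "detectability": 0.3, "cost": 0.5}
scenario_b = {"material_quality": 0.4, "detectability": 0.7, "cost": 0.2}
weights = {"material_quality": 0.5, "detectability": 0.3, "cost": 0.2}

score_a = weighted_score(scenario_a, weights)
score_b = weighted_score(scenario_b, weights)
print(score_a, score_b)
```

Ranking scenarios by such scores, then stress-testing the ranking against the weights, is the basic pattern of multi-criteria proliferation-risk comparison.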

  6. Algorithms and tools for high-throughput geometry-based analysis of crystalline porous materials

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Thomas F. Willems, Chris H. Rycroft, Michaeel Kazi, Juan C. Meza, Maciej Haranczyk, Microporous Mesoporous Mater., 149, 134-141 (2012). DOI: 10.1016/j.micromeso.2011.08.020. Abstract: Crystalline porous materials have a variety of uses, such as for catalysis and

  7. FERC's acceptance of market-based pricing: An antitrust analysis. [Federal Energy Regulatory Commission

    SciTech Connect (OSTI)

    Harris, B.C.; Frankena, M.W.

    1992-06-01

    In large part, FERC's determination of market power is based on an analysis that focuses on the ability of power suppliers to 'foreclose' other potential power suppliers by withholding transmission access to the buyer. The authors believe that this analysis is flawed because the conditions it considers are neither necessary nor sufficient for the existence of market power. That is, it is possible that market-based rates can be subject to market power even if no transmission supplier has the ability to foreclose some power suppliers; conversely, it is possible that no market power exists despite the ability to foreclose other suppliers. This paper provides a critical analysis of FERC's market-power determinations. The concept of market power is defined and its relationship to competition is discussed in Section 1, while a framework for evaluating the existence of market power is presented in Section 2. In Section 3, FERC's recent order in Terra Comfort is examined using this framework. A brief preview of FERC's order in TECO Power Services comprises Section 4. Overall conclusions are presented in Section 5.

  8. Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based Stochastic Analysis of Injection-induced Seismicity; 2010 Geothermal Technology Program Peer Review Report

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    4.5.7 Analysis of Geothermal Reservoir Stimulation Using Geomechanics-based Stochastic Analysis of Injection-induced Seismicity. Presentation Number: 027. Investigator: Ghassemi, Ahmad (Texas A&M University). Objectives: To develop a model for seismicity-based reservoir characterization (SBRC) by combining rock mechanics, finite element modeling, and geostatistical concepts to establish relationships between microseismicity, reservoir flow and geomechanical characteristics. Average Overall Score:

  9. Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

    System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.

  10. Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    SciTech Connect (OSTI)

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

    System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
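The calibration loop described above (replace the expensive code with a fast surrogate, then sample the parameter posterior with MCMC) can be sketched with a toy Metropolis sampler. Here the "emulator" is trivially exact for the toy model; a real GP or FFGP surrogate would be trained on a handful of expensive code runs, and all numbers are illustrative:

```python
import math
import random

random.seed(0)

# Toy "code": pressure-drop-like response y = f * v^2, true f = 0.02.
def expensive_code(f, v):
    return f * v ** 2

# Cheap emulator standing in for a trained GP surrogate (exact here).
def emulator(f, v):
    return f * v ** 2

# Synthetic noisy observations at a few flow velocities
SIGMA = 0.05
data = [(v, expensive_code(0.02, v) + random.gauss(0, SIGMA))
        for v in (5, 10, 15, 20)]

def log_post(f):
    """Log-posterior with a flat prior on (0, 0.1) and Gaussian likelihood."""
    if not (0.0 < f < 0.1):
        return -math.inf
    sse = sum((y - emulator(f, v)) ** 2 for v, y in data)
    return -sse / (2 * SIGMA ** 2)

# Metropolis random-walk sampling of the friction factor
f_cur, samples = 0.05, []
for _ in range(20000):
    f_prop = f_cur + random.gauss(0, 0.002)
    if math.log(random.random()) < log_post(f_prop) - log_post(f_cur):
        f_cur = f_prop
    samples.append(f_cur)

posterior_mean = sum(samples[5000:]) / len(samples[5000:])
print(f"posterior mean friction factor: {posterior_mean:.4f}")
```

The point of the emulator is that the 20,000 posterior evaluations above would be infeasible if each required a full system-code run.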

  11. Methods for simulation-based analysis of fluid-structure interaction.

    SciTech Connect (OSTI)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
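The POD step can be illustrated with the method of snapshots: form the snapshot correlation matrix, extract its dominant eigenpair, and check how much "energy" the first mode captures. The snapshot data below are synthetic (a single sine mode with varying amplitude plus noise), standing in for fields from an ALE simulation:

```python
import math
import random

random.seed(1)

# Toy snapshot set: one spatial mode with varying amplitude plus noise
n_points, n_snaps = 50, 8
base = [math.sin(2 * math.pi * i / n_points) for i in range(n_points)]
snaps = [[(1.0 + 0.1 * j) * b + random.gauss(0, 0.01) for b in base]
         for j in range(n_snaps)]

# Method of snapshots: correlation matrix C[i][j] = <u_i, u_j>
C = [[sum(a * b for a, b in zip(snaps[i], snaps[j])) for j in range(n_snaps)]
     for i in range(n_snaps)]

# Power iteration for the dominant eigenpair of C
v = [1.0] * n_snaps
for _ in range(200):
    w = [sum(C[i][j] * v[j] for j in range(n_snaps)) for i in range(n_snaps)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(n_snaps))
          for i in range(n_snaps))

# Fraction of total snapshot energy captured by the first POD mode
captured = lam / sum(C[i][i] for i in range(n_snaps))
print(f"energy captured by first POD mode: {captured:.4f}")
```

A Galerkin ROM then projects the governing equations onto the few modes that capture most of this energy, which is what makes parametric studies affordable.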

  12. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    SciTech Connect (OSTI)

    Farlotti, M.; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
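Once Monte Carlo transport has tallied a fission matrix, the eigenvalue and fission source follow from a simple power iteration on that small matrix. The 4-region matrix below is invented for illustration; in practice each entry would be a Monte Carlo tally of fission neutrons produced in region i per fission neutron born in region j:

```python
# Hypothetical fission matrix for 4 weakly coupled assembly regions
F = [
    [0.60, 0.15, 0.02, 0.00],
    [0.15, 0.55, 0.15, 0.02],
    [0.02, 0.15, 0.55, 0.15],
    [0.00, 0.02, 0.15, 0.60],
]

def power_iteration(F, iters=500):
    """Dominant eigenpair of the fission matrix: k-effective and source shape."""
    n = len(F)
    s = [1.0 / n] * n   # initial fission source, normalized to 1
    k = 1.0
    for _ in range(iters):
        new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(new)            # eigenvalue estimate (source sums to 1)
        s = [x / k for x in new]
    return k, s

k_eff, source = power_iteration(F)
print(k_eff, source)
```

Because the matrix is tiny compared with the transport problem, many such iterations are essentially free, which is what makes the convergence diagnostics described in the abstract practical.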

  13. Analysis of FEL-based CeC amplification at high gain limit

    SciTech Connect (OSTI)

    Wang, G.; Litvinenko, V.; Jing, Y.

    2015-05-03

    An analysis of a Coherent electron Cooling (CeC) amplifier based on 1D Free Electron Laser (FEL) theory was previously performed with an exact solution of the dispersion relation, assuming electrons having a Lorentzian energy distribution. At the high gain limit, the asymptotic behavior of the FEL amplifier can be better understood by Taylor expanding the exact solution of the dispersion relation with respect to the detuning parameter. In this work, we make a quadratic expansion of the dispersion relation for the Lorentzian energy distribution and investigate how longitudinal space charge and electrons’ energy spread affect the FEL amplification process.

  14. Global Assessment of Hydrogen Technologies – Tasks 3 & 4 Report Economic, Energy, and Environmental Analysis of Hydrogen Production and Delivery Options in Select Alabama Markets: Preliminary Case Studies

    SciTech Connect (OSTI)

    Fouad, Fouad H.; Peters, Robert W.; Sisiopiku, Virginia P.; Sullivan Andrew J.; Gillette, Jerry; Elgowainy, Amgad; Mintz, Marianne

    2007-12-01

    This report documents a set of case studies developed to estimate the cost of producing, storing, delivering, and dispensing hydrogen for light-duty vehicles for several scenarios involving metropolitan areas in Alabama. While the majority of the scenarios focused on centralized hydrogen production and pipeline delivery, alternative delivery modes were also examined. Although Alabama was used as the case study for this analysis, the results provide insights into the unique requirements for deploying hydrogen infrastructure in smaller urban and rural environments that lie outside the DOE’s high priority hydrogen deployment regions. Hydrogen production costs were estimated for three technologies – steam-methane reforming (SMR), coal gasification, and thermochemical water-splitting using advanced nuclear reactors. In all cases examined, SMR has the lowest production cost for the demands associated with metropolitan areas in Alabama. Although other production options may be less costly for larger hydrogen markets, these were not examined within the context of the case studies.

  15. Loading and Regeneration Analysis of a Diesel Particulate Filter with a Radio Frequency-Based Sensor

    SciTech Connect (OSTI)

    Sappok, Alex; Prikhodko, Vitaly Y; Parks, II, James E

    2010-01-01

    Accurate knowledge of diesel particulate filter (DPF) loading is critical for robust and efficient operation of the combined engine-exhaust aftertreatment system. Furthermore, upcoming on-board diagnostics regulations require on-board technologies to evaluate the status of the DPF. This work describes the application of radio frequency (RF) based sensing techniques to accurately measure DPF soot levels and the spatial distribution of the accumulated material. A 1.9L GM turbo diesel engine and a DPF with an RF-sensor were studied. Direct comparisons between the RF measurement and conventional pressure-based methods were made. Further analysis of the particulate matter loading rates was obtained with a mass-based soot emission measurement instrument (TEOM). Comparison with pressure drop measurements show the RF technique is unaffected by exhaust flow variations and exhibits a high degree of sensitivity to DPF soot loading and good dynamic response. Additional computational and experimental work further illustrates the spatial resolution of the RF measurements. Based on the experimental results, the RF technique shows significant promise for improving DPF control enabling optimization of the combined engine-aftertreatment system for improved fuel economy and extended DPF service life.

  16. CIRA: A Microcomputer-based energy analysis and auditing tool for residential applications

    SciTech Connect (OSTI)

    Sonderegger, R.C.; Dixon, J.D.

    1983-01-01

    Computerized, Instrumented, Residential Audit (CIRA) is a collection of programs for energy analysis and energy auditing of residential buildings. CIRA is written for microcomputers with a CP/M operating system and 64K RAM. Its principal features are: user-friendliness, dynamic defaults, file-oriented structure, design energy analysis capability, economic optimization of retrofits, graphic and tabular output to screen and printer. To calculate monthly energy consumptions both for design and retrofit analyses, CIRA uses a modified degree-day and degree-night approach, taking into account solar gains, IR losses to the sky, internal gains and ground heat transfer; the concept of solar storage factor addresses the delayed effect of daytime solar gains while the concept of effective thermal mass ensures proper handling of changes in thermostat setting from day to night; air infiltration is modeled using the LBL infiltration model based on effective leakage area; HVAC system performance is modeled using correlations developed for DOE-2.1. For any given budget, CIRA can also develop an optimally sequenced list of retrofits with the highest combined savings. Long run-times necessary for economic optimization of retrofits are greatly reduced by using a method based on partial derivatives of energy consumption with respect to principal building parameters. Energy calculations of CIRA compare well with those of DOE-2.1 and with measured energy consumptions from a sample of monitored houses.
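The skeleton of a degree-day heating calculation is a one-line energy balance. This bare-bones sketch omits the solar, sky-IR, and thermal-mass corrections that CIRA's modified approach adds; the UA value, degree-days, and gains below are hypothetical:

```python
def monthly_heating_energy(hdd, ua, internal_gain_kwh, efficiency=0.8):
    """Degree-day heating energy in kWh.

    hdd: heating degree-days for the month (K*day)
    ua: building loss coefficient (W/K)
    internal_gain_kwh: useful internal gains offsetting the load (kWh)
    efficiency: heating system efficiency
    """
    load_kwh = ua * hdd * 24.0 / 1000.0 - internal_gain_kwh
    return max(load_kwh, 0.0) / efficiency

# Hypothetical January for a leaky house: 600 K*day, UA = 250 W/K
jan = monthly_heating_energy(hdd=600, ua=250, internal_gain_kwh=300)
print(f"January heating energy: {jan:.0f} kWh")
```

Retrofit screening then amounts to re-running this balance with a lower UA (or higher efficiency) and comparing the savings against the retrofit cost.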

  17. A Laser-Based Method for On-Site Analysis of UF6 at Enrichment Plants

    SciTech Connect (OSTI)

    Anheier, Norman C.; Cannon, Bret D.; Martinez, Alonzo; Barrett, Christopher A.; Taubman, Matthew S.; Anderson, Kevin K.; Smith, Leon E.

    2014-11-23

    The International Atomic Energy Agency’s (IAEA’s) long-term research and development plan calls for more cost-effective and efficient safeguard methods to detect and deter misuse of gaseous centrifuge enrichment plants (GCEPs). The IAEA’s current safeguards approaches at GCEPs are based on a combination of routine and random inspections that include environmental sampling and destructive assay (DA) sample collection from UF6 in-process material and selected cylinders. Samples are then shipped offsite for subsequent laboratory analysis. In this paper, a new DA sample collection and onsite analysis approach that could help to meet challenges in transportation and chain of custody for UF6 DA samples is introduced. This approach uses a handheld sampler concept and a Laser Ablation, Laser Absorbance Spectrometry (LAARS) analysis instrument, both currently under development at the Pacific Northwest National Laboratory. A LAARS analysis instrument could be temporarily or permanently deployed in the IAEA control room of the facility, in the IAEA data acquisition cabinet, for example. The handheld PNNL DA sampler design collects and stabilizes a much smaller DA sample mass compared to current sampling methods. The significantly lower uranium mass reduces the sample radioactivity and the stabilization approach diminishes the risk of uranium and hydrogen fluoride release. These attributes enable safe sample handling needed during onsite LAARS assay and may help ease shipping challenges for samples to be processed at the IAEA’s offsite laboratory. The LAARS and DA sampler implementation concepts will be described and preliminary technical viability results presented.

  18. The Business Case for SEP | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Facilities pursue certification to Superior Energy Performance® (SEP™) to achieve an attractive return on investment while enhancing sustainability. The business case for SEP is based on detailed accounts from facilities that have implemented ISO 50001 and SEP. Gain an insider's view from these pioneers. Read the cost-benefit analysis and case studies, and view videos and

  19. Ion Trap Array-Based Systems And Methods For Chemical Analysis

    DOE Patents [OSTI]

    Whitten, William B [Oak Ridge, TN; Ramsey, J Michael [Knoxville, TN

    2005-08-23

    An ion trap-based system for chemical analysis includes an ion trap array. The ion trap array includes a plurality of ion traps arranged in a 2-dimensional array for initially confining ions. Each of the ion traps comprises a central electrode having an aperture, a first and second insulator each having an aperture sandwiching the central electrode, and first and second end cap electrodes each having an aperture sandwiching the first and second insulator. A structure for simultaneously directing a plurality of different species of ions out from the ion traps is provided. A spectrometer including a detector receives and identifies the ions. The trap array can be used with spectrometers including time-of-flight mass spectrometers and ion mobility spectrometers.

  20. Difference between healthy children and ADHD based on wavelet spectral analysis of nuclear magnetic resonance images

    SciTech Connect (OSTI)

    González Gómez, Dulce I.; Moreno Barbosa, E.; Martínez Hernández, Mario Iván; Ramos Méndez, José; Hidalgo Tobón, Silvia; Dies Suarez, Pilar; Barragán Pérez, Eduardo; De Celis Alonso, Benito

    2014-11-07

    The main goal of this project was to create a computer algorithm based on wavelet analysis of regional homogeneity images obtained during resting state studies. Ideally it would automatically diagnose ADHD. Because the cerebellum is an area known to be affected by ADHD, this study specifically analysed this region. Right-handed male volunteers (children between 7 and 11 years old) were studied and compared with age-matched controls. Statistical analysis of the values of the absolute integrated wavelet spectrum showed significant differences (p<0.0015) between groups. This difference might help in the future to distinguish healthy from ADHD patients and therefore diagnose ADHD. Even if results were statistically significant, the small size of the sample limits the applicability of this method as presented here, and further work with larger samples and using freely available datasets must be done.

  1. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOE Patents [OSTI]

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
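For a small network, the minimal cut sets over link failures can be enumerated by brute force: try link subsets in increasing size, keep those that disconnect the network, and discard any that contain an already-found cut. The patented method is far more efficient, but this sketch (with a hypothetical 4-node network) shows what is being computed:

```python
from itertools import combinations

# Toy network: a triangle A-B-C with a pendant link C-D
LINKS = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]

def connected(links, nodes):
    """All-terminal connectivity check via flood fill over surviving links."""
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        for u, v in links:
            if u == n and v not in seen:
                stack.append(v)
            elif v == n and u not in seen:
                stack.append(u)
    return seen == nodes

def minimal_cut_sets(links):
    nodes = {n for link in links for n in link}
    cuts = []
    for r in range(1, len(links) + 1):
        for combo in combinations(links, r):
            remaining = [l for l in links if l not in combo]
            if connected(remaining, nodes):
                continue
            if any(set(c) <= set(combo) for c in cuts):
                continue  # contains a smaller cut set, so not minimal
            cuts.append(combo)
    return cuts

cuts = minimal_cut_sets(LINKS)
print(cuts)
```

The pendant link C-D shows up as a single-link cut set, while the triangle contributes three two-link cut sets; reliability quantification then combines per-link failure probabilities over these sets.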

  2. Orbit-based analysis of resonant excitations of Alfvén waves in tokamaks

    SciTech Connect (OSTI)

    Bierwage, Andreas; Shinohara, Kouji

    2014-11-15

    The exponential growth phase of fast-ion-driven Alfvénic instabilities is simulated and the resonant wave-particle interactions are analyzed numerically. The simulations are carried out in realistic magnetic geometry and with a realistic particle distribution for a JT-60U plasma driven by negative-ion-based neutral beams. In order to deal with the large magnetic drifts of the fast ions, two new mapping methods are developed and applied. The first mapping yields the radii and pitch angles at the points where the unperturbed orbit of a particle intersects the mid-plane. These canonical coordinates allow analysis results (e.g., drive profiles and resonance widths) to be expressed in a form that is easy to understand and directly comparable to the radial mode structure. The second mapping yields the structure of the wave field along the particle trajectory. This allows us to unify resonance conditions for trapped and passing particles, determine which harmonics are driven, and which orders of the resonance are involved. This orbit-based resonance analysis (ORA) method is applied to fast-ion-driven instabilities with toroidal mode numbers n = 1-3. After determining the order and width of each resonance, the kinetic compression of resonant particles and the effect of linear resonance overlap are examined. On the basis of the ORA results, implications for the fully nonlinear regime, for the long-time evolution of the system in the presence of a fast ion source, and for the interpretation of experimental observations are discussed.

  3. Analysis of laser remote fusion cutting based on a mathematical model

    SciTech Connect (OSTI)

    Matti, R. S. [Department of Engineering Sciences and Mathematics, Luleå University of Technology, S-971 87 Luleå (Sweden); Department of Mechanical Engineering, College of Engineering, University of Mosul, Mosul (Iraq); Ilar, T.; Kaplan, A. F. H. [Department of Engineering Sciences and Mathematics, Luleå University of Technology, S-971 87 Luleå (Sweden)

    2013-12-21

    Laser remote fusion cutting is analyzed with the aid of a semi-analytical mathematical model of the processing front. By local calculation of the energy balance between the absorbed laser beam and the heat losses, the three-dimensional vaporization front can be calculated. Based on an empirical model for the melt flow field, the melt film and the melting front can be derived from a mass balance, though only in a simplified manner and for quasi-steady-state conditions. Front waviness and multiple reflections are not modelled. The model enables comparison of the similarities, differences, and limits between laser remote fusion cutting, laser remote ablation cutting, and even laser keyhole welding. In contrast to the upper part of the vaporization front, the major part varies only slightly with respect to heat flux, laser power density, absorptivity, and angle of front inclination. Statistical analysis shows that for high cutting speed, the domains of high laser power density contribute much more to the formation of the front than for low speed. The semi-analytical modelling approach offers the flexibility to simplify part of the process physics while, for example, sophisticated modelling of the complex focused fibre-guided laser beam is taken into account to enable deeper analysis of the beam interaction. Mechanisms such as recast layer generation, absorptivity at a wavy processing front, and melt film formation are also studied.

  4. Model-Based Analysis of Electric Drive Options for Medium-Duty Parcel Delivery Vehicles: Preprint

    SciTech Connect (OSTI)

    Barnitt, R. A.; Brooker, A. D.; Ramroth, L.

    2010-12-01

    Medium-duty vehicles are used in a broad array of fleet applications, including parcel delivery. These vehicles are excellent candidates for electric drive applications due to their transient-intensive duty cycles, operation in densely populated areas, and relatively high fuel consumption and emissions. The National Renewable Energy Laboratory (NREL) conducted a robust assessment of parcel delivery routes and completed a model-based techno-economic analysis of hybrid electric vehicle (HEV) and plug-in hybrid electric vehicle configurations. First, NREL characterized parcel delivery vehicle usage patterns, most notably daily distance driven and drive cycle intensity. Second, drive-cycle analysis results framed the selection of drive cycles used to test a parcel delivery HEV on a chassis dynamometer. Next, measured fuel consumption results were used to validate simulated fuel consumption values derived from a dynamic model of the parcel delivery vehicle. Finally, NREL swept a matrix of 120 component size, usage, and cost combinations to assess impacts on fuel consumption and vehicle cost. The results illustrated the dependency of component sizing on drive-cycle intensity and daily distance driven and may allow parcel delivery fleets to match the most appropriate electric drive vehicle to their fleet usage profile.

  5. Knowledge-based analysis of microarray gene expression data by using support vector machines

    SciTech Connect (OSTI)

    William Grundy; Manuel Ares, Jr.; David Haussler

    2001-06-18

    The authors introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. They test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, they use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
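
    The supervised-classification idea can be sketched with a toy linear SVM trained by sub-gradient descent (a Pegasos-style stand-in for the kernel SVMs used in the paper; the "expression profiles" and labels below are hypothetical):

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM.

    X: list of feature vectors (expression profiles), y: labels in {-1, +1}.
    Minimizes lam/2*||w||^2 plus the average hinge loss.
    """
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:  # inside margin or misclassified: hinge step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Toy data: "genes" of class +1 are highly expressed in the first two
# conditions, class -1 in the last two (made-up profiles).
X = [[2.0, 1.8, 0.1, 0.2], [1.9, 2.1, 0.0, 0.3],
     [0.2, 0.1, 2.2, 1.9], [0.1, 0.3, 1.8, 2.0]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
print([predict(w, x) for x in X])
```

Changing the similarity function (a kernel) rather than the raw dot product is what gives SVMs the flexibility the abstract highlights.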

  6. NASTRAN-based computer program for structural dynamic analysis of horizontal axis wind turbines

    SciTech Connect (OSTI)

    Lobitz, D.W.

    1984-01-01

    This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWTs). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWTs driven by turbulent winds.
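
    The time-integration step can be illustrated for a single degree of freedom; a minimal sketch, assuming the scalar equation m·u'' + c·u' + k·u = F(t) (the actual code integrates the coupled tower/rotor matrix equations):

```python
import math

def newmark_beta(m, c, k, force, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Implicit Newmark-beta integration of m*u'' + c*u' + k*u = F(t).

    beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration
    variant; the scalar division below stands in for a matrix solve.
    """
    u, v = u0, v0
    a = (force(0.0) - c * v - k * u) / m
    hist = [u]
    denom = m + c * gamma * dt + k * beta * dt * dt
    for n in range(1, nsteps + 1):
        t = n * dt
        # Solve the balance equation at t_{n+1} for the new acceleration.
        rhs = (force(t)
               - c * (v + dt * (1 - gamma) * a)
               - k * (u + dt * v + dt * dt * (0.5 - beta) * a))
        a_new = rhs / denom
        u = u + dt * v + dt * dt * ((0.5 - beta) * a + beta * a_new)
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        a = a_new
        hist.append(u)
    return hist

# Undamped oscillator with omega = 1 rad/s: the scheme introduces no
# numerical damping, so after one period the displacement returns to ~1.
hist = newmark_beta(m=1.0, c=0.0, k=1.0, force=lambda t: 0.0,
                    u0=1.0, v0=0.0, dt=0.01, nsteps=int(2 * math.pi / 0.01))
print(round(hist[-1], 3))  # ≈ 1.0 after one full period
```
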

  7. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    SciTech Connect (OSTI)

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  8. Monitoring Based Commissioning: Benchmarking Analysis of 24 UC/CSU/IOU Projects

    SciTech Connect (OSTI)

    Mills, Evan; Mathew, Paul

    2009-04-01

    Buildings rarely perform as intended, resulting in energy use that is higher than anticipated. Building commissioning has emerged as a strategy for remedying this problem in non-residential buildings. Complementing traditional hardware-based energy savings strategies, commissioning is a 'soft' process of verifying performance and design intent and correcting deficiencies. Through an evaluation of a series of field projects, this report explores the efficacy of an emerging refinement of this practice, known as monitoring-based commissioning (MBCx). MBCx can also be thought of as monitoring-enhanced building operation that incorporates three components: (1) Permanent energy information systems (EIS) and diagnostic tools at the whole-building and sub-system level; (2) Retro-commissioning based on the information from these tools and savings accounting emphasizing measurement as opposed to estimation or assumptions; and (3) On-going commissioning to ensure efficient building operations and measurement-based savings accounting. MBCx is thus a measurement-based paradigm which affords improved risk-management by identifying problems and opportunities that are missed with periodic commissioning. The analysis presented in this report is based on in-depth benchmarking of a portfolio of MBCx energy savings for 24 buildings located throughout the University of California and California State University systems. In the course of the analysis, we developed a quality-control/quality-assurance process for gathering and evaluating raw data from project sites and then selected a number of metrics to use for project benchmarking and evaluation, including appropriate normalizations for weather and climate, accounting for variations in central plant performance, and consideration of differences in building types. We performed a cost-benefit analysis of the resulting dataset, and provided comparisons to projects from a larger commissioning 'Meta-analysis' database. 
A total of 1120 deficiency-intervention combinations were identified in the course of commissioning the projects described in this report. The most common location of deficiencies was in HVAC equipment (65% of sites), followed by air-handling and distribution systems (59%), cooling plants (29%), heating plants (24%), and terminal units (24%). The most common interventions were adjusting setpoints, modifying sequences of operations, calibration, and various mechanical fixes (each done in about two-thirds of the sites). The normalized rate of occurrence of deficiencies and corresponding interventions ranged from about 0.1/100ksf to 10/100ksf, depending on the issue. From these interventions flowed significant and highly cost-effective energy savings. For the MBCx cohort, source energy savings of 22 kBTU/sf-year (10%) were achieved, with a range of 2% to 25%. Median electricity savings were 1.9 kWh/sf-year (9%), with a range of 1% to 17%. Peak electrical demand savings were 0.2 W/sf (4%), with a range of 3% to 11%. The aggregate commissioning cost for the 24 projects was $2.9 million. We observed a range of normalized costs from $0.37 to $1.62/sf, with a median value of $1.00/sf for buildings that implemented MBCx projects. Per the program design, monitoring costs as a percentage of total costs are significantly higher in MBCx projects (median value 40%) than in the typical commissioning projects included in the Meta-analysis (median value of 2% in the commissioning database). Half of the projects were in buildings containing complex and energy-intensive laboratory space, with higher associated costs. Median energy cost savings were $0.25/sf-year, for a median simple payback time of 2.5 years. Significant and cost-effective energy savings were thus obtained. The greatest absolute energy savings and shortest payback times were achieved in laboratory-type facilities. 
While impacts varied from project to project, on a portfolio basis we find MBCx to be a highly cost-effective means of obtaining significant program-level energy savings across a variety of building types. Energy savings are ex

  9. Modeling of electrodes and implantable pulse generator cases for the analysis of implant tip heating under MR imaging

    SciTech Connect (OSTI)

    Acikel, Volkan; Atalar, Ergin; Uslubas, Ali

    2015-07-15

    Purpose: The authors’ purpose is to model the case of an implantable pulse generator (IPG) and the electrode of an active implantable medical device using lumped circuit elements, in order to analyze their effect on the radio-frequency-induced tissue heating problem during a magnetic resonance imaging (MRI) examination. Methods: In this study, the IPG case and electrode are modeled with a voltage source and impedance. Values of these parameters are found using the modified transmission line method (MoTLiM) and method of moments (MoM) simulations. Once the parameter values of an electrode/IPG case model are determined, they can be connected to any lead, and tip heating can be analyzed. To validate these models, both MoM simulations and MR experiments were used. The induced currents on the leads with the IPG case or electrode connections were solved for using the proposed models and the MoTLiM. These results were compared with the MoM simulations. In addition, an electrode was connected to a lead via an inductor. The dissipated power on the electrode was calculated using the MoTLiM while changing the inductance, and the results were compared with the specific absorption rate results that were obtained using MoM. Then, MRI experiments were conducted to test the IPG case and the electrode models. To test the IPG case, a bare lead was connected to the case and placed inside a uniform phantom. During an MRI scan, the temperature rise at the lead was measured while changing the lead length. The power at the lead tip for the same scenario was also calculated using the IPG case model and MoTLiM. Then, an electrode was connected to a lead via an inductor and placed inside a uniform phantom. During an MRI scan, the temperature rise at the electrode was measured while changing the inductance and compared with the dissipated power on the electrode resistance. 
Results: The induced currents on leads with the IPG case or electrode connection were solved for using the combination of the MoTLiM and the proposed lumped circuit models. These results were compared with those from the MoM simulations. The mean square error was less than 9%. During the MRI experiments, when the IPG case was introduced, the resonance lengths were calculated to have an error less than 13%. Also the change in tip temperature rise at resonance lengths was predicted with less than 4% error. For the electrode experiments, the value of the matching impedance was predicted with an error less than 1%. Conclusions: Electrical models for the IPG case and electrode are suggested, and the method is proposed to determine the parameter values. The concept of matching of the electrode to the lead is clarified using the defined electrode impedance and the lead Thevenin impedance. The effect of the IPG case and electrode on tip heating can be predicted using the proposed theory. With these models, understanding the tissue heating due to the implants becomes easier. Also, these models are beneficial for implant safety testers and designers. Using these models, worst case conditions can be determined and the corresponding implant test experiments can be planned.

  10. Abstract: Development and Deployment of a Short Rotation Woody Crops Harvesting System Based on a Case New Holland Forage Harvester and SRC Woody Crop Header

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Despite the projected increase in demand for woody biomass from short rotation woody crops (SRWC) and the wide array of benefits associated with their production and use, the expansion and rapid deployment of these systems have been restricted by their high cost of production and, in some situations, a lack of market acceptance because of poor quality chips from first

  11. High-Throughput Genetic Analysis and Combinatorial Chiral Separations Based on Capillary Electrophoresis

    SciTech Connect (OSTI)

    Wenwan Zhong

    2003-08-05

    Capillary electrophoresis (CE) offers many advantages over conventional analytical methods, such as speed, simplicity, high resolution, low cost, and small sample consumption, especially for the separation of enantiomers. However, chiral method development can still be time consuming and tedious. They designed a comprehensive enantioseparation protocol employing neutral and sulfated cyclodextrins as chiral selectors for common basic, neutral, and acidic compounds with a 96-capillary array system. By using only four judiciously chosen separation buffers, successful enantioseparations were achieved for 49 out of 54 test compounds spanning a large variety of pKa values and structures. Therefore, unknown compounds can be screened in this manner to identify optimal enantioselective conditions in just one run. In addition to superior separation efficiency for small molecules, CE is also the most powerful technique for DNA separations. Using the same multiplexed capillary system with UV absorption detection, the sequence of a short DNA template can be acquired without any dye labels. Two internal standards were utilized to adjust the migration time variations among capillaries, so that the four electropherograms for the A, T, C, and G Sanger reactions can be aligned and base calling can be completed with a high level of confidence. The CE separation of DNA can be applied to study differential gene expression as well. Combined with pattern recognition techniques, small variations among electropherograms obtained by the separation of cDNA fragments produced from the total RNA samples of different human tissues can be revealed. These variations reflect the differences in total RNA expression among tissues. Thus, this CE-based approach can serve as an alternative to DNA array techniques in gene expression analysis.
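
    The internal-standard alignment step can be sketched as a two-point linear rescaling of migration times (a hypothetical illustration; the paper does not spell out its exact adjustment formula):

```python
def align(times, std_obs, std_ref):
    """Map observed migration times onto a reference time scale.

    std_obs: (t1, t2) observed times of two internal standards in this
    capillary; std_ref: their times on the reference scale. A two-point
    linear fit corrects both a time offset and a mobility scale factor.
    """
    (o1, o2), (r1, r2) = std_obs, std_ref
    scale = (r2 - r1) / (o2 - o1)
    return [r1 + (t - o1) * scale for t in times]

# This capillary runs ~5% slower and ~0.8 min late relative to the
# reference scale; after alignment the peaks line up across capillaries.
obs = [10.8, 15.0, 20.25]          # observed peak times (min)
ref = align(obs, std_obs=(10.8, 20.25), std_ref=(10.0, 19.0))
print([round(t, 2) for t in ref])  # → [10.0, 14.0, 19.0]
```
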

  12. Structural Analysis of a Highly Glycosylated and Unliganded gp120-Based Antigen Using Mass Spectrometry

    SciTech Connect (OSTI)

    L Wang; Y Qin; S Ilchenko; J Bohon; W Shi; M Cho; K Takamoto; M Chance

    2011-12-31

    Structural characterization of the HIV-1 envelope protein gp120 is very important for providing an understanding of the protein's immunogenicity and its binding to cell receptors. So far, the crystallographic structure of gp120 with an intact V3 loop (in the absence of a CD4 coreceptor or antibody) has not been determined. The third variable region (V3) of the gp120 is immunodominant and contains glycosylation signatures that are essential for coreceptor binding and entry of the virus into T-cells. In this study, we characterized the structure of the outer domain of gp120 with an intact V3 loop (gp120-OD8) purified from Drosophila S2 cells utilizing mass spectrometry-based approaches. We mapped the glycosylation sites and calculated the glycosylation occupancy of gp120-OD8; 11 sites from 15 glycosylation motifs were determined as having high-mannose or hybrid glycosylation structures. The specific glycan moieties of nine glycosylation sites from eight unique glycopeptides were determined by a combination of ECD and CID MS approaches. Hydroxyl radical-mediated protein footprinting coupled with mass spectrometry analysis was employed to provide detailed information about protein structure of gp120-OD8 by directly identifying accessible and hydroxyl radical-reactive side chain residues. Comparison of gp120-OD8 experimental footprinting data with a homology model derived from the ligated CD4-gp120-OD8 crystal structure revealed a flexible V3 loop structure in which the V3 tip may provide contacts with the rest of the protein while residues in the V3 base remain solvent accessible. In addition, the data illustrate interactions between specific sugar moieties and amino acid side chains potentially important to the gp120-OD8 structure.

  13. Microscopic silicon-based lateral high-aspect-ratio structures for thin film conformality analysis

    SciTech Connect (OSTI)

    Gao, Feng; Arpiainen, Sanna; Puurunen, Riikka L.

    2015-01-15

    Film conformality is one of the major drivers for the interest in atomic layer deposition (ALD) processes. This work presents new silicon-based microscopic lateral high-aspect-ratio (LHAR) test structures for the analysis of the conformality of thin films deposited by ALD and by other chemical vapor deposition means. The microscopic LHAR structures consist of a lateral cavity inside silicon with a roof supported by pillars. The cavity length (e.g., 20–5000 µm) and cavity height (e.g., 200–1000 nm) can be varied, giving aspect ratios of, e.g., 20:1 to 25,000:1. Film conformality can be analyzed with the microscopic LHAR by several means, as demonstrated for the ALD Al₂O₃ and TiO₂ processes from Me₃Al/H₂O and TiCl₄/H₂O. The microscopic LHAR test structures introduced in this work expose a new parameter space for thin film conformality investigations and are expected to prove useful in the development, tuning, and modeling of ALD and other chemical vapor deposition processes.

  14. Science-Based Simulation Model of Human Performance for Human Reliability Analysis

    SciTech Connect (OSTI)

    Dana L. Kelly; Ronald L. Boring; Ali Mosleh; Carol Smidts

    2011-10-01

    Human reliability analysis (HRA), a component of an integrated probabilistic risk assessment (PRA), is the means by which the human contribution to risk is assessed, both qualitatively and quantitatively. However, among the literally dozens of HRA methods that have been developed, most cannot fully model and quantify the types of errors that occurred at Three Mile Island. Furthermore, all of the methods lack a solid empirical basis, relying heavily on expert judgment or empirical results derived in non-reactor domains. Finally, all of the methods are essentially static, and are thus unable to capture the dynamics of an accident in progress. The objective of this work is to begin exploring a dynamic simulation approach to HRA, one whose models have a basis in psychological theories of human performance, and whose quantitative estimates have an empirical basis. This paper highlights a plan to formalize collaboration among the Idaho National Laboratory (INL), the University of Maryland, and The Ohio State University (OSU) to continue development of a simulation model initially formulated at the University of Maryland. Initial work will focus on enhancing the underlying human performance models with the most recent psychological research, and on planning follow-on studies to establish an empirical basis for the model, based on simulator experiments to be carried out at the INL and at the OSU.

  15. Systems Analysis of an Advanced Nuclear Fuel Cycle Based on a Modified UREX+3c Process

    SciTech Connect (OSTI)

    E. R. Johnson; R. E. Best

    2009-12-28

    The research described in this report was performed under a grant from the U.S. Department of Energy (DOE) to describe and compare the merits of two advanced alternative nuclear fuel cycles -- named by this study as the “UREX+3c fuel cycle” and the “Alternative Fuel Cycle” (AFC). Both fuel cycles were assumed to support 100 1,000 MWe light water reactor (LWR) nuclear power plants operating over the period 2020 through 2100, and the fast reactors (FRs) necessary to burn the plutonium and minor actinides generated by the LWRs. Reprocessing in both fuel cycles is assumed to be based on the UREX+3c process reported in earlier work by the DOE. Conceptually, the UREX+3c process provides nearly complete separation of the various components of spent nuclear fuel in order to enable recycle of reusable nuclear materials, and the storage, conversion, transmutation and/or disposal of other recovered components. Output of the process contains substantially all of the plutonium, which is recovered as a 5:1 uranium/plutonium mixture, in order to discourage plutonium diversion. Mixed oxide (MOX) fuel for recycle in LWRs is made using this 5:1 U/Pu mixture plus appropriate makeup uranium. A second process output contains all of the recovered uranium except the uranium in the 5:1 U/Pu mixture. The several other process outputs are various waste streams, including a stream of minor actinides that are stored until they are consumed in future FRs. For this study, the UREX+3c fuel cycle is assumed to recycle only the 5:1 U/Pu mixture to be used in LWR MOX fuel and to use depleted uranium (tails) for the makeup uranium. This fuel cycle is assumed not to use the recovered uranium output stream but to discard it instead. On the other hand, the AFC is assumed to recycle both the 5:1 U/Pu mixture and all of the recovered uranium. 
In this case, the recovered uranium is reenriched with the level of enrichment being determined by the amount of recovered plutonium and the combined amount of the resulting MOX. The study considered two sub-cases within each of the two fuel cycles in which the uranium and plutonium from the first generation of MOX spent fuel (i) would not be recycled to produce a second generation of MOX for use in LWRs or (ii) would be recycled to produce a second generation of MOX fuel for use in LWRs. The study also investigated the effects of recycling MOX spent fuel multiple times in LWRs. The study assumed that both fuel cycles would store and then reprocess spent MOX fuel that is not recycled to produce a next generation of LWR MOX fuel and would use the recovered products to produce FR fuel. The study further assumed that FRs would begin to be brought on-line in 2043, eleven years after recycle begins in LWRs, when products from 5-year cooled spent MOX fuel would be available. Fuel for the FRs would be made using the uranium, plutonium, and minor actinides recovered from MOX. For the cases where LWR fuel was assumed to be recycled one time, the 1st generation of MOX spent fuel was used to provide nuclear materials for production of FR fuel. For the cases where the LWR fuel was assumed to be recycled two times, the 2nd generation of MOX spent fuel was used to provide nuclear materials for production of FR fuel. The number of FRs in operation was assumed to increase in successive years until the rate that actinides were recovered from permanently discharged spent MOX fuel equaled the rate the actinides were consumed by the operating fleet of FRs. To compare the two fuel cycles, the study analyzed recycle of nuclear fuel in LWRs and FRs and determined the radiological characteristics of irradiated nuclear fuel, nuclear waste products, and recycle nuclear fuels. 
It also developed a model to simulate the flows of nuclear materials that could occur in the two advanced nuclear fuel cycles over 81 years beginning in 2020 and ending in 2100. Simulations projected the flows of uranium, plutonium, and minor actinides as these nuclear fuel materials were produced and consumed in a fleet of 100 1,000 MWe LWRs and in FRs. The model als

  16. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-Based Generic Programming

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.

    2012-01-01

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
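
    The core idea of template-based generic programming — overload the arithmetic operators on a richer scalar type so that an unmodified calculation also propagates extra quantities such as derivatives — can be sketched in Python with a minimal forward-mode dual number (the paper does this in C++, where the scalar type is a template parameter; the `residual` function below is a hypothetical example):

```python
class Dual:
    """Forward-mode AD scalar: val carries the value, dot its derivative.

    Operator overloading propagates derivatives through code written
    once for a generic scalar type — the Python analogue of templating
    the calculation on its scalar type in C++.
    """
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def residual(x):
    """A generic model calculation, written once for any scalar type."""
    return x * x * x + 2.0 * x + 1.0

# Seeding dot=1.0 computes value and derivative in a single evaluation:
# f(2) = 13 and f'(2) = 3*4 + 2 = 14.
out = residual(Dual(2.0, 1.0))
print(out.val, out.dot)  # → 13.0 14.0
```

Evaluating the same `residual` with plain floats still works, which is exactly why embedding the capability requires no changes to the calculation itself.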

  17. Integrated Experimental and Model-based Analysis Reveals the Spatial Aspects of EGFR Activation Dynamics

    SciTech Connect (OSTI)

    Shankaran, Harish; Zhang, Yi; Chrisler, William B.; Ewald, Jonathan A.; Wiley, H. S.; Resat, Haluk

    2012-10-02

    The epidermal growth factor receptor (EGFR) belongs to the ErbB family of receptor tyrosine kinases, and controls a diverse set of cellular responses relevant to development and tumorigenesis. ErbB activation is a complex process involving receptor-ligand binding, receptor dimerization, phosphorylation, and trafficking (internalization, recycling and degradation), which together dictate the spatio-temporal distribution of active receptors within the cell. The ability to predict this distribution, and elucidation of the factors regulating it, would help to establish a mechanistic link between ErbB expression levels and the cellular response. Towards this end, we constructed mathematical models for deconvolving the contributions of receptor dimerization and phosphorylation to EGFR activation, and to examine the dependence of these processes on sub-cellular location. We collected experimental datasets for EGFR activation dynamics in human mammary epithelial cells, with the specific goal of model parameterization, and used the data to estimate parameters for several alternate models. Model-based analysis indicated that: 1) signal termination via receptor dephosphorylation in late endosomes, prior to degradation, is an important component of the response, 2) less than 40% of the receptors in the cell are phosphorylated at any given time, even at saturating ligand doses, and 3) receptor dephosphorylation rates at the cell surface and early endosomes are comparable. We validated the last finding by measuring EGFR dephosphorylation rates at various times following ligand addition both in whole cells, and in endosomes using ELISAs and fluorescent imaging. Overall, our results provide important information on how EGFR phosphorylation levels are regulated within cells. Further, the mathematical model described here can be extended to determine receptor dimer abundances in cells co-expressing various levels of ErbB receptors. 
This study demonstrates that an iterative cycle of experiments and modeling can be used to gain mechanistic insight regarding complex cell signaling networks.
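
    A heavily simplified, hypothetical version of such a trafficking/phosphorylation model can be written as three coupled ODEs integrated with forward Euler (all rate constants below are invented for illustration; the paper's models are parameterized from the experimental datasets):

```python
def simulate_egfr(kp=0.5, kdp_s=0.2, kint=0.1, kdp_e=0.2,
                  dt=0.01, t_end=50.0):
    """Toy compartment model of EGFR phosphorylation and trafficking.

    Rs: unphosphorylated surface receptors; Ps: phosphorylated receptors
    at the surface; Pe: phosphorylated receptors in endosomes. Rates
    (hypothetical, 1/min): kp phosphorylation, kdp_s surface
    dephosphorylation (recycles to Rs), kint internalization, kdp_e
    endosomal signal termination prior to degradation.
    """
    Rs, Ps, Pe = 1.0, 0.0, 0.0   # normalized receptor amounts
    for _ in range(int(t_end / dt)):
        dRs = -kp * Rs + kdp_s * Ps
        dPs = kp * Rs - (kdp_s + kint) * Ps
        dPe = kint * Ps - kdp_e * Pe
        Rs, Ps, Pe = Rs + dt * dRs, Ps + dt * dPs, Pe + dt * dPe
    return Rs, Ps, Pe

Rs, Ps, Pe = simulate_egfr()
frac_phos = (Ps + Pe) / (Rs + Ps + Pe)
print(round(frac_phos, 2))  # fraction of remaining receptors phosphorylated
```

Fitting the rate constants to measured phosphorylation time courses, as done in the study, is what turns such a sketch into a predictive model.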

  18. Analysis on fuel breeding capability of FBR core region based on minor actinide recycling doping

    SciTech Connect (OSTI)

    Permana, Sidik; Novitrian; Waris, Abdul; Ismail; Suzuki, Mitsutoshi; Saito, Masaki

    2014-09-30

    Nuclear fuel breeding capability can be achieved through the conversion of fertile materials into fissile materials during nuclear reaction processes: the main fissile materials are U-233, U-235, Pu-239 and Pu-241, and the fertile materials are Th-232, U-238, and Pu-240, as well as Pu-238. A minor actinide (MA) loading option, comprising neptunium, americium and curium, gives an additional contribution from MA converted into plutonium, such as the conversion of Np-237 into Pu-238; the Pu-238 thus produced is in turn converted to Pu-239 via neutron capture. The increased Pu-238 composition can therefore be used to produce additional fissile Pu-239. Trans-uranium (TRU) fuel (a mixed loading of MOX (U-Pu) and MA) and mixed oxide (MOX) fuel compositions are analyzed comparatively in order to show the effect of MA on plutonium production in the core, in terms of reactor criticality and fuel breeding capability. In the present study, neptunium (Np) is used as a representative of the MA in the trans-uranium (TRU) fuel composition, as an Np-MOX fuel type. Loading it into the core region contributes significantly to reducing the excess reactivity compared with mixed oxide (MOX) fuel, and at the same time increases the nuclear fuel breeding capability of the reactor. The neptunium loading scheme in the FBR core region gives significant production of Pu-238, a fertile material that absorbs neutrons, thereby reducing excess reactivity while contributing additionally to fuel breeding.
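The breeding metric behind this kind of study reduces to a ratio of reaction rates. The sketch below is a one-group illustration with assumed (hypothetical) relative rates, not the paper's calculation: the conversion/breeding ratio compares fissile production (captures in fertile nuclides) to fissile destruction (absorptions in fissile nuclides).

```python
# One-group breeding-ratio sketch; all rate fractions are assumed values,
# chosen only to illustrate the definition BR = production / destruction.
captures_fertile = {"U-238": 0.70, "Pu-240": 0.12, "Pu-238": 0.03}   # relative capture rates
absorptions_fissile = {"Pu-239": 0.60, "Pu-241": 0.15}               # relative absorption rates

br = sum(captures_fertile.values()) / sum(absorptions_fissile.values())
# br > 1 means fissile material is produced faster than it is consumed (net breeding)
```

Adding a fertile absorber such as Pu-238, as in the Np-MOX scheme above, raises the numerator (and soaks up excess reactivity) at the same time.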

  19. Arthropod monitoring for fine-scale habitat analysis: A case study of the El Segundo sand dunes

    SciTech Connect (OSTI)

    Mattoni, R.; Longcore, T.; Novotny, V.

    2000-04-01

    Arthropod communities from several habitats on and adjacent to the El Segundo dunes (Los Angeles County, CA) were sampled using pitfall and yellow pan traps to evaluate their possible use as indicators of restoration success. Communities were ordinated and clustered using correspondence analysis, detrended correspondence analysis, two-way indicator species analysis, and Ward's method of agglomerative clustering. The results showed high repeatability among replicates within any sampling arena, permitting discrimination of (1) degraded and relatively undisturbed habitat, (2) different dune habitat types, and (3) annual change. Canonical correspondence analysis showed a significant effect of disturbance history on community composition that explained 5--20% of the variation. Replicates of pitfall and yellow pan traps on single sites clustered together reliably when species abundance was considered, whereas clusters using only species incidence did not group replicates as consistently. The broad taxonomic approach seems appropriate for habitat evaluation and monitoring of restoration projects as an alternative to assessments geared to single species or even single families.

  20. DEVELOPMENT OF A NOVEL GAS PRESSURIZED STRIPPING (GPS)-BASED TECHNOLOGY FOR CO2 CAPTURE FROM POST-COMBUSTION FLUE GASES Topical Report: Techno-Economic Analysis of GPS-based Technology for CO2 Capture

    SciTech Connect (OSTI)

    Chen, Shiaoguo

    2015-09-30

    This topical report presents the techno-economic analysis, conducted by Carbon Capture Scientific, LLC (CCS) and Nexant, of a nominal 550 MWe supercritical pulverized coal (PC) power plant utilizing CCS's patented Gas Pressurized Stripping (GPS) technology for post-combustion carbon capture (PCC). Illinois No. 6 coal is used as fuel. Because of the difference in performance between the GPS-based PCC and the MEA-based CO2 absorption technology, the net power output of this plant is not exactly 550 MWe. The DOE/NETL Case 11 supercritical PC plant without CO2 capture and Case 12 supercritical PC plant with benchmark MEA-based CO2 capture are chosen as references. In order to include the CO2 compression process in the baseline case, CCS independently evaluated the generic 30 wt% MEA-based PCC process together with the CO2 compression section. The net power produced in the supercritical PC plant with GPS-based PCC is 647 MW, greater than that of the MEA-based design. The levelized cost of electricity (LCOE) over a 20-year period is adopted to assess techno-economic performance. The LCOE for the supercritical PC plant with GPS-based PCC, not considering CO2 transport, storage and monitoring (TS&M), is 97.4 mills/kWh, or 152% of that of the Case 11 supercritical PC plant without CO2 capture, equivalent to $39.6/tonne for the cost of CO2 capture. GPS-based PCC is also significantly superior to the generic MEA-based PCC with CO2 compression, whose LCOE is as high as 109.6 mills/kWh.
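The reported figures can be cross-checked with back-of-envelope arithmetic. This is a sketch only: the CO2-capture intensity at the end is *inferred* from the abstract's numbers, not a value stated in the report.

```python
# Consistency check of the abstract's LCOE figures (inferred, not from the report).
lcoe_capture = 97.4              # mills/kWh, GPS-based PCC plant
ratio = 1.52                     # reported: 152% of the no-capture Case 11 plant
lcoe_base = lcoe_capture / ratio # implied Case 11 LCOE, ~64.1 mills/kWh

delta = lcoe_capture - lcoe_base # incremental LCOE; 1 mill/kWh == 1 $/MWh
cost_per_tonne = 39.6            # reported cost of CO2 capture, $/tonne

# implied tonnes of CO2 captured per net MWh (an inference from the two numbers)
co2_per_mwh = delta / cost_per_tonne   # ~0.84 t/MWh, plausible for capture from PC coal
```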

  1. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes - Update to Include Evaluation of Impact of Including a Humidifier Option

    SciTech Connect (OSTI)

    Baxter, Van D

    2007-02-01

    The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to apply to buildings constructed before 2020 as well, resulting in substantial reductions in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH system options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development. Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected, and their annual and peak demand performance was estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft{sup 2} houses--a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress towards ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL).
Results were summarized in a project report, HVAC Equipment Design options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of a centrally ducted integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs underwent a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of minimum legal efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs. Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006a). The present report updates that document with the results of an analysis of the impact of adding a humidifier to the HVAC system to maintain minimum levels of space relative humidity (RH) in winter. The space RH in winter has a direct impact on occupant comfort and on control of dust mites, many types of disease bacteria, and 'dry air' electric shocks. Chapter 8 in ASHRAE's 2005 Handbook of Fundamentals (HOF) suggests a 30% lower limit on RH for indoor temperatures in the range of {approx}68-69F based on comfort (ASHRAE 2005). Table 3 in chapter 9 of the same reference suggests a 30-55% RH range for winter, as established by a Canadian study of exposure limits for residential indoor environments (EHD 1987). Harriman et al. (2001) note that for RH levels of 35% or higher, electrostatic shocks are minimized, and that dust mites cannot live at RH levels below 40%. They also indicate that the life spans of many disease bacteria are minimized when space RH is held within a 30-60% range.
From the foregoing, it is reasonable to assume that a winter space RH range of 30-40% would be an acceptable compromise between comfort considerations and limiting the growth rates of dust mites and many bacteria. This report also documents corrections made to the simulation models: errors were fixed in the TRNSYS building model for Atlanta and in the refrigerant pressure drop calculation in the water-to-refrigerant evaporator module of the ORNL Heat Pump Design Model (HPDM) used for the IHP analyses. These changes resulted in some minor differences between IHP performance as reported in Baxter (2006a) and in this report.

  2. Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Electricity use by water service sector and county. Shown are electricity use by (a) large-scale conveyance, (b) groundwater irrigation pumping, (c) surface water irrigation pumping, (d) drinking water, and (e) wastewater. Aggregate electricity use across these sectors (f) is also mapped.

  3. Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Structures of the zwitterionic coatings synthesized for this study. Sandia's Marine Hydrokinetic (MHK) Advanced Materials program has a new publication, now in press, on the antifouling efficacy of these coatings.

  4. U.S. Renewable Energy Technical Potentials. A GIS-Based Analysis

    SciTech Connect (OSTI)

    Lopez, Anthony; Roberts, Billy; Heimiller, Donna; Blair, Nate; Porro, Gian

    2012-07-01

    This report presents the state-level results of a spatial analysis effort calculating energy technical potential, reported in square kilometers of available land, megawatts of capacity, and gigawatt-hours of generation, for six different renewable technologies. For this analysis, the system specific power density (or equivalent), efficiency (capacity factor), and land-use constraints were identified for each technology using independent research, published research, and professional contacts. This report also presents technical potential findings from previous reports.
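The basic arithmetic behind such technical-potential estimates can be sketched as follows. All numbers here are hypothetical placeholders, not values from the report: available land area after exclusions is multiplied by a system-specific power density to get capacity, and capacity by a capacity factor and hours per year to get generation.

```python
# Technical-potential arithmetic sketch (all inputs hypothetical).
area_km2 = 1000.0       # available land after land-use exclusions
power_density = 30.0    # MW per km2, system-specific
capacity_factor = 0.35  # average output / nameplate capacity

capacity_mw = area_km2 * power_density                       # 30,000 MW
generation_gwh = capacity_mw * capacity_factor * 8760 / 1e3  # GWh per year
```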

  5. U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis

    SciTech Connect (OSTI)

    Lopez, A.; Roberts, B.; Heimiller, D.; Blair, N.; Porro, G.

    2012-07-01

    This report presents the state-level results of a spatial analysis effort calculating energy technical potential, reported in square kilometers of available land, megawatts of capacity, and gigawatt-hours of generation, for six different renewable technologies. For this analysis, the system specific power density (or equivalent), efficiency (capacity factor), and land-use constraints were identified for each technology using independent research, published research, and professional contacts. This report also presents technical potential findings from previous reports.

  6. Roof-top solar energy potential under performance-based building energy codes: The case of Spain

    SciTech Connect (OSTI)

    Izquierdo, Salvador; Montanes, Carlos; Dopazo, Cesar; Fueyo, Norberto

    2011-01-15

    The quantification at regional level of the amount of energy (for thermal uses and for electricity) that can be generated by using solar systems in buildings is hindered by the limited availability of data for roof area estimation. In this note, we build on an existing geo-referenced method for determining available roof area for solar facilities in Spain to produce a quantitative picture of the likely limits of roof-top solar energy. The installation of solar hot water systems (SHWS) and photovoltaic (PV) systems is considered. After satisfying up to 70% (where possible) of the service hot water demand in every municipality, PV systems are installed in the remaining roof area. Results show that, applying this performance-based criterion, SHWS would contribute up to 1662 ktoe/y of primary energy (or 68.5% of the total thermal-energy demand for service hot water), while PV systems would provide 10 TWh/y of electricity (or 4.0% of the total electricity demand).

  7. Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    OSCARS Case Studies, Science DMZ Case Studies, and a Multi-facility Workflow Case Study.

  8. Assessment of effectiveness of geologic isolation systems. Test case release consequence analysis for a spent fuel repository in bedded salt

    SciTech Connect (OSTI)

    Raymond, J.R.; Bond, F.W.; Cole, C.R.; Nelson, R.W.; Reisenauer, A.E.; Washburn, J.F.; Norman, N.A.; Mote, P.A.; Segol, G.

    1980-01-01

    Geologic and geohydrologic data for the Paradox Basin have been used to simulate movement of ground water and radioactive contaminants from a hypothetical nuclear reactor spent fuel repository after an assumed accidental release. The pathlines, travel times and velocity of the ground water from the repository to the discharge locale (river) were determined after the disruptive event by use of a two-dimensional finite difference hydrologic model. The concentration of radioactive contaminants in the ground water was calculated along a series of flow tubes by use of a one-dimensional mass transport model which takes into account convection, dispersion, contaminant/media interactions and radioactive decay. For the hypothetical site location and specific parameters used in this demonstration, it is found that Iodine-129 (I-129) is the only isotope reaching the Colorado River in significant concentration. This concentration occurs about 8.0 x 10{sup 5} years after the repository has been breached. This I-129 ground-water concentration is about 0.3 of the drinking water standard for uncontrolled use. The groundwater concentration would then be diluted by the Colorado River. None of the actinide elements reach more than half the distance from the repository to the Colorado River in the two-million year model run time. This exercise demonstrates that the WISAP model system is applicable for analysis of contaminant transport. The results presented in this report, however, are valid only for one particular set of parameters. A complete sensitivity analysis must be performed to evaluate the range of effects from the release of contaminants from a breached repository.
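The finding that I-129 dominates at the discharge point is consistent with its very long half-life: decay barely attenuates it over the reported travel time. A quick check, using the standard I-129 half-life of about 1.57 x 10^7 years (a value taken from nuclear data tables, not quoted in the abstract):

```python
# Decay attenuation of I-129 over the reported ~8.0e5-year transit (sketch).
half_life = 1.57e7   # years, I-129 (standard nuclear-data value, an assumption here)
t = 8.0e5            # years, reported travel time to the river

remaining = 0.5 ** (t / half_life)   # fraction of the inventory surviving decay
# remaining ~ 0.965: radioactive decay removes only a few percent en route,
# so dispersion and sorption, not decay, control the arriving concentration.
```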

  9. Integrating multi-criteria decision analysis for a GIS-based hazardous waste landfill siting in Kurdistan Province, western Iran

    SciTech Connect (OSTI)

    Sharifi, Mozafar; Hadidi, Mosslem; Vessali, Elahe; Mosstafakhani, Parasto; Taheri, Kamal; Shahoie, Saber; Khodamoradpour, Mehran

    2009-10-15

    The evaluation of a hazardous waste disposal site is a complicated process because it requires data from diverse social and environmental fields. These data often involve processing of a significant amount of spatial information, for which GIS is an important tool in land use suitability analysis. This paper presents a multi-criteria decision analysis alongside a geospatial analysis for the selection of hazardous waste landfill sites in Kurdistan Province, western Iran. The study employs a two-stage analysis to provide a spatial decision support system for hazardous waste management in a typically underdeveloped region. GIS was used to perform an initial screening that eliminates unsuitable land, followed by a multi-criteria decision analysis (MCDA) to identify the most suitable sites using information provided by the regional experts with reference to newly chosen criteria. Using 21 exclusionary criteria as input layers, masked maps were prepared; various intermediate analysis map layers were then created and combined into a final overlay map representing candidate areas for hazardous waste landfill sites. To evaluate the different landfill sites produced by the overlay, a landfill suitability index system was developed, representing the cumulative effects of the relative importance (weights) and suitability values of 14 non-exclusionary criteria, including several criteria resulting from field observation. Using this suitability index, 15 different sites were visited, and based on the numerical evaluation provided by the MCDA the most suitable sites were determined.
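The suitability-index step reduces to a weighted sum over criterion scores. The sketch below uses three hypothetical criteria and weights, not the study's 14, to show the mechanics:

```python
# Weighted-sum suitability index sketch (criteria, scores, and weights hypothetical).
def suitability(scores, weights):
    """Weight-normalized weighted sum of criterion scores in [0, 1]."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# e.g. (distance to groundwater, slope, land cost), each scored 0..1, weights 3:1:2
site_a = suitability([0.9, 0.6, 0.8], [3, 1, 2])
site_b = suitability([0.5, 0.9, 0.7], [3, 1, 2])
# the site with the higher index is the more suitable candidate
```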

  10. Dynamic analysis of the urban-based low-carbon policy using system dynamics: Focused on housing and green space

    SciTech Connect (OSTI)

    Hong, Taehoon; Kim, Jimin; Jeong, Kwangbok; Koo, Choongwan

    2015-02-09

    To systematically manage the energy consumption of existing buildings, the government has to enforce greenhouse gas reduction policies. However, most of the policies are not properly executed because they do not consider various factors from the urban level perspective. Therefore, this study aimed to conduct a dynamic analysis of an urban-based low-carbon policy using system dynamics, with a specific focus on housing and green space. This study was conducted in the following steps: (i) establishing the variables of urban-based greenhouse gases (GHGs) emissions; (ii) creating a stock/flow diagram of urban-based GHGs emissions; (iii) conducting an information analysis using the system dynamics; and (iv) proposing the urban-based low-carbon policy. If a combined energy policy that uses the housing sector (30%) and the green space sector (30%) at the same time is implemented, 2020 CO{sub 2} emissions will be 7.23 million tons (i.e., 30.48% below 2020 business-as-usual), achieving the national carbon emissions reduction target (26.9%). The results of this study could contribute to managing and improving the fundamentals of the urban-based low-carbon policies to reduce greenhouse gas emissions.

  11. COMMERCIALIZATION OF AN ATMOSPHERIC IRON-BASED CDCL PROCESS FOR POWER PRODUCTION. PHASE I: TECHNOECONOMIC ANALYSIS

    SciTech Connect (OSTI)

    Vargas, Luis

    2013-11-01

    Coal Direct Chemical Looping (CDCL) is an advanced oxy-combustion technology that has the potential to enable substantial reductions in the cost and energy penalty associated with carbon dioxide (CO2) capture from coal-fired power plants. Through collaborative efforts, the Babcock & Wilcox Power Generation Group (B&W) and The Ohio State University (OSU) developed a conceptual design for a 550 MWe (net) supercritical CDCL power plant with greater than 90% CO2 capture and compression. Process simulations were completed to enable an initial assessment of its technical performance. A cost estimate was developed following DOE’s guidelines as outlined in NETL’s report “Quality Guidelines for Energy System Studies: Cost Estimation Methodology for NETL Assessments of Power Plant Performance” (2011/1455). The cost of electricity for the CDCL plant, without CO2 transportation and storage costs, is $102.67 per MWh, which corresponds to a 26.8% increase in cost of electricity (COE) compared to an air-fired pulverized-coal supercritical power plant. The cost of electricity depends strongly on the total plant cost and the cost of the oxygen carrier particles. The CDCL process could achieve further savings by increasing the performance of the particles and reducing the plant size. During the techno-economic analysis, the team identified technology and engineering gaps that need to be closed to bring the technology to commercialization. The technology gaps fall in five critical areas: (i) moving bed reducer reactor, (ii) fluidized bed combustor, (iii) particle riser, (iv) oxygen-carrier particle properties, and (v) process operation. The key technology gaps are related to particle performance, particle manufacturing cost, and the operation of the reducer reactor. These technology gaps are to be addressed during Phase II of the project.
The project team is proposing that additional lab testing be completed on the particle and that a 3 MWth pilot facility be built to evaluate the reducer reactor performance, among other aspects of the technology. A Phase II proposal was prepared and submitted to DOE, in which the project team proposed a three-year program. Year 1 includes lab testing and particle development work aimed at improving the chemical and mechanical properties of the oxygen carrier particle; in parallel, B&W will design the 3 MWth pilot plant. Any improvements to the particle performance discovered in Year 1 that would impact the design of the pilot will be incorporated into the final design. Year 2 will focus on procurement of materials and equipment, and construction of the pilot plant. Year 3 will include commissioning, start-up, and testing in the pilot. Phase I work was successfully completed, and a design and operating philosophy for a 550 MWe commercial-scale coal-direct chemical looping power plant was developed. Based on the results of the techno-economic evaluation, B&W projects that the CDCL process can achieve 96.5% CO2 capture with a

  12. Quantitative Analysis of Variability and Uncertainty in Environmental Data and Models. Volume 1. Theory and Methodology Based Upon Bootstrap Simulation

    SciTech Connect (OSTI)

    Frey, H. Christopher; Rhodes, David S.

    1999-04-30

    This is Volume 1 of a two-volume set of reports describing work conducted at North Carolina State University, sponsored by the U.S. Department of Energy under Grant Number DE-FG05-95ER30250. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two topics are reported separately in Volumes 1 and 2.
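A minimal percentile-bootstrap example of the kind of uncertainty quantification the report develops (illustrative only; the report's methodology goes further, using a two-dimensional bootstrap to separate variability from uncertainty):

```python
import random

# Percentile bootstrap for the uncertainty in a mean (toy data, fixed seed).
random.seed(1)
data = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8, 2.4]   # hypothetical measurements

n_boot = 2000
boot_means = []
for _ in range(n_boot):
    resample = [random.choice(data) for _ in data]  # sample with replacement
    boot_means.append(sum(resample) / len(resample))
boot_means.sort()

# central 95% of the bootstrap distribution as an uncertainty interval
lo = boot_means[int(0.025 * n_boot)]
hi = boot_means[int(0.975 * n_boot)]
```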

  13. Security analysis of boolean algebra based on Zhang-Wang digital signature scheme

    SciTech Connect (OSTI)

    Zheng, Jinbin

    2014-10-06

    In 2005, Zhang and Wang proposed an improved signature scheme that uses neither a one-way hash function nor message redundancy. In this paper, we show through an analysis based on boolean algebra (e.g., the bitwise exclusive-or operation) that this scheme has potential security weaknesses. By analyzing the output of an assembly program segment, we also point out that the mapping between assembly instructions and machine code is not, in fact, one-to-one, which may cause security problems unknown to the software.

  14. Evaluation of food waste disposal options by LCC analysis from the perspective of global warming: Jungnang case, South Korea

    SciTech Connect (OSTI)

    Kim, Mi-Hyung; Song, Yul-Eum; Song, Han-Byul; Kim, Jung-Wk; Hwang, Sun-Jin

    2011-09-15

    Highlights: > Various food waste disposal options were evaluated from the perspective of global warming. > Costs of the options were compared using life cycle assessment and life cycle cost analysis. > Carbon prices and valuable by-products were used to quantify environmental credits. > The benefit-cost ratio of the wet feeding scenario was the highest. - Abstract: The costs associated with eight food waste disposal options (dry feeding, wet feeding, composting, anaerobic digestion, co-digestion with sewage sludge, food waste disposer, incineration, and landfilling) were evaluated from the perspective of global warming and energy and/or resource recovery. An expanded system boundary was employed to compare by-products. Life cycle cost was analyzed through the entire disposal process, including the discharge, separate collection, transportation, treatment, and final disposal stages, all within the system boundary. Costs and benefits were estimated on an avoided-impact basis. The environmental benefit of each system per tonne of food waste managed was estimated using carbon prices for the CO{sub 2} reduction achieved through avoided impacts, as well as the prices of by-products such as animal feed, compost, and electricity. We found that the cost of landfilling was the lowest, followed by co-digestion. The benefits of the wet feeding system were the highest and those of landfilling the lowest.
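The benefit-cost comparison underlying the ranking reduces to simple per-tonne arithmetic. All dollar values below are hypothetical, chosen only to show the structure of the calculation:

```python
# LCC benefit-cost sketch per tonne of food waste (all values hypothetical).
costs = {                      # $/tonne across the disposal stages
    "collection": 15.0,
    "treatment": 40.0,
    "final_disposal": 5.0,
}
benefits = {                   # $/tonne credited via avoided impacts
    "by_products": 30.0,       # e.g. animal feed, compost, or electricity sold
    "co2_credit": 10.0,        # carbon price times avoided CO2 emissions
}

bcr = sum(benefits.values()) / sum(costs.values())
# options are ranked by bcr; a ratio above 1 means benefits exceed costs
```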

  15. Analysis of the multigroup model for muon tomography based threat detection

    SciTech Connect (OSTI)

    Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.

    2014-02-14

    We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
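The ROC comparison can be sketched from first principles: sweep a threshold over detection scores for scenes with and without the target, plot true-positive rate against false-positive rate, and compare areas under the curves. The scores below are toy numbers, not the experiment's data:

```python
# ROC-curve sketch (toy density-signal scores, not the experiment's data).
signal = [2.0, 2.5, 3.0, 3.5, 4.0]      # scores for scenes containing the cube
background = [0.5, 1.0, 1.5, 2.0, 2.5]  # scores for scenes without it

def roc_points(signal, background):
    """(FPR, TPR) points as the detection threshold sweeps from high to low."""
    pts = [(0.0, 0.0)]
    for t in sorted(set(signal + background), reverse=True):
        tpr = sum(s >= t for s in signal) / len(signal)
        fpr = sum(b >= t for b in background) / len(background)
        pts.append((fpr, tpr))
    return pts

def auc(pts):
    # trapezoidal area under the ROC curve; 0.5 = chance, 1.0 = perfect
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

pts = roc_points(signal, background)
auc_val = auc(pts)   # 0.92 for these toy scores: good but imperfect separation
```

A better scoring statistic (such as the multi-group fit found superior above) shifts the curve toward the top-left corner and raises the area.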

  16. DRIVE Analysis Tool Generates Custom Vehicle Drive Cycles Based on Real-World Data (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2013-04-01

    This fact sheet from the National Renewable Energy Laboratory describes the Drive-Cycle Rapid Investigation, Visualization, and Evaluation (DRIVE) analysis tool, which uses GPS and controller area network data to characterize vehicle operation and produce custom vehicle drive cycles, analyzing thousands of hours of data in a matter of minutes.

  17. Control Limits for Building Energy End Use Based on Engineering Judgment, Frequency Analysis, and Quantile Regression

    SciTech Connect (OSTI)

    Henze, G. P.; Pless, S.; Petersen, A.; Long, N.; Scambos, A. T.

    2014-02-01

    Approaches are needed to continuously characterize the energy performance of commercial buildings to allow (1) building operators to respond in a timely fashion to excess energy use; and (2) building occupants to develop energy awareness and to actively engage in reducing energy use. Energy information systems, often involving graphical dashboards, are gaining popularity in presenting energy performance metrics to occupants and operators in a (near) real-time fashion. One such energy information system, called Building Agent, has been developed at NREL and incorporates a dashboard for public display. Each building is, by virtue of its purpose, location, and construction, unique. Thus, assessing building energy performance is possible only in a relative sense, as comparison of absolute energy use out of context is not meaningful. In some cases, performance can be judged relative to the average performance of comparable buildings. However, for high-performance building designs, such as NREL's Research Support Facility (RSF) discussed in this report, relative performance is meaningful only when compared to the historical performance of the facility or to a theoretical maximum performance of the facility as estimated through detailed building energy modeling.
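A frequency-analysis flavor of the control limits described above can be sketched as an empirical quantile of the facility's own history: readings above, say, the 90th percentile of past consumption get flagged for the operator. The data and threshold below are hypothetical:

```python
# Empirical-quantile control limit sketch (daily kWh values hypothetical).
def quantile(xs, q):
    """q-th quantile with linear interpolation between order statistics."""
    s = sorted(xs)
    pos = q * (len(s) - 1)
    i = int(pos)
    frac = pos - i
    return s[i] if frac == 0 else s[i] * (1 - frac) + s[i + 1] * frac

history = [100, 105, 98, 110, 102, 97, 108, 103, 99, 111]  # past daily energy use
upper = quantile(history, 0.9)          # control limit: 90th percentile of history

# flag new readings that exceed the facility's own historical limit
flagged = [x for x in [104, 95, 118] if x > upper]
```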

  18. Five case studies of multifamily weatherization programs

    SciTech Connect (OSTI)

    Kinney, L; Wilson, T.; Lewis, G.; MacDonald, M.

    1997-12-31

    The multifamily case studies that are the subject of this report were conducted to provide a better understanding of the approaches taken by program operators in weatherizing large buildings. Because of significant variations in building construction and energy systems across the country, five states were selected based on their high levels of multifamily weatherization. This report summarizes findings from case studies of multifamily weatherization operations in five cities, conducted between January and November 1994. Each case study involved extensive interviews with the staff of weatherization subgrantees, the inspection of 4 to 12 buildings weatherized between 1991 and 1993, and an analysis of savings and costs. The case studies focused on innovative techniques that appear to work well.

  19. Comparing large scale CCS deployment potential in the USA and China: a detailed analysis based on country-specific CO2 transport & storage cost curves

    SciTech Connect (OSTI)

    Dahowski, Robert T.; Davidson, Casie L.; Dooley, James J.

    2011-04-18

    The United States and China are the two largest emitters of greenhouse gases in the world, and their projected continued growth and reliance on fossil fuels, especially coal, make them strong candidates for CCS. Previous work has revealed that both nations have over 1600 large electric utility and other industrial point CO2 sources, as well as very large CO2 storage resources on the order of 2,000 billion metric tons (Gt) of onshore storage capacity. In each case, the vast majority of this capacity is found in deep saline formations. In both the USA and China, candidate storage reservoirs are likely to be accessible by most sources, with over 80% of these large industrial CO2 sources having a CO2 storage option within just 80 km. This suggests a strong potential for CCS deployment as a meaningful option for reducing CO2 emissions from these large, vibrant economies. However, while the USA and China share many similarities with regard to the potential value of CCS, including the range of costs at which CCS may be available to most large CO2 sources in each nation, there are a number of subtler differences that can help us understand how CCS deployment may differ between the two countries as they work together - and in step with the rest of the world - to reduce greenhouse gas emissions most efficiently. This paper details the first analysis of CCS deployment costs in these two countries based on methodologically comparable CO2 source and sink inventories, economic analysis, geospatial source-sink matching, and cost curve modeling. This type of analysis provides valuable insight into the degree to which early and sustained opportunities for climate change mitigation via commercial-scale CCS are available to the two countries, and could facilitate greater collaboration in areas where those opportunities overlap.
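The "storage option within 80 km" screening step reduces to a great-circle distance test between each source and candidate sinks. The coordinates below are hypothetical placeholders, not locations from the study:

```python
import math

# Source-sink proximity screening sketch (coordinates hypothetical).
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

source = (39.8, -89.6)                   # a hypothetical point CO2 source
sinks = [(40.1, -89.9), (42.0, -93.5)]   # hypothetical candidate formations

# keep only storage options within the 80 km screening radius
near = [s for s in sinks if haversine_km(*source, *s) <= 80.0]
```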

  20. DanteR: an extensible R-based tool for quantitative analysis of -omics data

    SciTech Connect (OSTI)

    Taverner, Thomas; Karpievitch, Yuliya; Polpitiya, Ashoka D.; Brown, Joseph N.; Dabney, Alan R.; Anderson, Gordon A.; Smith, Richard D.

    2012-09-15

    Motivation: The size and complex nature of LC-MS proteomics data sets motivates development of specialized software for statistical data analysis and exploration. We present DanteR, a graphical R package that features extensive statistical and diagnostic functions for quantitative proteomics data analysis, including normalization, imputation, hypothesis testing, interactive visualization and peptide-to-protein rollup. More importantly, users can easily extend the existing functionality by including their own algorithms under the Add-On tab. Availability: DanteR and its associated user guide are available for download at http://omics.pnl.gov/software/. For Windows, a single click automatically installs DanteR along with the R programming environment. For Linux and Mac OS X, users must first install R and then follow instructions on the DanteR web site for package installation.
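The peptide-to-protein rollup that the abstract mentions can be illustrated with a minimal sketch: summarize the peptides mapping to one protein by the median of their log2 intensities. This is a simplified stand-in for DanteR's rollup options, not the package's actual algorithm; the function name and data are hypothetical.

```python
import math

def rollup_protein(peptide_intensities):
    """Roll up peptide-level intensities to one protein-level abundance
    by taking the median of log2 intensities (a simplified stand-in for
    DanteR's rollup options)."""
    logs = sorted(math.log2(x) for x in peptide_intensities)
    n = len(logs)
    mid = n // 2
    if n % 2:
        return logs[mid]
    return (logs[mid - 1] + logs[mid]) / 2

# Four hypothetical peptides mapping to one protein
print(rollup_protein([1024, 2048, 4096, 8192]))  # median of 10, 11, 12, 13 -> 11.5
```

Working in log space, as above, is common in proteomics because intensity distributions are strongly right-skewed and the median is robust to outlier peptides.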

  1. Method of assessing a lipid-related health risk based on ion mobility analysis of lipoproteins

    DOE Patents [OSTI]

    Benner, W. Henry (Danville, CA); Krauss, Ronald M. (Berkeley, CA); Blanche, Patricia J. (Berkeley, CA)

    2010-12-14

    A medical diagnostic method and instrumentation system for analyzing noncovalently bonded agglomerated biological particles is described. The method and system comprises: a method of preparation for the biological particles; an electrospray generator; an alpha particle radiation source; a differential mobility analyzer; a particle counter; and data acquisition and analysis means. The medical device is useful for the assessment of human diseases, such as cardiac disease risk and hyperlipidemia, by rapid quantitative analysis of lipoprotein fraction densities. Initially, purification procedures are described to reduce an initial blood sample to an analytical input to the instrument. The measured sizes from the analytical sample are correlated with densities, resulting in a spectrum of lipoprotein densities. The lipoprotein density distribution can then be used to characterize cardiac and other lipid-related health risks.

  2. Study of vaneless diffuser rotating stall based on two-dimensional inviscid flow analysis

    SciTech Connect (OSTI)

    Tsujimoto, Yoshinobu; Yoshida, Yoshiki [Osaka Univ., Toyonaka, Osaka (Japan); Mori, Yasumasa [Mitsubishi Motors Corp., Ohta, Tokyo (Japan)

    1996-03-01

    Rotating stalls in vaneless diffusers are studied from the viewpoint that they are fundamentally a two-dimensional inviscid flow instability under the boundary conditions of vanishing velocity disturbance at the diffuser inlet and vanishing pressure disturbance at the diffuser outlet. The linear analysis in the present report shows that the critical flow angle and the propagation velocity are functions of the diffuser radius ratio alone. It is shown that the present analysis can reproduce most of the general characteristics observed in experiments: the critical flow angle, the propagation velocity, and the velocity and pressure disturbance fields. The vanishing velocity disturbance at the diffuser inlet is attributed to the flow resistance and inertia of the impeller, which are generally strong enough to suppress velocity disturbances at the diffuser inlet. This explains the general experimental observation that vaneless diffuser rotating stall is not greatly affected by the impeller.

  3. Cogeneration: Economic and technical analysis. (Latest citations from the NTIS data base). Published Search

    SciTech Connect (OSTI)

    Not Available

    1992-05-01

    The bibliography contains citations concerning economic and technical analysis of cogeneration systems. Topics include electric power and steam generation, dual-purpose and fuel cell power plants, and on-site power generation. Tower focus power plants, solar cogeneration, biomass conversion, coal liquefaction and gasification, and refuse derived fuels are discussed. References cite feasibility studies, performance and economic evaluation, environmental impacts, and institutional factors. (Contains 250 citations and includes a subject term index and title list.)

  4. Industrial applications of accelerator-based infrared sources: Analysis using infrared microspectroscopy

    SciTech Connect (OSTI)

    Bantignies, J.L.; Fuchs, G.; Wilhelm, C.; Carr, G.L.; Dumas, P.

    1997-09-01

    Infrared microspectroscopy using a globar source is now widely employed in industrial environments for the analysis of various materials. Since synchrotron radiation is a much brighter source, an enhancement of an order of magnitude in lateral resolution can be achieved. Thus, the combination of IR microspectroscopy and synchrotron radiation provides a powerful tool, enabling sample regions only a few microns in size to be studied. This opens up the potential for analyzing small particles. Examples involving hair, bitumen, and polymers are presented.

  5. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes

    SciTech Connect (OSTI)

    Baxter, Van D

    2006-11-01

    The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to apply to buildings constructed before 2020 as well, resulting in substantial reductions in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. Although the energy efficiency of heating, ventilating, and air-conditioning (HVAC) equipment has increased substantially in recent years, new approaches are needed to continue this trend. Dramatic efficiency improvements are necessary to enable progress toward the NZEH goals, and will require a radical rethinking of opportunities to improve system performance. The large reductions in HVAC energy consumption necessary to support the NZEH goals require a systems-oriented analysis approach that characterizes each element of energy consumption, identifies alternatives, and determines the most cost-effective combination of options. In particular, HVAC equipment must be developed that addresses the range of special needs of NZEH applications in the areas of reduced HVAC and water heating energy use, humidity control, ventilation, uniform comfort, and ease of zoning.
In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH systems options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development. Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected and their annual and peak demand performance estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft{sup 2} houses--a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress towards ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL). Results were summarized in a project report, 'HVAC Equipment Design options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment,' ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. Table 1 summarizes the energy savings potential of the highest scoring options from the 2005 study for all five locations.

  6. Real Time Pricing as a Default or Optional Service for C&ICustomers: A Comparative Analysis of Eight Case Studies

    SciTech Connect (OSTI)

    Barbose, Galen; Goldman, Charles; Bharvirkar, Ranjit; Hopper,Nicole; Ting, Michael; Neenan, Bernie

    2005-08-01

    Demand response (DR) has been broadly recognized to be an integral component of well-functioning electricity markets, although currently underdeveloped in most regions. Among the various initiatives undertaken to remedy this deficiency, public utility commissions (PUC) and utilities have considered implementing dynamic pricing tariffs, such as real-time pricing (RTP), and other retail pricing mechanisms that communicate an incentive for electricity consumers to reduce their usage during periods of high generation supply costs or system reliability contingencies. Efforts to introduce DR into retail electricity markets confront a range of basic policy issues. First, a fundamental issue in any market context is how to organize the process for developing and implementing DR mechanisms in a manner that facilitates productive participation by affected stakeholder groups. Second, in regions with retail choice, policymakers and stakeholders face the threshold question of whether it is appropriate for utilities to offer a range of dynamic pricing tariffs and DR programs, or just ''plain vanilla'' default service. Although positions on this issue may be based primarily on principle, two empirical questions may have some bearing--namely, what level of price response can be expected through the competitive retail market, and whether establishing RTP as the default service is likely to result in an appreciable level of DR? Third, if utilities are to have a direct role in developing DR, what types of retail pricing mechanisms are most appropriate and likely to have the desired policy impact (e.g., RTP, other dynamic pricing options, DR programs, or some combination)? Given a decision to develop utility RTP tariffs, three basic implementation issues require attention. First, should it be a default or optional tariff, and for which customer classes? 
Second, what type of tariff design is most appropriate, given prevailing policy objectives, wholesale market structure, ratemaking practices and standards, and customer preferences? Third, if a primary goal for RTP implementation is to induce DR, what types of supplemental activities are warranted to support customer participation and price response (e.g., interval metering deployment, customer education, and technical assistance)?

  7. Reservoir characterization based on tracer response and rank analysis of production and injection rates

    SciTech Connect (OSTI)

    Refunjol, B.T.; Lake, L.W.

    1997-08-01

    Quantification of the spatial distribution of properties is important for many reservoir-engineering applications. But, before applying any reservoir-characterization technique, the type of problem to be tackled and the information available should be analyzed. This is important because difficulties arise in reservoirs where production records are the only information for analysis. This paper presents the results of a practical technique to determine preferential flow trends in a reservoir. The technique is a combination of reservoir geology, tracer data, and Spearman rank correlation coefficient analysis. The Spearman analysis, in particular, will prove to be important because it appears to be insightful and uses injection/production data that are prevalent in circumstances where other data are nonexistent. The technique is applied to the North Buck Draw field, Campbell County, Wyoming. This work provides guidelines to assess information about reservoir continuity in interwell regions from widely available measurements of production and injection rates at existing wells. The information gained from the application of this technique can contribute to both the daily reservoir management and the future design, control, and interpretation of subsequent projects in the reservoir, without the need for additional data.
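The Spearman rank correlation step described above can be sketched in a few lines: rank the injection-rate series at one well and the production-rate series at a candidate offset well, then correlate the ranks. A coefficient near +1 suggests a preferential flow path between the well pair. The well data below are hypothetical; this is an illustration of the statistic, not the authors' field workflow.

```python
def _ranks(values):
    """1-based ranks with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical rate fluctuations at an injector/producer pair
inj = [100, 120, 90, 150, 130]
prod = [55, 60, 50, 72, 66]
print(spearman(inj, prod))  # perfectly monotone data -> 1.0
```

Because the statistic uses only ranks, it tolerates the noisy, non-linear rate responses typical of field injection/production records.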

  8. Structure-sequence based analysis for identification of conserved regions in proteins

    DOE Patents [OSTI]

    Zemla, Adam T; Zhou, Carol E; Lam, Marisa W; Smith, Jason R; Pardes, Elizabeth

    2013-05-28

    Disclosed are computational methods, and associated hardware and software products, for scoring conservation in a protein structure based on a computationally identified family or cluster of protein structures. A method of computationally identifying a family or cluster of protein structures is also disclosed herein.

  9. GMR-based PhC biosensor: FOM analysis and experimental studies

    SciTech Connect (OSTI)

    Syamprasad, Jagadeesh; Narayanan, Roshni; Joseph, Joby; Takahashi, Hiroki; Sandhu, Adarsh; Jindal, Rajeev

    2014-02-20

    Guided-mode resonance based photonic crystal biosensors have many potential applications. In this work, we seek to improve their figure-of-merit values to an optimum level through design and fabrication techniques. A robust and low-cost alternative to current biosensors is also explored through this research.

  10. Moving beyond mass-based parameters for conductivity analysis of sulfonated polymers

    SciTech Connect (OSTI)

    Kim, Yu Seung; Pivovar, Bryan

    2009-01-01

    Proton conductivity of polymer electrolytes is critical for fuel cells and has therefore been studied in significant detail. The conductivity of sulfonated polymers has been linked to material characteristics in order to elucidate trends. Mass-based measurements of water uptake and ion exchange capacity are two of the most common material characteristics used to compare polymer electrolytes, but they have significant limitations when correlated to proton conductivity. These limitations arise in part because different polymers can have significantly different densities, and conduction happens over length scales more appropriately represented by volume measurements than by mass. Herein, we establish and review volume-related parameters that can be used to compare proton conductivity of different polymer electrolytes. Morphological effects on proton conductivity are also considered. Finally, the impact of these phenomena on designing next-generation sulfonated polymers for polymer electrolyte membrane fuel cells is discussed.
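The mass-to-volume conversion at the heart of the argument can be sketched as follows: given a mass-based ion exchange capacity (meq per gram of dry polymer), a dry density, and a water uptake, estimate the acid concentration per unit of swollen membrane volume. The additive-volume assumption and the example values are illustrative, not the authors' exact formulation.

```python
def volumetric_iec(iec_meq_per_g, dry_density_g_cm3, water_uptake_g_per_g):
    """Convert a mass-based ion exchange capacity (meq/g dry polymer)
    into an acid concentration per swollen volume (meq/cm^3), assuming
    additive volumes of polymer and absorbed water (~1 g/cm^3).
    Illustrative sketch only."""
    swollen_volume_per_g = 1.0 / dry_density_g_cm3 + water_uptake_g_per_g
    return iec_meq_per_g / swollen_volume_per_g

# Two hypothetical polymers with identical mass-based IEC and water
# uptake but different densities yield different volumetric values:
print(volumetric_iec(1.5, 2.0, 0.3))  # denser, PFSA-like polymer
print(volumetric_iec(1.5, 1.2, 0.3))  # lighter hydrocarbon polymer
```

The denser polymer packs the same exchange sites into less volume, which is exactly why mass-based comparisons can mislead when densities differ.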

  11. Gene identification and analysis: an application of neural network-based information fusion

    SciTech Connect (OSTI)

    Matis, S.; Xu, Y.; Shah, M.B.; Mural, R.J.; Einstein, J.R.; Uberbacher, E.C.

    1996-10-01

    Identifying genes within large regions of uncharacterized DNA is a difficult undertaking and is currently the focus of many research efforts. We describe a gene localization and modeling system called GRAIL. GRAIL is a multiple-sensor, neural-network-based system. It localizes genes in anonymous DNA sequence by recognizing gene features related to protein-coding regions and splice sites, and then combines the recognized features using a neural network system. Localized coding regions are then optimally parsed into a gene model. RNA polymerase II promoters can also be predicted. Through years of extensive testing, GRAIL has consistently localized about 90 percent of the coding portions of test genes with a false positive rate of about 10 percent. A number of genes for major genetic diseases have been located through the use of GRAIL, and over 1000 research laboratories worldwide use GRAIL on a regular basis to localize genes in their newly sequenced DNA.
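The feature-combination step can be pictured as a single logistic unit fusing several sensor scores into one coding probability. This is a minimal stand-in for GRAIL's trained neural network; the weights, bias, and score values below are hypothetical.

```python
import math

def fuse(scores, weights, bias):
    """Combine sensor scores (e.g., coding-potential and splice-site
    signals) into one probability with a logistic unit -- a minimal
    stand-in for GRAIL's neural-network fusion, with made-up weights."""
    z = bias + sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))

# Three hypothetical sensor outputs for a candidate coding region
print(round(fuse([0.9, 0.8, 0.2], [2.0, 1.5, 1.0], -2.0), 3))
```

In the real system the weights are learned from labeled training sequence, and multiple such units feed a final parsing stage.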

  12. Extending PowerPack for Profiling and Analysis of High Performance Accelerator-Based Systems

    SciTech Connect (OSTI)

    Li, Bo; Chang, Hung-Ching; Song, Shuaiwen; Su, Chun-Yi; Meyer, Timmy; Mooring, John; Cameron, Kirk

    2014-12-01

    Accelerators offer a substantial increase in efficiency for high-performance systems, providing speedups for computational applications that leverage hardware support for highly parallel codes. However, the power use of some accelerators exceeds 200 watts at idle, which means their use at exascale comes with a significant increase in power at a time when we face a power ceiling of about 20 megawatts. Despite the growing dominance of accelerator-based systems in the Top500 and Green500 lists of the fastest and most efficient supercomputers, there are few detailed studies comparing the power and energy use of common accelerators. In this work, we conduct detailed experimental studies of the power usage and distribution of Xeon-Phi-based systems in comparison to NVIDIA Tesla and Sandy Bridge based systems.

  13. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    SciTech Connect (OSTI)

    Milani, Gabriele; Valente, Marco

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using a non-commercial software package based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is considerably lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial in reducing the seismic vulnerability of this kind of historical structure.

  14. Development of simplified design aids based on the results of simulation analysis

    SciTech Connect (OSTI)

    Balcomb, J.D.

    1980-01-01

    The Solar Load Ratio method for estimating the performance of passive solar heating systems is described. It is a simplified technique which is based on correlating the monthly solar savings fraction in terms of the ratio of monthly solar radiation absorbed by the building to total monthly building thermal load. The effect of differences between actual design parameters and those used to develop the correlations is estimated afterwards using sensitivity curves. The technique is fast and simple and sufficiently accurate for design purposes.
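The monthly bookkeeping behind the Solar Load Ratio method can be sketched as: compute each month's SLR (absorbed solar over thermal load), look up the solar savings fraction from a correlation, and accumulate auxiliary heating. The exponential correlation and its coefficient below are placeholders, not Balcomb's published fits, and the monthly figures are hypothetical.

```python
import math

def monthly_ssf(slr, a=0.7):
    """Illustrative solar-savings-fraction correlation: SSF rises
    toward 1 as the solar load ratio (SLR) grows. The functional form
    and coefficient are placeholders, not the published correlations."""
    return 1.0 - math.exp(-a * slr)

# Hypothetical monthly absorbed solar radiation (MJ) and building loads (MJ)
solar = [900, 1100, 1400, 1600, 1800, 1900]
load = [2000, 1700, 1400, 1000, 600, 300]

aux = 0.0
for s, l in zip(solar, load):
    ssf = monthly_ssf(s / l)  # solar load ratio for the month
    aux += l * (1.0 - ssf)    # auxiliary (non-solar) heating required

annual_ssf = 1.0 - aux / sum(load)
print(round(annual_ssf, 3))
```

Sensitivity corrections for design parameters that differ from the correlation's assumptions would then be applied to this annual figure, as the abstract describes.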

  15. POD-based analysis of combustion images in optically accessible engines

    SciTech Connect (OSTI)

    Bizon, K.; Continillo, G.; Mancaruso, E.; Merola, S.S.; Vaglieco, B.M.

    2010-04-15

    This paper reports on 2D images of combustion-related luminosity taken in two optically accessible automobile engines of the most recent generation. The results are discussed to elucidate physical phenomena in the combustion chambers. Then, proper orthogonal decomposition (POD) is applied to the acquired images. The coefficients of the orthogonal modes are then used for the analysis of cycle variability, along with data of dynamic in-cylinder pressure and rate of heat release. The advantage is that statistical analysis can be run on a small number of scalar coefficients rather than on the full data set of pixel luminosity values. Statistics of the POD coefficients provide information on cycle variations of the luminosity field. POD modes are then discriminated by means of normality tests, to separate the mean from the coherent and the incoherent parts of the fluctuation of the luminosity field, in a non-truncated representation of the data. The morphology of the fluctuation components can finally be reconstructed by grouping coherent and incoherent modes. The structure of the incoherent component of the fluctuation is consistent with the underlying turbulent field. (author)
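The decomposition step can be illustrated with the method of snapshots: build the temporal correlation matrix of the (flattened) image snapshots, extract its dominant eigenvector, and reconstruct the leading spatial mode and its per-cycle coefficients. This pure-Python sketch with toy data stands in for the full POD of luminosity images; it recovers only the first mode via power iteration.

```python
def pod_first_mode(snapshots, iters=200):
    """Method-of-snapshots POD: given snapshot vectors (e.g., flattened
    luminosity images), return the dominant spatial mode and its
    temporal coefficients, via power iteration on the snapshot
    correlation matrix."""
    m = len(snapshots)
    # temporal correlation matrix C[i][j] = <u_i, u_j> / m
    C = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j])) / m
          for j in range(m)] for i in range(m)]
    # power iteration for the dominant eigenvector of C
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # dominant spatial mode = weighted sum of snapshots, normalized
    n = len(snapshots[0])
    mode = [sum(v[i] * snapshots[i][k] for i in range(m)) for k in range(n)]
    norm = sum(x * x for x in mode) ** 0.5
    mode = [x / norm for x in mode]
    # temporal coefficients: projection of each snapshot onto the mode
    coeffs = [sum(a * b for a, b in zip(s, mode)) for s in snapshots]
    return mode, coeffs

# Toy "images": each snapshot is a scaled copy of one pattern, so the
# first mode captures all the variance and coefficients recover scales.
pattern = [1.0, 2.0, 2.0, 1.0]
snaps = [[c * p for p in pattern] for c in (1.0, 2.0, 3.0)]
mode, coeffs = pod_first_mode(snaps)
print([round(c, 3) for c in coeffs])
```

As the abstract notes, cycle-variability statistics can then be run on these few scalar coefficients instead of on every pixel of every image.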

  16. Analysis of In-Use Fuel Economy Shortfall Based on Voluntarily Reported MPG Estimates

    SciTech Connect (OSTI)

    Greene, David L; Goeltz, Rick; Hopson, Dr Janet L; Tworek, Elzbieta

    2007-01-01

    The usefulness of the Environmental Protection Agency's (EPA) passenger car and light truck fuel economy estimates has been the subject of debate for the past three decades. For the labels on new vehicles and the fuel economy information given to the public, the EPA adjusts dynamometer test results downward by 10% for the city cycle and 22% for the highway cycle to better reflect real world driving conditions. These adjustment factors were developed in 1984 and their continued validity has repeatedly been questioned. In March of 2005 the U.S. Department of Energy (DOE) and EPA's fuel economy information website, www.fueleconomy.gov, began allowing users to voluntarily share fuel economy estimates. This paper presents an initial statistical analysis of more than 3,000 estimates submitted by website users. The analysis suggests two potentially important results: (1) adjusted, combined EPA fuel economy estimates appear to be approximately unbiased estimators of the average fuel economy consumers will experience in actual driving, and (2) the EPA estimates are highly imprecise predictors of any given individual's in-use fuel economy, an approximate 95% confidence interval being +/-7 MPG. These results imply that what is needed is not less biased adjustment factors for the EPA estimates but rather more precise methods of predicting the fuel economy individual consumers will achieve in their own driving.
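The "unbiased but imprecise" distinction drawn above can be sketched numerically: for paired (user-reported, EPA-adjusted) MPG values, estimate the mean difference (bias) and an approximate 95% interval for an individual driver's deviation. The eight data pairs below are hypothetical, not drawn from the fueleconomy.gov sample.

```python
import math

def shortfall_stats(reported_mpg, epa_mpg):
    """Mean difference (bias of the EPA estimate) and an approximate
    normal-theory 95% interval for an individual's deviation.
    Sketch with hypothetical data, not the paper's actual analysis."""
    diffs = [r - e for r, e in zip(reported_mpg, epa_mpg)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

reported = [24.1, 19.5, 31.0, 22.8, 27.4, 18.9, 29.3, 25.6]
epa = [25.0, 20.0, 28.0, 24.0, 26.0, 21.0, 27.0, 26.0]
bias, interval = shortfall_stats(reported, epa)
print(round(bias, 2), [round(x, 2) for x in interval])
```

A bias near zero with a wide individual interval mirrors the paper's two findings: the adjusted estimates are roughly unbiased on average, yet poor predictors for any single driver.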

  17. Femtosecond laser ablation-based mass spectrometry. An ideal tool for stoichiometric analysis of thin films

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    LaHaye, Nicole L.; Kurian, Jose; Diwakar, Prasoon K.; Alff, Lambert; Harilal, Sivanandan S.

    2015-08-19

    An accurate and routinely available method for stoichiometric analysis of thin films is a desideratum of modern materials science where a material’s properties depend sensitively on elemental composition. We thoroughly investigated femtosecond laser ablation-inductively coupled plasma-mass spectrometry (fs-LA-ICP-MS) as an analytical technique for determination of the stoichiometry of thin films down to the nanometer scale. The use of femtosecond laser ablation allows for precise removal of material with high spatial and depth resolution that can be coupled to an ICP-MS to obtain elemental and isotopic information. We used molecular beam epitaxy-grown thin films of LaPd(x)Sb2 and T´-La2CuO4 to demonstrate the capacity of fs-LA-ICP-MS for stoichiometric analysis and the spatial and depth resolution of the technique. Here we demonstrate that the stoichiometric information of thin films with a thickness of ~10 nm or lower can be determined. Furthermore, our results indicate that fs-LA-ICP-MS provides precise information on the thin film-substrate interface and is able to detect the interdiffusion of cations.

  18. Femtosecond laser ablation-based mass spectrometry. An ideal tool for stoichiometric analysis of thin films

    SciTech Connect (OSTI)

    LaHaye, Nicole L.; Kurian, Jose; Diwakar, Prasoon K.; Alff, Lambert; Harilal, Sivanandan S.

    2015-08-19

    An accurate and routinely available method for stoichiometric analysis of thin films is a desideratum of modern materials science where a material’s properties depend sensitively on elemental composition. We thoroughly investigated femtosecond laser ablation-inductively coupled plasma-mass spectrometry (fs-LA-ICP-MS) as an analytical technique for determination of the stoichiometry of thin films down to the nanometer scale. The use of femtosecond laser ablation allows for precise removal of material with high spatial and depth resolution that can be coupled to an ICP-MS to obtain elemental and isotopic information. We used molecular beam epitaxy-grown thin films of LaPd(x)Sb2 and T´-La2CuO4 to demonstrate the capacity of fs-LA-ICP-MS for stoichiometric analysis and the spatial and depth resolution of the technique. Here we demonstrate that the stoichiometric information of thin films with a thickness of ~10 nm or lower can be determined. Furthermore, our results indicate that fs-LA-ICP-MS provides precise information on the thin film-substrate interface and is able to detect the interdiffusion of cations.

  19. Experimental and numerical analysis of metal leaching from fly ash-amended highway bases

    SciTech Connect (OSTI)

    Cetin, Bora; Aydilek, Ahmet H.; Li, Lin

    2012-05-15

    Highlights: (1) This study evaluates the leaching potential of fly ash-lime mixed soils. (2) The objective is met with experimental and numerical analysis. (3) Zn leaching decreases with increasing fly ash content, while Ba, B, and Cu leaching increases. (4) A decrease in lime content promoted leaching of Ba, B, and Cu, while Zn leaching increases. (5) Numerical analysis predicted lower field metal concentrations. - Abstract: A study was conducted to evaluate the leaching potential of unpaved road materials (URM) mixed with lime-activated high-carbon fly ashes and to evaluate the groundwater impacts of barium, boron, copper, and zinc leaching. This objective was met by a combination of batch water leach tests, column leach tests, and computer modeling. The laboratory tests were conducted on soil alone, fly ash alone, and URM-fly ash-lime kiln dust mixtures. The results indicated that an increase in fly ash and lime content has significant effects on the leaching behavior of heavy metals from URM-fly ash mixtures. An increase in fly ash content and a decrease in lime content promoted leaching of Ba, B and Cu, whereas Zn leaching was primarily affected by the fly ash content. Numerically predicted field metal concentrations were significantly lower than the peak metal concentrations obtained in laboratory column leach tests, and field concentrations decreased with time and distance due to dispersion in the soil vadose zone.

  20. First principles analysis of lattice dynamics for Fe-based superconductors and entropically-stabilized phases

    SciTech Connect (OSTI)

    Hahn, Steven

    2012-07-20

    Modern calculations are becoming an essential, complementary tool to inelastic x-ray scattering studies, where x-rays are scattered inelastically to resolve meV phonons. Calculations of the inelastic structure factor for any value of Q assist in both planning the experiment and analyzing the results. Moreover, differences between the measured data and theoretical calculations help identify important new physics driving the properties of novel correlated systems. We have used such calculations to measure the phonon dispersion and elastic constants of several iron pnictide superconductors better and more efficiently. This dissertation describes calculations and measurements at room temperature in the tetragonal phase of CaFe{sub 2}As{sub 2} and LaFeAsO. In both cases, spin-polarized calculations imposing the antiferromagnetic order present in the low-temperature orthorhombic phase dramatically improve the agreement between theory and experiment. This is discussed in terms of the strong antiferromagnetic correlations that are known to persist in the tetragonal phase. In addition, we discuss a relatively new approach called self-consistent ab initio lattice dynamics (SCAILD), which goes beyond the harmonic approximation to include phonon-phonon interactions and produce a temperature-dependent phonon dispersion. We used this technique to study the HCP to BCC transition in beryllium.

  1. Exposure Based Health Issues Project Report: Phase I of High Level Tank Operations, Retrieval, Pretreatment, and Vitrification Exposure Based Health Issues Analysis

    SciTech Connect (OSTI)

    Stenner, Robert D.; Bowers, Harold N.; Kenoyer, Judson L.; Strenge, Dennis L.; Brady, William H.; Ladue, Buffi; Samuels, Joseph K.

    2001-11-30

    The Department of Energy (DOE) has the responsibility to understand the ''big picture'' of worker health and safety which includes fully recognizing the vulnerabilities and associated programs necessary to protect workers at the various DOE sites across the complex. Exposure analysis and medical surveillance are key aspects for understanding this big picture, as is understanding current health and safety practices and how they may need to change to relate to future health and safety management needs. The exposure-based health issues project was initiated to assemble the components necessary to understand potential exposure situations and their medical surveillance and clinical aspects. Phase I focused only on current Hanford tank farm operations and serves as a starting point for the overall project. It is also anticipated that once the pilot is fully developed for Hanford HLW (i.e., current operations, retrieval, pretreatment, vitrification, and disposal), the process and analysis methods developed will be available and applicable for other DOE operations and sites. The purpose of this Phase I project report is to present the health impact information collected regarding ongoing tank waste maintenance operations, show the various aspects of health and safety involved in protecting workers, introduce the reader to the kinds of information that will need to be analyzed in order to effectively manage worker safety.

  2. Solar Reserve Methodology for Renewable Energy Integration Studies Based on Sub-Hourly Variability Analysis: Preprint

    SciTech Connect (OSTI)

    Ibanez, E.; Brinkman, G.; Hummon, M.; Lew, D.

    2012-08-01

    Increasing penetrations of wind and solar energy are raising concerns among electric system operators because of the variability and uncertainty associated with these power sources. Previous work focused on the quantification of reserves for systems with wind power. This paper presents a new methodology that allows the determination of the reserves necessary for high penetrations of photovoltaic (PV) power and compares it to the wind-based methodology. The solar reserve methodology is applied to Phase 2 of the Western Wind and Solar Integration Study. A summary of the results is included.
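One simplified notion of a variability-based reserve, in the spirit of the methodology described, is to size the up-reserve as a high percentile of the downward sub-hourly PV ramps. This sketch is a generic stand-in for the study's actual method; the 10-minute output profile is hypothetical.

```python
def reserve_requirement(pv_output, coverage=0.95):
    """Size an up-reserve as the 'coverage' percentile of downward
    sub-hourly PV ramps (drops between consecutive intervals).
    A simplified stand-in for the study's variability-based method."""
    drops = sorted(max(0.0, a - b) for a, b in zip(pv_output, pv_output[1:]))
    idx = min(len(drops) - 1, int(coverage * len(drops)))
    return drops[idx]

# Hypothetical 10-minute PV output (MW) over two hours
pv = [0, 12, 25, 31, 18, 29, 35, 22, 30, 14, 6, 0]
print(reserve_requirement(pv))  # MW of up-reserve covering 95% of drops
```

In practice the percentile would be computed over conditioned bins (e.g., by clear-sky index and time of day) rather than over a single short window.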

  3. Model-Based Analysis of the Role of Biological, Hydrological and Geochemical Factors Affecting Uranium Bioremediation

    SciTech Connect (OSTI)

    Zhao, Jiao; Scheibe, Timothy D.; Mahadevan, Radhakrishnan

    2011-01-24

    Uranium contamination is a serious concern at several sites, motivating the development of novel treatment strategies such as the Geobacter-mediated reductive immobilization of uranium. However, this bioremediation strategy has not yet been optimized for sustained uranium removal. While several reactive-transport models have been developed to represent Geobacter-mediated bioremediation of uranium, these models often lack a detailed quantitative description of the microbial processes (e.g., biomass build-up in both groundwater and sediments, the electron transport system, etc.) and the interaction between biogeochemical and hydrological processes. In this study, a novel multi-scale model was developed by integrating our recent model of the electron capacitance of Geobacter (Zhao et al., 2010) with a comprehensive simulator of coupled fluid flow, hydrologic transport, heat transfer, and biogeochemical reactions. This mechanistic reactive-transport model accurately reproduces the experimental data for the bioremediation of uranium with acetate amendment. We subsequently performed global sensitivity analysis with the reactive-transport model in order to identify the main sources of prediction uncertainty caused by synergistic effects of biological, geochemical, and hydrological processes. The proposed approach successfully captured significant contributing factors across time and space, thereby improving the structure and parameterization of the comprehensive reactive-transport model. The global sensitivity analysis also provides a potentially useful tool to evaluate uranium bioremediation strategies. The simulations suggest that under difficult conditions (e.g., sites highly contaminated with U(VI) at a high migration rate of solutes), the efficiency of uranium removal can be improved by adding Geobacter species to the contaminated site (bioaugmentation) in conjunction with the addition of an electron donor (biostimulation). 
The simulations also highlight the interactive effect of initial cell concentration and flow rate on U(VI) reduction.
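
    The global sensitivity analysis step can be illustrated with a minimal sketch. The code below estimates first-order Sobol indices with the standard pick-freeze Monte Carlo estimator on a hypothetical three-input surrogate (stand-ins for initial biomass, flow rate, and a geochemical parameter); it is not the authors' reactive-transport model, only the technique named in the abstract.

```python
import numpy as np

def sobol_first_order(f, n_inputs, n_samples=20000, seed=0):
    """First-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y) via the
    pick-freeze estimator, with inputs sampled uniformly on [0, 1]."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(size=(n_samples, n_inputs))
    b = rng.uniform(size=(n_samples, n_inputs))
    ya = f(a)
    mu, var = ya.mean(), ya.var()
    s = np.empty(n_inputs)
    for i in range(n_inputs):
        ab = b.copy()
        ab[:, i] = a[:, i]  # "freeze" input i at the A-sample values
        s[i] = (np.mean(ya * f(ab)) - mu**2) / var
    return s

# hypothetical surrogate: input 0 dominates, input 1 matters, input 2 barely
def surrogate(x):
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

s = sobol_first_order(surrogate, 3)
```

    The ranking of the indices (here s[0] > s[1] > s[2]) is what identifies the dominant sources of prediction uncertainty among the factors.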

  4. Large deformation analysis of laminated composite structures by a continuum-based shell element with transverse deformation

    SciTech Connect (OSTI)

    Wung, Pey Min.

    1989-01-01

    In this work, a finite element formulation and associated computer program is developed for the transient large deformation analysis of laminated composite plate/shell structures. In order to satisfy the plate/shell surface traction boundary conditions and to obtain an accurate stress description while maintaining the low cost of the analysis, a new assumed displacement field theory is formulated by adding higher-order terms to the transverse displacement component of the first-order shear deformation theory. The laminated shell theory is formulated using the Updated Lagrangian description of a general continuum-based theory with assumptions on thickness deformation. The transverse deflection is approximated through the thickness by a quartic polynomial of the thickness coordinate. As a result, both the plate/shell surface tractions (including nonzero tangential tractions and nonzero normal pressure) and the interlaminar shear stress continuity conditions at interfaces are satisfied simultaneously. Furthermore, the rotational degrees of freedom become layer-dependent quantities, and the laminate possesses a transverse deformation capability (i.e., the normal strain is no longer zero). Analytical integration through the thickness direction is performed for both the linear and the nonlinear analysis. Resultants of the stress integrations are expressed in terms of the laminate stacking sequence. Consequently, the laminate characteristics in the normal direction can be evaluated precisely and the cost of the overall analysis is reduced. The standard Newmark method and the modified Newton-Raphson method are used for the solution of the nonlinear dynamic equilibrium equations. Finally, a variety of numerical examples are presented to demonstrate the validity and efficiency of the finite element program developed herein.
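
    The solution scheme named in the abstract (implicit Newmark stepping with Newton-Raphson equilibrium iterations) can be sketched for a single degree of freedom. The cubic (Duffing-type) restoring force below is an illustrative nonlinearity, not the laminated shell element itself.

```python
def newmark_duffing(m, c, k, k3, load, u0=0.0, v0=0.0, dt=0.01, nsteps=4000,
                    beta=0.25, gamma=0.5, tol=1e-10, maxit=25):
    """Implicit (average-acceleration) Newmark stepping with Newton-Raphson
    equilibrium iterations for m*a + c*v + k*u + k3*u**3 = load(t)."""
    u, v = u0, v0
    a = (load(0.0) - c * v - k * u - k3 * u**3) / m  # consistent initial accel.
    for n in range(nsteps):
        t1 = (n + 1) * dt
        un, vn, an = u, v, a
        for _ in range(maxit):
            # Newmark kinematic relations expressed in terms of the unknown u
            a = (u - un) / (beta * dt**2) - vn / (beta * dt) - (0.5 / beta - 1.0) * an
            v = vn + dt * ((1.0 - gamma) * an + gamma * a)
            r = m * a + c * v + k * u + k3 * u**3 - load(t1)  # residual force
            if abs(r) < tol:
                break
            kt = m / (beta * dt**2) + c * gamma / (beta * dt) + k + 3.0 * k3 * u**2
            u -= r / kt  # Newton-Raphson correction with the tangent stiffness
    return u, v, a
```

    Under a constant load and with damping, the response should settle onto the static equilibrium satisfying k*u + k3*u**3 = p, which gives a quick correctness check.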

  5. Thermodynamic analysis of interactions between Ni-based solid oxide fuel cells (SOFC) anodes and trace species in a survey of coal syngas

    SciTech Connect (OSTI)

    Andrew Martinez; Kirk Gerdes; Randall Gemmen; James Poston

    2010-03-20

    A thermodynamic analysis was conducted to characterize the effects of trace contaminants in syngas derived from coal gasification on solid oxide fuel cell (SOFC) anode material. The effluents from 15 different gasification facilities were considered to assess the impact of fuel composition on anode susceptibility to contamination. For each syngas case, the study considers the magnitude of contaminant exposure resulting from operation of a warm gas cleanup unit at two different temperatures and operation of a nickel-based SOFC at three different temperatures. The contaminant elements arsenic (As), phosphorus (P), and antimony (Sb) are predicted to be present in warm gas cleanup effluent and will interact with the nickel (Ni) components of a SOFC anode. Phosphorus is the trace element found in the largest concentration of the three contaminants and is potentially the most detrimental. Poisoning was found to depend on the composition of the syngas as well as system operating conditions. Results for all trace elements tended to show invariance with cleanup operating temperature, but results were sensitive to syngas bulk composition. Synthesis gas with high steam content tended to resist poisoning.

  6. Analysis of ancient-river systems by 3D seismic time-slice technique: A case study in northeast Malay Basin, offshore Terengganu, Malaysia

    SciTech Connect (OSTI)

    Sulaiman, Noorzamzarina; Hamzah, Umar; Samsudin, Abdul Rahim

    2014-09-03

    Fluvial sandstones constitute one of the major clastic petroleum reservoir types in many sedimentary basins around the world. This study is based on the analysis of high-resolution, shallow (seabed to 500 m depth) 3D seismic data, from which three-dimensional (3D) time slices were generated that provide exceptional imaging of the geometry, dimensions, and temporal and spatial distribution of fluvial channels. The study area is in the northeast of the Malay Basin, about 280 km offshore east of Terengganu. The Malay Basin comprises a thick (> 8 km), rift to post-rift Oligo-Miocene to Pliocene basin-fill. The youngest (Miocene to Pliocene), post-rift succession is dominated by a thick (1–5 km), cyclic succession of coastal plain and coastal deposits, which accumulated in a humid-tropical climatic setting. This study focuses on the Pleistocene to Recent (500 m thick) succession, which comprises a range of seismic facies identified from two-dimensional (2D) seismic sections, mainly reflecting changes in fluvial channel style and river architecture. The succession has been divided into four seismic units (Units S1-S4), bounded by basin-wide stratal surfaces. Two types of boundaries have been identified: 1) a boundary defined by a regionally extensive erosion surface at the base of a prominent incised valley (S3 and S4); and 2) a sequence boundary defined by more weakly incised, straight and low-sinuosity channels, interpreted as lowstand alluvial bypass channel systems (S1 and S2). Each unit displays a predictable vertical change of channel pattern and scale, with wide low-sinuosity channels at the base passing gradationally upwards into narrow high-sinuosity channels at the top. The wide variation in channel style and size is interpreted to be controlled mainly by sea-level fluctuations on the broad, flat Sundaland Platform.

  7. Case Studies

    Broader source: Energy.gov [DOE]

    The following case studies are examples of integrating renewable energy into Federal new construction and major renovation projects. Additional renewable energy case studies are also available.

  8. Method And Apparatus For Two Dimensional Surface Property Analysis Based On Boundary Measurement

    DOE Patents [OSTI]

    Richardson, John G. (Idaho Falls, ID)

    2005-11-15

    An apparatus and method for determining properties of a conductive film is disclosed. A plurality of probe locations selected around a periphery of the conductive film define a plurality of measurement lines between each probe location and all other probe locations. Electrical resistance may be measured along each of the measurement lines. A lumped parameter model may be developed based on the measured values of electrical resistance. The lumped parameter model may be used to estimate resistivity at one or more selected locations encompassed by the plurality of probe locations. The resistivity may be extrapolated to other physical properties if the conductive film exhibits a correlation between resistivity and those properties. A profile of the conductive film may be developed by determining resistivity at a plurality of locations. The conductive film may be applied to a structure such that resistivity may be estimated and profiled for the structure's surface.
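
    The lumped parameter model can be given a concrete (hypothetical) form as a resistor network: the boundary-to-boundary resistances predicted by a candidate network are the effective resistances of its graph Laplacian, which can then be compared against the measured values. This is a sketch of the forward model only, not the patented estimation procedure.

```python
import numpy as np

def effective_resistance(conductance):
    """Pairwise effective resistances of a lumped resistor network.
    conductance[i, j] is the branch conductance (1/R) between nodes i and j.
    Uses the identity R_ij = L+_ii + L+_jj - 2*L+_ij, where L+ is the
    Moore-Penrose pseudoinverse of the graph Laplacian."""
    g = np.asarray(conductance, dtype=float)
    lap = np.diag(g.sum(axis=1)) - g   # graph Laplacian of the network
    linv = np.linalg.pinv(lap)
    d = np.diag(linv)
    return d[:, None] + d[None, :] - 2.0 * linv
```

    Fitting the branch conductances so that the model's effective resistances match the measured boundary resistances yields the interior resistivity estimate described above.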

  9. Review and model-based analysis of factors influencing soil carbon sequestration beneath switchgrass (Panicum virgatum)

    SciTech Connect (OSTI)

    Garten Jr, Charles T [ORNL

    2012-01-01

    A simple, multi-compartment model was developed to predict soil carbon sequestration beneath switchgrass (Panicum virgatum) plantations in the southeastern United States. Soil carbon sequestration is an important component of sustainable switchgrass production for bioenergy because soil organic matter promotes water retention, nutrient supply, and soil properties that minimize erosion. A literature review was included for the purpose of model parameterization, and five model-based experiments were conducted to predict how changes in environment (temperature) or crop management (cultivar, fertilization, and harvest efficiency) might affect soil carbon storage and nitrogen losses. Predictions of soil carbon sequestration were most sensitive to changes in annual biomass production, the ratio of belowground to aboveground biomass production, and temperature. Predictions of ecosystem nitrogen loss were most sensitive to changes in annual biomass production, the soil C/N ratio, and nitrogen remobilization efficiency (i.e., nitrogen cycling within the plant). The model-based experiments indicated that 1) soil carbon sequestration can be highly site specific, depending on initial soil carbon stocks, temperature, and the amount of annual nitrogen fertilization; 2) response curves describing switchgrass yield as a function of annual nitrogen fertilization were important to model predictions; 3) plant improvements leading to greater belowground partitioning of biomass could increase soil carbon sequestration; 4) improvements in harvest efficiency had no indicated effect on soil carbon and nitrogen but improved cumulative biomass yield; and 5) plant improvements that reduce organic matter decomposition rates could also increase soil carbon sequestration, even though the latter may not be consistent with desired improvements in plant tissue chemistry to maximize yields of cellulosic ethanol.
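
    The multi-compartment idea can be illustrated with a hypothetical two-pool sketch (not the model in the report): litter input feeds a fast-turnover pool, and a humified fraction of its decay transfers to a slow pool, so steady-state stocks are input divided by the pool's rate constant.

```python
def soil_carbon(litter_input, k_fast, k_slow, humify, years, dt=0.01):
    """Two-pool soil carbon model (a minimal illustrative sketch).
    litter_input: annual carbon input to the fast pool (e.g., Mg C/ha/yr)
    k_fast, k_slow: first-order decay constants (1/yr)
    humify: fraction of fast-pool decay transferred to the slow pool."""
    c_fast = c_slow = 0.0
    for _ in range(int(years / dt)):  # simple forward-Euler integration
        d_fast = litter_input - k_fast * c_fast
        d_slow = humify * k_fast * c_fast - k_slow * c_slow
        c_fast += dt * d_fast
        c_slow += dt * d_slow
    return c_fast, c_slow
```

    The long-run stocks approach litter_input/k_fast and humify*litter_input/k_slow, which is why the predictions above are so sensitive to annual biomass production (the input) and to temperature (which sets the decay constants).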

  10. Analysis of the environmental impact of China based on STIRPAT model

    SciTech Connect (OSTI)

    Lin Shoufu; Zhao Dingtao; Marinova, Dora

    2009-11-15

    Assuming that energy consumption is the main source of GHG emissions in China, this paper analyses the effect of population, urbanisation level, GDP per capita, industrialisation level and energy intensity on the country's environmental impact using the STIRPAT model with data for 1978-2006. The analysis shows that population has the largest potential effect on environmental impact, followed by urbanisation level, industrialisation level, GDP per capita and energy intensity. Hence, China's One Child Policy, which restrains rapid population growth, has been an effective way of reducing the country's environmental impact. However, due to the difference in growth rates, GDP per capita had a higher effect on the environmental impact, contributing to 38% of its increase (while population's contribution was at 32%). The rapid decrease in energy intensity was the main factor restraining the increase in China's environmental impact but recently it has also been rising. Against this background, the future of the country looks bleak unless a change in human behaviour towards more ecologically sensitive economic choices occurs.
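
    The STIRPAT model is conventionally estimated in logarithms, I = a P^b A^c T^d e, so that the fitted coefficients are elasticities of impact with respect to each driver. A minimal sketch with synthetic data (the series and elasticities below are invented, not the paper's 1978-2006 Chinese data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 29  # e.g., one observation per year
pop = rng.uniform(0.9, 1.3, n)   # P: population index (hypothetical)
gdp = rng.uniform(0.5, 4.0, n)   # A: affluence, GDP per capita index
ei = rng.uniform(0.3, 1.0, n)    # T: technology, energy intensity index
# synthetic impact with known elasticities b=1.2, c=0.6, d=0.9
impact = 2.0 * pop**1.2 * gdp**0.6 * ei**0.9 * np.exp(rng.normal(0.0, 0.01, n))

# ln I = ln a + b ln P + c ln A + d ln T  -- ordinary least squares
X = np.column_stack([np.ones(n), np.log(pop), np.log(gdp), np.log(ei)])
coef, *_ = np.linalg.lstsq(X, np.log(impact), rcond=None)
```

    The recovered coefficients coef[1:] approximate the elasticities used to generate the data; in the paper's analysis, the relative sizes of these elasticities are what rank population, urbanisation, industrialisation, GDP per capita, and energy intensity by potential effect.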

  11. Review and comparison of web- and disk-based tools for residentialenergy analysis

    SciTech Connect (OSTI)

    Mills, Evan

    2002-08-25

    There exist hundreds of building energy software tools, both web- and disk-based. These tools exhibit considerable range in approach and creativity, with some being highly specialized and others able to consider the building as a whole. However, users are faced with a dizzying array of choices and, often, conflicting results. The fragmentation of development and deployment efforts has hampered tool quality and market penetration. The purpose of this review is to provide information for defining the desired characteristics of residential energy tools, and to encourage future tool development that improves on current practice. This project entails (1) creating a framework for describing possible technical and functional characteristics of such tools, (2) mapping existing tools onto this framework, (3) exploring issues of tool accuracy, and (4) identifying ''best practice'' and strategic opportunities for tool design. We evaluated 50 web-based residential calculators, 21 of which we regard as ''whole-house'' tools (i.e., covering a range of end uses). Of the whole-house tools, 13 provide open-ended energy calculations, 5 normalize the results to actual costs (a.k.a. ''bill-disaggregation tools''), and 3 provide both options. Across the whole-house tools, we found a range of 5 to 58 house-descriptive features (out of 68 identified in our framework) and 2 to 41 analytical and decision-support features (55 possible). We also evaluated 15 disk-based residential calculators, six of which are whole-house tools. Of these tools, 11 provide open-ended calculations, 1 normalizes the results to actual costs, and 3 provide both options. These tools offered ranges of 18 to 58 technical features (70 possible) and 10 to 40 user- and decision-support features (56 possible). The comparison shows that such tools can employ many approaches and levels of detail.
Some tools require a relatively small number of well-considered inputs while others ask a myriad of questions and still miss key issues. The value of detail has a lot to do with the type of question(s) being asked by the user (e.g., the availability of dozens of miscellaneous appliances is immaterial for a user attempting to evaluate the potential for space-heating savings by installing a new furnace). More detail does not, according to our evaluation, automatically translate into a ''better'' or ''more accurate'' tool. Efforts to quantify and compare the ''accuracy'' of these tools are difficult at best, and prior tool-comparison studies have not undertaken this in a meaningful way. The ability to evaluate accuracy is inherently limited by the availability of measured data. Furthermore, certain tool outputs can only be measured against ''actual'' values that are themselves calculated (e.g., HVAC sizing), while others are rarely if ever available (e.g., measured energy use or savings for specific measures). Similarly challenging is to understand the sources of inaccuracies. There are many ways in which quantitative errors can occur in tools, ranging from programming errors to problems inherent in a tool's design. Due to hidden assumptions and non-variable ''defaults'', most tools cannot be fully tested across the desirable range of building configurations, operating conditions, weather locations, etc. Many factors conspire to confound performance comparisons among tools. Differences in inputs can range from weather city, to types of HVAC systems, to appliance characteristics, to occupant-driven effects such as thermostat management. Differences in results would thus no doubt emerge from an extensive comparative exercise, but the sources or implications of these differences for the purposes of accuracy evaluation or tool development would remain largely unidentifiable (especially given the paucity of technical documentation available for most tools). 
    For the tools that we tested, the predicted energy bills for a single test building ranged widely (by nearly a factor of three), and far more so at the end-use level. Most tools over-predicted energy bills and all over-predicted consumption. Variability was lower among disk-based tools, but they more significantly over-predicted actual use. The deviations (over-predictions) we observed from actual bills corresponded to up to $1400 per year (approx. 250 percent of the actual bills). For bill-disaggregation tools, wherein the results are forced to equal actual bills, the accuracy issue shifts to whether or not the total is properly attributed to the various end uses and whether savings calculations are done accurately (a challenge that demands relatively rare end-use data). Here, too, we observed a number of dubious results. Energy savings estimates automatically generated by the web-based tools varied from $46/year (5 percent of predicted use) to $625/year (52 percent of predicted use).
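
    The ''bill-disaggregation'' normalization the review refers to can be sketched as a proportional scaling of engineering end-use estimates so they sum to the actual bill. Real tools use more sophisticated reconciliation; the dollar figures below are purely illustrative.

```python
def disaggregate(end_use_estimates, actual_annual_bill):
    """Scale engineering end-use estimates so their total matches the
    actual utility bill (simple proportional reconciliation sketch)."""
    total = sum(end_use_estimates.values())
    scale = actual_annual_bill / total
    return {use: cost * scale for use, cost in end_use_estimates.items()}

# a tool over-predicting by 250 percent of the actual bill, as in the review
estimates = {"heating": 700.0, "cooling": 300.0, "appliances": 400.0}
calibrated = disaggregate(estimates, actual_annual_bill=560.0)
```

    As the review notes, forcing the total to equal the bill shifts the accuracy question to whether the attribution across end uses, here fixed by the original estimates' proportions, is itself correct.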

  12. Techno-Economic Analysis of Scalable Coal-Based Fuel Cells

    SciTech Connect (OSTI)

    Chuang, Steven S. C.

    2014-08-31

    Researchers at The University of Akron (UA) have demonstrated the technical feasibility of a laboratory coal fuel cell that can economically convert high sulfur coal into electricity with near zero negative environmental impact. Scaling up this coal fuel cell technology to the megawatt scale for the nation’s electric power supply requires two key elements: (i) developing the manufacturing technology for the components of the coal-based fuel cell, and (ii) long term testing of a kW scale fuel cell pilot plant. This project was expected to develop a scalable coal fuel cell manufacturing process through testing, demonstrating the feasibility of building a large-scale coal fuel cell power plant. We have developed a reproducible tape casting technique for the mass production of planar fuel cells. A low cost interconnect and cathode current collector material was identified, and current collection was improved. In addition, this study has demonstrated that electrochemical oxidation of carbon can take place on the Ni anode surface and that the CO and CO2 products can further react with carbon to initiate secondary reactions. One important secondary reaction is the reaction of carbon with CO2 to produce CO. We found that CO and carbon can be electrochemically oxidized simultaneously inside the anode porous structure and on the surface of the anode, producing electricity. Since CH4 produced from coal during high temperature injection of coal into the anode chamber can cause severe deactivation of the Ni anode, we studied how CH4 can interact with CO2 to produce CO in the anode chamber. The CO produced was found to inhibit coking and decrease the rate of anode deactivation. An injection system was developed to inject the solid carbon and coal fuels without bringing air into the anode chamber. Five planar fuel cells were connected in a series configuration and tested.
    Extensive studies on the planar cells and stack revealed that the planar fuel cell stack is not suitable for operation with carbon and coal fuels due to a lack of mechanical strength and difficulty in sealing. We have developed scalable manufacturing processes for planar and tubular cells. Our studies suggested that a tubular cell stack could be the only option for scaling up the coal-based fuel cell. Although the direct feeding of coal into the fuel cell can significantly simplify the fuel cell system, the durability of the fuel cell needs to be further improved before scaling up. We are developing a tubular fuel cell stack with a coal injection and a CO2 recycling unit.

  13. A High Resolution Hydrometer Phase Classifier Based on Analysis of Cloud Radar Doppler Spectra.

    SciTech Connect (OSTI)

    Luke, E.; Kollias, P.

    2007-08-06

    The lifecycle and radiative properties of clouds are highly sensitive to the phase of their hydrometeors (i.e., liquid or ice). Knowledge of cloud phase is essential for specifying the optical properties of clouds; otherwise, large errors can be introduced in the calculation of cloud radiative fluxes. Current temperature-based parameterizations of the partitioning of cloud water into liquid and ice are characterized by large uncertainty (Curry et al., 1996; Hobbs and Rangno, 1998; Intrieri et al., 2002). This is particularly important at high geographical latitudes and in temperature ranges where both liquid droplet and ice crystal phases can exist (mixed-phase cloud). The mixture of phases has a large effect on cloud radiative properties, and the parameterization of mixed-phase clouds has a large impact on climate simulations (e.g., Gregory and Morris, 1996). Furthermore, the presence of both ice and liquid affects the macroscopic properties of clouds, including their propensity to precipitate. Despite their importance, mixed-phase clouds are severely understudied compared to the arguably simpler single-phase clouds. In-situ measurements in mixed-phase clouds are hindered by aircraft icing, difficulties in distinguishing hydrometeor phase, and discrepancies in methods for deriving physical quantities (Wendisch et al. 1996, Lawson et al. 2001). Satellite-based retrievals of cloud phase at high latitudes are often hindered by the highly reflective ice-covered ground and persistent temperature inversions. From the ground, the retrieval of mixed-phase cloud properties has been the subject of extensive research over the past 20 years using polarization lidars (e.g., Sassen et al. 1990), dual radar wavelengths (e.g., Gosset and Sauvageot 1992; Sekelsky and McIntosh, 1996), and recently radar Doppler spectra (Shupe et al. 2004).
    Millimeter-wavelength radars have substantially improved our ability to observe non-precipitating clouds (Kollias et al., 2007) due to their excellent sensitivity, which enables the detection of thin cloud layers, and their ability to penetrate several non-precipitating cloud layers. However, in mixed-phase cloud conditions the observed Doppler moments are dominated by the highly reflecting ice crystals and thus cannot be used to identify the cloud phase. This limits our ability to map the spatial distribution of cloud phase and to identify the conditions under which mixed-phase clouds form.
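
    The reason Doppler spectra succeed where moments fail: a radar volume containing both slowly falling droplets and fast-falling ice produces a bimodal spectrum. A toy peak finder sketches the idea (illustrative only, not the classifier of the report):

```python
import numpy as np

def spectral_peaks(spectrum, noise_floor):
    """Indices of local maxima above the noise floor in a Doppler spectrum.
    Two or more well-separated peaks can flag a mixed-phase volume
    (slow liquid droplets plus fast-falling ice crystals)."""
    s = np.asarray(spectrum, dtype=float)
    return [i for i in range(1, len(s) - 1)
            if s[i] > noise_floor and s[i] >= s[i - 1] and s[i] > s[i + 1]]
```

    A moments-only analysis of the same bimodal spectrum would collapse it to a single reflectivity-weighted velocity dominated by the ice mode, which is exactly the limitation described above.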

  14. Origin of the Diverse Behavior of Oxygen Vacancies in ABO3 Perovskites: A Symmetry Based Analysis

    SciTech Connect (OSTI)

    Yin, W. J.; Wei, S. H.; Al-Jassim, M. M.; Yan, Y. F.

    2012-05-15

    Using band symmetry analysis and density functional theory calculations, we reveal why oxygen vacancy (V{sub O}) energy levels are shallow in some ABO{sub 3} perovskites, such as SrTiO{sub 3}, but deep in others, such as LaAlO{sub 3}. We show that this diverse behavior can be explained by the symmetry of the perovskite structure and the location (A or B site) of the metal atoms with low d orbital energies, such as Ti and La atoms. When the conduction band minimum (CBM) is an antibonding {Gamma}{sub 12} state, which is usually associated with a metal atom with low d orbital energies at the A site (e.g., LaAlO{sub 3}), the V{sub O} energy levels are deep inside the gap. Otherwise, if the CBM is the nonbonding {Gamma}{sub 25}{prime} state, which is usually associated with metal atoms with low d orbital energies at the B site (e.g., SrTiO{sub 3}), the V{sub O} energy levels are shallow and often above the CBM. The V{sub O} energy level is also deep for some uncommon ABO{sub 3} perovskite materials that possess cations with low s orbital energies or large size, together with an antibonding {Gamma}{sub 1} state CBM, such as ZnTiO{sub 3}. Our results therefore provide guidelines for designing ABO{sub 3} perovskite materials with desired functional behaviors.

  15. ANALYSIS OF QUIET-SUN INTERNETWORK MAGNETIC FIELDS BASED ON LINEAR POLARIZATION SIGNALS

    SciTech Connect (OSTI)

    Orozco Suarez, D.; Bellot Rubio, L. R.

    2012-05-20

    We present results from the analysis of Fe I 630 nm measurements of the quiet Sun taken with the spectropolarimeter of the Hinode satellite. Two data sets with noise levels of 1.2 × 10{sup -3} and 3 × 10{sup -4} are employed. We determine the distribution of field strengths and inclinations by inverting the two observations with a Milne-Eddington model atmosphere. The inversions show a predominance of weak, highly inclined fields. By means of several tests we conclude that these properties cannot be attributed to photon noise effects. To obtain the most accurate results, we focus on the 27.4% of the pixels in the second data set that have linear polarization amplitudes larger than 4.5 times the noise level. The vector magnetic field derived for these pixels is very precise because both circular and linear polarization signals are used simultaneously. The inferred field strength, inclination, and filling factor distributions agree with previous results, supporting the idea that internetwork (IN) fields are weak and very inclined, at least in about one quarter of the area occupied by the IN. These properties differ from those of network fields. The average magnetic flux density and the mean field strength derived from the 27.4% of the field of view with clear linear polarization signals are 16.3 Mx cm{sup -2} and 220 G, respectively. The ratio between the average horizontal and vertical components of the field is approximately 3.1. The IN fields do not follow an isotropic distribution of orientations.
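
    The pixel selection described (linear polarization amplitude above 4.5 times the noise level) can be sketched as a mask over Stokes Q and U profiles. Array shapes and values below are hypothetical, chosen only to illustrate the thresholding.

```python
import numpy as np

def select_pixels(stokes_q, stokes_u, sigma, k=4.5):
    """Boolean mask of pixels whose peak linear polarization amplitude
    L = sqrt(Q^2 + U^2) exceeds k times the noise level sigma.
    The last axis is assumed to be the spectral dimension."""
    lin = np.sqrt(np.asarray(stokes_q)**2 + np.asarray(stokes_u)**2)
    return lin.max(axis=-1) > k * sigma
```

    Only the selected pixels are then inverted using both circular and linear polarization, which is what makes the derived vector field precise.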

  16. Building America Special Research Project: High-R Walls Case...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Special Research Project: High-R Walls Case Study Analysis. This report considers a ...

  17. A NMR-Based Carbon-Type Analysis of Diesel Fuel Blends From Various Sources

    SciTech Connect (OSTI)

    Bays, J. Timothy; King, David L.

    2013-05-10

    In collaboration with participants of the Coordinating Research Council (CRC) Advanced Vehicle/Fuels/Lubricants (AVFL) Committee, and project AVFL-19, the characteristics of fuels from advanced and renewable sources were compared to commercial diesel fuels. The main objective of this study was to highlight similarities and differences among the fuel types, i.e. ULSD, renewables, and alternative fuels, and among fuels within the different fuel types. This report summarizes the carbon-type analysis from 1H and 13C{1H} nuclear magnetic resonance spectroscopy (NMR) of 14 diesel fuel samples. The diesel fuel samples come from diverse sources and include four commercial ultra-low sulfur diesel fuels (ULSD), one gas-to-liquid diesel fuel (GTL), six renewable diesel fuels (RD), two shale oil-derived diesel fuels, and one oil sands-derived diesel fuel. Overall, the fuels examined fall into two groups. The two shale oil-derived samples and the oil-sand-derived sample closely resemble the four commercial ultra-low sulfur diesels, with SO1 and SO2 most closely matched with ULSD1, ULSD2, and ULSD4, and OS1 most closely matched with ULSD3. As might be expected, the renewable diesel fuels, with the exception of RD3, do not resemble the ULSD fuels because of their very low aromatic content, but more closely resemble the gas-to-liquid sample (GTL) in this respect. RD3 is significantly different from the other renewable diesel fuels in that the aromatic content more closely resembles the ULSD fuels. Fused-ring aromatics are readily observable in the ULSD, SO, and OS samples, as well as RD3, and are noticeably absent in the remaining RD and GTL fuels. Finally, ULSD3 differs from the other ULSD fuels by having a significantly lower aromatic carbon content and higher cycloparaffinic carbon content. 
In addition to providing important comparative compositional information regarding the various diesel fuels, this report also provides important information about the capabilities of NMR spectroscopy for the detailed characterization and comparison of fuels and fuel blends.
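
    At its core, carbon-type analysis reduces to integrating chemical-shift windows of the 13C spectrum and normalizing by the total intensity. A toy sketch with conventional but illustrative window limits, not the calibrated procedure used in the report:

```python
def carbon_type_fractions(shifts_ppm, intensities):
    """Aromatic and aliphatic carbon fractions from a 13C spectrum,
    integrating illustrative chemical-shift windows (aromatic carbons
    roughly 100-160 ppm, aliphatic carbons roughly 0-60 ppm)."""
    aromatic = sum(i for s, i in zip(shifts_ppm, intensities) if 100 <= s <= 160)
    aliphatic = sum(i for s, i in zip(shifts_ppm, intensities) if 0 <= s < 60)
    total = sum(intensities)
    return aromatic / total, aliphatic / total
```

    It is fractions of this kind (aromatic vs. cycloparaffinic vs. paraffinic carbon) that distinguish, for example, the low-aromatic renewable diesels from the ULSD fuels in the comparison above.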

  18. FY01 Supplemental Science and Performance Analysis: Volume 1,Scientific Bases and Analyses

    SciTech Connect (OSTI)

    Bodvarsson, G.S.; Dobson, David

    2001-05-30

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S&ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S&ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S&ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  19. An Analysis Technique for Active Neutron Multiplicity Measurements Based on First Principles

    SciTech Connect (OSTI)

    Evans, Louise G; Goddard, Braden; Charlton, William S; Peerani, Paolo

    2012-08-13

    Passive neutron multiplicity counting is commonly used to quantify the total mass of plutonium in a sample without prior knowledge of the sample geometry. However, passive neutron counting is less applicable to uranium measurements due to the low spontaneous fission rates of uranium. Active neutron multiplicity measurements are therefore used to determine the {sup 235}U mass in a sample. Unfortunately, there are additional challenges to overcome for uranium measurements, such as the coupling of the active source and the uranium sample. Techniques such as the coupling method have been developed to help reduce the dependence of active uranium measurements on calibration curves, although they still require known standards of similar geometry. An advanced active neutron multiplicity measurement method is being developed by Texas A&M University, in collaboration with Los Alamos National Laboratory (LANL), in an attempt to eliminate the calibration curve requirement. This method can be used to quantify the {sup 235}U mass in a sample containing uranium without using calibration curves. Furthermore, this method is based on existing detectors and nondestructive assay (NDA) systems, such as the LANL Epithermal Neutron Multiplicity Counter (ENMC). This method uses an inexpensive boron carbide liner to shield the uranium sample from thermal and epithermal neutrons while allowing fast neutrons to reach the sample. Because the fission and absorption cross-sections of the uranium isotopes are relatively low and nearly constant at high neutron energies, fast neutrons can penetrate the sample without significant attenuation. Fast neutron interrogation therefore creates a homogeneous fission rate in the sample, allowing first-principles methods to be used to determine the {sup 235}U mass in the sample.
This paper discusses the measurement method concept and development, including measurements and simulations performed to date, as well as the potential limitations.

  20. Analysis of Hanford-based Options for Sustainable DOE Facilities on the West Coast

    SciTech Connect (OSTI)

    Warwick, William M.

    2012-06-30

    Large-scale conventional energy projects result in lower costs of energy (COE). This is true for most renewable energy projects as well. The Office of Science is interested in its facilities meeting the renewable energy mandates set by Congress and the Administration. Those facilities on the west coast include a cluster in the Bay Area of California and at Hanford in central Washington State. Land constraints at the California facilities do not permit large-scale projects. The Hanford Reservation has land and solar insolation available for a large-scale solar project, as well as access to a regional transmission system that can deliver power to facilities in California. The premise of this study is that a large-scale solar project at Hanford may be able to provide renewable energy sufficient to meet the needs of select Office of Science facilities on the west coast at a COE that is competitive with costs in California, despite the lower solar insolation values at Hanford. The study concludes that although the cost of solar projects continues to decline, estimated costs for a large-scale project at Hanford are still not competitive with avoided power costs for Office of Science facilities on the west coast. Further, although it is possible to transmit power from a solar project at Hanford to the California facilities, doing so adds transmission costs. Consequently, development of a large-scale solar project at Hanford to meet the renewable goals of Office of Science facilities on the west coast is currently uneconomic. This may change as solar costs decrease and California-based facilities face increasing costs for conventional and renewable energy produced in the state. PNNL should monitor those cost trends.

  1. Economic analysis of operating alternatives for the South Vandenberg Power Plant at Vandenberg Air Force Base, California

    SciTech Connect (OSTI)

    Daellenbach, K.K.; Dagle, J.E.; Reilly, R.W.; Shankle, S.A.

    1993-02-01

    Vandenberg Air Force Base (VAFB), located approximately 50 miles northwest of Santa Barbara, California, commissioned the Pacific Northwest Laboratory to conduct an economic analysis of operating alternatives of the South Vandenberg Power Plant (SVPP). Recent concern over SVPP operating and environmental costs prompted VAFB personnel to consider other means to support the Missile Operation Support Requirement (MOSR). The natural gas-fired SVPP was originally designed to support the Space Transportation System launch activities. With cancellation of this mission, the SVPP has been used to provide primary and backup electric power to support MOSR activities for the Space Launch Complexes. This document provides economic analysis in support of VAFB decisions about future operation of the SVPP. This analysis complied with the life-cycle cost (LCC) analytical approach detailed in 10 CFR 436, which is used in support of all Federal energy decisions. Many of the SVPP operational and environmental cost estimates were provided by VAFB staff, with additional information from vendors and engineering contractors. The LCC analysis consisted of three primary operating strategies, each with a level of service equal to or better than the current status-quo operation. These scenarios are: Status-quo operation where the SVPP provides both primary and backup MOSR power; Purchased utility power providing primary MOSR support with backup power provided by an Uninterruptible Power Supply (UPS) system. The SVPP would be used to provide power for long-duration power outages; Purchased utility power provides primary MOSR support with backup power provided by a UPS system. A new set of dedicated generators would provide backup power for long-duration power outages.

  2. Prediction of global solar irradiance based on time series analysis: Application to solar thermal power plants energy production planning

    SciTech Connect (OSTI)

    Martin, Luis; Marchante, Ruth; Cony, Marco; Zarzalejo, Luis F.; Polo, Jesus; Navarro, Ana

    2010-10-15

    Due to the strong increase in solar power generation, predictions of incoming solar energy are becoming more important. Photovoltaic and solar thermal are the main sources of electricity generation from solar energy. In the case of solar thermal plants with an energy storage system, management and operation require reliable predictions of solar irradiance with the same temporal resolution as the temporal capacity of the back-up system. Such plants can then operate like a conventional power plant and compete in the energy stock market, avoiding intermittence in electricity production. This work presents a comparison of statistical models based on time series applied to predicting half-daily values of global solar irradiance with a temporal horizon of 3 days. Half-daily values consist of hourly global solar irradiance accumulated from sunrise to solar noon and from solar noon until sunset for each day. The ground solar radiation dataset used belongs to stations of the Spanish National Weather Service (AEMet). The models tested are autoregressive, neural network, and fuzzy logic models. Because the half-daily solar irradiance time series is non-stationary, it has been necessary to transform it into two new stationary variables (clearness index and lost component), which are used as input to the predictive models. Improvement of the tested models in terms of RMSD is compared against a model based on persistence. The validation process shows that all models tested improve on persistence. The best approach to forecasting half-daily values of solar irradiance is the neural network model with the lost component as input, except at the Lerida station, where models based on the clearness index have less uncertainty because this magnitude behaves linearly and is easier for the models to simulate. (author)
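    The persistence baseline and RMSD comparison described in the abstract can be sketched as follows; the series values, function names, and the half-daily horizon handling are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def persistence_forecast(series: np.ndarray, horizon: int) -> np.ndarray:
    """Naive persistence baseline: repeat the last observed half-daily value."""
    return np.full(horizon, series[-1])

def rmsd(pred, obs) -> float:
    """Root mean square deviation between forecast and observation."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Toy half-daily clearness-index series (values are illustrative only).
kt = np.array([0.62, 0.58, 0.65, 0.61, 0.60, 0.63])
# Next 3 days = 6 half-daily observations (illustrative).
obs_future = np.array([0.59, 0.64, 0.61, 0.62, 0.60, 0.58])

baseline_error = rmsd(persistence_forecast(kt, 6), obs_future)
```

    Any candidate model (autoregressive, neural network, fuzzy) "improves persistence" in the paper's sense when its RMSD against the observations is lower than this baseline error.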

  3. Isotope Enrichment Detection by Laser Ablation - Laser Absorption Spectrometry: Automated Environmental Sampling and Laser-Based Analysis for HEU Detection

    SciTech Connect (OSTI)

    Anheier, Norman C.; Bushaw, Bruce A.

    2010-01-01

    The global expansion of nuclear power, and consequently of the uranium enrichment industry, requires the development of new safeguards technology to mitigate proliferation risks. Current enrichment monitoring instruments provide only yes/no detection of highly enriched uranium (HEU) production. More accurate accountancy measurements are typically restricted to gamma-ray and weight measurements taken in cylinder storage yards. Analysis of environmental and cylinder content samples is much more effective, but this approach requires onsite sampling, shipping, and time-consuming laboratory analysis and reporting. Given that large modern gaseous centrifuge enrichment plants (GCEPs) can quickly produce a significant quantity (SQ) of HEU, these limitations in verification suggest the need for more timely detection of potential facility misuse. The Pacific Northwest National Laboratory (PNNL) is developing an unattended safeguards instrument concept, combining continuous aerosol particulate collection with uranium isotope assay, to provide timely analysis of enrichment levels within low enriched uranium facilities. This approach is based on laser vaporization of aerosol particulate samples, followed by wavelength-tuned laser diode spectroscopy to characterize the uranium isotopic ratio through subtle differences in atomic absorption wavelengths. Environmental sampling (ES) media from an integrated aerosol collector is introduced into a small, reduced-pressure chamber, where a focused pulsed laser vaporizes material from a 10- to 20-µm-diameter spot on the surface of the sampling media. The plume of ejected material begins as a high-temperature plasma that yields ions and atoms, as well as molecules and molecular ions. We concentrate on the plume of atomic vapor that remains after the plasma has expanded and then been cooled by the surrounding cover gas.
Tunable diode lasers are directed through this plume and each isotope is detected by monitoring absorbance signals on a shot-to-shot basis. The media is translated by a micron resolution scanning system, allowing the isotope analysis to cover the entire sample surface. We also report, to the best of our knowledge, the first demonstration of laser-based isotopic measurements on individual micron-sized particles that are minor target components in a much larger heterogeneous mix of ‘background’ particles. This composition is consistent with swipe and environmental aerosol samples typically collected for safeguards ES purposes. Single-shot detection sensitivity approaching the femtogram range and relative isotope abundance uncertainty better than 10% has been demonstrated using gadolinium isotopes as surrogate materials.
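    The idea of inferring relative isotope abundance from per-isotope absorbance signals can be illustrated with a minimal sketch; the helper function and the line-strength correction are hypothetical assumptions (a simple Beer-Lambert regime), not the PNNL instrument's algorithm:

```python
def isotope_ratio(absorbance_235: float, absorbance_238: float,
                  strength_ratio: float = 1.0) -> float:
    """Relative U-235 abundance from per-isotope peak absorbances.

    Assumes absorbance is proportional to isotope number density
    (Beer-Lambert regime), with an optional correction for unequal
    line strengths (strength_ratio = S_235 / S_238). Hypothetical
    helper for illustration only.
    """
    n235 = absorbance_235 / strength_ratio
    n238 = absorbance_238
    return n235 / (n235 + n238)

# Natural uranium corresponds to roughly 0.72% U-235.
natural = isotope_ratio(0.72, 99.28)
```

    In the instrument concept, such a ratio would be accumulated shot-to-shot across the scanned sample surface rather than from a single measurement.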

  4. Economic feasibility analysis of distributed electric power generation based upon the natural gas-fired fuel cell. Final report

    SciTech Connect (OSTI)

    Not Available

    1994-03-01

    The final report summarizes the results of the Cost of Ownership Model and the circumstances under which a distributed fuel cell is economically viable. The analysis is based on a series of microcomputer models that estimate the capital and operating costs of a fuel cell central utility plant configuration. Using a survey of thermal and electrical demand profiles, the study defines a series of energy user classes. The energy user class demand requirements are entered into the central utility plant model to define the required fuel cell capacity and all supporting equipment. The central plant model includes provisions that enable the analyst to select optional plant features that are most appropriate to a fuel cell application and that are cost effective. The model permits the choice of system features that would be suitable for a large condominium complex or a residential institution such as a hotel, boarding school, or prison. Other applications are also practical; however, such applications have a higher relative demand for thermal energy, a characteristic that is well suited to a fuel cell application with its free source of hot water or steam. The analysis combines the capital and operating costs from the preceding models into a Cost of Ownership Model to compute the plant capital and operating costs as a function of capacity and principal features, and compares these estimates to the estimated operating cost of the same central plant configuration without a fuel cell.

  5. Characterization of electrically-active defects in ultraviolet light-emitting diodes with laser-based failure analysis techniques

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Miller, Mary A.; Tangyunyong, Paiboon; Cole, Edward I., Jr.

    2016-01-12

    In this study, laser-based failure analysis techniques demonstrate the ability to quickly and non-intrusively screen deep ultraviolet light-emitting diodes (LEDs) for electrically-active defects. In particular, two laser-based techniques, light-induced voltage alteration and thermally-induced voltage alteration, generate applied voltage maps (AVMs) that provide information on electrically-active defect behavior including turn-on bias, density, and spatial location. Here, multiple commercial LEDs were examined and found to have dark defect signals in the AVM indicating a site of reduced resistance or leakage through the diode. The existence of the dark defect signals in the AVM correlates strongly with an increased forward-bias leakage current. This increased leakage is not present in devices without AVM signals. Transmission electron microscopy analysis of a dark defect signal site revealed a dislocation cluster through the pn junction. The cluster included an open core dislocation. Even though LEDs with few dark AVM defect signals did not correlate strongly with power loss, direct association between increased open core dislocation densities and reduced LED device performance has been presented elsewhere [M. W. Moseley et al., J. Appl. Phys. 117, 095301 (2015)].

  6. Well-to-Wheels analysis of landfill gas-based pathways and their addition to the GREET model.

    SciTech Connect (OSTI)

    Mintz, M.; Han, J.; Wang, M.; Saricks, C.; Energy Systems

    2010-06-30

    Today, approximately 300 million standard cubic ft/day (mmscfd) of natural gas and 1600 MW of electricity are produced from the decomposition of organic waste at 519 U.S. landfills (EPA 2010a). Since landfill gas (LFG) is a renewable resource, this energy is considered renewable. When used as a vehicle fuel, compressed natural gas (CNG) produced from LFG consumes up to 185,000 Btu of fossil fuel and generates from 1.5 to 18.4 kg of carbon dioxide-equivalent (CO{sub 2}e) emissions per million Btu of fuel on a 'well-to-wheel' (WTW) basis. This compares with approximately 1.1 million Btu and 78.2 kg of CO{sub 2}e per million Btu for CNG from fossil natural gas and 1.2 million Btu and 97.5 kg of CO{sub 2}e per million Btu for petroleum gasoline. Because of the additional energy required for liquefaction, LFG-based liquefied natural gas (LNG) requires more fossil fuel (222,000-227,000 Btu/million Btu WTW) and generates more GHG emissions (approximately 22 kg CO{sub 2}e /MM Btu WTW) if grid electricity is used for the liquefaction process. However, if some of the LFG is used to generate electricity for gas cleanup and liquefaction (or compression, in the case of CNG), vehicle fuel produced from LFG can have no fossil fuel input and only minimal GHG emissions (1.5-7.7 kg CO{sub 2}e /MM Btu) on a WTW basis. Thus, LFG-based natural gas can be one of the lowest GHG-emitting fuels for light- or heavy-duty vehicles. This report discusses the size and scope of biomethane resources from landfills and the pathways by which those resources can be turned into and utilized as vehicle fuel. It includes characterizations of the LFG stream and the processes used to convert low-Btu LFG into high-Btu renewable natural gas (RNG); documents the conversion efficiencies and losses of those processes, the choice of processes modeled in GREET, and other assumptions used to construct GREET pathways; and presents GREET results by pathway stage. 
GREET estimates of well-to-pump (WTP), pump-to-wheel (PTW), and WTW energy, fossil fuel, and GHG emissions for each LFG-based pathway are then summarized and compared with similar estimates for fossil natural gas and petroleum pathways.
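    The WTW comparison above lends itself to a back-of-the-envelope calculation using the per-million-Btu GHG figures quoted in the abstract; the dictionary labels and helper function are illustrative, not part of GREET:

```python
# WTW GHG intensities (kg CO2e per million Btu of fuel) quoted in the abstract.
WTW_GHG = {
    "LFG CNG (grid-powered, upper bound)": 18.4,
    "LFG CNG/LNG (LFG-powered process)": 1.5,
    "fossil natural gas CNG": 78.2,
    "petroleum gasoline": 97.5,
}

def pct_reduction(pathway: str, baseline: str = "petroleum gasoline") -> float:
    """Percent WTW GHG reduction of a pathway relative to a baseline fuel."""
    return 100.0 * (1.0 - WTW_GHG[pathway] / WTW_GHG[baseline])
```

    On these figures, the best LFG pathway cuts WTW GHG intensity by roughly 98% relative to gasoline, which is why the report ranks LFG-based natural gas among the lowest GHG-emitting vehicle fuels.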

  7. the-schedule-based-transit-model

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Schedule-Based transit model of the Chicago Metropolitan Area Vadim Sokolov Transportation Research and Analysis Computing Center Argonne National Laboratory List of Authors ================ Vadim Sokolov Transportation Research and Analysis Computing Center Argonne National Laboratory 277 International Drive West Chicago, IL 60185 Abstract ========= Public transit systems are usually modeled using the so-called frequency-based approach, in which transit route times are defined in terms of

  8. Development and Deployment of a Short Rotation Woody Crops Harvesting System Based on a Case New Holland Forage Harvester and SRC Woody Crop Header

    SciTech Connect (OSTI)

    Eisenbies, Mark; Volk, Timothy

    2014-10-03

    Demand for bioenergy sourced from woody biomass is projected to increase; however, the expansion and rapid deployment of short rotation woody crop systems in the United States has been constrained by high production costs and sluggish market acceptance due to problems with quality and consistency from first-generation harvesting systems. The objective of this study was to evaluate the effect of crop conditions on the performance of a single-pass, cut-and-chip harvester based on a standard New Holland FR-9000 series forage harvester with a dedicated 130FB short rotation coppice header, and on the quality of the chipped material. A time motion analysis was conducted to track the movement of the machine and chipped material through the system for 153 separate loads over 10 days of a 54-ha harvest. Harvester performance was limited by either ground conditions or standing biomass. Material capacities increased linearly with standing biomass up to 40 Mg(wet) ha-1 and plateaued between 70 and 90 Mg(wet) hr-1. Moisture contents ranged from 39 to 51%, with the majority of samples between 43 and 45%. Loads produced in freezing weather (average temperature over the 10 hours preceding load production) had 4% more chips greater than 25.4 mm (P < 0.0119). Over 1.5 Mg(dry) ha-1 of potentially harvestable material (6-9% of a load) was left on site, of which half was commercially undesirable meristematic pieces. The New Holland harvesting system is a reliable and predictable platform for harvesting material over a wide range of standing biomass; performance was consistent across 14 willow cultivars.

  9. A solar thermal cooling and heating system for a building: Experimental and model based performance analysis and design

    SciTech Connect (OSTI)

    Qu, Ming; Yin, Hongxi; Archer, David H.

    2010-02-15

    A solar thermal cooling and heating system at Carnegie Mellon University was studied through its design, installation, modeling, and evaluation to address the question of how solar energy might most effectively be used in supplying energy for the operation of a building. This solar cooling and heating system incorporates 52 m{sup 2} of linear parabolic trough solar collectors; a 16 kW double-effect, water-lithium bromide (LiBr) absorption chiller; and a heat recovery heat exchanger, with their circulation pumps and control valves. It generates chilled or heated water, depending on the season, for space cooling and heating. This system is the smallest high-temperature solar cooling system in the world, and to date the only system of its kind to have operated successfully for more than one year. Performance of the system has been tested, and the measured data were used to verify system performance models developed in the TRaNsient SYstem Simulation program (TRNSYS). On the basis of the installed solar system, base case performance models were programmed; they were then modified and extended to investigate measures for improving system performance. The measures included changes in the area and orientation of the solar collectors, the inclusion of thermal storage in the system, changes in the pipe diameter and length, and various system operational control strategies. It was found that this solar thermal system could potentially supply 39% of the cooling and 20% of the heating energy for this building space in Pittsburgh, PA, if it included a properly sized storage tank and short, small-diameter connecting pipes. Guidelines for the design and operation of an efficient and effective solar cooling and heating system for a given building space have been provided. (author)

  10. Life Cost Based FMEA Manual: A Step by Step Guide to Carrying Out a Cost-based Failure Modes and Effects Analysis

    SciTech Connect (OSTI)

    Rhee, Seung; Spencer, Cherrill (Stanford U. / SLAC)

    2009-01-23

    Failure occurs when one or more of the intended functions of a product are no longer fulfilled to the customer's satisfaction. The most critical product failures are those that escape design reviews and in-house quality inspection and are found by the customer. The product may work for a while until its performance degrades to an unacceptable level, or it may never have worked at all before the customer took possession of it. Failures whose end results lead to unsafe conditions or major losses of the main function are rated high in severity. Failure Modes and Effects Analysis (FMEA) is a tool widely used in the automotive, aerospace, and electronics industries to identify, prioritize, and eliminate known potential failures, problems, and errors from systems under design, before the product is released (Stamatis, 1997). Several industrial FMEA standards, such as those published by the Society of Automotive Engineers, the US Department of Defense, and the Automotive Industry Action Group, employ the Risk Priority Number (RPN) to measure the risk and severity of failures. The RPN is the product of 3 indices: Occurrence (O), Severity (S), and Detection (D). In a traditional FMEA process, design engineers typically analyze the 'root cause' and 'end effects' of potential failures in a sub-system or component and assign penalty points through the O, S, and D values of each failure. The analysis is organized around categories called failure modes, which link the causes and effects of failures. A few actions are taken upon completing the FMEA worksheet. The RPN column generally identifies the high-risk areas. The idea of performing FMEA is to eliminate or reduce known and potential failures before they reach the customers; thus, a plan of action must be in place for the next task. Not all failures can be resolved during the product development cycle, so actions must be prioritized within the design group.
    One definition of detection difficulty (D) is how well the organization controls the development process. Another relates to the detectability of a particular failure in the product once it is in the hands of the customer. The former asks, 'What is the chance of catching the problem before we give it to the customer?' The latter asks, 'What is the chance of the customer catching the problem before it results in a catastrophic failure?' (Palady, 1995) These differing definitions confuse FMEA users when they try to determine detection difficulty. Are we trying to measure how easy it is to detect where a failure has occurred, or when it has occurred? Or are we trying to measure how easy or difficult it is to prevent failures? Ordinal scale variables are used to rank-order industries such as hotels, restaurants, and movies (note that a 4-star hotel is not necessarily twice as good as a 2-star hotel). Ordinal values preserve rank within a group of items, but the distance between the values cannot be measured, since no distance function exists. Thus, the product or sum of ordinal variables is not rank-preserving, since each parameter has a different scale. Because the RPN is a product of 3 independent ordinal variables, it can indicate that some failure types are 'worse' than others, but it gives no quantitative indication of their relative effects. To resolve the ambiguity of measuring detection difficulty and the questionable logic of multiplying 3 ordinal indices, a new methodology, Life Cost-Based FMEA, was created to overcome these shortcomings. Life Cost-Based FMEA measures failure/risk in terms of monetary cost. Cost is a universal parameter that can be easily related to severity by engineers and others.
    Thus, failure cost can be estimated in its simplest form as: Expected Failure Cost = {Sigma}{sub i=1}{sup n} p{sub i}c{sub i}, where p{sub i} is the probability of failure scenario i occurring, c{sub i} is the monetary cost associated with that failure, and n is the total number of failure scenarios. FMEA is most effective when there are inputs into it from all concerned disciplines of the product development t
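    The expected-failure-cost formula above, contrasted with the ordinal RPN it replaces, can be sketched as follows; the scenario probabilities and dollar costs are invented for illustration:

```python
def expected_failure_cost(scenarios) -> float:
    """Life Cost-Based FMEA risk measure: sum of p_i * c_i over n scenarios."""
    return sum(p * c for p, c in scenarios)

def rpn(occurrence: int, severity: int, detection: int) -> int:
    """Traditional Risk Priority Number: product of three ordinal indices."""
    return occurrence * severity * detection

# Illustrative (probability, dollar cost) pairs for two failure scenarios.
scenarios = [(0.02, 5_000.0), (0.001, 250_000.0)]
cost = expected_failure_cost(scenarios)  # roughly $350 in expected failure cost
```

    Unlike an RPN such as rpn(3, 9, 7) = 189, which is a unitless product of ordinal scores, the expected failure cost is in dollars and so is directly comparable across failure modes.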

  11. Evaluation of coal-mineral association and coal cleanability by using SEM-based automated image analysis

    SciTech Connect (OSTI)

    Straszheim, W.E.; Younkin, K.A.; Markuszewski, R.; Smith, F.J.

    1988-06-01

    A technique employing SEM-based automated image analysis (AIA) has been developed for assessing the association of mineral particles with coal, and thus the cleanability of that coal, when the characteristics of the separation process are known. Data resulting from AIA include the mineral distribution by particle size, mineral phase, and extent of association with coal. This AIA technique was applied to samples of -325 mesh (-44 µm) coal from the Indiana No. 3, Upper Freeport, and Sunnyside (UT) seams. The coals were subjected to cleaning by float-sink separations at 1.3, 1.4, 1.6, and 1.9 specific gravity and by froth flotation. For the three coals, the float-sink procedure at a given specific gravity produced different amounts of clean coal, but with similar ash content. Froth flotation removed much less ash, yielding a product ash content of ~8% for the Upper Freeport coal, regardless of recovery, while reducing the ash content to less than 5% for the other two coals. The AIA results documented significantly more association of minerals with the Upper Freeport coal, which thus led to the poor ash reduction.

  12. Data base for analysis of compositional characteristics of coal seams and macerals. Quarterly technical progress report, November-January 1981

    SciTech Connect (OSTI)

    Davis, A.; Suhr, N.H.; Spackman, W.; Painter, P.C.; Walker, P.L.; Given, P.H.

    1981-04-01

    The basic objectives of this program are, first, to understand the systematic relationships between the properties of coals, and, second, to determine the nature of the lateral and vertical variability in the properties of a single seam. Multivariate statistical analyses applied to the Coal Data Base confirm a number of known trends for coal properties. In addition, nitrogen and some components of the ash analysis bear interesting relationships to rank. The macroscopic petrography of column samples of the Lower Kittanning seam reveals a significant difference between the sample from a marine-influenced environment and those from toward the margins of the basin where conditions were non-marine. The various methods of determining the amount and mineralogy of the inorganic fraction of coals are reviewed. General trends in seam thickness, ash, sulfur, volatile matter yield, and vitrinite reflectance of the Lower Kittanning seam of western Pennsylvania are presented. Controls of sedimentation are discussed in relation to the areal variability which has been observed. Differential subsidence and paleotopography appear to have played a major role during the deposition of the coal. The same controls may have maintained some influence upon the coalification process after deposition, especially along the eastern margin of the Lower Kittanning basin.

  13. DOE Biomass RDD Review Template: Development and Deployment of a Short Rotation Woody Crops Harvesting System Based on a Case New Holland Forage Harvester and SRC Woody Crop Header

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Development and Deployment of a Short Rotation Woody Crops Harvesting System Based on a Case New Holland Forage Harvester and SRC Woody Crop Header March 25, 2015 Terrestrial Feedstocks Timothy A. Volk SUNY ESF This presentation does not contain any proprietary, confidential, or otherwise restricted information Goal Statement * Develop, test and deploy a single pass cut and chip harvester combined with a handling, transportation and storage system that is effective and efficient in a range of

  14. SU-E-T-129: Dosimetric Evaluation of the Impact of Density Correction On Dose Calculation of Breast Cancer Treatment: A Study Based On RTOG 1005 Cases

    SciTech Connect (OSTI)

    Li, J; Yu, Y

    2014-06-01

    Purpose: RTOG 1005 requires density correction in the dose calculation of breast cancer radiation treatment. The aim of the study was to evaluate the impact of density correction on the dose calculation. Methods: Eight cases were studied, which were planned on an XiO treatment planning system with pixel-by-pixel density correction using a superposition algorithm, following RTOG 1005 protocol requirements. Four were protocol Arm 1 (standard whole breast irradiation with sequential boost) cases and four were Arm 2 (hypofractionated whole breast irradiation with concurrent boost) cases. The plans were recalculated with the same monitor units without density correction. Dose calculations with and without density correction were compared. Results: Results of Arm 1 and Arm 2 cases showed similar trends in the comparison. The average differences between the calculations with and without density correction (difference = Without - With) among all the cases were: -0.82 Gy (range: -2.65 to -0.18 Gy) in breast PTV Eval D95, -0.75 Gy (range: -1.23 to 0.26 Gy) in breast PTV Eval D90, -1.00 Gy (range: -2.46 to -0.29 Gy) in lumpectomy PTV Eval D95, -0.78 Gy (range: -1.30 to 0.11 Gy) in lumpectomy PTV Eval D90, -0.43% (range: -0.95 to -0.14%) in ipsilateral lung V20, -0.81% (range: -1.62 to -0.26%) in V16, -1.95% (range: -4.13 to -0.84%) in V10, -2.64% (range: -5.55 to -1.04%) in V8, -4.19% (range: -6.92 to -1.81%) in V5, and -4.95% (range: -7.49 to -2.01%) in V4, respectively. The differences in other normal tissues were minimal. Conclusion: The effect of density correction was observed in breast target doses (an average increase of ~1 Gy in D95 and D90, compared to the calculation without density correction) and in exposed ipsilateral lung volumes in the low dose region (average increases of ~4% and ~5% in V5 and V4, respectively).

  15. PanFunPro: Bacterial Pan-Genome Analysis Based on the Functional Profiles (Seventh Annual Sequencing, Finishing, Analysis in the Future (SFAF) Meeting 2012)

    ScienceCinema (OSTI)

    Lukjancenko, Oksana [Technical University of Denmark]

    2013-01-25

    Julien Tremblay from DOE JGI presents "Evaluation of Multiplexed 16S rRNA Microbial Population Surveys Using Illumina MiSeq Platorm" at the 7th Annual Sequencing, Finishing, Analysis in the Future (SFAF) Meeting held in June, 2012 in Santa Fe, NM.

  16. Technology Solutions Case Study: Cost Analysis of Roof-Only Air Sealing and Insulation Strategies on 1-1/2 Story Homes in Cold Climates, Minneapolis, MN

    SciTech Connect (OSTI)

    2014-12-01

    This case study describes the External Thermal and Moisture Management System developed by the NorthernSTAR Building America Partnership. This system is typically used in deep energy retrofits and is a valuable approach for the roof-only portions of existing homes, particularly the 1 1/2-story home. It is effective in reducing energy loss through the building envelope, improving building durability, reducing ice dams, and providing opportunities to improve occupant comfort and health.

  17. A Ten Step Protocol and Plan for CCS Site Characterization, Based on an Analysis of the Rocky Mountain Region, USA

    SciTech Connect (OSTI)

    McPherson, Brian; Matthews, Vince

    2013-09-15

    This report presents a Ten-Step Protocol for CO2 Storage Site Characterization, the final outcome of an extensive site characterization analysis of the Rocky Mountain region, USA. These ten steps are: (1) regional assessment and data gathering; (2) identification and analysis of appropriate local sites for characterization; (3) public engagement; (4) geologic and geophysical analysis of local site(s); (5) stratigraphic well drilling and coring; (6) core analysis and interpretation with other data; (7) database assembly and static model development; (8) storage capacity assessment; (9) simulation and uncertainty assessment; (10) risk assessment. While the results detailed here are primarily germane to the Rocky Mountain region, the intent of this protocol is to be portable, or generally applicable, for CO2 storage site characterization.

  18. Fuel Cell Case Study

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Kathy Loftus Global Leader, Sustainable Engineering, Maintenance & Energy Management Whole Foods Market, Inc. Fuel Cell Case Study 2 Holistic Approach from Development to Operation WFM Energy Management Negotiation Awareness Load Shaping Engineering Refrigeration HVAC Electrical Maintenance Performance Based Retailers Operational Practices Store Design & Construction Consultants Specifications Procurement Equipment Selection Life Cycle Costing Energy & Maintenance team can feedback

  19. Analysis of CASES-99 Lidar and Turbulence Data in Support of Wind Turbine Effects: April 1, 2001 to January 31, 2003

    SciTech Connect (OSTI)

    Banta, R. M.

    2003-06-01

    The nocturnal low-level jet (LLJ) of the Great Plains of the central United States has been identified as a promising source of high-momentum wind flow for wind energy. The acceleration of the winds after sunset above the surface produces a jet profile in the wind velocity, with maximum speeds that often exceed 10 m s-1 at heights near 100 m or more. These high wind speeds are advantageous for wind energy generation. The high speeds aloft, however, also produce a region of high shear between the LLJ and the earth's surface, where the nocturnal flow is often calm or nearly so. This shear zone below the LLJ generates atmospheric waves and turbulence that can cause strong vibration in the turbine rotors. It has been suggested that these vibrations contribute to premature failures in large wind turbines, which, of course, would be a considerable disadvantage for wind energy applications. In October 1999, a field project called the Cooperative Atmosphere-Surface Exchange Study 1999 campaign, or CASES-99, was conducted in southeastern Kansas to study the nocturnal stable boundary layer. One of the instruments deployed during CASES-99 was the High-Resolution Doppler Lidar, a new scanning, remote-sensing, wind-mapping instrument.

  20. Updated laser safety & hazard analysis for the ARES laser system based on the 2007 ANSI Z136.1 standard.

    SciTech Connect (OSTI)

    Augustoni, Arnold L.

    2007-08-01

    A laser safety and hazard analysis was performed for the temperature stabilized Big Sky Laser Technology (BSLT) laser central to the ARES system based on the 2007 version of the American National Standards Institute's (ANSI) Standard Z136.1, for Safe Use of Lasers and the 2005 version of the ANSI Standard Z136.6, for Safe Use of Lasers Outdoors. The ARES laser system is a Van/Truck based mobile platform, which is used to perform laser interaction experiments and tests at various national test sites.

  1. A Complexity Science-Based Framework for Global Joint Operations Analysis to Support Force Projection: LDRD Final Report.

    SciTech Connect (OSTI)

    Lawton, Craig R.

    2015-01-01

The military is undergoing a significant transformation as it modernizes for the information age and adapts to address an emerging asymmetric threat beyond traditional Cold War era adversaries. Techniques such as traditional large-scale, joint-services war gaming analysis are no longer adequate to support program evaluation activities and mission planning analysis at the enterprise level because the operating environment is evolving too quickly. New analytical capabilities are necessary to address modernization of the Department of Defense (DoD) enterprise. This presents a significant opportunity for Sandia to support the nation at this transformational enterprise scale. Although Sandia has significant experience with engineering systems of systems (SoS) and Complex Adaptive Systems of Systems (CASoS), substantial fundamental research is required to develop modeling, simulation, and analysis capabilities at the enterprise scale. This report documents an enterprise modeling framework which will enable senior-level decision makers to better understand their enterprise and required future investments.

  2. Automated Voxel-Based Analysis of Volumetric Dynamic Contrast-Enhanced CT Data Improves Measurement of Serial Changes in Tumor Vascular Biomarkers

    SciTech Connect (OSTI)

    Coolens, Catherine; Driscoll, Brandon; Chung, Caroline; Shek, Tina; Gorjizadeh, Alborz; Ménard, Cynthia; Jaffray, David

    2015-01-01

Objectives: Development of perfusion imaging as a biomarker requires more robust methodologies for quantification of tumor physiology that allow assessment of volumetric tumor heterogeneity over time. This study proposes a parametric method for automatically analyzing perfused tissue from volumetric dynamic contrast-enhanced (DCE) computed tomography (CT) scans and assesses whether this 4-dimensional (4D) DCE approach is more robust and accurate than conventional, region-of-interest (ROI)-based CT methods in quantifying tumor perfusion, with preliminary evaluation in metastatic brain cancer. Methods and Materials: Functional parameter reproducibility and analysis of sensitivity to imaging resolution and arterial input function were evaluated in image sets acquired from a 320-slice CT with a controlled flow phantom and patients with brain metastases, whose treatments were planned for stereotactic radiation surgery and who consented to a research ethics board-approved prospective imaging biomarker study. A voxel-based temporal dynamic analysis (TDA) methodology was used at baseline, at day 7, and at day 20 after treatment. The ability to detect changes in kinetic parameter maps in clinical data sets was investigated for both 4D TDA and conventional 2D ROI-based analysis methods. Results: A total of 7 brain metastases in 3 patients were evaluated over the 3 time points. The 4D TDA method showed improved spatial efficacy and accuracy of perfusion parameters compared to ROI-based DCE analysis (P<.005), with a reproducibility error of less than 2% when tested with DCE phantom data. Clinically, changes in the transfer constant from the blood plasma into the extracellular extravascular space (Ktrans) were seen when using TDA, with substantially smaller errors than the 2D method on both day 7 post radiation surgery (±13%; P<.05) and day 20 (±12%; P<.04). Standard methods showed a decrease in Ktrans but with large uncertainty (111.6 ± 150.5%). Conclusions: Parametric voxel-based analysis of 4D DCE CT data resulted in greater accuracy and reliability in measuring changes in perfusion CT-based kinetic metrics, which have the potential to be used as biomarkers in patients with metastatic brain cancer.
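Because nonlinear kinetic fitting does not commute with spatial averaging, a voxel-wise analysis and a conventional ROI-mean analysis can return different parameter estimates even on noiseless data. The sketch below illustrates this with a hypothetical mono-exponential washout model; it is not the study's actual kinetic model, and all rates and time points are assumed values.

```python
import numpy as np

# Two voxels with different (assumed) washout rates, sampled at two times.
t1, t2 = 0.0, 5.0
k_true = np.array([0.1, 0.3])            # per-voxel decay rates (hypothetical)
c_t1 = np.exp(-k_true * t1)              # concentration at t1 for each voxel
c_t2 = np.exp(-k_true * t2)              # concentration at t2 for each voxel

# Voxel-wise (TDA-style) estimate: fit each voxel, then average the parameter.
k_voxelwise = np.mean(-np.log(c_t2 / c_t1) / (t2 - t1))

# ROI-based estimate: average the curves first, then fit one parameter.
k_roi = -np.log(c_t2.mean() / c_t1.mean()) / (t2 - t1)

print(round(k_voxelwise, 3), round(k_roi, 3))   # the two estimates disagree
```

Even in this idealized two-voxel case the ROI-mean fit is biased toward the slower-washing voxel, which is one reason voxel-based parameter maps can behave differently from ROI statistics.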

  3. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes -- Update to Include Analyses of an Economizer Option and Alternative Winter Water Heating Control Option

    SciTech Connect (OSTI)

    Baxter, Van D

    2006-12-01

The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to have application to buildings constructed before 2020 as well, resulting in substantial reductions in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. Although the energy efficiency of heating, ventilating, and air-conditioning (HVAC) equipment has increased substantially in recent years, new approaches are needed to continue this trend. Dramatic efficiency improvements are necessary to enable progress toward the NZEH goals, and will require a radical rethinking of opportunities to improve system performance. The large reductions in HVAC energy consumption necessary to support the NZEH goals require a systems-oriented analysis approach that characterizes each element of energy consumption, identifies alternatives, and determines the most cost-effective combination of options. In particular, HVAC equipment must be developed that addresses the range of special needs of NZEH applications in the areas of reduced HVAC and water heating energy use, humidity control, ventilation, uniform comfort, and ease of zoning.
In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH systems options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development. Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected and their annual and peak demand performance estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft² houses--a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress towards ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL). Results were summarized in a project report, HVAC Equipment Design Options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of an integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of legally minimum efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs. Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006). The present report is an update to that document.
Its primary purpose is to summarize results of an analysis of the potential of adding an outdoor air economizer operating mode to the IHPs to take advantage of free cooling (using outdoor air to cool the house) whenever possible. In addition, it provides further detail for an alternative winter water heating/space heating (WH/SH) control strategy briefly described in the original report and corrects some minor errors.

  4. Fuel Cell Power Model Version 2: Startup Guide, System Designs, and Case Studies. Modeling Electricity, Heat, and Hydrogen Generation from Fuel Cell-Based Distributed Energy Systems

    SciTech Connect (OSTI)

    Steward, D.; Penev, M.; Saur, G.; Becker, W.; Zuboy, J.

    2013-06-01

    This guide helps users get started with the U.S. Department of Energy/National Renewable Energy Laboratory Fuel Cell Power (FCPower) Model Version 2, which is a Microsoft Excel workbook that analyzes the technical and economic aspects of high-temperature fuel cell-based distributed energy systems with the aim of providing consistent, transparent, comparable results. This type of energy system would provide onsite-generated heat and electricity to large end users such as hospitals and office complexes. The hydrogen produced could be used for fueling vehicles or stored for later conversion to electricity.

  5. Comparing urban solid waste recycling from the viewpoint of urban metabolism based on physical input-output model: A case of Suzhou in China

    SciTech Connect (OSTI)

    Liang Sai; Zhang Tianzhu

    2012-01-15

Highlights: • Impacts of solid waste recycling on Suzhou's urban metabolism in 2015 are analyzed. • Sludge recycling for biogas is regarded as an acceptable method. • Technical levels of reusing scrap tires and food wastes should be improved. • Other fly ash utilization methods should be exploited. • Secondary wastes from reusing food wastes and sludge warrant attention. - Abstract: Investigating the impacts of urban solid waste recycling on urban metabolism contributes to sustainable urban solid waste management and urban sustainability. Using a physical input-output model and scenario analysis, the urban metabolism of Suzhou in 2015 is predicted and the impacts of four categories of solid waste recycling on urban metabolism are illustrated: scrap tire recycling, food waste recycling, fly ash recycling, and sludge recycling. Sludge recycling has positive effects on reducing all material flows; thus, sludge recycling for biogas is regarded as an acceptable method. Moreover, the technical levels of scrap tire recycling and food waste recycling should be improved to produce positive effects on reducing more material flows. Fly ash recycling for cement production has negative effects on reducing all material flows except solid wastes; thus, other fly ash utilization methods should be exploited. In addition, the utilization and treatment of secondary wastes from food waste recycling and sludge recycling warrant attention.
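The physical input-output approach behind such scenario analyses can be sketched with the standard Leontief balance: total output x satisfies x = Ax + d, so x = (I − A)⁻¹d. The coefficients and final demands below are hypothetical, purely to illustrate how a recycling-driven change in one input coefficient propagates through all material flows:

```python
import numpy as np

# A[i, j] = physical input from sector i per unit output of sector j (assumed).
A = np.array([[0.1, 0.2],
              [0.3, 0.1]])
final_demand = np.array([100.0, 50.0])   # final demand per sector (assumed)

# Total output x solves x = A @ x + d, i.e. (I - A) x = d.
x = np.linalg.solve(np.eye(2) - A, final_demand)

# Scenario: recycling lowers one input coefficient (assumed efficiency gain),
# and the reduction propagates through the whole system of material flows.
A_recycle = A.copy()
A_recycle[0, 1] = 0.1
x_recycle = np.linalg.solve(np.eye(2) - A_recycle, final_demand)
print(x, x_recycle)   # total flows shrink in the recycling scenario
```

The same structure scales to the many-sector physical tables used in urban metabolism studies; only the dimensions of A and d change.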

  6. BPO crude oil analysis data base user's guide: Methods, publications, computer access, correlations, uses, availability

    SciTech Connect (OSTI)

    Sellers, C.; Fox, B.; Paulz, J.

    1996-03-01

    The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

  7. Building America Case Study: Apartment Compartmentalization with an Aerosol-Based Sealing Process - Queens, NY; Technology Solutions for New and Existing Homes, Energy Efficiency & Renewable Energy (EERE)

    SciTech Connect (OSTI)

    2015-07-01

    Air sealing of building enclosures is a difficult and time-consuming process. Current methods in new construction require laborers to physically locate small and sometimes large holes in multiple assemblies and then manually seal each of them. The innovation demonstrated under this research study was the automated air sealing and compartmentalization of buildings through the use of an aerosolized sealant, developed by the Western Cooling Efficiency Center at University of California Davis.
    CARB sought to demonstrate this new technology application in a multifamily building in Queens, NY. The effectiveness of the sealing process was evaluated by three methods: air leakage testing of overall apartment before and after sealing, point-source testing of individual leaks, and pressure measurements in the walls of the target apartment during sealing. Aerosolized sealing was successful by several measures in this study. Many individual leaks that are labor-intensive to address separately were well sealed by the aerosol particles. In addition, many diffuse leaks that are difficult to identify and treat were also sealed. The aerosol-based sealing process resulted in an average reduction of 71% in air leakage across three apartments and an average apartment airtightness of 0.08 CFM50/SF of enclosure area.

  8. NORASCO Case Engineering Group JV | Open Energy Information

    Open Energy Info (EERE)

    NORASCO Case Engineering Group JV Jump to: navigation, search Name: NORASCO & Case Engineering Group JV Place: India Sector: Solar Product: India-based JV developer of small solar...

  9. Impact of Boost Radiation in the Treatment of Ductal Carcinoma In Situ: A Population-Based Analysis

    SciTech Connect (OSTI)

    Rakovitch, Eileen; Institute for Clinical Evaluative Sciences, Toronto, Ontario; University of Toronto, Toronto, Ontario ; Narod, Steven A.; Women’s College Research Institute, Toronto, Ontario ; Nofech-Moses, Sharon; Hanna, Wedad; University of Toronto, Toronto, Ontario ; Thiruchelvam, Deva; Saskin, Refik; Taylor, Carole; Tuck, Alan; Youngson, Bruce; Miller, Naomi; Done, Susan J.; Sengupta, Sandip; Elavathil, Leela; Henderson General Hospital, 711 Concession Street, Hamilton, Ontario ; Jani, Prashant A.; Regional Health Sciences Centre, Thunder Bay, Ontario ; Bonin, Michel; Metcalfe, Stephanie; Paszat, Lawrence; Institute for Clinical Evaluative Sciences, Toronto, Ontario; University of Toronto, Toronto, Ontario

    2013-07-01

Purpose: To report the outcomes of a population of women with ductal carcinoma in situ (DCIS) treated with breast-conserving surgery and radiation and to evaluate the independent effect of boost radiation on the development of local recurrence. Methods and Materials: All women diagnosed with DCIS and treated with breast-conserving surgery and radiation therapy in Ontario from 1994 to 2003 were identified. Treatments and outcomes were identified through administrative databases and validated by chart review. The impact of boost radiation on the development of local recurrence was determined using survival analyses. Results: We identified 1895 cases of DCIS that were treated by breast-conserving surgery and radiation therapy; 561 patients received boost radiation. The cumulative 10-year rate of local recurrence was 13% for women who received boost radiation and 12% for those who did not (P=.3). The 10-year local recurrence-free survival (LRFS) rate among women who did and who did not receive boost radiation was 88% and 87%, respectively (P=.27); 94% and 93% for invasive LRFS (P=.58); and 95% and 93% for DCIS LRFS (P=.31). On multivariable analyses, boost radiation was not associated with a lower risk of local recurrence (hazard ratio = 0.82, 95% confidence interval 0.59-1.15) (P=.25). Conclusions: Among a population of women treated with breast-conserving surgery and radiation for DCIS, additional (boost) radiation was not associated with a lower risk of local or invasive recurrence.

  10. Computational analysis of an autophagy/translation switch based on mutual inhibition of MTORC1 and ULK1

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Szymańska, Paulina; Martin, Katie R.; MacKeigan, Jeffrey P.; Hlavacek, William S.; Lipniacki, Tomasz

    2015-03-11

We constructed a mechanistic, computational model for regulation of (macro)autophagy and protein synthesis (at the level of translation). The model was formulated to study the system-level consequences of interactions among the following proteins: two key components of MTOR complex 1 (MTORC1), namely the protein kinase MTOR (mechanistic target of rapamycin) and the scaffold protein RPTOR; the autophagy-initiating protein kinase ULK1; and the multimeric energy-sensing AMP-activated protein kinase (AMPK). Inputs of the model include intrinsic AMPK kinase activity, which is taken as an adjustable surrogate parameter for cellular energy level or AMP:ATP ratio, and rapamycin dose, which controls MTORC1 activity. Outputs of the model include the phosphorylation level of the translational repressor EIF4EBP1, a substrate of MTORC1, and the phosphorylation level of AMBRA1 (activating molecule in BECN1-regulated autophagy), a substrate of ULK1 critical for autophagosome formation. The model incorporates reciprocal regulation of MTORC1 and ULK1 by AMPK, mutual inhibition of MTORC1 and ULK1, and ULK1-mediated negative feedback regulation of AMPK. Through analysis of the model, we find that these processes may be responsible, depending on conditions, for graded responses to stress inputs, for bistable switching between autophagy and protein synthesis, or for relaxation oscillations, comprising alternating periods of autophagy and protein synthesis. A sensitivity analysis indicates that the prediction of oscillatory behavior is robust to changes of the parameter values of the model. The model provides testable predictions about the behavior of the AMPK-MTORC1-ULK1 network, which plays a central role in maintaining cellular energy and nutrient homeostasis.
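The bistable switching described above arises from mutual inhibition, which can be illustrated with a generic two-variable toggle switch. This is a minimal sketch under assumed Hill-type kinetics, not the paper's actual MTORC1/ULK1 equations or parameter values:

```python
# Generic mutual-inhibition ("toggle switch") sketch: two activities x and y
# repress each other, so the system settles into one of two stable states
# depending on initial conditions. Parameters a and n are assumed.
def simulate(x0, y0, a=3.0, n=2, dt=0.01, steps=5000):
    x, y = x0, y0
    for _ in range(steps):                 # forward-Euler integration
        dx = a / (1.0 + y**n) - x          # x is produced unless y represses it
        dy = a / (1.0 + x**n) - y          # y is produced unless x represses it
        x, y = x + dt * dx, y + dt * dy
    return x, y

x1, y1 = simulate(3.0, 0.1)   # "translation-dominant" initial condition
x2, y2 = simulate(0.1, 3.0)   # "autophagy-dominant" initial condition
print((round(x1, 2), round(y1, 2)), (round(x2, 2), round(y2, 2)))
```

With these assumed parameters the two starting points converge to mirror-image steady states (one activity high, the other low), which is the qualitative signature of bistable switching between the two cellular programs.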

  11. Quantifying the Impact of Immediate Reconstruction in Postmastectomy Radiation: A Large, Dose-Volume Histogram-Based Analysis

    SciTech Connect (OSTI)

    Ohri, Nisha; Cordeiro, Peter G.; Keam, Jennifer; Ballangrud, Ase; Shi Weiji; Zhang Zhigang; Nerbun, Claire T.; Woch, Katherine M.; Stein, Nicholas F.; Zhou Ying; McCormick, Beryl; Powell, Simon N.; Ho, Alice Y.

    2012-10-01

    Purpose: To assess the impact of immediate breast reconstruction on postmastectomy radiation (PMRT) using dose-volume histogram (DVH) data. Methods and Materials: Two hundred forty-seven women underwent PMRT at our center, 196 with implant reconstruction and 51 without reconstruction. Patients with reconstruction were treated with tangential photons, and patients without reconstruction were treated with en-face electron fields and customized bolus. Twenty percent of patients received internal mammary node (IMN) treatment. The DVH data were compared between groups. Ipsilateral lung parameters included V20 (% volume receiving 20 Gy), V40 (% volume receiving 40 Gy), mean dose, and maximum dose. Heart parameters included V25 (% volume receiving 25 Gy), mean dose, and maximum dose. IMN coverage was assessed when applicable. Chest wall coverage was assessed in patients with reconstruction. Propensity-matched analysis adjusted for potential confounders of laterality and IMN treatment. Results: Reconstruction was associated with lower lung V20, mean dose, and maximum dose compared with no reconstruction (all P<.0001). These associations persisted on propensity-matched analysis (all P<.0001). Heart doses were similar between groups (P=NS). Ninety percent of patients with reconstruction had excellent chest wall coverage (D95 >98%). IMN coverage was superior in patients with reconstruction (D95 >92.0 vs 75.7%, P<.001). IMN treatment significantly increased lung and heart parameters in patients with reconstruction (all P<.05) but minimally affected those without reconstruction (all P>.05). Among IMN-treated patients, only lower lung V20 in those without reconstruction persisted (P=.022), and mean and maximum heart doses were higher than in patients without reconstruction (P=.006, P=.015, respectively). Conclusions: Implant reconstruction does not compromise the technical quality of PMRT when the IMNs are untreated. 
Treatment technique, not reconstruction, is the primary determinant of target coverage and normal tissue doses.
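The DVH parameters compared above follow directly from a structure's dose distribution. A minimal sketch with hypothetical dose values, assuming equal-volume voxels:

```python
import numpy as np

# Hypothetical dose grid (Gy) for one structure; values are illustrative only.
lung_dose = np.array([5.0, 12.0, 18.0, 22.0, 25.0, 31.0, 42.0, 8.0])

def v_dose(dose, threshold):
    """VX metric: percent of structure volume receiving at least `threshold`
    Gy (assumes all voxels have equal volume)."""
    return 100.0 * np.mean(dose >= threshold)

v20 = v_dose(lung_dose, 20.0)   # % volume receiving >= 20 Gy
v40 = v_dose(lung_dose, 40.0)   # % volume receiving >= 40 Gy
print(v20, v40, lung_dose.mean(), lung_dose.max())
```

In practice these metrics are computed per structure from the planning system's dose grid and structure masks; the toy array above just makes the definitions concrete.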

  12. Controlling Wind Turbines for Secondary Frequency Regulation: An Analysis of AGC Capabilities Under New Performance Based Compensation Policy: Preprint

    SciTech Connect (OSTI)

    Aho, J.; Pao, L. Y.; Fleming, P.; Ela, E.

    2015-02-01

As wind energy becomes a larger portion of the world's energy portfolio, there has been increased interest in having wind turbines control their active power output to provide ancillary services that support grid reliability. One of these ancillary services is the provision of frequency regulation, also referred to as secondary frequency control or automatic generation control (AGC), which is often procured through markets that have recently adopted performance-based compensation. A wind turbine with a control system developed to provide active power ancillary services can be used to provide frequency regulation services. Simulations were performed to determine AGC tracking performance at various power schedule set-points, participation levels, and wind conditions. The performance metrics used in this study are based on those used by several system operators in the US. Another metric analyzed is the damage equivalent loads (DELs) on turbine structural components, though impacts on the turbine electrical components are not considered. The results of these single-turbine simulations show that high performance scores can be achieved when sufficient wind resource is available. The capability of a wind turbine to rapidly and accurately follow power commands allows for high performance even when tracking rapidly changing AGC signals. As the turbine de-rates to meet decreased power schedule set-points, the DELs are reduced, and participation in frequency regulation has a negligible impact on these loads.
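Operator scoring formulas differ (actual metrics typically combine accuracy, delay, and precision components), so the following is only a simplified, hypothetical precision-style score: one minus the mean tracking error normalized by the mean signal magnitude. Both the formula and the values are assumptions, not the metrics used in the study:

```python
import numpy as np

def precision_score(agc_signal, turbine_response):
    """Simplified tracking score in [0, 1]: 1 means perfect signal following.
    This is an illustrative stand-in, not an actual operator formula."""
    signal = np.asarray(agc_signal)
    err = np.abs(np.asarray(turbine_response) - signal)
    return max(0.0, 1.0 - err.mean() / np.abs(signal).mean())

signal   = [1.0, 0.80, 0.50, 0.90, 1.0]   # normalized AGC set-points (assumed)
response = [1.0, 0.78, 0.52, 0.88, 1.0]   # turbine power output (assumed)
print(round(precision_score(signal, response), 3))
```

A score near 1 indicates the turbine follows the regulation signal closely, which is the behavior the single-turbine simulations report when wind resource is sufficient.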

  13. Characterization of the Fracture Toughness of TRIP 800 Sheet Steels Using Microstructure-Based Finite Element Analysis

    SciTech Connect (OSTI)

    Soulami, Ayoub; Choi, Kyoo Sil; Liu, Wenning N.; Sun, Xin; Khaleel, Mohammad A.

    2009-04-01

Recently, several studies conducted by the automotive industry have revealed the tremendous advantages of Advanced High Strength Steels (AHSS). TRansformation Induced Plasticity (TRIP) steel is a typical representative of AHSS. This class of materials exhibits high strength as well as high formability. Analyzing crack behavior in TRIP steels is a challenging task due to the microstructure-level inhomogeneities among the different phases (ferrite, bainite, austenite, martensite) that constitute these materials. This paper investigates the fracture resistance of TRIP steels. For this purpose, a micromechanical finite element model was developed based on the actual microstructure of a TRIP 800 steel. Uniaxial tensile tests on notched TRIP 800 sheet specimens were also conducted, and tensile properties and R-curves (resistance curves) were determined. The comparison between simulation and experimental results leads to the conclusion that the microstructure-based representative volume element (RVE) method adequately captures the complex behavior of TRIP steels. The effect on toughness of the phase transformation that occurs during deformation is observed and discussed.

  14. Micromagnetic analysis of dynamical bubble-like solitons based on the time domain evolution of the topological density

    SciTech Connect (OSTI)

Puliafito, Vito; Azzerboni, Bruno; Finocchio, Giovanni; Torres, Luis; Ozatay, Ozhan

    2014-05-07

Dynamical bubble-like solitons have recently been investigated in nanocontact-based spin-torque oscillators with a perpendicular free layer. These magnetic configurations can also be excited in other geometries, as long as they consist of perpendicular materials. Thus, in this paper, a systematic study of the influence of both external field and high current on that kind of dynamics is performed for a spin-valve point-contact geometry in which both free and fixed layers present strong perpendicular anisotropy. Use of the topological-density tool highlights the excitation of complex bubble/antibubble configurations. In particular, at high currents, a deformation of the soliton and its simultaneous shift away from the contact area are observed and can be ascribed to the Oersted field. The results provide further detailed information on the excitation of solitons in perpendicular materials for application in spintronics, magnonics, and domain wall logic.

  15. CORE-BASED INTEGRATED SEDIMENTOLOGIC, STRATIGRAPHIC, AND GEOCHEMICAL ANALYSIS OF THE OIL SHALE BEARING GREEN RIVER FORMATION, UINTA BASIN, UTAH

    SciTech Connect (OSTI)

Lauren P. Birgenheier; Michael D. Vanden Berg

    2011-04-11

An integrated detailed sedimentologic, stratigraphic, and geochemical study of Utah's Green River Formation has found that Lake Uinta evolved in three phases: (1) a freshwater rising-lake phase below the Mahogany zone, (2) an anoxic deep-lake phase above the base of the Mahogany zone, and (3) a hypersaline lake phase within the middle and upper R-8. This long-term lake evolution was driven by tectonic basin development and the balance of sediment and water fill with the neighboring basins, as postulated by models developed from the Greater Green River Basin by Carroll and Bohacs (1999). Early Eocene abrupt global-warming events may have had significant control on deposition through the amount of sediment production and deposition rates, such that lean zones below the Mahogany zone record hyperthermal events and rich zones record periods between hyperthermals. This type of climatic control on short-term and long-term lake evolution and deposition has been previously overlooked. This geologic history contains key points relevant to oil shale development and engineering design, including: (1) Stratigraphic changes in oil shale quality and composition are systematic and can be related to spatial and temporal changes in the depositional environment and basin dynamics. (2) The inorganic mineral matrix of oil shale units changes significantly from clay mineral/dolomite dominated to calcite dominated above the base of the Mahogany zone. This variation may result in significant differences in pyrolysis products and geomechanical properties relevant to development and should be incorporated into engineering experiments. (3) This study includes a region in the Uinta Basin that would be highly prospective for application of in-situ production techniques. Stratigraphic targets for in-situ recovery techniques should extend above and below the Mahogany zone and include the upper R-6 and lower R-8.

  16. Manufacturing Cost Analysis for YSZ-Based FlexCells at Pilot and Full Scale Production Scales

    SciTech Connect (OSTI)

    Scott Swartz; Lora Thrun; Robin Kimbrell; Kellie Chenault

    2011-05-01

Significant reductions in cell costs must be achieved in order to realize the full commercial potential of megawatt-scale SOFC power systems. The FlexCell designed by NexTech Materials is a scalable SOFC technology that offers particular advantages over competitive technologies. In this updated topical report, NexTech analyzes its FlexCell design and fabrication process to establish manufacturing costs at both pilot-scale (10 MW/year) and full-scale (250 MW/year) production levels and benchmarks this against estimated anode-supported cell costs at the 250 MW scale. This analysis shows that, even with conservative assumptions for yield, materials usage, and cell power density, a cost of $35 per kilowatt can be achieved at high volume. Through advancements in cell size and membrane thickness, NexTech has identified paths for achieving cell manufacturing costs as low as $27 per kilowatt for its FlexCell technology. Also in this report, NexTech analyzes the impact of raw material costs on cell cost, showing the significant increases that result if target raw material costs cannot be achieved at this volume.

  17. Analysis of global radiation budgets and cloud forcing using three-dimensional cloud nephanalysis data base. Master's thesis

    SciTech Connect (OSTI)

    Mitchell, B.

    1990-12-01

A one-dimensional radiative transfer model was used to compute the global radiative budget at the top of the atmosphere (TOA) and the surface for January and July 1979. The model was also used to determine the global cloud radiative forcing for all clouds and for high and low cloud layers. In the computations, the authors used monthly cloud data derived from the Air Force Three-Dimensional Cloud Nephanalysis (3DNEPH). These data were used in conjunction with conventional temperature and humidity profiles analyzed during the 1979 First GARP (Global Atmospheric Research Program) Global Experiment (FGGE) year. Global surface albedos were computed from available data and were included in the radiative transfer analysis. Comparisons of the model-produced outgoing solar and infrared fluxes with those derived from Nimbus 7 Earth Radiation Budget (ERB) data were made to validate the radiative model and cloud cover. For reflected solar and emitted infrared (IR) flux, differences were within 20 W/m².
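Cloud radiative forcing in such analyses is conventionally defined as the clear-sky minus all-sky flux difference at the TOA. A toy sketch of the bookkeeping, with hypothetical flux values chosen only for illustration:

```python
# Cloud radiative forcing (CRF) at the top of the atmosphere, using the usual
# convention: CRF = clear-sky outgoing flux - all-sky outgoing flux (W/m^2).
# All flux values below are hypothetical, for illustration only.
def crf(outgoing_clear, outgoing_all_sky):
    return outgoing_clear - outgoing_all_sky

lw_crf = crf(outgoing_clear=280.0, outgoing_all_sky=235.0)  # clouds trap LW: warming
sw_crf = crf(outgoing_clear=90.0, outgoing_all_sky=140.0)   # clouds reflect SW: cooling
net_crf = lw_crf + sw_crf
print(lw_crf, sw_crf, net_crf)
```

The longwave term is positive (clouds reduce outgoing IR) while the shortwave term is negative (clouds increase reflected sunlight); the net forcing is the sum of the two, computed here per cloud layer or for all clouds.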

  18. Microcomputer-based instrument for the detection and analysis of precession motion in a gas centrifuge machine. Revision 1

    SciTech Connect (OSTI)

    Paulus, S.S.

    1986-03-01

The Centrifuge Precession Analyzer (CPA) is a microcomputer-based instrument which detects precession motion in a gas centrifuge machine and calculates the amplitude and frequency of precession. The CPA consists of a printed circuit board which contains signal-conditioning circuitry and a 24-bit counter and an INTEL iSBC 80/24 single-board computer. Precession motion is detected by monitoring a signal generated by a variable reluctance pick-up coil in the top of the centrifuge machine. This signal is called a Fidler signal. The initial Fidler signal triggers a counter which is clocked by a high-precision, 20.000000-MHz, temperature-controlled, crystal oscillator. The contents of the counter are read by the computer and the counter reset after every ten Fidler signals. The speed of the centrifuge machine and the amplitude and frequency of precession are calculated and the results are displayed on a liquid crystal display on the front panel of the CPA. The report contains results from data generated by a Fidler signal simulator and data taken when the centrifuge was operated under three test conditions: (1) nitrogen gas during drive-up, steady state, and drive-down; (2) xenon gas during slip test, steady state, and the addition of gas; and (3) no gas during steady state. The qualitative results were consistent with experience with centrifuge machines using UF6 in that the amplitude of precession increased and the frequency of precession decreased during drive-up, drive-down and the slip check. The magnitude of the amplitude and frequency of precession were proportional to the molecular weight of the gases in steady state.
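The speed calculation implied by this description can be sketched as follows, assuming one Fidler pulse per rotor revolution (an assumption, not stated in the record): the counter accumulates 20 MHz clock ticks across ten pulses, so the counter value directly gives the time for ten revolutions.

```python
# Rotor speed from a CPA-style counter reading: the counter is clocked at
# 20.000000 MHz and is read and reset after every ten Fidler signals.
# Assumption: one Fidler pulse per rotor revolution.
CLOCK_HZ = 20_000_000
PULSES_PER_READ = 10

def rotor_speed_hz(counter_value):
    """Revolutions per second implied by one counter reading."""
    elapsed_s = counter_value / CLOCK_HZ     # time spanned by ten revolutions
    return PULSES_PER_READ / elapsed_s

print(rotor_speed_hz(200_000))   # 200,000 counts -> 1000 rev/s
```

The high-precision oscillator is what makes this simple ratio accurate: a 20 MHz clock resolves the ten-revolution interval to 50 ns, far finer than the precession-induced variations being measured.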

  19. Microcomputer-based instrument for the detection and analysis of precession motion in a gas centrifuge machine

    SciTech Connect (OSTI)

    Paulus, S.S.

    1986-03-01

The Centrifuge Precession Analyzer (CPA) is a microcomputer-based instrument which detects precession motion in a gas centrifuge machine and calculates the amplitude and frequency of precession. The CPA consists of a printed circuit board which contains signal-conditioning circuitry and a 24-bit counter and an INTEL iSBC 80/24 single-board computer. Precession motion is detected by monitoring a signal generated by a variable reluctance pick-up coil in the top of the centrifuge machine. This signal is called a Fidler signal. The initial Fidler signal triggers a counter which is clocked by a high-precision, 20.000000-MHz, temperature-controlled, crystal oscillator. The contents of the counter are read by the computer, and the counter reset after every ten Fidler signals. The speed of the centrifuge machine and the amplitude and frequency of precession are calculated, and the results are displayed on a liquid crystal display on the front panel of the CPA. The thesis contains results from data generated by a Fidler signal simulator and data taken when the centrifuge was operated under three test conditions: (1) nitrogen gas during drive-up, steady state, and drive-down, (2) xenon gas during slip test, steady state, and the addition of gas, and (3) no gas during steady state. The qualitative results were consistent with experience with centrifuge machines using UF6 in that the amplitude of precession increased and the frequency of precession decreased during drive-up, drive-down and the slip check. The magnitude of the amplitude and frequency of precession were proportional to the molecular weight of the gases in steady state.

  20. ARM - Field Campaign - CASES Data Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    night, morning) provide a robust dataset for looking at the diurnal changes of the wind, temperature, humidity and their vertical transports near the ground and through the lowest...

  1. Towards risk-based management of critical infrastructures : enabling insights and analysis methodologies from a focused study of the bulk power grid.

    SciTech Connect (OSTI)

    Richardson, Bryan T.; LaViolette, Randall A.; Cook, Benjamin Koger

    2008-02-01

    This report summarizes research on a holistic analysis framework to assess and manage risks in complex infrastructures, with a specific focus on the bulk electric power grid (grid). A comprehensive model of the grid is described that can approximate the coupled dynamics of its physical, control, and market components. New realism is achieved in a power simulator extended to include relevant control features such as relays. The simulator was applied to understand failure mechanisms in the grid. Results suggest that the implementation of simple controls might significantly alter the distribution of cascade failures in power systems. The absence of cascade failures in our results raises questions about the underlying failure mechanisms responsible for widespread outages, and specifically whether these outages are due to a system effect or large-scale component degradation. Finally, a new agent-based market model for bilateral trades in the short-term bulk power market is presented and compared against industry observations.

  2. Quantifying sources of black carbon in western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhang, R.; Wang, H.; Hegg, D. A.; Qian, Y.; Doherty, S. J.; Dang, C.; Ma, P.-L.; Rasch, P. J.; Fu, Q.

    2015-11-18

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source–receptor relationships for atmospheric BC and its deposition to snow over western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over northwestern USA and western Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based positive matrix factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  3. mu-Scale Variations Of Elemental Composition In Individual Atmospheric Particles By Means Of Synchrotron Radiation Based mu-XRF Analysis

    SciTech Connect (OSTI)

    Schleicher, N.; Kramar, U.; Norra, S.; Dietze, V.; Kaminski, U.; Cen, K.; Yu, Y.

    2010-04-06

    Atmospheric pollution poses a huge challenge, especially for densely populated urban areas. Although tremendous knowledge already exists on atmospheric particulate pollution, only very limited knowledge is available on the mineral and chemical composition of single atmospheric particles, because most studies on air pollution focus on total mass concentrations or bulk elemental analysis. However, it is of particular importance to investigate the properties of single particles since, depending on their individual composition, they differ in their specific impacts on climate change, on the environment and on health, as well as in how they accelerate the weathering of stone buildings in urban areas. Particles with sulfate and nitrate coatings, together with sufficient moisture, increase metal solubility and possibly catalyze further surface reactions on the stone facades of buildings. From the viewpoint of the health effects of aerosols, it is important to consider agglomeration processes of fine, anthropogenic, highly toxic particles with coarse, geogenic, less toxic particles. With respect to fundamental research in mineralogy, the processes forming composed coarse particles consisting of geogenic and anthropogenic substances are valuable to study, since a new type of particle is produced. In this context, the important and still largely unknown role of geogenic particles as catchers of anthropogenic aerosols can be investigated more closely. Coarse particles can provide a possible sink for fine particles. Moreover, the intermixture of particles from geogenic and anthropogenic sources and the spatial and temporal variations of contributions from different sources, which play a decisive role in the study area of Beijing, can be clarified with this approach. For this study, particles were collected with the passive sampling device Sigma-2 and analyzed for particles from 3 to 96 {mu}m. The analyzed particles showed a very inhomogeneous distribution in their elemental composition. For this study, synchrotron radiation based mu-X-ray fluorescence analysis (mu-SXRF) proved to be an excellent tool to investigate mu-scale distributions of main and trace element concentrations within individual airborne particles.

  4. Quantifying sources of black carbon in Western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhang, R.; Wang, H.; Hegg, D. A.; Qian, Y.; Doherty, S. J.; Dang, C.; Ma, P.-L.; Rasch, P. J.; Fu, Q.

    2015-05-04

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over Western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the Northwest USA and West Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based Positive Matrix Factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  5. Quantifying sources of black carbon in Western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    SciTech Connect (OSTI)

    Zhang, Rudong; Wang, Hailong; Hegg, D. A.; Qian, Yun; Doherty, Sarah J.; Dang, Cheng; Ma, Po-Lun; Rasch, Philip J.; Fu, Qiang

    2015-11-18

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over Western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the Northwest USA and West Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based Positive Matrix Factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.
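
    The unmixing arithmetic behind source apportionment can be illustrated with a much simpler stand-in for PMF: a two-source chemical mass balance with known tracer profiles. PMF itself factorizes the data matrix without assuming the profiles; this sketch only shows the mixing-and-unmixing step, and every number below (the BB and FF tracer signatures, the mixture) is invented for illustration, not data from the study.

```python
def two_source_apportionment(profile_bb, profile_ff, measured):
    """Solve measured = x_bb*profile_bb + x_ff*profile_ff for the
    source contributions (x_bb, x_ff) via an exact 2x2 linear solve."""
    (a, c), (b, d) = profile_bb, profile_ff   # columns of the 2x2 mixing matrix
    det = a * d - b * c
    x_bb = (measured[0] * d - b * measured[1]) / det
    x_ff = (a * measured[1] - measured[0] * c) / det
    return x_bb, x_ff

bb = (0.8, 0.2)   # hypothetical two-tracer signature of biomass/biofuel BC
ff = (0.1, 0.9)   # hypothetical two-tracer signature of fossil-fuel BC
obs = (0.8 * 3 + 0.1 * 1, 0.2 * 3 + 0.9 * 1)  # mixture: 3 units BB + 1 unit FF
contributions = two_source_apportionment(bb, ff, obs)  # recovers approx. (3, 1)
```

    Real PMF additionally weights each species by its measurement uncertainty and enforces non-negativity, which is what makes it usable when the source profiles are unknown.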

  6. Decerns: A framework for multi-criteria decision analysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yatsalo, Boris; Didenko, Vladimir; Gritsyuk, Sergey; Sullivan, Terry

    2015-02-27

    A new framework, Decerns, for multicriteria decision analysis (MCDA) of a wide range of practical problems in risk management is introduced. The Decerns framework contains a library of modules that are the basis for two scalable systems: DecernsMCDA for analysis of multicriteria problems, and DecernsSDSS for multicriteria analysis of spatial options. DecernsMCDA includes well-known MCDA methods and original methods for uncertainty treatment based on probabilistic approaches and fuzzy numbers. These MCDA methods are described along with a case study on the analysis of a multicriteria location problem.
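
    The simplest of the MCDA methods a framework like Decerns would include is a weighted sum over normalized criterion scores. The sketch below is a generic illustration of that method, not Decerns code; the site names, criteria, weights, and scores are all invented.

```python
def weighted_sum_rank(options, weights):
    """options: {name: {criterion: normalized score in [0, 1]}},
    weights: {criterion: weight}, assumed to sum to 1.
    Returns option names ordered from best to worst total score."""
    def total(scores):
        return sum(weights[c] * s for c, s in scores.items())
    return sorted(options, key=lambda name: total(options[name]), reverse=True)

# Hypothetical siting problem: two candidate locations, three criteria.
sites = {
    "site_A": {"cost": 0.9, "risk": 0.4, "access": 0.7},
    "site_B": {"cost": 0.5, "risk": 0.9, "access": 0.6},
}
w = {"cost": 0.5, "risk": 0.3, "access": 0.2}
ranking = weighted_sum_rank(sites, w)  # site_A scores 0.71, site_B scores 0.64
```

    Methods such as MAVT, TOPSIS, or outranking differ in how they aggregate the scores, but all start from a table of options versus criteria like this one.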

  7. Technology Deployment Case Studies

    Broader source: Energy.gov [DOE]

    Find technology deployment case studies below. Click on each individual project link to see the full case study. You can also view a map of technology deployment case studies.

  8. Integrating Nuclear Energy to Oilfield Operations – Two Case Studies

    SciTech Connect (OSTI)

    Eric P. Robertson; Lee O. Nelson; Michael G. McKellar; Anastasia M. Gandrik; Mike W. Patterson

    2011-11-01

    Fossil fuel resources that require large energy inputs for extraction, such as the Canadian oil sands and the Green River oil shale resource in the western USA, could benefit from the use of nuclear power instead of power generated by natural gas combustion. This paper discusses the technical and economic aspects of integrating nuclear energy with oil sands operations and the development of oil shale resources. A high temperature gas reactor (HTGR) that produces heat in the form of high pressure steam (no electricity production) was selected as the nuclear power source for both fossil fuel resources. Both cases were based on 50,000 bbl/day output. The oil sands case was a steam-assisted, gravity-drainage (SAGD) operation located in the Canadian oil sands belt. The oil shale development was an in-situ oil shale retorting operation located in western Colorado, USA. The technical feasibility of integrating nuclear power was assessed. The economic feasibility of each case was evaluated using a discounted cash flow, rate of return analysis. Integrating an HTGR with both the SAGD oil sands operation and the oil shale development was found to be technically feasible. In the oil sands case, integrating an HTGR eliminated natural gas combustion and associated CO2 emissions, although there were still some emissions associated with imported electrical power. In the in situ oil shale case, integrating an HTGR reduced CO2 emissions by 88% and increased natural gas production by 100%. Economic viabilities of both nuclear-integrated cases were poorer than the non-nuclear-integrated cases when CO2 emissions were not taxed. However, taxing the CO2 emissions had a significant effect on the economics of the non-nuclear base cases, bringing them in line with the economics of the nuclear-integrated cases.
As we move toward limiting CO2 emissions, integrating non-CO2-emitting energy sources to the development of energy-intense fossil fuel resources is becoming increasingly important. This paper attempts to reduce the barriers that have traditionally separated fossil fuel development and application of nuclear power and to promote serious discussion of ideas about hybrid energy systems.

  9. Rate Case Elements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Proceeding Rate Information Residential Exchange Program Surplus Power Sales Reports Rate Case Elements BPA's rate cases are decided "on the record." That is, in making a decision...

  10. BP-12 Rate Case

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Skip navigation links Financial Information Financial Public Processes Asset Management Cost Verification Process Rate Cases BP-18 Rate Case Related Publications Meetings...

  11. BP-16 Rate Case

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Skip navigation links Financial Information Financial Public Processes Asset Management Cost Verification Process Rate Cases BP-18 Rate Case Related Publications Meetings...

  12. Before a Rate Case

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    links Financial Information Financial Public Processes Asset Management Cost Verification Process Rate Cases BP-18 Rate Case Related Publications Meetings and Workshops Customer...

  13. OSCARS Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    OSCARS & JGI Science DMZ Case Studies Multi-facility Workflow Case Study News & Publications ESnet News Publications and Presentations Galleries ESnet Awards and Honors Blog ESnet...

  14. WARP Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    WARP Case Study WARP Case Study Background WARP is an accelerator code that is used to conduct detailed simulations of particle accelerators, among other high energy physics...

  15. [Environmental investigation of ground water contamination at Wright-Patterson Air Force Base, Ohio]. Volume 3, Sampling and analysis plan (SAP): Phase 1, Task 4, Field Investigation: Draft

    SciTech Connect (OSTI)

    Not Available

    1991-10-01

    In April 1990, Wright-Patterson Air Force Base (WPAFB) initiated an investigation to evaluate a potential Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) removal action to prevent, to the extent practicable, the offsite migration of contaminated ground water from WPAFB. WPAFB retained the services of the Environmental Management Operations (EMO) and its principal subcontractor, International Technology Corporation (IT), to complete Phase 1 of the environmental investigation of ground-water contamination at WPAFB. Phase 1 of the investigation involves the short-term evaluation and potential design for a program to remove ground-water contamination that appears to be migrating across the western boundary of Area C, and across the northern boundary of Area B along Springfield Pike. Primarily, Task 4 of Phase 1 focuses on collection of information at the Area C and Springfield Pike boundaries of WPAFB. This Sampling and Analysis Plan (SAP) has been prepared to assist in completion of the Task 4 field investigation and is comprised of the Quality Assurance Project Plan (QAPP) and the Field Sampling Plan (FSP).

  16. National Geo-Database for Biofuel Simulations and Regional Analysis of Biorefinery Siting Based on Cellulosic Feedstock Grown on Marginal Lands

    SciTech Connect (OSTI)

    Izaurralde, Roberto C.; Zhang, Xuesong; Sahajpal, Ritvik; Manowitz, David H.

    2012-04-01

    The goal of this project undertaken by GLBRC (Great Lakes Bioenergy Research Center) Area 4 (Sustainability) modelers is to develop a national capability to model feedstock supply, ethanol production, and biogeochemical impacts of cellulosic biofuels. The results of this project contribute to sustainability goals of the GLBRC; i.e. to contribute to developing a sustainable bioenergy economy: one that is profitable to farmers and refiners, acceptable to society, and environmentally sound. A sustainable bioenergy economy will also contribute, in a fundamental way, to meeting national objectives on energy security and climate mitigation. The specific objectives of this study are to: (1) develop a spatially explicit national geodatabase for conducting biofuel simulation studies and (2) locate possible sites for the establishment of cellulosic ethanol biorefineries. To address the first objective, we developed SENGBEM (Spatially Explicit National Geodatabase for Biofuel and Environmental Modeling), a 60-m resolution geodatabase of the conterminous USA containing data on: (1) climate, (2) soils, (3) topography, (4) hydrography, (5) land cover/ land use (LCLU), and (6) ancillary data (e.g., road networks, federal and state lands, national and state parks, etc.). A unique feature of SENGBEM is its 2008-2010 crop rotation data, a crucially important component for simulating productivity and biogeochemical cycles as well as land-use changes associated with biofuel cropping. ARRA support for this project and to the PNNL Joint Global Change Research Institute enabled us to create an advanced computing infrastructure to execute millions of simulations, conduct post-processing calculations, store input and output data, and visualize results. These computing resources included two components installed at the Research Data Center of the University of Maryland. 
The first resource was 'deltac': an 8-core Linux server, dedicated to county-level and state-level simulations and PostgreSQL database hosting. The second resource was the DOE-JGCRI 'Evergreen' cluster, capable of executing millions of simulations in relatively short periods. ARRA funding also supported a PhD student from UMD who worked on creating the geodatabases and executing some of the simulations in this study. Using a physically based classification of marginal lands, we simulated production of cellulosic feedstocks from perennial mixtures grown on these lands in the US Midwest. Marginal lands in the western states of the US Midwest appear to have significant potential to supply feedstocks to a cellulosic biofuel industry. Similar results were obtained with simulations of N-fertilized perennial mixtures. A detailed spatial analysis allowed for the identification of possible locations for the establishment of 34 cellulosic ethanol biorefineries with an annual production capacity of 5.6 billion gallons. In summary, we have reported on the development of a spatially explicit national geodatabase to conduct biofuel simulation studies and provided simulation results on the potential of perennial cropping systems to serve as feedstocks for the production of cellulosic ethanol. To accomplish this, we have employed sophisticated spatial analysis methods in combination with the process-based biogeochemical model EPIC. The results of this study will be submitted to the USDOE Bioenergy Knowledge Discovery Framework as a way to contribute to the development of a sustainable bioenergy industry. This work provided the opportunity to test the hypothesis that marginal lands can serve as sources of cellulosic feedstocks and thus contribute to avoid potential conflicts between bioenergy and food production systems. 
This work, we believe, opens the door for further analysis on the characteristics of cellulosic feedstocks as major contributors to the development of a sustainable bioenergy economy.

  17. Technical Comparative Analysis of "Best of Breed" Turnkey Si-Based Processes and Equipment, to be Used to Produce a Combined Multi-entity Research and Development Technology Roadmap for Thick and Thin Silicon PV

    SciTech Connect (OSTI)

    Hovel, Harold; Prettyman, Kevin

    2015-03-27

    A side-by-side analysis was done on the then-available technology, along with roadmaps to push each particular option forward. Variations in turnkey line processes can and do result in variations in finished solar device performance. Together with variations in starting material quality, the result is a distribution of efficiencies. Forensic analysis and characterization of each crystalline Si based technology will determine the most promising approach with respect to cost, efficiency, and reliability. Forensic analysis will also shed light on the causes of binning variations. Si solar cells from each turnkey supplier were forensically analyzed using a host of techniques.

  18. BBRN Factsheet: Case Study: Community Engagement | Department of Energy

    Office of Environmental Management (EM)

    BBRN Factsheet: Case Study: Community Engagement BBRN Factsheet: Case Study: Community Engagement Case Study: Community Engagement, on the Community Home Energy Retrofit Project (CHERP), based in Claremont, California. PDF icon Case Study: Community Engagement More Documents & Publications Better Buildings Network View | December 2015 Better Buildings Network View | July-August 2015 Better Buildings Residential Network Orientation Webinar

  19. ENHANCED UNCERTAINTY ANALYSIS FOR SRS COMPOSITE ANALYSIS

    SciTech Connect (OSTI)

    Smith, F.; Phifer, M.

    2011-06-30

    The Composite Analysis (CA) performed for the Savannah River Site (SRS) in 2009 (SRS CA 2009) included a simplified uncertainty analysis. The uncertainty analysis in the CA (Smith et al. 2009b) was limited to considering at most five sources in a separate uncertainty calculation performed for each POA. To perform the uncertainty calculations in a reasonable amount of time, the analysis was limited to 400 realizations and 2,000 years of simulated transport time, and the time steps used for the uncertainty analysis were increased from those used in the CA base case analysis. As part of the CA maintenance plan, the Savannah River National Laboratory (SRNL) committed to improving the CA uncertainty/sensitivity analysis. The previous uncertainty analysis was constrained by the standard GoldSim licensing, which limits the user to running at most four Monte Carlo uncertainty calculations (also called realizations) simultaneously. Some of the limitations on the number of realizations that could be practically run and on the simulation time steps were removed by building a cluster of three HP ProLiant Windows servers with a total of 36 64-bit processors and by licensing the GoldSim DP-Plus distributed processing software. This allowed running as many as 35 realizations simultaneously (one processor is reserved as a master process that controls running the realizations). These enhancements to SRNL computing capabilities made feasible an uncertainty analysis using 1,000 realizations and the time steps employed in the base case CA calculations, with more sources, and simulating radionuclide transport for 10,000 years. In addition, an importance screening analysis was performed to identify the class of stochastic variables that have the most significant impact on model uncertainty. 
This analysis ran the uncertainty model separately testing the response to variations in the following five sets of model parameters: (a) K{sub d} values (72 parameters for the 36 CA elements in sand and clay), (b) Dose Parameters (34 parameters), (c) Material Properties (20 parameters), (d) Surface Water Flows (6 parameters), and (e) Vadose and Aquifer Flow (4 parameters). Results provided an assessment of which group of parameters is most significant in the dose uncertainty. It was found that K{sub d} and the vadose/aquifer flow parameters, both of which impact transport timing, had the greatest impact on dose uncertainty. Dose parameters had an intermediate level of impact while material properties and surface water flows had little impact on dose uncertainty. Results of the importance analysis are discussed further in Section 7 of this report. The objectives of this work were to address comments received during the CA review on the uncertainty analysis and to demonstrate an improved methodology for CA uncertainty calculations as part of CA maintenance. This report partially addresses the LFRG Review Team issue of producing an enhanced CA sensitivity and uncertainty analysis. This is described in Table 1-1 which provides specific responses to pertinent CA maintenance items extracted from Section 11 of the SRS CA (2009). As noted above, the original uncertainty analysis looked at each POA separately and only included the effects from at most five sources giving the highest peak doses at each POA. Only 17 of the 152 CA sources were used in the original uncertainty analysis and the simulation time was reduced from 10,000 to 2,000 years. A major constraint on the original uncertainty analysis was the limitation of only being able to use at most four distributed processes. This work expanded the analysis to 10,000 years using 39 of the CA sources, included cumulative dose effects at downstream POAs, with more realizations (1,000) and finer time steps. 
    This was accomplished by using the GoldSim DP-Plus module and the 36 processors available on a new Windows cluster. The last part of the work looked at the contribution to overall uncertainty from the main categories of uncertainty variables: K{sub d}s, dose parameters, flow parameters, and material properties.
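
    The realization loop at the heart of such a Monte Carlo uncertainty analysis can be sketched in a few lines. The toy one-parameter dose model and the distributions below are invented stand-ins for the GoldSim CA transport model, not values from the SRS analysis.

```python
import random

def run_realizations(n, seed=0):
    """Sample uncertain inputs, run a toy dose model per realization,
    and return the median and 95th-percentile peak dose."""
    rng = random.Random(seed)
    doses = []
    for _ in range(n):
        kd = rng.lognormvariate(0.0, 0.5)     # hypothetical sorption coefficient
        flow = rng.uniform(0.5, 1.5)          # hypothetical aquifer-flow multiplier
        dose_factor = rng.uniform(0.8, 1.2)   # hypothetical dose-conversion spread
        # toy model: stronger sorption (higher kd) slows transport, lowering dose
        doses.append(dose_factor * flow / (1.0 + kd))
    doses.sort()
    return doses[len(doses) // 2], doses[int(0.95 * len(doses))]

median_dose, p95_dose = run_realizations(1000)
```

    An importance screening like the one described above would repeat this loop with one parameter group varied at a time while the others are held at nominal values, and compare the resulting output spreads.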

  20. Techno-Economic Analysis of Liquid Fuel Production from Woody Biomass via Hydrothermal Liquefaction (HTL) and Upgrading

    SciTech Connect (OSTI)

    Zhu, Yunhua; Biddy, Mary J.; Jones, Susanne B.; Elliott, Douglas C.; Schmidt, Andrew J.

    2014-09-15

    A series of experiments was conducted to convert woody biomass to gasoline- and diesel-range products via hydrothermal liquefaction (HTL) and catalytic hydroprocessing. Based on the best available test data, a techno-economic analysis (TEA) was developed for a large-scale woody-biomass-based HTL and upgrading system to evaluate the feasibility of this technology. In this system, 2000 dry metric tons per day of woody biomass were assumed to be converted to bio-oil in hot compressed water, and the bio-oil was hydrotreated and/or hydrocracked to produce gasoline- and diesel-range liquid fuel. Two cases were evaluated: a state-of-technology (SOT) case based on the test results, and a goal case considering potential improvements beyond the SOT case. Process simulation models were developed and cost analysis was implemented based on the performance results. The major performance results included final product and co-product yields, raw material consumption, carbon efficiency, and energy efficiency. The overall efficiency (higher heating value basis) was 52% for the SOT case and 66% for the goal case. The production cost, with a 10% internal rate of return and in 2007 constant dollars, was estimated to be $1.29/L for the SOT case and $0.74/L for the goal case. The cost impacts of the major improvements in moving from the SOT case to the goal case were evaluated, and the assumption of reducing the loss of organics to the water phase led to the biggest reduction in the production cost. Sensitivity analysis indicated that the final product yields had the largest impact on the production cost compared to other parameters. Plant size analysis demonstrated that the process was economically attractive if the woody biomass feed rate was over 1,500 dry tonnes/day; at that scale the production cost was competitive with the then-current petroleum-based gasoline price.
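
    The calculation behind a "production cost at a 10% internal rate of return" can be sketched as a discounted-cash-flow search: find the constant selling price at which the project's net present value is zero. The capital, per-liter operating cost, output, and lifetime below are invented placeholders, not the SOT- or goal-case inputs from the study.

```python
def npv(price, capital, opex_per_l, output_l_per_yr, years, rate=0.10):
    """NPV of a plant: capital outlay at t=0, then constant annual
    margin (price minus per-liter operating cost) times output."""
    cash = [-capital] + [(price - opex_per_l) * output_l_per_yr] * years
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash))

def minimum_selling_price(capital, opex_per_l, output_l_per_yr, years):
    """Bisect for the price ($/L) where NPV at the target rate is zero."""
    lo, hi = 0.0, 100.0   # $/L search bracket
    for _ in range(60):
        mid = (lo + hi) / 2
        if npv(mid, capital, opex_per_l, output_l_per_yr, years) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

    With zero capital the minimum price collapses to the operating cost; adding capital pushes it up by the capital charge spread over the discounted lifetime output, which is why the SOT and goal cases differ even at the same feed rate.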

  1. Utilizing the Inherent Electrolysis in a Chip-Based Nanoelectrospray Emitter System to Facilitate Selective Ionization and Mass Spectrometric Analysis of Metallo Alkylporphyrins

    SciTech Connect (OSTI)

    Van Berkel, Gary J; Kertesz, Vilmos

    2012-01-01

    A commercially available chip-based infusion nanoelectrospray ionization system was used to ionize metallo alkylporphyrins for mass spectrometric detection and structure elucidation by mass spectrometry. Different ionic forms of model compounds (nickel (II), vanadyl (II), copper (II) and cobalt (II) octaethylporphyrin) were created by using two different types of conductive pipette tips supplied with the device. These pipette tips provide the conductive contact to solution at which the electrolysis process inherent to electrospray takes place in the device. The original, unmodified bare carbon-impregnated plastic pipette tips were exploited to intentionally electrochemically oxidize (ionize) the porphyrins to form molecular radical cations for detection. Use of modified pipette tips, with a surface coating devised to inhibit analyte mass transport to the surface, was shown to limit the ionic species observed in the mass spectra of these porphyrins largely, but not exclusively, to the protonated molecule. Under the conditions of these experiments, the effective upper potential limit for oxidation with the uncoated pipette tip was 1.1 V or less, and the coated pipette tips effectively prevented the oxidation of analytes with redox potentials greater than about 0.25 V. Product ion spectra of either molecular ionic species could be used to determine the alkyl chain length on the porphyrin macrocycle. The utility of this electrochemical ionization approach for the analysis of naturally occurring samples was demonstrated using nickel geoporphyrin fractions isolated from Gilsonite bitumen. Acquiring neutral loss spectra as a means to improve the specificity of detection in these complex natural samples was also illustrated.

  2. Science DMZ Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Science DMZ Case Studies Science DMZ @ UF Science DMZ @ CU Science DMZ @ Penn & VTTI Science DMZ @ NOAA Science DMZ @ NERSC Science DMZ @ ALS Multi-facility Workflow Case Study...

  3. COMBINING A NEW 3-D SEISMIC S-WAVE PROPAGATION ANALYSIS FOR REMOTE FRACTURE DETECTION WITH A ROBUST SUBSURFACE MICROFRACTURE-BASED VERIFICATION TECHNIQUE

    SciTech Connect (OSTI)

    Bob Hardage; M.M. Backus; M.V. DeAngelo; R.J. Graebner; S.E. Laubach; Paul Murray

    2004-02-01

    Fractures within the producing reservoirs at McElroy Field could not be studied with the industry-provided 3C3D seismic data used as a cost-sharing contribution in this study. The signal-to-noise character of the converted-SV data across the targeted reservoirs in these contributed data was not adequate for interpreting azimuth-dependent data effects. After illustrating the low signal quality of the converted-SV data at McElroy Field, the seismic portion of this report abandons the McElroy study site and defers to 3C3D seismic data acquired across a different fractured carbonate reservoir system to illustrate how 3C3D seismic data can provide useful information about fracture systems. Using these latter data, we illustrate how fast-S and slow-S data effects can be analyzed in the prestack domain to recognize fracture azimuth, and then demonstrate how fast-S and slow-S data volumes can be analyzed in the poststack domain to estimate fracture intensity. In the geologic portion of the report, we analyze published regional stress data near McElroy Field and numerous Formation MicroImager (FMI) logs acquired across McElroy to develop possible fracture models for the McElroy system. Regional stress data imply a fracture orientation different from the orientations observed in most of the FMI logs. This report culminates Phase 2 of the study, ''Combining a New 3-D Seismic S-Wave Propagation Analysis for Remote Fracture Detection with a Robust Subsurface Microfracture-Based Verification Technique''. Phase 3 will not be initiated because wells were to be drilled in Phase 3 of the project to verify the validity of fracture-orientation maps and fracture-intensity maps produced in Phase 2. Such maps cannot be made across McElroy Field because of the limitations of the available 3C3D seismic data at the depth level of the reservoir target.

  4. Pilot Project Technology Business Case: Mobile Work Packages

    SciTech Connect (OSTI)

    Thomas, Ken; Lawrie, Sean; Niedermuller, Josef

    2015-05-01

    Performance advantages of the new pilot project technologies are widely acknowledged, but it has proven difficult for utilities to derive business cases for justifying investment in these new capabilities. Lack of a business case is often cited by utilities as a barrier to pursuing wide-scale application of digital technologies to nuclear plant work activities. The decision to move forward with funding usually hinges on demonstrating actual cost reductions that can be credited to budgets and thereby truly reduce O&M or capital costs. Technology enhancements, while improving work methods and making work more efficient, often fail to eliminate enough workload to change overall staffing and material cost requirements. It is critical to demonstrate cost reductions or impacts on non-cost performance objectives in order for the business case to justify investment by nuclear operators. The Business Case Methodology (BCM) was developed in September 2015 to frame the “benefit” side of the analysis of II&C technologies, as opposed to the cost side, and to address how the organization evaluates discretionary projects (net present value (NPV), accounting effects of taxes, discount rates, etc.). The cost side of the analysis is not particularly difficult for the organization and can usually be determined with a fair amount of precision (notwithstanding implementation project cost overruns). It is in determining the “benefits” side of the analysis that utilities have more difficulty with technology projects, and that is the focus of this methodology. The methodology is presented in the context of the entire process, but the tool provided is limited to determining the organizational benefits only. This report describes the use of the BCM in building a business case for mobile work packages, which includes computer-based procedures and other automated elements of a work package.
Key to those impacts will be identifying where the savings are “harvestable,” meaning they result in an actual reduction in headcount and/or cost. The report describes the specific activities conducted with a partner utility to examine the various work activities associated with mobile work packages to determine what time savings and error rate reductions are available. The report summarizes these findings in the form of a business case for the technology.
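
    The harvestable savings identified this way ultimately feed a standard discounted cash flow comparison of the kind the BCM references (NPV, discount rates). As a rough illustration of that final step only, with hypothetical figures that are not the report's or the partner utility's data, a minimal NPV sketch:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time 0 (the investment year)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical figures (not from the report): a $2M deployment of mobile
# work packages yielding $450k/yr of harvestable labor savings for 10 years,
# evaluated at a 7% discount rate.
investment = -2_000_000
annual_saving = 450_000
flows = [investment] + [annual_saving] * 10
project_npv = npv(0.07, flows)  # positive NPV would support the business case
```

    With these assumed numbers the NPV is positive (roughly $1.2M), but the result is only as credible as the "harvestable" savings estimate behind the annual cash flow.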

  5. Analysis of Geothermal Reservoir Stimulation using Geomechanics...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Stochastic Analysis of Injection-Induced Seismicity Analysis of Geothermal Reservoir Stimulation using Geomechanics-Based Stochastic Analysis of Injection-Induced Seismicity ...

  6. Business Case for CNG in Municipal Fleets (Presentation)

    SciTech Connect (OSTI)

    Johnson, C.

    2010-07-27

    Presentation about compressed natural gas in municipal fleets, assessing investment profitability, the VICE model, base-case scenarios, and pressing questions for fleet owners.

  7. No Sunset and Extended Policies Cases (released in AEO2010)

    Reports and Publications (EIA)

    2010-01-01

    The Annual Energy Outlook 2010 Reference case is best described as a current laws and regulations case, because it generally assumes that existing laws and fully promulgated regulations will remain unchanged throughout the projection period, unless the legislation establishing them specifically calls for them to end or change. The Reference case often serves as a starting point for the analysis of proposed legislative or regulatory changes, a task that would be difficult if the Reference case included projected legislative or regulatory changes.

  8. Accident tolerant fuel analysis

    SciTech Connect (OSTI)

    Smith, Curtis; Chichester, Heather; Johns, Jesse; Teague, Melissa; Tonks, Michael Idaho National Laboratory; Youngblood, Robert

    2014-09-01

    Safety is central to the design, licensing, operation, and economics of Nuclear Power Plants (NPPs). Consequently, the ability to better characterize and quantify safety margin holds the key to improved decision making about light water reactor design, operation, and plant life extension. A systematic approach to characterization of safety margins and the subsequent margins management options represents a vital input to the licensee and regulatory analysis and decision making that will be involved. The purpose of the Risk Informed Safety Margin Characterization (RISMC) Pathway research and development (R&D) is to support plant decisions for risk-informed margins management by improving economics and reliability, and sustaining safety, of current NPPs. Goals of the RISMC Pathway are twofold: (1) Develop and demonstrate a risk-assessment method coupled to safety margin quantification that can be used by NPP decision makers as part of their margin recovery strategies. (2) Create an advanced ''RISMC toolkit'' that enables more accurate representation of NPP safety margin. In order to carry out the R&D needed for the Pathway, the Idaho National Laboratory is performing a series of case studies that will explore methods- and tools-development issues, in addition to being of current interest in their own right. One such study is a comparative analysis of safety margins of plants using different fuel cladding types: specifically, a comparison between current-technology Zircaloy cladding and a notional ''accident-tolerant'' (e.g., SiC-based) cladding. The present report begins the process of applying capabilities that are still under development to the problem of assessing new fuel designs. The approach and lessons learned from this case study will be included in future Technical Basis Guides produced by the RISMC Pathway. 
These guides will be the mechanism for developing the specifications for RISMC tools and for defining how plant decision makers should propose and evaluate margin recovery strategies.

  9. Accident Tolerant Fuel Analysis

    SciTech Connect (OSTI)

    Curtis Smith; Heather Chichester; Jesse Johns; Melissa Teague; Michael Tonks; Robert Youngblood

    2014-09-01

    Safety is central to the design, licensing, operation, and economics of Nuclear Power Plants (NPPs). Consequently, the ability to better characterize and quantify safety margin holds the key to improved decision making about light water reactor design, operation, and plant life extension. A systematic approach to characterization of safety margins and the subsequent margins management options represents a vital input to the licensee and regulatory analysis and decision making that will be involved. The purpose of the Risk Informed Safety Margin Characterization (RISMC) Pathway research and development (R&D) is to support plant decisions for risk-informed margins management by improving economics and reliability, and sustaining safety, of current NPPs. Goals of the RISMC Pathway are twofold: (1) Develop and demonstrate a risk-assessment method coupled to safety margin quantification that can be used by NPP decision makers as part of their margin recovery strategies. (2) Create an advanced “RISMC toolkit” that enables more accurate representation of NPP safety margin. In order to carry out the R&D needed for the Pathway, the Idaho National Laboratory is performing a series of case studies that will explore methods- and tools-development issues, in addition to being of current interest in their own right. One such study is a comparative analysis of safety margins of plants using different fuel cladding types: specifically, a comparison between current-technology Zircaloy cladding and a notional “accident-tolerant” (e.g., SiC-based) cladding. The present report begins the process of applying capabilities that are still under development to the problem of assessing new fuel designs. The approach and lessons learned from this case study will be included in future Technical Basis Guides produced by the RISMC Pathway. 
These guides will be the mechanism for developing the specifications for RISMC tools and for defining how plant decision makers should propose and evaluate margin recovery strategies.

  10. NREL: Energy Analysis - Manufacturing Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Energy Analysis Home Capabilities & Expertise Key Activities Analysis of Project Finance ... Supply Constraints Analysis Workforce Development Analysis Resource Assessment Models & ...

  11. Quantitative Analysis of Variability and Uncertainty in Environmental Data and Models. Volume 2. Performance, Emissions, and Cost of Combustion-Based NOx Controls for Wall and Tangential Furnace Coal-Fired Power Plants

    SciTech Connect (OSTI)

    Frey, H. Christopher; Tran, Loan K.

    1999-04-30

    This is Volume 2 of a two-volume set of reports describing work conducted at North Carolina State University, sponsored by the U.S. Department of Energy under Grant Number DE-FG05-95ER30250. The title of the project is “Quantitative Analysis of Variability and Uncertainty in Acid Rain Assessments.” The work conducted under sponsorship of this grant pertains primarily to two main topics: (1) development of new methods for quantitative analysis of variability and uncertainty applicable to any type of model; and (2) analysis of variability and uncertainty in the performance, emissions, and cost of electric power plant combustion-based NOx control technologies. These two main topics are reported separately in Volumes 1 and 2.

  12. EMGeo Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    EMGeo Case Study EMGeo Case Study Background EMGeo is composed of two geophysical imaging applications: one for subsurface imaging using electromagnetic data and another using seismic data. Although the applications model different physics (Maxwell's equations in one case, the elastic wave equation in another) they have much in common. First, both are structured similarly, taking advantage of high-level data parallelism to solve many semi-independent sub-problems concurrently, yielding excellent

  13. Application Case Studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Studies Application Case Studies Early work with NESAP Staff at NERSC as well as Cray and Intel Engineers have led to a number of application case studies. Early application case studies The Babbage test system was used to study representative applications and kernels in various scientific fields to gain experience with the challenges and strategies needed to optimize code performance on the MIC architecture. Below we highlight a few examples: BerkeleyGW The BerkeleyGW package is a materials

  14. Case Study FAQ

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Requirements Reviews: Target 2017 Requirements Reviews: Target 2014 Overview Published Reports Case Study FAQs NERSC HPC Achievement Awards Share Your Research User Submitted Research Citations NERSC Citations Home » Science at NERSC » HPC Requirements Reviews » Case Study FAQs Case Study FAQ General Questions What is NERSC? NERSC is the National Energy Research Scientific Computing Center, the high-end scientific computing facility for the Department of Energy's Office of Science. NERSC

  15. Thermal analysis finds optimum FCCU revamp scheme

    SciTech Connect (OSTI)

    Aguilar-Rodriquez, E.; Ortiz-Estrada, C.; Aguilera-Lopez, M.

    1994-11-07

    The 25,000 b/d fluid catalytic cracking unit (FCCU) at Petroleos Mexicanos' idle Azcapotzalco refinery near Mexico City has been relocated to Pemex's 235,000 b/d Cadereyta refinery. The results of a thermal-integration analysis are being used to revamp the unit and optimize its vapor-recovery scheme. For the case of the Azcapotzalco FCCU, the old unit was designed in the 1950s, so modifications to the reactor/regenerator section incorporate many important changes, including a new riser, feed nozzles, cyclones, air distributor, and other internals. For the new scheme, the analysis was based on the following restrictions: (1) Two cases concerning gas oil feed conditions must be met. In the hot-feed case, feed is introduced from a processing unit outside battery limits (OSBL) at 188 C. For the cold-feed case, feed is introduced from OSBL from storage tanks at 70 C. (2) No new fire heaters are to be installed. (3) Existing equipment must be reused whenever possible. The paper describes and analyzes three alternative schemes.

  16. OSCARS Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Network OSCARS How It Works Who's Using OSCARS? OSCARS and Future Tech OSCARS Standard and Open Grid Forum OSCARS Developers Community Read More... OSCARS Case Study...

  17. Appendix A: Reference case

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    Reference case Energy Information Administration Annual Energy Outlook 2014 Table A17. Renewable energy consumption by sector and source (quadrillion Btu) Sector and source...

  18. Appendix A: Reference case

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    4 Reference case Table A2. Energy consumption by sector and source (quadrillion Btu per year, unless otherwise noted) Energy Information Administration Annual Energy Outlook 2014...

  19. Decerns: A framework for multi-criteria decision analysis

    SciTech Connect (OSTI)

    Yatsalo, Boris; Didenko, Vladimir; Gritsyuk, Sergey; Sullivan, Terry

    2015-02-27

    A new framework, Decerns, for multicriteria decision analysis (MCDA) of a wide range of practical problems in risk management is introduced. The Decerns framework contains a library of modules that are the basis for two scalable systems: DecernsMCDA for analysis of multicriteria problems, and DecernsSDSS for multicriteria analysis of spatial options. DecernsMCDA includes well-known MCDA methods as well as original methods for uncertainty treatment based on probabilistic approaches and fuzzy numbers. These MCDA methods are described along with a case study on the analysis of a multicriteria location problem.
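
    The simplest of the well-known MCDA methods the abstract refers to is a weighted sum over normalized criteria. As a minimal illustration of the idea only, with hypothetical alternatives, criteria, and weights (this is not the Decerns API):

```python
def weighted_sum(scores, weights):
    """Weighted-sum MCDA: scores maps alternative -> criterion scores already
    normalized to [0, 1] (higher is better); weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return {alt: sum(w * s for w, s in zip(weights, vals))
            for alt, vals in scores.items()}

# Hypothetical siting problem with three criteria: cost, risk, acceptance.
scores = {"site_A": [0.8, 0.6, 0.9], "site_B": [0.5, 0.9, 0.7]}
ranking = weighted_sum(scores, [0.5, 0.3, 0.2])
best = max(ranking, key=ranking.get)
```

    Methods in tools like Decerns go well beyond this, e.g. by treating the weights or scores as probability distributions or fuzzy numbers rather than point values.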

  20. Cogeneration: Economic and technical analysis. (Latest citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Published Search

    SciTech Connect (OSTI)

    Not Available

    1992-08-01

    The bibliography contains citations concerning economic and technical analyses of cogeneration systems. Topics include electric power generation, industrial cogeneration, use by utilities, and fuel cell cogeneration. The citations explore steam power station, gas turbine and steam turbine technology, district heating, refuse derived fuels, environmental effects and regulations, bioenergy and solar energy conversion, waste heat and waste product recycling, and performance analysis. (Contains a minimum of 89 citations and includes a subject term index and title list.)

  1. EVALUATION OF THE EFFECTIVENESS OF TRUCK EFFICIENCY TECHNOLOGIES IN CLASS 8 TRACTOR-TRAILERS BASED ON A TRACTIVE ENERGY ANALYSIS USING MEASURED DRIVE CYCLE DATA

    SciTech Connect (OSTI)

    LaClair, Tim J; Gao, Zhiming; Fu, Joshua S.; Calcagno, Jimmy; Yun, Jeongran

    2014-01-01

    Quantifying the fuel savings that can be achieved from different truck fuel efficiency technologies for a fleet's specific usage allows the fleet to select the combination of technologies that will yield the greatest operational efficiency and profitability. This paper presents an analysis of vehicle usage in a commercial vehicle fleet and an assessment of advanced efficiency technologies using an analysis of measured drive cycle data for a class 8 regional commercial shipping fleet. Drive cycle measurements during a period of a full year from six tractor-trailers in normal operations in a less-than-truckload (LTL) carrier were analyzed to develop a characteristic drive cycle that is highly representative of the fleet's usage. The vehicle mass was also estimated to account for the variation of loads that the fleet experienced. The drive cycle and mass data were analyzed using a tractive energy analysis to quantify the fuel efficiency and CO2 emissions benefits that can be achieved on class 8 tractor-trailers when using advanced efficiency technologies, either individually or in combination. Although differences exist among class 8 tractor-trailer fleets, this study provides valuable insight into the energy and emissions reduction potential that various technologies can bring in this important trucking application.
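
    A tractive energy analysis of the kind described integrates the road-load forces over the measured speed trace and attributes energy demand to rolling resistance, aerodynamic drag, and acceleration. A simplified flat-road sketch with hypothetical coefficients and masses (not the paper's measured values or method details):

```python
G = 9.81    # gravitational acceleration, m/s^2
RHO = 1.2   # air density, kg/m^3

def tractive_energy(speeds, dt, mass, crr=0.006, cd_a=6.0):
    """Positive tractive energy (J) over a speed trace sampled every dt seconds.
    speeds in m/s; crr = rolling-resistance coefficient; cd_a = drag area Cd*A
    in m^2. Intervals with negative net road load (braking) are excluded,
    since only traction demands engine energy. Flat road assumed."""
    energy = 0.0
    for v0, v1 in zip(speeds, speeds[1:]):
        v = 0.5 * (v0 + v1)                       # mean speed over interval
        accel = (v1 - v0) / dt
        force = mass * accel + crr * mass * G + 0.5 * RHO * cd_a * v ** 2
        if force > 0:
            energy += force * v * dt
    return energy

# Hypothetical cycle: steady 25 m/s cruise for 100 s at 36,000 kg gross weight.
cruise = [25.0] * 101
base = tractive_energy(cruise, 1.0, 36_000)
aero = tractive_energy(cruise, 1.0, 36_000, cd_a=4.8)   # 20% drag-area cut
savings_pct = 100 * (base - aero) / base
```

    Running a technology's changed coefficients through the fleet's own characteristic cycle, rather than a generic one, is what lets the savings estimate reflect that fleet's actual usage.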

  2. Stand-alone Renewable Energy-Economic and Financial Analysis...

    Open Energy Info (EERE)

    and Financial Analysis1 Background Economic Analysis of Solar Home Systems: A Case Study for the Philippines, Peter Meier, Prepared for The World Bank, Washington, D.C....

  3. Building America Case Study: Apartment Compartmentalization with an Aerosol-Based Sealing Process - Queens, NY (Fact Sheet), Technology Solutions for New and Existing Homes, Energy Efficiency & Renewable Energy (EERE)

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Apartment Compartmentalization with an Aerosol-Based Sealing Process Queens, New York PROJECT INFORMATION Construction: New Type: Multifamily Partners: Builder: Bluestone Organization, bluestoneorg.com Consortium for Advanced Residential Buildings, carb-swa.com Research Topic: Air sealing building enclosures Date Completed: 2014 Climate Zones: All PERFORMANCE DATA The aerosol process resulted in an average reduction of 71% in air leakage and an average apartment airtightness of 0.08 CFM50/ft²

  4. EMGeo Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    methods (QMR in one case, and IDR in the other), both solvers are dominated by memory bandwidth intensive operations like sparse matrix-vector multiply (SpMV), dot...

  5. VASP Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    VASP Case Study VASP Case Study Code description and computational problem The Vienna Ab-initio Simulation Package (VASP) [1-2] is a widely used materials science application for performing ab-initio electronic structure calculations and quantum-mechanical molecular dynamics (MD) simulations using pseudopotentials or the projector-augmented wave method and a plane wave basis set. VASP computes an approximate solution to the many-body Schrödinger equation, either within the Density Functional

  6. WARP Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    WARP Case Study WARP Case Study Background WARP is an accelerator code that is used to conduct detailed simulations of particle accelerators, among other high energy physics applications. It is a so-called Particle-In-Cell (PIC) code that solves for the motion of charged particles acted upon by electric and magnetic forces. The particle motion is computed in a Lagrangian sense, following individual particles. The electric and magnetic fields acting on the particle are considered to be Eulerian

  7. Early application case studies

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Early application case studies Early application case studies The Babbage test system was used to study representative applications and kernels in various scientific fields to gain experience with the challenges and strategies needed to optimize code performance on the MIC architecture. Below we highlight a few examples: BerkeleyGW The BerkeleyGW package is a materials science application that calculates electronic and optical properties with quantitative accuracy, a critical need in materials

  8. Better Buildings Case Competition

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Better Buildings Case Competition 2014 Building Technologies Office Peer Review Elena Alschuler, Elena.Alschuler@ee.Doe.Gov Department of Energy Project Summary Timeline: Start date: 2012 Planned end date: Annual event Key Milestones 9/23/13 - Student Team Registration Opened 11/11/13- Cases distributed 2/17/13 - Solution proposals due 3/14/14 - Solution proposals presented at US DOE, winners selected by industry and expert judges April 2014 - solutions posted Budget: Total DOE $ to date:

  9. Single casing reheat turbine

    SciTech Connect (OSTI)

    Matsushima, Tatsuro; Nishimura, Shigeo

    1999-07-01

    For conventional power plants, regenerative reheat steam turbines have been accepted as the most practical method to meet the demand for efficient and economical power generation. Recently, the application of reheat steam turbines to combined cycle power plants began with the development of large-capacity, high-temperature gas turbines. The two-casing, double-flow turbine has been applied for this size of reheat steam turbine. The single casing reheat turbine can offer an economical and compact power plant. Through development of an HP-LP combined rotor and a long LP blading series, Mitsubishi Heavy Industries, Ltd. developed a single casing reheat steam turbine series and began using it in actual plants. Six units are already in operation and another seven units are being manufactured. Benefits of the single casing reheat turbine include smaller space requirements, shorter construction and erection periods, equally good performance, easier operation and maintenance, shorter overhaul periods, smaller initial investment, and lower transportation expense. Furthermore, the single exhaust steam turbine makes it possible to apply an axial exhaust type, which lowers the height of the T/G foundation and T/G housing. The single casing reheat turbine not only has a compact and economical configuration itself but also can reduce the cost of civil construction. In this paper, major developments and design features of the single casing reheat turbine are briefly discussed, and operating experience, line-up, and technical considerations for performance improvement are presented.

  10. An Analysis Of The Impact Of Selected Carbon Capture And Storage Policy Scenarios On The US Fossil-Based Electric Power Sector

    SciTech Connect (OSTI)

    Davidson, Casie L.; Dooley, James J.; Dahowski, Robert T.; Mahasenan, N Maha

    2003-09-13

    CO2 capture and storage (CCS) is rapidly emerging as a potential key climate change mitigation option. However, as policymakers and industrial stakeholders begin the process of formulating new policy for implementing CCS technologies, participants require a tool to assess large-scale CCS deployment over a number of different possible future scenarios. This paper will analyze several scenarios using two state-of-the-art Battelle-developed models, MiniCAM and the CO2-GIS, for examining CCS deployment. Outputs include the total amount of CO2 captured, total annual emissions, and fossil-based generating capacity.

  11. Performance-based ratemaking for electric utilities: Review of plans and analysis of economic and resource-planning issues. Volume 2, Appendices

    SciTech Connect (OSTI)

    Comnes, G.A.; Stoft, S.; Greene, N.; Hill, L.J.

    1995-11-01

    This document contains summaries of the electric utilities performance-based rate plans for the following companies: Alabama Power Company; Central Maine Power Company; Consolidated Edison of New York; Mississippi Power Company; New York State Electric and Gas Corporation; Niagara Mohawk Power Corporation; PacifiCorp; Pacific Gas and Electric; Southern California Edison; San Diego Gas & Electric; and Tucson Electric Power. In addition, this document also contains information about LBNL's Power Index and Incentive Properties of a Hybrid Cap and Long-Run Demand Elasticity.

  12. Generic Argillite/Shale Disposal Reference Case

    SciTech Connect (OSTI)

    Zheng, Liange; Colon, Carlos Jové; Bianchi, Marco; Birkholzer, Jens

    2014-08-08

    Radioactive waste disposal in a deep subsurface repository hosted in clay/shale/argillite is a subject of widespread interest given the desirable isolation properties, geochemically reduced conditions, and widespread geologic occurrence of this rock type (Hansen 2010; Bianchi et al. 2013). Bianchi et al. (2013) provides a description of diffusion in a clay-hosted repository based on single-phase flow and full saturation using parametric data from documented studies in Europe (e.g., ANDRA 2005). The predominance of diffusive transport and sorption phenomena in this clay media are key attributes to impede radionuclide mobility making clay rock formations target sites for disposal of high-level radioactive waste. The reports by Hansen et al. (2010) and those from numerous studies in clay-hosted underground research laboratories (URLs) in Belgium, France and Switzerland outline the extensive scientific knowledge obtained to assess long-term clay/shale/argillite repository isolation performance of nuclear waste. In the past several years under the UFDC, various kinds of models have been developed for argillite repositories to demonstrate the model capability, understand the spatial and temporal alteration of the repository, and evaluate different scenarios. These models include the coupled Thermal-Hydrological-Mechanical (THM) and Thermal-Hydrological-Mechanical-Chemical (THMC) models (e.g. Liu et al. 2013; Rutqvist et al. 2014a, Zheng et al. 2014a) that focus on THMC processes in the Engineered Barrier System (EBS) bentonite and argillite host rock, the large scale hydrogeologic model (Bianchi et al. 2014) that investigates the hydraulic connection between an emplacement drift and surrounding hydrogeological units, and Disposal Systems Evaluation Framework (DSEF) models (Greenberg et al. 2013) that evaluate thermal evolution in the host rock approximated as a thermal conduction process to facilitate the analysis of design options.
However, the assumptions and the properties (parameters) used in these models differ, which not only makes inter-model comparisons difficult but also compromises the applicability of lessons learned from one model to another. The establishment of a reference case would therefore be helpful to set up a baseline for model development. A generic salt repository reference case was developed in Freeze et al. (2013), and the generic argillite repository reference case is presented in this report. The definition of a reference case requires the characterization of the waste inventory, waste form, waste package, repository layout, EBS backfill, host rock, and biosphere. This report mainly documents the processes in EBS bentonite and host rock that are potentially important for performance assessment and the properties that are needed to describe these processes, with brief descriptions of other components such as waste inventory, waste form, waste package, repository layout, aquifer, and biosphere. A thorough description of the generic argillite repository reference case will be given in Jové Colon et al. (2014).

  13. Comprehensive Report For Proposed Elevated Temperature Elastic Perfectly Plastic (EPP) Code Cases Representative Example Problems

    SciTech Connect (OSTI)

    Greg L. Hollinger

    2014-06-01

    Background: The current rules in the nuclear section of the ASME Boiler and Pressure Vessel (B&PV) Code, Section III, Subsection NH for the evaluation of strain limits and creep-fatigue damage using simplified methods based on elastic analysis have been deemed inappropriate for Alloy 617 at temperatures above 1200°F (650°C) [1]. To address this issue, proposed code rules have been developed which are based on the use of elastic-perfectly plastic (E-PP) analysis methods and which are expected to be applicable to very high temperatures. The proposed rules for strain limits and creep-fatigue evaluation were initially documented in the technical literature [2, 3], and have recently been revised to incorporate comments and simplify their application. The revised code cases have been developed. Task Objectives: The goal of the Sample Problem task is to exercise these code cases through example problems to demonstrate their feasibility and also to identify potential corrections and improvements should problems be encountered. This will provide input to the development of technical background documents for consideration by the applicable B&PV committees considering these code cases for approval. This task has been performed by Hollinger and Pease of Becht Engineering Co., Inc., Nuclear Services Division, and a report detailing the results of the E-PP analyses conducted on example problems per the procedures of the E-PP strain limits and creep-fatigue draft code cases is enclosed as Enclosure 1. Conclusions: The feasibility of the application of the E-PP code cases has been demonstrated through example problems that consist of realistic geometry (a nozzle attached to a semi-hemispheric shell with a circumferential weld), load (pressure; pipe reaction load applied at the end of the nozzle, including axial and shear forces, bending and torsional moments; through-wall transient temperature gradient), and design and operating conditions (Levels A, B and C).

  14. A Case for Climate Neutrality: Case Studies on Moving Towards...

    Open Energy Info (EERE)

    TOOL Name: A Case for Climate Neutrality: Case Studies on Moving Towards a Low Carbon Economy Agency/Company/Organization: United Nations Environment Programme (UNEP) Sector:...

  15. Kinetics of Cold-Cap Reactions for Vitrification of Nuclear Waste Glass Based on Simultaneous Differential Scanning Calorimetry - Thermogravimetry (DSC-TGA) and Evolved Gas Analysis (EGA)

    SciTech Connect (OSTI)

    Rodriguez, Carmen P.; Pierce, David A.; Schweiger, Michael J.; Kruger, Albert A.; Chun, Jaehun; Hrma, Pavel R.

    2013-12-03

    For vitrifying nuclear waste glass, the feed, a mixture of waste with glass-forming and modifying additives, is charged onto the cold cap that covers 90-100% of the melt surface. The cold cap consists of a layer of reacting molten glass floating on the surface of the melt in an all-electric, continuous glass melter. As the feed moves through the cold cap, it undergoes chemical reactions and phase transitions through which it is converted to molten glass that moves from the cold cap into the melt pool. The process involves a series of reactions that generate multiple gases; the resulting mass loss and foaming significantly influence mass and heat transfer. The rate of glass melting, which is greatly influenced by mass and heat transfer, affects the vitrification process and the efficiency of the immobilization of nuclear waste. We studied the cold-cap reactions of a representative waste glass feed using simultaneous differential scanning calorimetry-thermogravimetry (DSC-TGA) and thermogravimetry coupled with gas chromatography-mass spectrometry (TGA-GC-MS) as complementary tools to perform evolved gas analysis (EGA). Analyses from DSC-TGA and EGA of the cold-cap reactions provide a key element for the development of an advanced cold-cap model and also help in formulating melter feeds for higher production rates.

  16. PEM Electrolysis H2A Production Case Study Documentation

    SciTech Connect (OSTI)

    James, Brian; Colella, Whitney; Moton, Jennie; Saur, G.; Ramsden, T.

    2013-12-31

    This report documents the development of four DOE Hydrogen Analysis (H2A) case studies for polymer electrolyte membrane (PEM) electrolysis. The four cases characterize PEM electrolyzer technology for two hydrogen production plant sizes (Forecourt and Central) and for two technology development time horizons (Current and Future).

  17. The Science Manager's Guide to Case Studies

    SciTech Connect (OSTI)

    Branch, Kristi M.; Peffers, Melissa S.; Ruegg, Rosalie T.; Vallario, Robert W.

    2001-09-24

    This guide takes the science manager through the steps of planning, implementing, validating, communicating, and using case studies. It outlines the major methods of analysis, describing their relative merits and applicability while providing relevant examples and sources of additional information. Well-designed case studies can provide a combination of rich qualitative and quantitative information, offering valuable insights into the nature, outputs, and longer-term impacts of the research. An objective, systematic, and credible approach to the evaluation of U.S. Department of Energy Office of Science programs adds value to the research process and is the subject of this guide.

  18. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    SciTech Connect (OSTI)

    Driscoll, Donald D. (Case Western Reserve U.)

    2004-01-01

    The Cryogenic Dark Matter Search (CDMS) uses cryogenically-cooled detectors made of germanium and silicon in an attempt to detect dark matter in the form of Weakly-Interacting Massive Particles (WIMPs). The expected interaction rate of these particles is on the order of 1/kg/day, far below the 200/kg/day expected rate of background interactions after passive shielding and an active cosmic ray muon veto. Our detectors are instrumented to make a simultaneous measurement of both the ionization energy and thermal energy deposited by the interaction of a particle with the crystal substrate. A comparison of these two quantities allows for the rejection of a background of electromagnetically-interacting particles at a level of better than 99.9%. The dominant remaining background at a depth of ~11 m below the surface comes from fast neutrons produced by cosmic ray muons interacting in the rock surrounding the experiment. Contamination of our detectors by a beta emitter can add an unknown source of unrejected background. In the energy range of interest for a WIMP study, electrons have a short penetration depth and preferentially interact near the surface. Some of the ionization signal can be lost to the charge contacts there, and the decreased ionization signal relative to the thermal signal will cause a background event that interacts at the surface to be misidentified as a signal event. We can use information about the shape of the thermal signal pulse to discriminate against these surface events. Using a subset of our calibration set that contains a large fraction of electron events, we can characterize the expected behavior of surface events and construct a cut to remove them from our candidate signal events. This thesis describes the development of the 6 detectors (4 x 250 g Ge and 2 x 100 g Si) used in the 2001-2002 CDMS data run at the Stanford Underground Facility with a total of 119 live days of data. The preliminary results presented are based on the first use of a beta-eliminating cut based on the maximum-likelihood characterization described above.
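    The ionization-to-thermal comparison described above amounts to a yield cut: electron-recoil backgrounds give a high ionization yield, nuclear-recoil candidates a suppressed one. A toy sketch follows; the band edges are illustrative placeholders, not actual CDMS cut values.

    ```python
    def passes_nuclear_recoil_cut(ionization_keV, thermal_keV,
                                  y_lo=0.1, y_hi=0.5):
        """Toy version of the ionization-yield discrimination.

        Electromagnetically-interacting backgrounds produce an ionization
        yield (ionization/thermal) near 1; nuclear recoils produce a
        suppressed yield. Events inside the [y_lo, y_hi] band are kept as
        candidates. The band edges here are illustrative only.
        """
        if thermal_keV <= 0:
            return False
        y = ionization_keV / thermal_keV
        return y_lo <= y <= y_hi

    print(passes_nuclear_recoil_cut(3.0, 10.0))   # True: yield 0.3, candidate
    print(passes_nuclear_recoil_cut(9.5, 10.0))   # False: electron-recoil-like
    ```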

  19. Stereotactic Body Radiotherapy Versus Surgery for Medically Operable Stage I Non-Small-Cell Lung Cancer: A Markov Model-Based Decision Analysis

    SciTech Connect (OSTI)

    Louie, Alexander V.; Rodrigues, George; Palma, David A.; Cao, Jeffrey Q.; Yaremko, Brian P.; Malthaner, Richard; Mocanu, Joseph D.

    2011-11-15

    Purpose: To compare the quality-adjusted life expectancy and overall survival in patients with Stage I non-small-cell lung cancer (NSCLC) treated with either stereotactic body radiation therapy (SBRT) or surgery. Methods and Materials: We constructed a Markov model to describe health states after either SBRT or lobectomy for Stage I NSCLC over a 5-year time frame. We report various treatment strategy survival outcomes stratified by age, sex, and pack-year history of smoking, and compared these with an external outcome prediction tool (Adjuvant! Online). Results: Overall survival, cancer-specific survival, and other causes of death as predicted by our model correlated closely with those predicted by the external prediction tool. Overall survival at 5 years as predicted by baseline analysis of our model is in favor of surgery, with a benefit ranging from 2.2% to 3.0% for all cohorts. Mean quality-adjusted life expectancy ranged from 3.28 to 3.78 years after surgery and from 3.35 to 3.87 years for SBRT. The utility threshold for preferring SBRT over surgery was 0.90. Outcomes were sensitive to quality of life, the proportion of local and regional recurrences treated with standard vs. palliative treatments, and the surgery- and SBRT-related mortalities. Conclusions: The role of SBRT in the medically operable patient is yet to be defined. Our model indicates that SBRT may offer overall survival and quality-adjusted life expectancy comparable to surgical resection. Well-powered prospective studies comparing surgery vs. SBRT in early-stage lung cancer are warranted to further investigate the relative survival, quality of life, and cost characteristics of both treatment paradigms.
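    A Markov cohort model of the kind described runs a probability distribution over health states forward in annual cycles, accumulating utility-weighted life-years. A minimal sketch; the states, transition probabilities, and utilities below are illustrative, not the published model's calibrated values.

    ```python
    def markov_qale(transition, utilities, years=5, start=0):
        """Markov cohort simulation over annual cycles.

        transition[i][j] is the one-year probability of moving from state i
        to state j; utilities[i] is the quality-of-life weight of state i.
        Returns (quality-adjusted life-years, overall survival at `years`).
        The last state is assumed to be the absorbing 'dead' state.
        """
        n = len(utilities)
        dist = [0.0] * n
        dist[start] = 1.0
        qale = 0.0
        for _ in range(years):
            dist = [sum(dist[i] * transition[i][j] for i in range(n))
                    for j in range(n)]
            qale += sum(dist[j] * utilities[j] for j in range(n))
        return qale, 1.0 - dist[-1]

    # Illustrative states: 0 = alive, no recurrence; 1 = recurrence; 2 = dead.
    P = [[0.85, 0.10, 0.05],
         [0.00, 0.60, 0.40],
         [0.00, 0.00, 1.00]]
    U = [0.8, 0.5, 0.0]
    qale, surv = markov_qale(P, U, years=5)
    print(qale, surv)
    ```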

  20. Performance-based ratemaking for electric utilities: Review of plans and analysis of economic and resource-planning issues. Volume 1

    SciTech Connect (OSTI)

    Comnes, G.A.; Stoft, S.; Greene, N.; Hill, L.J.

    1995-11-01

    Performance-Based Ratemaking (PBR) is a form of utility regulation that strengthens the financial incentives to lower rates, lower costs, or improve nonprice performance relative to traditional regulation, which the authors call cost-of-service, rate-of-return (COS/ROR) regulation. Although the electric utility industry has considerable experience with incentive mechanisms that target specific areas of performance, implementation of mechanisms that cover a comprehensive set of utility costs or services is relatively rare. In recent years, interest in PBR has increased as a result of growing dissatisfaction with COS/ROR and of economic and technological trends that are leading to more competition in certain segments of the electricity industry. In addition, incentive regulation has been used with some success in other public utility industries, most notably telecommunications in the US and telecommunications, energy, and water in the United Kingdom. In this report, the authors analyze comprehensive PBR mechanisms for electric utilities in four ways: (1) they describe different types of PBR mechanisms, (2) they review a sample of actual PBR plans, (3) they consider the interaction of PBR and utility-funded energy efficiency programs, and (4) they examine how PBR interacts with electric utility resource planning and industry restructuring. The report should be of interest to technical staff of utilities and regulatory commissions that are actively considering or designing PBR mechanisms. 16 figs., 17 tabs.

  1. In Case of Emergency

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    In Case of Emergency. Fire/Police Emergency: ext. 7911; cell phone or off-site: 510-486-7911. When dialing from off-site, extensions must be preceded by 486-; the area code for LBNL is (510). Fire Department (non-emergency): ext. 6015. Police Department (non-emergency): ext. 5472. Non-Emergency Reporting: ext. 6999. Additional information about emergency procedures at Berkeley Lab can be found on the red Emergency Response Guides posted around the lab and

  2. Preliminary hazards analysis -- vitrification process

    SciTech Connect (OSTI)

    Coordes, D.; Ruggieri, M.; Russell, J.; TenBrook, W.; Yimbo, P.

    1994-06-01

    This paper presents a Preliminary Hazards Analysis (PHA) for mixed waste vitrification by joule heating. The purpose of performing a PHA is to establish an initial hazard categorization for a DOE nuclear facility and to identify those processes and structures which may have an impact on or be important to safety. The PHA is typically performed during, and provides input to, project conceptual design. The PHA is then followed by a Preliminary Safety Analysis Report (PSAR) performed during Title 1 and 2 design. The PSAR then leads to performance of the Final Safety Analysis Report, performed during the facility's construction and testing; it should be completed before routine operation of the facility commences. This PHA addresses the first four chapters of the safety analysis process, in accordance with the requirements of DOE Safety Guidelines in SG 830.110. The hazards associated with vitrification processes are evaluated using standard safety analysis methods, which include: identification of credible potential hazardous energy sources; identification of preventative features of the facility or system; identification of mitigative features; and analyses of credible hazards. Maximal facility inventories of radioactive and hazardous materials are postulated to evaluate worst-case accident consequences. These inventories were based on DOE-STD-1027-92 guidance and the surrogate waste streams defined by Mayberry et al. Radiological assessments indicate that a facility, depending on the radioactive material inventory, may be an exempt, Category 3, or Category 2 facility. The calculated impacts would result in no significant impact to offsite personnel or the environment. Hazardous materials assessment indicates that a Mixed Waste Vitrification facility will be a Low Hazard facility having minimal impacts to offsite personnel and the environment.

  3. Stress analysis of closure bolts for shipping casks

    SciTech Connect (OSTI)

    Mok, G.C.; Fischer, L.E.; Hsu, S.T.

    1993-01-01

    This report specifies the requirements and criteria for stress analysis of closure bolts for shipping casks containing nuclear spent fuels or high-level radioactive materials. The specification is based on existing information concerning the structural behavior, analysis, and design of bolted joints. The approach taken was to extend the ASME Boiler and Pressure Vessel Code requirements and criteria for bolting analysis of nuclear piping and pressure vessels to include the appropriate design and load characteristics of the shipping cask. The characteristics considered are large, flat closure lids with metal-to-metal contact within the bolted joint; significant temperature and impact loads; and possible prying and bending effects. Specific formulas and procedures developed apply to the bolt stress analysis of a circular, flat, bolted closure. The report also includes critical load cases and desirable design practices for the bolted closure, an in-depth review of the structural behavior of bolted joints, and a comprehensive bibliography of current information on bolted joints.

  4. Structure-Based Analysis of Toxoplasma gondii Profilin: A Parasite-Specific Motif Is Required for Recognition by Toll-Like Receptor 11

    SciTech Connect (OSTI)

    K Kucera; A Koblansky; L Saunders; K Frederick; E De La Cruz; S Ghosh; Y Modis

    2011-12-31

    Profilins promote actin polymerization by exchanging ADP for ATP on monomeric actin and delivering ATP-actin to growing filament barbed ends. Apicomplexan protozoa such as Toxoplasma gondii invade host cells using an actin-dependent gliding motility. Toll-like receptor (TLR) 11 generates an innate immune response upon sensing T. gondii profilin (TgPRF). The crystal structure of TgPRF reveals a parasite-specific surface motif consisting of an acidic loop, followed by a long β-hairpin. A series of structure-based profilin mutants show that TLR11 recognition of the acidic loop is responsible for most of the interleukin (IL)-12 secretion response to TgPRF in peritoneal macrophages. Deletion of both the acidic loop and the β-hairpin completely abrogates IL-12 secretion. Insertion of the T. gondii acidic loop and β-hairpin into yeast profilin is sufficient to generate TLR11-dependent signaling. Substitution of the acidic loop in TgPRF with the homologous loop from the apicomplexan parasite Cryptosporidium parvum does not affect TLR11-dependent IL-12 secretion, while substitution with the acidic loop from Plasmodium falciparum results in reduced but significant IL-12 secretion. We conclude that the parasite-specific motif in TgPRF is the key molecular pattern recognized by TLR11. Unlike other profilins, TgPRF slows nucleotide exchange on monomeric rabbit actin and binds rabbit actin weakly. The putative TgPRF actin-binding surface includes the β-hairpin and diverges widely from the actin-binding surfaces of vertebrate profilins.

  5. Geothermal Case Studies

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Young, Katherine

    In fiscal year 2015, NREL is working with universities to populate additional case studies on OpenEI. The goal is to provide a large enough dataset to start conducting analyses of exploration programs to identify correlations between successful exploration plans for areas with similar geologic occurrence models.

  6. Geothermal Case Studies

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Young, Katherine

    2014-09-30

    In fiscal year 2015, NREL is working with universities to populate additional case studies on OpenEI. The goal is to provide a large enough dataset to start conducting analyses of exploration programs to identify correlations between successful exploration plans for areas with similar geologic occurrence models.

  7. Advancing the surgical implantation of electronic tags in fish: a gap analysis and research agenda based on a review of trends in intracoelomic tagging effects studies

    SciTech Connect (OSTI)

    Cooke, Steven J.; Woodley, Christa M.; Eppard, M. B.; Brown, Richard S.; Nielsen, Jennifer L.

    2011-03-08

    Early approaches to surgical implantation of electronic tags in fish were often developed through trial and error; in recent years, however, there has been interest in using scientific research to identify techniques and procedures that improve the outcome of surgical procedures and determine the effects of tagging on individuals. Here we summarize trends in 108 peer-reviewed electronic tagging effect studies focused on intracoelomic implantation to determine opportunities for future research. To date, almost all of the studies have been conducted in freshwater, typically in laboratory environments, and have focused on biotelemetry devices. The majority of studies have focused on salmonids, cyprinids, ictalurids and centrarchids, with a regional bias towards North America, Europe and Australia. Most studies have focused on determining whether there is a negative effect of tagging relative to control fish, with proportionally fewer that have contrasted different aspects of the surgical procedure (e.g., methods of sterilization, incision location, wound closure material) that could advance the discipline. Many of these studies included routine endpoints such as mortality, growth, healing and tag retention, with fewer addressing sublethal measures such as swimming ability, predator avoidance, physiological costs, or fitness. Continued research is needed to further elevate the practice of electronic tag implantation in fish in order to ensure that the data generated are relevant to untagged conspecifics (i.e., no long-term behavioural or physiological consequences) and that the surgical procedure does not impair the health and welfare status of the tagged fish.
To that end, we advocate for i) rigorous controlled manipulations based on statistical designs that have adequate power, account for inter-individual variation, and include controls and shams, ii) studies that transcend the laboratory and the field with more studies in marine waters, iii) incorporation of knowledge and techniques emerging from the medical and veterinary disciplines, iv) addressing all components of the surgical event, v) comparative studies that evaluate the same surgical techniques on multiple species and in different environments, vi) consideration of how biotic factors (e.g., sex, age, size) influence tagging outcomes, and vii) studies that cover a range of endpoints over ecologically-relevant time periods.

  8. RESULTS OF THE TECHNICAL AND ECONOMIC FEASIBILITY ANALYSIS FOR A NOVEL BIOMASS GASIFICATION-BASED POWER GENERATION SYSTEM FOR THE FOREST PRODUCTS INDUSTRY

    SciTech Connect (OSTI)

    Bruce Bryan; Joseph Rabovitser; Sunil Ghose; Jim Patel

    2003-11-01

    In 2001, the Gas Technology Institute (GTI) entered into Cooperative Agreement DE-FC26-01NT41108 with the U.S. Department of Energy (DOE) for an Agenda 2020 project to develop an advanced biomass gasification-based power generation system for near-term deployment in the Forest Products Industry (FPI). The advanced power system combines three advanced components, including biomass gasification, 3-stage stoker-fired combustion for biomass conversion, and externally recuperated gas turbines (ERGTs) for power generation. The primary performance goals for the advanced power system are to provide increased self-generated power production for the mill and to increase wastewood utilization while decreasing fossil fuel use. Additional goals are to reduce boiler NOx and CO{sub 2} emissions. The current study was conducted to determine the technical and economic feasibility of an Advanced Power Generation System capable of meeting these goals so that a capital investment decision can be made regarding its implementation at a paper mill demonstration site in DeRidder, LA. Preliminary designs and cost estimates were developed for all major equipment, boiler modifications and balance of plant requirements including all utilities required for the project. A three-step implementation plan was developed to reduce technology risk. The plant design was found to meet the primary objectives of the project for increased bark utilization, decreased fossil fuel use, and increased self-generated power in the mill. Bark utilization for the modified plant is significantly higher (90-130%) than current operation compared to the 50% design goal. For equivalent steam production, the total gas usage for the fully implemented plant is 29% lower than current operation. While the current average steam production from No.2 Boiler is about 213,000 lb/h, the total steam production from the modified plant is 379,000 lb/h. 
    This steam production increase will be accomplished at a grate heat release rate (GHRR) equal to the original boiler design. Boiler efficiency (cogeneration: steam plus air) is increased from the original design value of 70% to 78.9% due to a combination of improved burnout, operation with lower excess air, and drier fuel. For the fully implemented plant, the thermal efficiency of fuel-to-electricity conversion is 79.8% in the cogeneration mode, 5% above the design goal. Finally, self-generated electricity will be increased from the 10.8 MW currently attributable to No. 2 Boiler to 46.7 MW, an increase of 332%. Environmental benefits derived from the system include a reduction in NOx emissions from the boiler of about 30-50% (90-130 tons/year) through syngas reburning, improved carbon burnout, and lower excess air. This does not count NOx reduction that may be associated with replacement of purchased electricity. The project would reduce CO2 emissions from the generation of electricity to meet the mill's power requirements, including 50,000 tons/yr from a net reduction in gas usage in the mill and an additional 410,000 tons/yr reduction in CO2 emissions due to a 34 MW reduction of purchased electricity. The total CO2 reduction amounts to about 33% of the CO2 currently generated to meet the mill's electricity requirement. The overall conclusion of the study is that while significant engineering challenges are presented by the proposed system, they can be met with operationally acceptable and cost-effective solutions. The benefits of the system can be realized in an economic manner, with a simple payback period on the order of 6 years. The results of the study are applicable to many paper mills in the U.S. firing woodwastes and other solid fuels for steam and power production.
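    The quoted self-generation and steam figures can be cross-checked directly; all input values below are taken from the abstract above.

    ```python
    # Cross-check of the performance figures quoted in the abstract.
    steam_current = 213_000      # lb/h, current No. 2 Boiler
    steam_modified = 379_000     # lb/h, fully implemented plant
    power_current = 10.8         # MW self-generated, No. 2 Boiler
    power_modified = 46.7        # MW self-generated, modified plant

    steam_gain = steam_modified / steam_current - 1
    power_gain = (power_modified - power_current) / power_current

    print(f"Steam production increase: {steam_gain:.0%}")   # 78%
    print(f"Self-generation increase:  {power_gain:.0%}")   # 332%, matching the text
    ```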

  9. Modelling renewable electric resources: A case study of wind

    SciTech Connect (OSTI)

    Bernow, S.; Biewald, B.; Hall, J.; Singh, D.

    1994-07-01

    The central issue facing renewables in the integrated resource planning process is the appropriate assessment of the value of renewables to utility systems. This includes their impact on both energy and capacity costs (avoided costs), and on emissions and environmental impacts, taking account of the reliability, system characteristics, interactions (in dispatch), seasonality, and other characteristics and costs of the technologies. These are system-specific considerations whose relationships may have some generic implications. In this report, we focus on the reliability contribution of wind electric generating systems, measured as the amount of fossil capacity they can displace while meeting the system reliability criterion. We examine this issue for a case study system at different wind characteristics and penetration, for different years, with different system characteristics, and with different modelling techniques. In an accompanying analysis we also examine the economics of wind electric generation, as well as its emissions and social costs, for the case study system. This report was undertaken for the "Innovative IRP" program of the U.S. Department of Energy, and is based on work by both Union of Concerned Scientists (UCS) and Tellus Institute, including America's Energy Choices and the UCS Midwest Renewables Project.

  10. Wind to Hydrogen in California: Case Study

    SciTech Connect (OSTI)

    Antonia, O.; Saur, G.

    2012-08-01

    This analysis presents a case study in California for a large scale, standalone wind electrolysis site. This is a techno-economic analysis of the 40,000 kg/day renewable production of hydrogen and subsequent delivery by truck to a fueling station in the Los Angeles area. This quantity of hydrogen represents about 1% vehicle market penetration for a city such as Los Angeles (assuming 0.62 kg/day/vehicle and 0.69 vehicles/person) [8]. A wind site near the Mojave Desert was selected for proximity to the LA area where hydrogen refueling stations are already built.
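    The ~1% penetration figure follows from the per-vehicle demand and vehicle-ownership factors the abstract quotes. A back-of-envelope check; the metro-area population is an assumed round number for the greater LA area, not a value from the report.

    ```python
    # Back-of-envelope check of the ~1% market-penetration claim.
    h2_production = 40_000          # kg/day (from the abstract)
    demand_per_vehicle = 0.62       # kg/day/vehicle (from the abstract)
    vehicles_per_person = 0.69      # vehicles/person (from the abstract)
    metro_population = 10_000_000   # assumed LA-area population, illustrative

    fueled_vehicles = h2_production / demand_per_vehicle
    total_vehicles = metro_population * vehicles_per_person
    penetration = fueled_vehicles / total_vehicles
    print(f"Market penetration: {penetration:.1%}")   # ~0.9%, i.e. about 1%
    ```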

  11. RACORO continental boundary layer cloud investigations. Part I: Case study

    Office of Scientific and Technical Information (OSTI)

    development and ensemble large-scale forcings (Journal Article) | SciTech Connect. Observation-based modeling case studies of continental boundary

  12. Risk Informed Safety Margin Characterization Case Study: Selection of

    Energy Savers [EERE]

    Electrical Equipment To Be Subjected to Environmental Qualification | Department of Energy. Reference 1 discussed key elements of the process for developing a margins-based "safety case" to support safe and efficient operation for an extended period. The

  13. NREL: State and Local Governments - DIY Solar Market Analysis...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The DIY Solar Market Analysis series introduces web-based solar analysis tools as part of the Solar Technical Assistance Team (STAT)...

  14. FES Case Study Worksheets

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    FES Case Study Worksheets: This workshop is closed, and the worksheets can no longer be edited. If you have questions, please report any problems or suggestions for improvement to Richard Gerber (ragerber@lbl.gov). Please choose your worksheet template: Lee Berry, Paul Bonoli, David Green [Read] Jeff Candy [Read] CS Chang [Read] Stephane Ethier [Read] Alex Friedman [Read] Kai Germaschewski [Read] Martin Greenwald [Read] Stephen Jardin [Read] Charlson Kim [Read] Scott Kruger [Read]

  15. Agent-based Infrastructure Interdependency Model

    Energy Science and Technology Software Center (OSTI)

    2003-10-01

    The software is used to analyze infrastructure interdependencies. Agent-based modeling is used for the analysis.

  16. DOE Zero Energy Ready Home Case Study: Palo Duro Homes, Albuquerque...

    Energy Savers [EERE]

    DOE Zero Energy Ready Home Case Study: Palo Duro Homes, Albuquerque, NM. Case study of a New Mexico-based home builder who has built more DOE Zero Energy Ready certified homes than any...

  17. SEP CASE STUDY WEBINAR: MEDIMMUNE

    Broader source: Energy.gov [DOE]

    This Measurement and Verification Case Study webinar is the first in a series of case study webinars to highlight the successes of facilities that have achieved Superior Energy Performance (SEP)...

  18. Geoscience/engineering characterization of the interwell environment in carbonate reservoirs based on outcrop analogs, Permian Basin, West Texas and New Mexico--waterflood performance analysis for the South Cowden Grayburg Reservoir, Ector County, Texas. Final report

    SciTech Connect (OSTI)

    Jennings, J.W. Jr.

    1997-05-01

    A reservoir engineering study was conducted of waterflood performance in the South Cowden field, an Upper Permian Grayburg reservoir on the Central Basin Platform in West Texas. The study was undertaken to understand the historically poor waterflood performance, evaluate three techniques for incorporating petrophysical measurements and geological interpretation into heterogeneous reservoir models, and identify issues in heterogeneity modeling and fluid-flow scaleup that require further research. The approach included analysis of relative permeability data, analysis of injection and production data, heterogeneity modeling, and waterflood simulation. The poor South Cowden waterflood recovery is due, in part, to completion of wells in only the top half of the formation. Recompletion of wells through the entire formation is estimated to improve recovery in ten years by 6 percent of the original oil in place in some areas of the field. A direct three-dimensional stochastic approach to heterogeneity modeling produced the best fit to waterflood performance and injectivity, but a more conventional model based on smooth mapping of layer-averaged properties was almost as good. The results reaffirm the importance of large-scale heterogeneities in waterflood modeling but demonstrate only a slight advantage for stochastic modeling at this scale. All the flow simulations required a reduction to the measured whole-core kv/kh to explain waterflood behavior, suggesting the presence of barriers to vertical flow not explicitly accounted for in any of the heterogeneity models. They also required modifications to the measured steady-state relative permeabilities, suggesting the importance of small-scale heterogeneities and scaleup. Vertical flow barriers, small-scale heterogeneity modeling, and relative permeability scaleup require additional research for waterflood performance prediction in reservoirs like South Cowden.

  19. Appendix A: Reference case

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    Reference case table excerpt; footnotes: 1. Commercial trucks 8,501 to 10,000 pounds gross vehicle weight rating. 2. CAFE standard based on projected new vehicle sales. 3. Includes CAFE credits for...

    Research and evaluation of biomass resources/conversion/utilization systems (market/experimental analysis for development of a data base for a fuels from biomass model). Quarterly technical progress report, February 1, 1980-April 30, 1980

    SciTech Connect (OSTI)

    Ahn, Y.K.; Chen, Y.C.; Chen, H.T.; Helm, R.W.; Nelson, E.T.; Shields, K.J.

    1980-01-01

    The project will result in two distinct products: (1) a biomass allocation model that will serve as a tool for the energy planner, and (2) experimental data to help compare and contrast the behavior of a large number of biomass materials in thermochemical environments. Based on information in the literature, values have been developed for regional biomass costs and availabilities and for fuel costs and demands. These data are now stored in data banks and may be updated as better data become available. Seventeen biomass materials have been run on the small TGA and the results partially analyzed. Ash analysis has been performed on 60 biomass materials. The Effluent Gas Analyzer with its associated gas chromatographs has been made operational and some runs have been carried out. Using a computerized program for developing product costs, parametric studies on all but one of the 14 process configurations being considered have been performed. Background economic data for all the configurations have been developed. Models to simulate biomass gasification in entrained and fixed beds have been developed using models previously used for coal gasification. Runs have been carried out in the fluidized and fixed bed reactor modes using a variety of biomass materials in atmospheres of steam, O2, and air. Checkout of the system continues using fabricated manufacturing cost and efficiency data. A user's manual has been written.

  1. Renewable Fuels Legislation Impact Analysis

    Reports and Publications (EIA)

    2005-01-01

    An analysis based on an extension of the ethanol supply curve in our model to allow for enough ethanol production to meet the requirements of S. 650. This analysis provides an update of the May 23, 2005 analysis, with revised ethanol production and cost assumptions.

  2. Evaluation of energy system analysis techniques for identifying underground facilities

    SciTech Connect (OSTI)

    VanKuiken, J.C.; Kavicky, J.A.; Portante, E.C.

    1996-03-01

    This report describes the results of a study to determine the feasibility and potential usefulness of applying energy system analysis techniques to help detect and characterize underground facilities that could be used for clandestine activities. Four off-the-shelf energy system modeling tools were considered: (1) ENPEP (Energy and Power Evaluation Program) - a total energy system supply/demand model, (2) ICARUS (Investigation of Costs and Reliability in Utility Systems) - an electric utility system dispatching (or production cost and reliability) model, (3) SMN (Spot Market Network) - an aggregate electric power transmission network model, and (4) PECO/LF (Philadelphia Electric Company/Load Flow) - a detailed electricity load flow model. For the purposes of most of this work, underground facilities were assumed to consume about 500 kW to 3 MW of electricity. For some of the work, facilities as large as 10-20 MW were considered. The analysis of each model was conducted in three stages: data evaluation, base-case analysis, and comparative case analysis. For ENPEP and ICARUS, open source data from Pakistan were used for the evaluations. For SMN and PECO/LF, the country data were not readily available, so data for the state of Arizona were used to test the general concept.

  3. Explosively separable casing

    DOE Patents [OSTI]

    Jacobson, Albin K. (Albuquerque, NM); Rychnovsky, Raymond E. (Livermore, CA); Visbeck, Cornelius N. (Livermore, CA)

    1985-01-01

    An explosively separable casing including a cylindrical afterbody and a circular cover for one end of the afterbody is disclosed. The afterbody has a cylindrical tongue extending longitudinally from one end which is matingly received in a corresponding groove in the cover. The groove is sized to provide a pocket between the end of the tongue and the remainder of the groove so that an explosive can be located therein. A seal is also provided between the tongue and the groove for sealing the pocket from the atmosphere. A frangible holding device is utilized to hold the cover to the afterbody. When the explosive is ignited, the increase in pressure in the pocket causes the cover to be accelerated away from the afterbody. Preferably, the inner wall of the afterbody is in the same plane as the inner wall of the tongue to provide a maximum space for storage in the afterbody and the side wall of the cover is thicker than the side wall of the afterbody so as to provide a sufficiently strong surrounding portion for the pocket in which the explosion takes place. The detonator for the explosive is also located on the cover and is carried away with the cover during separation. The seal is preferably located at the longitudinal end of the tongue and has a chevron cross section.

  4. Restricted Natural Gas Supply Case (released in AEO2005)

    Reports and Publications (EIA)

    2005-01-01

    The restricted natural gas supply case provides an analysis of the energy-economic implications of a scenario in which future gas supply is significantly more constrained than assumed in the reference case. Future natural gas supply conditions could be constrained because of problems with the construction and operation of large new energy projects, and because the future rate of technological progress could be significantly lower than the historical rate. Although the restricted natural gas supply case represents a plausible set of constraints on future natural gas supply, it is not intended to represent what is likely to happen in the future.

  5. Technology Deployment Case Studies | Department of Energy

    Office of Environmental Management (EM)

    These case studies describe evaluations of energy-efficient technologies being used in federal...

  6. Better Buildings Residential Network Case Study: Partnerships...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Better Buildings Residential Network Case Study: Partnerships, from the U.S. ...

  7. Geothermal Case Study Challenge | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    The ... student competition in exploration research to engage students pursuing STEM careers ...

  8. Patrick Case | Y-12 National Security Complex

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Blake Case Larry Case Patrick Case Dorothy Coker Gordon Fee Linda Fellers Louis Freels Marie Guy Nathan Henry Agnes Houser John Rice Irwin Harvey Kite Charlie Manning Alice...

  9. Blake Case | Y-12 National Security Complex

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Blake Case Larry Case Patrick Case Dorothy Coker Gordon Fee Linda Fellers Louis Freels Marie Guy Nathan Henry Agnes Houser John Rice Irwin Harvey Kite Charlie Manning Alice...

  10. Water Efficiency Case Studies | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    These case studies offer examples of water efficiency projects implemented by federal agencies. They are organized by ...

  11. BBRN Factsheet: Case Study: Community Engagement | Department...

    Office of Environmental Management (EM)

    Case Study: Community Engagement, on the Community Home Energy Retrofit Project...

  12. A Business Case for Home Performance Contracting

    SciTech Connect (OSTI)

    Baechler, Michael C.; Antonopoulos, Chrissi A.; Sevigny, Maureen; Gilbride, Theresa L.; Hefty, Marye G.

    2012-10-01

    This report was prepared by PNNL for the DOE Building America program. The report provides information for businesses considering entering the home performance contracting industry. Metrics discussed include industry trends and drivers, specific points of entry, business models, startup costs, and marketing strategies. The report includes detailed analysis of eight businesses around the country that have successfully entered the home performance contracting industry. Data is provided on their financial structures, program participation, marketing efforts, and staff training. This report will be distributed via the DOE Building America website, www.buildingamerica.gov. Individual case studies will also be cleared separately.

  13. Non-ferromagnetic overburden casing

    DOE Patents [OSTI]

    Vinegar, Harold J. (Bellaire, TX); Harris, Christopher Kelvin (Houston, TX); Mason, Stanley Leroy (Allen, TX)

    2010-09-14

    Systems, methods, and heaters for treating a subsurface formation are described herein. At least one system for electrically insulating an overburden portion of a heater wellbore is described. The system may include a heater wellbore located in a subsurface formation and an electrically insulating casing located in the overburden portion of the heater wellbore. The casing may include at least one non-ferromagnetic material such that ferromagnetic effects are inhibited in the casing.

  14. FAQ for Case Study Authors

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  15. FAQ for Case Study Authors

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Reviews FAQ for Case Study Authors Science Engagement Move your data Programs & Workshops Science Requirements Reviews Network Requirements Reviews Documents and Background...

  16. EPICS BASE

    Energy Science and Technology Software Center (OSTI)

    002230MLTPL00 Experimental Physics and Industrial Control System BASE http://www.aps.anl.gov/epics

  17. Transmittal of the Calculation Package that Supports the Analysis of Performance of the Environmental Management Waste Management Facility Oak Ridge, Tennessee (Based 5-Cell Design Issued 8/14/09)

    SciTech Connect (OSTI)

    Williams M.J.

    2009-09-14

    This document presents the results of an assessment of the performance of a build-out of the Environmental Management Waste Management Facility (EMWMF). The EMWMF configuration that was assessed includes the as-constructed Cells 1 through 4, with a groundwater underdrain that was installed beneath Cell 3 during the winter of 2003-2004, and Cell 5, whose proposed design is presented in an Addendum to the Remedial Design Report for the Disposal of Oak Ridge Reservation Comprehensive Environmental Response, Compensation, and Liability Act of 1980 Waste, Oak Ridge, Tennessee, DOE/OR/01-1873&D2/A5/R1. The total capacity of the EMWMF with 5 cells is about 1.7 million cubic yards. This assessment was conducted to determine the conditions under which the approved Waste Acceptance Criteria (WAC) for the EMWMF found in the Attainment Plan for Risk/Toxicity-Based Waste Acceptance Criteria at the Oak Ridge Reservation, Oak Ridge, Tennessee [U.S. Department of Energy (DOE) 2001a], as revised for constituents added up to October 2008, would remain protective of public health and safety for a five-cell disposal facility. For consistency, the methods of analyses and the exposure scenario used to predict the performance of a five-cell disposal facility were identical to those used in the Remedial Investigation and Feasibility Study (RI/FS) and its addendum (DOE 1998a, DOE 1998b) to develop the approved WAC. To take advantage of new information and design changes departing from the conceptual design, the modeling domain and model calibration were updated from those used in the RI/FS and its addendum. It should be noted that this analysis is not intended to justify or propose a change in the approved WAC.

  18. Steel catenary risers for semisubmersible based floating production systems

    SciTech Connect (OSTI)

    Hays, P.R.

    1996-12-31

    The DeepStar production riser committee has investigated the feasibility of using steel catenary risers (SCRs) in water depths of 3,000--6,000 ft. Using Sonat's George Richardson as the base semisubmersible, DeepStar has examined both extreme event response and fatigue life of an SCR made of pipe sections welded end-to-end. Concepts using alternative materials were investigated. This included steel, steel with titanium, and titanium catenary risers. The pros and cons of frequency domain versus time domain analysis were investigated with a commercially available analysis package. A second study outlined a definitive analysis procedure which optimized the analysis time requirements. Analyses showed that steel catenary risers are feasible for semisubmersible based floating production systems. For the DeepStar Gulf of Mexico design criteria, alternative materials are not required. The greatest fatigue damage occurs in the touchdown region of the riser. Mild sea states contribute most to fatigue damage near riser touchdown. Wave drift and wind forces provide a significant contribution to touchdown area fatigue damage. Estimated fatigue lives are unacceptable. Although the rotations of the upper end of the riser are large relative to an SCR attached to a TLP, the rotation required can probably be accommodated with existing technology. For the case of product export, steel catenary risers provide very cost effective and readily installable deep water riser alternatives.
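    The static geometry underlying an SCR analysis like the one above follows the classical catenary relations. The sketch below is a minimal illustration only, not DeepStar's analysis procedure; the function name and parameter values are hypothetical.

```python
import math

def catenary_profile(horizontal_tension_n, weight_n_per_m, x_m):
    """Static catenary relations for a riser hanging from a floater.

    a = H/w sets the catenary scale; elevation above touchdown is
    y(x) = a*(cosh(x/a) - 1), and wall tension grows as T = H + w*y.
    """
    a = horizontal_tension_n / weight_n_per_m
    y = a * (math.cosh(x_m / a) - 1.0)
    tension = horizontal_tension_n + weight_n_per_m * y
    return y, tension
```

    Real SCR design adds vessel motions, wave loading, and soil interaction at touchdown, which is why the abstract emphasizes time domain analysis and touchdown-region fatigue.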

  19. SUPPLEMENT ANALYSIS

    Energy Savers [EERE]

    DOE/EA-1812 Supplement Analysis, October 2013. SUPPLEMENT ANALYSIS for the FINAL ENVIRONMENTAL ASSESSMENT for NECO (FORMERLY HAXTUN) WIND ENERGY PROJECT, LOGAN AND PHILLIPS COUNTIES, COLORADO. U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Golden Field Office, and U.S. Department of Energy, Western Area Power Administration, Rocky Mountain Customer Service Region. October 2013. DOE/EA-1812/SA-1

  20. Thermal initiation caused by fragment impact on cased explosives

    SciTech Connect (OSTI)

    Schnurr, N.M.

    1989-01-01

    Numerical calculations have been used to predict the velocity threshold for thermal initiation of a cased explosive caused by fragment impact. A structural analysis code was used to determine temperature profiles and a thermal analysis code was used to calculate reaction rates. Results generated for the United States Air Force MK 82 bomb indicate that the velocity threshold for thermal initiation is slightly higher than that for the shock-to-detonation process. 8 refs., 5 figs., 2 tabs.
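    The reaction-rate side of a thermal initiation calculation like this is typically an Arrhenius expression. A minimal sketch follows; the pre-exponential factor and activation energy in the test values are placeholders, not MK 82 explosive properties.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(pre_exponential, activation_energy_j_mol, temperature_k):
    """First-order Arrhenius reaction rate: k = A * exp(-Ea / (R*T)).

    The strong exponential dependence on temperature is what makes the
    temperature profiles from the structural analysis the key input.
    """
    return pre_exponential * math.exp(-activation_energy_j_mol / (R * temperature_k))
```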

  1. Incorporating Experience Curves in Appliance Standards Analysis

    SciTech Connect (OSTI)

    Garbesi, Karina; Chan, Peter; Greenblatt, Jeffery; Kantner, Colleen; Lekov, Alex; Meyers, Stephen; Rosenquist, Gregory; Buskirk, Robert Van; Yang, Hung-Chia; Desroches, Louis-Benoit

    2011-10-31

    The technical analyses in support of U.S. energy conservation standards for residential appliances and commercial equipment have typically assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. There is, however, considerable evidence that this assumption does not reflect real market prices. Costs and prices generally fall in relation to cumulative production, a phenomenon known as experience and modeled by a fairly robust empirical experience curve. Using price data from the Bureau of Labor Statistics, and shipment data obtained as part of the standards analysis process, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These allow us to develop more representative appliance price projections than the assumption-based approach of constant prices. These experience curves were incorporated into recent energy conservation standards for these products. The impact on the national modeling can be significant, often increasing the net present value of potential standard levels in the analysis. In some cases a previously cost-negative potential standard level demonstrates a benefit when incorporating experience. These results imply that past energy conservation standards analyses may have undervalued the economic benefits of potential standard levels.
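    The experience-curve relationship the abstract describes, price falling by a fixed fraction with each doubling of cumulative production, can be sketched as follows. The numbers in the example are illustrative, not values from the standards analysis.

```python
import math

def experience_price(p0, q0, q, learning_rate):
    """Projected price at cumulative production q, given price p0 at q0.

    learning_rate is the fractional price drop per doubling of cumulative
    production; the experience exponent is b = -log2(1 - learning_rate),
    so price follows p = p0 * (q / q0)**(-b).
    """
    b = -math.log2(1.0 - learning_rate)
    return p0 * (q / q0) ** (-b)
```

    With a 20% learning rate, each doubling of cumulative production cuts price to 80% of its previous value, which is why constant-price assumptions tend to undervalue potential standard levels.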

  2. Spatial data analysis and environmental justice

    SciTech Connect (OSTI)

    Bahadur, R.; Samuels, W.B.; Williams, J.W.; Zeitoun, A.H.

    1997-08-01

    Evaluations of environmental justice for government actions concerned with the transportation of hazardous materials over cross country routes presents a significant challenge in spatial data analysis. The sheer volume of data required for accurate identification of minority and low-income populations along the routes and at the endpoints can be formidable. Managing and integrating large volumes of information with state-of-the-art tools is essential in the analysis of environmental justice and equity concerns surrounding transportation of hazardous materials. This paper discusses the role and limitations of geographical information systems in the analysis and visualization of populations potentially affected by the transportation of hazardous materials over transcontinental ground and water routes. Case studies are used to demonstrate the types of data and analyses needed for evaluations of environmental justice for cross country routes and end points. Inherent capabilities and limitations in spatial resolution are evaluated for environmental assessments in which potentially affected areas are quantified based on the physical characteristics of the hazardous cargo.

  3. Elizabeth Case | Department of Energy

    Office of Environmental Management (EM)

    Elizabeth Case - Guest Blogger, Cycle for Science. Most Recent: Rain or Shine: We Cycle for Science (July 2); Mountains, and Teachers, and a Bear, Oh My! (June 2); Sol-Cycle: Biking Across America for Science Education (May 1)

  4. Business Case for Technical Qualification Program Accreditation...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Business Case for Technical Qualification Program Accreditation Incentives TQP Accreditation standardize ...

  5. Case Study for the ARRA-funded GSHP Demonstration at University at Albany

    SciTech Connect (OSTI)

    Liu, Xiaobing; Malhotra, Mini; Xiong, Zeyu

    2015-03-01

    High initial costs and lack of public awareness of ground-source heat pump (GSHP) technology are the two major barriers preventing rapid deployment of this energy-saving technology in the United States. Under the American Recovery and Reinvestment Act (ARRA), 26 GSHP projects have been competitively selected and carried out to demonstrate the benefits of GSHP systems and innovative technologies for cost reduction and/or performance improvement. This report highlights the findings of a case study of one of the ARRA-funded GSHP demonstration projects, a distributed GSHP system at a new 500-bed apartment-style student residence hall at the University at Albany. This case study is based on the analysis of detailed design documents, measured performance data, published catalog data of heat pump equipment, and actual construction costs. Simulations with a calibrated computer model are performed for both the demonstrated GSHP system and a baseline heating, ventilation, and air-conditioning (HVAC) system to determine the energy savings and other related benefits achieved by the GSHP system. The evaluated performance metrics include the energy efficiency of the heat pump equipment and the overall GSHP system, as well as the pumping performance, energy savings, carbon emission reductions, and cost-effectiveness of the demonstrated GSHP system compared with the baseline HVAC system. This case study also identifies opportunities for improving the operational efficiency of the demonstrated GSHP system.
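    The headline performance metrics named in this case study reduce to simple ratios. A hedged sketch of the two most basic ones follows; the function names and test values are illustrative, not the report's actual methodology or results.

```python
def seasonal_cop(heat_delivered_kwh, electricity_used_kwh):
    """System coefficient of performance: thermal output over electric input."""
    return heat_delivered_kwh / electricity_used_kwh

def percent_savings(baseline_kwh, gshp_kwh):
    """Fractional energy savings of the GSHP system versus the baseline HVAC."""
    return (baseline_kwh - gshp_kwh) / baseline_kwh
```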

  6. Case study for ARRA-funded ground-source heat pump (GSHP) demonstration at Oakland University

    SciTech Connect (OSTI)

    Im, Piljae; Liu, Xiaobing

    2015-09-01

    High initial costs and lack of public awareness of ground-source heat pump (GSHP) technology are the two major barriers preventing rapid deployment of this energy-saving technology in the United States. Under the American Recovery and Reinvestment Act (ARRA), 26 GSHP projects have been competitively selected and carried out to demonstrate the benefits of GSHP systems and innovative technologies for cost reduction and/or performance improvement. This paper highlights the findings of a case study of one of the ARRA-funded GSHP demonstration projects, a ground-source variable refrigerant flow (GS-VRF) system installed at the Human Health Building at Oakland University in Rochester, Michigan. This case study is based on the analysis of measured performance data, maintenance records, construction costs, and simulations of the energy consumption of conventional central heating, ventilation, and air-conditioning (HVAC) systems providing the same level of space conditioning as the demonstrated GS-VRF system. The evaluated performance metrics include the energy efficiency of the heat pump equipment and the overall GS-VRF system, pumping performance, energy savings, carbon emission reductions, and cost-effectiveness of the GS-VRF system compared with conventional HVAC systems. This case study also identified opportunities for reducing uncertainties in the performance evaluation, improving the operational efficiency, and reducing the installed cost of similar GSHP systems in the future.

  7. General Dynamics Case Study for Superior Energy Performance | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    General Dynamics became the first U.S. defense contractor to achieve certification under both ISO 50001 and Superior Energy Performance® (SEP™), based on the company's energy management system at a plant it operates in Scranton, Pennsylvania, USA.

  8. Interactive savings calculations for RCS measures, six case studies

    SciTech Connect (OSTI)

    Stovall, T.K.

    1983-11-01

    Many Residential Conservation Service (RCS) audits are based, in whole or in part, on the RCS Model Audit. This audit calculates the savings for each measure independently, that is, as if no other conservation actions were taken. This method overestimates the total savings due to a group of measures, and an explanatory warning is given to the customer. Presenting interactive results to consumers would increase the perceived credibility of the audit results by eliminating the need for the warning about uncalculated interactive effects. An increased level of credibility would, it is hoped, lead to an increased level of conservation actions based on the audit results. Because many of the existing RCS audits are based on the RCS Model Audit, six case studies were produced to show that the Model Audit algorithms can be used to produce interactive savings estimates. These six Model Audit case studies, as well as two Computerized Instrumented Residential Audit cases, are presented along with a discussion of the calculation methods used.
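    The overestimate described above can be made concrete: summing each measure's savings against the original load double-counts, while applying measures sequentially does not. This is a minimal sketch of the distinction, not the Model Audit algorithms themselves.

```python
def independent_savings(load_kwh, fractions):
    """Per-measure savings each computed against the ORIGINAL load;
    overestimates the combined total, as the audit warning notes."""
    return sum(load_kwh * f for f in fractions)

def interactive_savings(load_kwh, fractions):
    """Apply measures sequentially so each acts on the already-reduced load."""
    remaining = load_kwh
    for f in fractions:
        remaining *= 1.0 - f
    return load_kwh - remaining
```

    For a 1,000 kWh load with two measures saving 20% and 30%, the independent estimate is 500 kWh while the interactive estimate is 440 kWh.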

  9. Computer aided cogeneration feasibility analysis

    SciTech Connect (OSTI)

    Anaya, D.A.; Caltenco, E.J.L.; Robles, L.F.

    1996-12-31

    A successful cogeneration system design depends on several factors, and the optimal configuration can be found using steam and power simulation software. The key characteristics of one such software package are described below, and its application to a process plant cogeneration feasibility analysis is shown in this paper. Finally, a case study is illustrated. 4 refs., 2 figs.

  10. Infrastructure Analysis Tools: A Focus on Cash Flow Analysis (Presentation)

    SciTech Connect (OSTI)

    Melaina, M.; Penev, M.

    2012-09-01

    NREL has developed and maintains a variety of infrastructure analysis models for the U.S. Department of Energy. Business case analysis has recently been added to this tool set. This presentation focuses on cash flow analysis. Cash flows depend upon infrastructure costs, optimized spatially and temporally, and assumptions about financing and revenue. NREL has incorporated detailed metrics on financing and incentives into the models. Next steps in modeling include continuing to collect feedback on regional/local infrastructure development activities and 'roadmap' dynamics, and incorporating consumer preference assumptions on infrastructure to provide direct feedback between vehicles and station rollout.
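    Cash flow analysis of the kind described ultimately rests on discounting infrastructure costs and revenues to a net present value. A minimal sketch follows; this is not NREL's model, and the example figures are illustrative.

```python
def npv(discount_rate, cash_flows):
    """Net present value of yearly cash flows; cash_flows[0] is year 0.

    Each year-t cash flow is discounted by (1 + r)**t before summing.
    """
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))
```

    For instance, an outlay of 100 returning 110 one year later exactly breaks even at a 10% discount rate.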

  11. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; et al

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  12. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    SciTech Connect (OSTI)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
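    The lognormal fit to the aerosol number size distribution mentioned in both records has a standard closed form. The sketch below uses illustrative parameter values; the campaign's actual fitted modes are not reproduced here.

```python
import math

def lognormal_number_dist(diameter, total_n, median_diameter, geo_std_dev):
    """dN/dlnD for a lognormal aerosol number size distribution.

    total_n is the total number concentration, median_diameter the count
    median diameter, and geo_std_dev the geometric standard deviation.
    """
    ln_sg = math.log(geo_std_dev)
    z = math.log(diameter / median_diameter) / ln_sg
    return total_n / (math.sqrt(2.0 * math.pi) * ln_sg) * math.exp(-0.5 * z * z)
```

    Fitting each observed mode to three such parameters is what allows the concise representation in models that the abstract describes.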

  13. Industrial process heat case studies. [PROSYS/ECONMAT code]

    SciTech Connect (OSTI)

    Hooker, D.W.; May, E.K.; West, R.E.

    1980-05-01

    Commercially available solar collectors have the potential to provide a large fraction of the energy consumed for industrial process heat (IPH). Detailed case studies of individual industrial plants are required in order to make an accurate assessment of the technical and economic feasibility of applications. This report documents the results of seven such case studies. The objectives of the case study program are to determine the near-term feasibility of solar IPH in selected industries, identify energy conservation measures, identify conditions of IPH systems that affect solar applications, test SERI's IPH analysis software (PROSYS/ECONOMAT), disseminate information to the industrial community, and provide inputs to the SERI research program. The detailed results from the case studies are presented. Although few near-term, economical solar applications were found, the conditions that would enhance the opportunities for solar IPH applications are identified.

  14. Methods for spectral image analysis by exploiting spatial simplicity

    DOE Patents [OSTI]

    Keenan, Michael R.

    2010-05-25

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.
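    The "spatial simplicity" idea in this patent, that at any given pixel only one or a few chemical species have non-zero concentration, can be illustrated with a crude hard-thresholding step. This is an illustration of the constraint only, not the patented factorization method.

```python
def enforce_spatial_simplicity(abundances, keep=1):
    """Zero all but the `keep` largest component abundances at each pixel."""
    simplified = []
    for pixel in abundances:  # pixel: list of per-component abundances
        ranked = sorted(range(len(pixel)), key=lambda i: pixel[i], reverse=True)
        kept = set(ranked[:keep])
        simplified.append([v if i in kept else 0.0 for i, v in enumerate(pixel)])
    return simplified
```

    In a full multivariate curve resolution workflow, such a constraint would be applied inside the alternating least squares loop rather than as a one-off postprocessing step.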

  15. Methods for spectral image analysis by exploiting spatial simplicity

    DOE Patents [OSTI]

    Keenan, Michael R. (Albuquerque, NM)

    2010-11-23

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.

  16. Geographically Based Hydrogen Consumer Demand and Infrastructure...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Geographically Based Hydrogen Consumer Demand and Infrastructure Analysis Final Report M. Melendez and A. Milbrandt Technical Report NREL/TP-540-40373 October 2006 NREL is operated...

  17. Uncertainty Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Analysis - Sandia Energy ...

  18. OHA Whistleblower Cases Archive File

    Broader source: Energy.gov [DOE]

This is an archive file of our Whistleblower decisions. Please download this file to your local computer and use the built-in Adobe search feature. Individual cases are listed in the bookmark...

  19. OHA Security Cases Archive File

    Broader source: Energy.gov [DOE]

This is an archive file of our Security decisions. Please download this file to your local computer and use the built-in Adobe search feature. Individual cases are listed in the bookmark section of...

  20. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    by region and country, Low Oil Price case, 2009-40 (million barrels per day) Region History Projections Average annual percent change, 2010-40 2009 2010 2011 2020 2025 2030...

  1. Alternative Fuels Data Center: Case Studies

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]

Case Studies ...

  2. BerkeleyGW Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

BerkeleyGW Case Study BerkeleyGW Case Study Code Description and Science Problem BerkeleyGW is a materials science application for calculating the excited-state properties of materials, such as band gaps, band structures, absorption spectroscopy, photoemission spectroscopy, and more. It requires as input the Kohn-Sham orbitals and energies from a DFT code like Quantum ESPRESSO, PARATEC, PARSEC, etc. Like such DFT codes, it is heavily dependent on FFTs, dense linear algebra, and tensor contraction

  3. Chombo-Crunch Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Chombo-Crunch Case Study Chombo-Crunch Case Study Background Chombo-Crunch is a high-performance software package which has been developed jointly by research scientists from Applied Numerical Algorithms Group, Computational Research Division (PI: David Trebotich) and Earth Sciences Division at LBNL for large-scale numerical simulations of complex fluid flows with particular interest in modeling of subsurface flows. One important application example of subsurface flow is a carbon sequestration -

  4. EIA Cases | Department of Energy

    Energy Savers [EERE]

    EIA Cases EIA Cases RSS February 14, 2011 TEE-0073 - In the Matter of Cole Distributing, Inc. On December 13, 2010, Cole Distributing, Inc. (Cole) filed an Application for Exception with the Office of Hearings and Appeals (OHA) of the Department of Energy (DOE). The firm requests that it be permanently relieved of the requirement to prepare and file the Energy Information Administration (EIA) Form EIA-782B, entitled "Resellers'/Retailers' Monthly Petroleum Product Sales Report." As

  5. FOIA Cases | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

FOIA Cases FOIA Cases RSS December 22, 2015 FIA-15-0066 - In the Matter of Government Accountability Project On December 22, 2015, OHA denied a FOIA Appeal filed by Government Accountability Project from a determination issued by the DOE Office of Inspector General (OIG). In the Appeal, the Appellant challenged the redactions made in the responsive documents. OHA found, however, that the redactions made pursuant to FOIA Exemptions 5 and 6 were appropriate. December 10, 2015 FIA-15-0064 - In the

  6. Whistleblower Cases | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Whistleblower Cases Whistleblower Cases RSS December 31, 2015 WBA-15-0009 - In the Matter of Sandra Black On December 31, 2015, OHA denied an Appeal involving a Complaint filed by Sandra Black against Savannah River Nuclear Solutions, LLC (SRNS) under the DOE's Contractor Employee Protection Program, 10 CFR Part 708. In her Complaint, Black alleged SRNS terminated her for engaging in protected activities, specifically citing her participation in a Government Accountability Office review as a

  7. Security Cases | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Security Cases Security Cases RSS December 22, 2015 PSH-15-0067 - In the Matter of Personnel Security Hearing On December 22, 2015, an Administrative Judge issued a decision in which she determined that an individual's access authorization should not be restored. During a personnel security interview in April 2014 and a credit report review, the Local Security Office (LSO) learned that the individual had a number of charge-off accounts totaling $14,941, as well as an outstanding collection

  8. Analysis of surface integrity of grinded gears using Barkhausen noise analysis and x-ray diffraction

    SciTech Connect (OSTI)

Vrkoslavová, Lucie; Louda, Petr; Malec, Jiří

    2014-02-18

This contribution presents the results of a study of ground gears made of 18CrNiMo7-6 steel used for service purposes in wind power plants. The gears were case-hardened to form the standard hard case and soft core; this heat treatment increases the wear resistance and fatigue strength of machine parts. During serial production, problems with surface integrity occurred. In the course of solving this complex problem, many samples were prepared: the gears were ground with different cutting speeds and amounts of material removal, using lots from different subsuppliers. Material characterization was carried out with a Barkhausen noise analysis (BNA) device, and X-ray diffraction (XRD) measurement of surface residual stresses was performed as well. Depth profiles of the measured characteristics, e.g. the magnetoelastic parameter and residual stress, were obtained by removing layers step by step using electrolytic etching. The BNA software Viewscan was used to measure magnetizing frequency sweeps (MFS) and magnetizing voltage sweeps (MVS). Scans of the magnetoelastic parameter (MP) along individual teeth were also carried out with Viewscan. These measurements were done to find problematic surface areas after grinding, such as thermally damaged locations. Hardness profiles and the thickness of the case-hardened layer were measured on cross sections as well. The structure of the subsurface case-hardened layer and the core was evaluated on etched metallographic specimens. The aim of the measurements was to find correlations between the grinding conditions, residual stresses, and structural and magnetoelastic parameters. Based on the correlation of measured values and technological parameters, the production of the gears will be optimized.

  9. Human Factors Engineering Analysis Tool

    Energy Science and Technology Software Center (OSTI)

    2002-03-04

    HFE-AT is a human factors engineering (HFE) software analysis tool (AT) for human-system interface design of process control systems, and is based primarily on NUREG-0700 guidance.

  10. NREL: Energy Analysis - Market Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Market Analysis The laboratory's market analysis helps increase the use of renewable energy (RE) and energy efficiency (EE) technologies in the marketplace by providing strategic information to stakeholders interested in rapidly changing electricity markets. Our high-quality and objective crosscutting assessments and analysis support informed decision making. Primary focuses include: Energy Technology/Program Cost, Performance, and Market Data The Office of Energy Efficiency and Renewable Energy

  11. Budget Risk & Prioritization Analysis Tool

    Energy Science and Technology Software Center (OSTI)

    2010-12-31

BRPAtool performs the following: assists managers in making solid decisions on what scope/activities to reduce and/or eliminate to meet constrained budgets, based on multiple risk factors; enables analysis of different budget scenarios; can analyze risks and cost for each activity based on technical, quantifiable risk criteria and management-determined risks; provides real-time analysis; enables managers to determine the multipliers and where funding is best applied; promotes solid budget defense

  12. REVIEW OF NRC APPROVED DIGITAL CONTROL SYSTEMS ANALYSIS

    SciTech Connect (OSTI)

    D.W. Markman

    1999-09-17

Preliminary design concepts for the proposed Subsurface Repository at Yucca Mountain indicate extensive reliance on modern, computer-based, digital control technologies. The purpose of this analysis is to investigate the degree to which the U.S. Nuclear Regulatory Commission (NRC) has accepted and approved the use of digital control technology for safety-related applications within the nuclear power industry. This analysis reviews cases of existing digitally based control systems that have been approved by the NRC. These cases can serve as precedents for using similar types of digitally based control technologies within the Subsurface Repository. While it is anticipated that the Yucca Mountain Project (YMP) will not contain control systems as complex as those required for a nuclear power plant, the review of these existing NRC-approved applications will provide the YMP with valuable insight into the NRC's review process and design expectations for safety-related digital control systems. According to the YMP Compliance Program Guidance, the nuclear power plant safety-related concept, drawn from portions of various NUREGs, Regulatory Guides, and nuclear IEEE standards, would be applied to some of the designs on a case-by-case basis. This analysis considers key design methods, capabilities, successes, and important limitations or problems of selected control systems that have been approved for use in the nuclear power industry. An additional purpose of this analysis is to provide background information in support of further development of design criteria for the YMP. The scope and primary objectives of this analysis are to: (1) Identify and research the extent and precedents of digital control and remotely operated systems approved by the NRC for the nuclear power industry, helping to provide a basis for using and relying on digital technologies for nuclear-related safety-critical applications. (2) Identify the basic control architecture and methods of key digital control systems approved for use in the nuclear power industry by the NRC. (3) Identify and discuss key design issues, features, benefits, and limitations of these NRC-approved digital control systems that can be applied as design guidance and correlated to the Monitored Geologic Repository (MGR) design requirements. (4) Identify codes and standards used in the design of these NRC-approved digital control systems and discuss their possible applicability to the design of a subsurface nuclear waste repository. (5) Evaluate the NRC-approved digital control systems' safety, reliability, and maintainability features and issues, and apply these to MGR design methodologies and requirements. (6) Provide recommendations for use in developing design criteria in the System Description Documents for the digital control systems of the subsurface nuclear waste repository at Yucca Mountain. (7) Develop recommendations for applying NRC approval methods to digital control systems for the subsurface nuclear waste repository at Yucca Mountain. This analysis focuses on the development of the issues, criteria, and methods used and required for identifying the appropriate requirements for digitally based control systems. Attention is placed on the development of recommended design criteria for digital controls, including interpretation of codes, standards, and regulations, and on the use of digital controls and COTS (commercial off-the-shelf) technology and equipment in selected NRC-approved digital control systems, as referenced in applicable codes, standards, and regulations. The analysis addresses design issues related to COTS technology and how they were dealt with in previous NRC-approved digital control systems.

  13. Choosing among alternative recycling systems: An economic analysis

    SciTech Connect (OSTI)

    Stedge, G.D. . Dept. of Agricultural and Applied Economics); Halstead, J.M. . Dept. of Resource Economics and Development)

    1994-03-01

Due to the increasing concern over the disposal of municipal solid waste, municipalities have begun searching for ways to recycle a larger percentage of their waste stream at a reasonable cost. This report examines bag-based recycling. This system, owing to its efficient collection and separation method and its convenience, should be able to capture a larger share of the waste stream at a lower cost per metric ton than conventional recycling programs. Using a case study approach, a bag-based program is compared with a curbside-sort program and a drop-off program. Using time/motion analysis, a garbage composition study, a household survey, and recorded set-out rates for a sample of dwelling units, the efficiency of the three programs was defined and estimated. The efficiency of the bag-based system was also estimated for three areas with distinct household densities. Although the curbside-sort program was found to divert a larger percentage of the residential waste stream than the bag-based system, the cost per metric ton of the bag-based system is so much lower that it is clearly the most efficient of the three programs. The drop-off program had a very low cost per metric ton; however, it failed to divert the minimum acceptable share of the waste stream. The bag-based system proved to be more efficient in areas with higher household densities.
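The comparison described above ranks programs by cost per metric ton, subject to a minimum acceptable diversion rate. A minimal sketch of that screening logic follows; the tonnages, costs, and the 10% diversion floor are illustrative assumptions, not figures from the report.

```python
# Hypothetical program data: (annual cost in $, metric tons diverted,
# share of the residential waste stream diverted).
programs = {
    "curbside-sort": (120_000, 900, 0.24),
    "bag-based":     (60_000, 800, 0.20),
    "drop-off":      (15_000, 250, 0.06),
}
MIN_DIVERSION = 0.10  # assumed minimum acceptable share of the waste stream

def cost_per_ton(cost, tons):
    """Simple efficiency metric: program cost per metric ton diverted."""
    return cost / tons

# Screen out programs below the diversion floor, then rank by $/metric ton.
feasible = {name: cost_per_ton(cost, tons)
            for name, (cost, tons, share) in programs.items()
            if share >= MIN_DIVERSION}
best = min(feasible, key=feasible.get)
print(best)  # bag-based
```

With these numbers the drop-off program is cheapest per ton but is excluded for insufficient diversion, mirroring the trade-off the abstract describes.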

  14. ALL-PATHWAYS DOSE ANALYSIS FOR THE PORTSMOUTH ON-SITE WASTE DISPOSAL FACILITY

    SciTech Connect (OSTI)

    Smith, F.; Phifer, M.

    2014-04-10

A Portsmouth On-Site Waste Disposal Facility (OSWDF) All-Pathways analysis has been conducted that considers the radiological impacts to a resident farmer. It is assumed that the resident farmer utilizes a farm pond contaminated by the OSWDF to irrigate a garden and pasture and to water livestock from which the farmer's food is obtained, and that the farmer utilizes groundwater from the Berea sandstone aquifer for domestic purposes (i.e., drinking water and showering). As described in FBP 2014b, the Hydrologic Evaluation of Landfill Performance (HELP) model (Schroeder et al. 1994) and the Subsurface Transport Over Multiple Phases (STOMP) model (White and Oostrom 2000, 2006) were used to model the flow and transport from the OSWDF to the Points of Assessment (POAs) associated with the 680-ft elevation sandstone layer (680 SSL) and the Berea sandstone aquifer. From this modeling, the activity concentrations of radionuclides at the POAs were projected over time. These activity concentrations were used as input to a GoldSim (GTG 2010) dose model, described herein, to project the dose to a resident farmer over time. A base case and five sensitivity cases were analyzed. The sensitivity cases evaluated the impacts of using a conservative inventory, an uncased well to the Berea sandstone aquifer, a low waste-zone uranium distribution coefficient (Kd), different transfer factors, and reference-person exposure parameters (i.e., at the 95th percentile). The maximum base case dose within the 1,000-year assessment period was projected to be 1.5E-14 mrem/yr, and the maximum base case dose at any time less than 10,000 years was projected to be 0.002 mrem/yr. The maximum projected dose of any sensitivity case was approximately 2.6 mrem/yr, associated with the use of an uncased well to the Berea sandstone aquifer. This sensitivity case is considered very unlikely because it assumes leakage from the location of greatest concentration in the 680 SSL into the Berea sandstone aquifer over time and does not conform to standard private water well construction practices. The bottom line is that all predicted doses from the base case and five sensitivity cases fall well below the DOE all-pathways Performance Objective of 25 mrem/yr.
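The dose projections above come from a multi-pathway GoldSim model; as a much-reduced sketch of the general form such pathway calculations take (dose = concentration × intake × dose conversion factor), the snippet below computes a single drinking-water ingestion pathway. Every number is an illustrative placeholder, not a value from the OSWDF analysis.

```python
# Hypothetical single-pathway dose estimate (drinking water only).
water_conc_pci_per_l = 5.0     # assumed activity concentration in well water
intake_l_per_yr = 730.0        # assumed intake: 2 L/day for a year
dcf_mrem_per_pci = 6.4e-5      # assumed ingestion dose conversion factor

dose_mrem_per_yr = water_conc_pci_per_l * intake_l_per_yr * dcf_mrem_per_pci
print(round(dose_mrem_per_yr, 3))  # 0.234

# Compare against the DOE all-pathways performance objective cited above.
PERFORMANCE_OBJECTIVE = 25.0   # mrem/yr
print(dose_mrem_per_yr < PERFORMANCE_OBJECTIVE)  # True
```

The full model sums terms of this form over all radionuclides and exposure pathways (irrigation, livestock, showering), each with its own concentration history and conversion factor.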

  15. NREL: Energy Analysis - David Mooney

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mooney Photo of David Mooney David Mooney is the center director of the Strategic Energy Analysis Center. Center Director On staff since August 2002 Phone number: 303-384-6782 E-mail: David.Mooney@nrel.gov Analysis expertise Strategic planning Broad knowledge base in technologies and markets for energy technologies and their integration into the current energy infrastructure. Technical and operational knowledge of the photovoltaics technologies and industry Design and cost analysis of

  16. Automated Transportation Logistics and Analysis System (ATLAS...

    Office of Environmental Management (EM)

    Automated Transportation Logistics and Analysis System (ATLAS) ATLAS is an integrated web-based logistics management system allowing users to manage inbound and outbound freight ...

  17. Comparison of strength and load-based methods for testing wind turbine blades

    SciTech Connect (OSTI)

    Musial, W.D.; Clark, M.E.; Egging, N.

    1996-11-01

The purpose of this paper is to compare two methods of blade test loading and to show how they are applied in an actual blade test. Strength- and load-based methods were examined to determine the test load for an Atlantic Orient Corporation (AOC) 15/50 wind turbine blade for fatigue and static testing. Fatigue load-based analysis was performed using measured field test loads extrapolated for extreme rare events and scaled to thirty-year spectra. An accelerated constant-amplitude fatigue test that gives equivalent damage at critical locations was developed using Miner's Rule and the material S-N curves. Test load factors were applied to adjust the test loads for uncertainties and for differences between the test and operating environments. Similar analyses were carried out for the strength-based fatigue test, using the strength of the blade and the material properties to determine the load level and number of constant-amplitude cycles to failure. Static tests were also developed using load and strength criteria. The resulting test loads were compared and contrasted. The analysis shows that, for the AOC 15/50 blade, the strength-based test loads are higher than any of the static load-based cases considered but were exceeded in the fatigue analysis for a severe hot/wet environment.
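The conversion described above (measured load spectrum → equivalent constant-amplitude test via Miner's Rule) can be sketched as follows. Miner's Rule accumulates damage D = Σ nᵢ/N(Sᵢ), and the equivalent test applies, at a chosen test load, the number of cycles giving the same D. The S-N curve N(S) = C·S⁻ᵐ and all numbers below are illustrative assumptions, not AOC 15/50 data.

```python
# Assumed power-law S-N curve: cycles to failure at a given stress amplitude.
C, m = 3.0e16, 9.0

def cycles_to_failure(stress):
    return C * stress ** (-m)

# Hypothetical thirty-year load spectrum: (stress level, applied cycles).
spectrum = [(10.0, 2.0e6), (15.0, 2.0e5), (20.0, 1.0e4)]

# Miner's Rule: sum the damage fractions n_i / N(S_i); failure at D = 1.
damage = sum(n / cycles_to_failure(s) for s, n in spectrum)

# Equivalent constant-amplitude test: same damage at the chosen test load.
test_load = 25.0
equivalent_cycles = damage * cycles_to_failure(test_load)
```

Raising the test load shortens the test (fewer `equivalent_cycles` for the same damage), which is exactly how an accelerated constant-amplitude test is derived from a long field spectrum.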

  18. Comprehensive Energy Program at Patrick Air Force Base Set to...

    Office of Environmental Management (EM)

    at Patrick Air Force Base Set to Exceed Energy Goals Federal Energy Management Program case study focuses on Patrick Air Force Base's use of a utility energy services contract...

  19. Diagnostic and Prognostic Analysis of Battery Performance & Aging...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Battery Performance & Aging based on Kinetic and Thermodynamic Principles Diagnostic and Prognostic Analysis of Battery Performance & Aging based on Kinetic and...

  20. Y-12 and the Jack Case Center

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    February 21, 2014, Patrick Case, Jack Case's youngest son, called me. He was at the New Hope Center and wanted to visit the Jack Case Center. I explained that it would have to wait...

  1. Technology Deployment Case Studies | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Technology Deployment Case Studies Find efficient technologies and products for federal applications on the Federal Energy Management Program website.

  2. Using Whole Building Performance Measurement to Develop a Business Case

    SciTech Connect (OSTI)

    Fowler, Kimberly M.

    2006-09-15

Since 1998 the U.S. Navy's Naval Facilities Engineering Command (NAVFAC) has had a policy for incorporating sustainable design principles into new building construction. The policy also states that it is the intent of NAVFAC to accomplish this within given budget constraints while meeting customer requirements. Programming a building using a first-cost approach instead of a life-cycle-cost approach is one of the biggest challenges for integrating sustainable design into Navy projects. Because of this hurdle, an attempt to develop a Navy-specific business case was undertaken. Through this process, it was discovered that consistent data were not being collected for all applicable Navy buildings. Therefore, the current business case information used by the Navy is a conglomeration of existing business case analyses in the literature. Although this business case information is useful, there is still a need to collect and analyze the Navy business case. To develop the Navy-specific business case, NAVFAC is developing program metrics to capture the status of buildings in the design and construction phases, and it has started to collect whole-building cost and performance data for 14 buildings (7 sustainably designed and 7 traditionally designed) to capture data on its existing inventory of sustainably designed buildings. Performance measurement data are being collected on water, energy, operations and maintenance, waste generation, purchasing, occupant satisfaction, and transportation. The building cost and performance data will be collected for a minimum of 12 months. Both of these data collection and analysis efforts have offered lessons learned that will be shared alongside the current Navy business case information.
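The first-cost vs life-cycle-cost distinction discussed above can be made concrete with a small sketch: a building with a higher first cost can still win once discounted operating costs are included. All dollar figures and the discount rate are illustrative assumptions, not Navy data.

```python
# Hypothetical life-cycle cost: first cost plus the present value of
# annual operating costs over the analysis period.
def life_cycle_cost(first_cost, annual_op_cost, years, discount_rate):
    pv_ops = sum(annual_op_cost / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return first_cost + pv_ops

# Assumed figures: the sustainable design costs 5% more up front but has
# lower annual operating costs.
conventional = life_cycle_cost(1_000_000, 80_000, years=25, discount_rate=0.03)
sustainable  = life_cycle_cost(1_050_000, 60_000, years=25, discount_rate=0.03)
print(sustainable < conventional)  # True
```

On a first-cost basis the sustainable design loses (it costs more on day one); on a life-cycle basis it wins, which is the programming hurdle the abstract identifies.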

  3. Building Energy Information Systems: User Case Studies

    SciTech Connect (OSTI)

    Granderson, Jessica; Piette, Mary Ann; Ghatikar, Girish

    2010-03-22

    Measured energy performance data are essential to national efforts to improve building efficiency, as evidenced in recent benchmarking mandates, and in a growing body of work that indicates the value of permanent monitoring and energy information feedback. This paper presents case studies of energy information systems (EIS) at four enterprises and university campuses, focusing on the attained energy savings, and successes and challenges in technology use and integration. EIS are broadly defined as performance monitoring software, data acquisition hardware, and communication systems to store, analyze and display building energy information. Case investigations showed that the most common energy savings and instances of waste concerned scheduling errors, measurement and verification, and inefficient operations. Data quality is critical to effective EIS use, and is most challenging at the subsystem or component level, and with non-electric energy sources. Sophisticated prediction algorithms may not be well understood but can be applied quite effectively, and sites with custom benchmark models or metrics are more likely to perform analyses external to the EIS. Finally, resources and staffing were identified as a universal challenge, indicating a need to identify additional models of EIS use that extend beyond exclusive in-house use, to analysis services.

  4. Larry Case | Y-12 National Security Complex

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Larry Case Oral History Videos Speakers INTRODUCTION Ed Bailey Jim Bailey Kay Bailey Ken Bernander Willard Brock Wilma Brooks Elmer Brummitt Naomi Brummitt Blake Case Larry Case Patrick Case Dorothy Coker Gordon Fee Linda Fellers Louis Freels Marie Guy Nathan Henry Agnes Houser John Rice Irwin Harvey Kite Charlie Manning Alice Piercey Donald Raby Jack Rains Ray Smith Ken Sommerfeld Kay Steed Bill Wilcox Beverly Woods more ...

  5. Case Studies by System | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Technical Assistance » Case Studies by System Case Studies by System Case studies document the energy savings achieved by large manufacturing companies using AMO's software tools, other technical publications, and best practices. Case studies are available below for the following systems: Steam, Process Heating, Compressed Air, Motor, Pump, Fan, and Plant Wide. Case studies are also available for Combined Heat & Power. Plant-Wide Case Studies Alcoa: C-Suite Participation in Energy

  6. Market Analysis Reports | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Information Resources » Technical Publications » Market Analysis Reports Market Analysis Reports Reports about fuel cell and hydrogen technology market analysis are provided in these publication categories: Fuel Cell Technologies Office Market Reports Pathways to Commercial Success Business Case for Fuel Cells State of the States General Fuel Cell Technologies Office Market Reports 2014 Fuel Cell Technologies Market Report (Fuel Cell Technologies Office, October 2015) 2013 Fuel Cell

  7. Program Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    and abate nearly 20,000 mmt of CO2 through 2050. EE portfolio analysis wNEMS+MARKAL (Frances Wood, OnLocation, and Chip Friley, BNL) Heavily leverages ANL's Autonomie Tool...

  8. Supplement Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    (based on a radionuclide inventory as of March 1, 1985) are small and no occupational cancer fatalities would be expected for any of the alternatives. The action alternatives...

  9. Supplement Analysis

    Energy Savers [EERE]

Supplement Analysis to the LCLS-II Environmental Assessment, July 2014. U.S. Department of Energy, Office of Science, SLAC Site Office, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, MS-8A, Menlo Park, CA 94025. DATE: September 15, 2015. MEMORANDUM FOR: Paul Golan, Site Manager, SLAC Site Office. THROUGH: James Elmore, ISC-OR NEPA Compliance Officer, Oak Ridge Office. FROM: Mitzi Heard, NEPA Coordinator, SLAC Site Office. SUBJECT: Supplement Analysis to SLAC LCLS-II Environmental Assessment.

  10. BoxLib Case Study

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BoxLib Case Study BoxLib Case Study Background BoxLib is a publicly available software framework for building massively parallel block-structured AMR applications. Key features of BoxLib include Support for block-structured AMR with optional subcycling in time Support for cell-centered, face-centered and node-centered data Support for hyperbolic, parabolic, and elliptic solves on hierarchical grid structure C++ and Fortran90 versions Support for hybrid parallelism model with MPI and OpenMP Basis

  11. Case Western University | Open Energy Information

    Open Energy Info (EERE)

    University Jump to: navigation, search Name Case Western University Facility Case Western University Sector Wind energy Facility Type Small Scale Wind Facility Status In Service...

  12. Building America Technologies Solutions Case Study: Ventilation...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Technologies Solutions Case Study: Ventilation System Effectiveness and Tested Indoor Air Quality Impacts Building America Technologies Solutions Case Study: Ventilation System ...

  13. Building America Technology Solutions Case Study: Ventilation...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Case Study: Ventilation System Effectiveness and Tested Indoor Air Quality Impacts Building America Technology Solutions Case Study: Ventilation System Effectiveness and Tested ...

  14. Renewable Energy Case Studies | Open Energy Information

    Open Energy Info (EERE)

Case Studies Jump to: navigation, search Tool Summary LAUNCH TOOL Name: Renewable Energy Case Studies Agency/Company/Organization: National Renewable Energy Laboratory Sector:...

  15. CASE Design/Remodeling | Open Energy Information

    Open Energy Info (EERE)

Design/Remodeling Jump to: navigation, search Name: CASE Design/Remodeling Place: Bethesda, MD Website: www.casedesignremodeling.com References: CASE Design/Remodeling[1]...

  16. Real time analysis under EDS

    SciTech Connect (OSTI)

    Schneberk, D.

    1985-07-01

This paper describes the analysis component of the Enrichment Diagnostic System (EDS) developed for the Atomic Vapor Laser Isotope Separation Program (AVLIS) at Lawrence Livermore National Laboratory (LLNL). Four different types of analysis are performed on data acquired through EDS: (1) absorption spectroscopy on laser-generated spectral lines, (2) mass spectrometer analysis, (3) general-purpose waveform analysis, and (4) separation performance calculations. The information produced from these data includes measures of particle density and velocity, partial pressures of residual gases, and overall measures of isotope enrichment. The analysis component supports a variety of real-time modeling tasks, a means for broadcasting data to other nodes, and a great degree of flexibility for tailoring computations to the exact needs of the process. A particular database structure and program flow is common to all types of analysis. Key elements of the analysis component are: (1) a fast-access database which can configure all types of analysis, (2) a selected set of analysis routines, and (3) a general-purpose data manipulation and graphics package for the results of real-time analysis. Each of these components is described with an emphasis on how it contributes to overall system capability. 3 figs.

  17. Climate Action Champions: Case Studies | Department of Energy

    Energy Savers [EERE]

Case Studies Climate Action Champions: Case Studies Boston Case Study, Dubuque Case Study, Knoxville Case Study, Metropolitan Washington Council of Governments Case Study, Oberlin Case Study, Portland Case Study, Salt Lake City Case Study, San Francisco Case Study, Seattle Case Study, Sonoma Case Study, Southeast Florida Case Study More Documents & Publications Community Organizing and Outreach Climate

  18. Techno-economic Analysis for the Thermochemical Conversion of Biomass to Liquid Fuels

    SciTech Connect (OSTI)

    Zhu, Yunhua; Tjokro Rahardjo, Sandra A.; Valkenburt, Corinne; Snowden-Swan, Lesley J.; Jones, Susanne B.; Machinal, Michelle A.

    2011-06-01

This study is part of an ongoing effort within the Department of Energy to meet the renewable energy goals for liquid transportation fuels. The objective of this report is to present a techno-economic evaluation of the performance and cost of various biomass-based thermochemical fuel production pathways. This report also documents the economics that were originally developed for the report entitled “Biofuels in Oregon and Washington: A Business Case Analysis of Opportunities and Challenges” (Stiles et al. 2008). Although the resource assessments were specific to the Pacific Northwest, the production economics presented in this report are not regionally limited. This study applies a consistent technical and economic analysis approach and assumptions to gasification- and liquefaction-based fuel production technologies. The end fuels studied are methanol, ethanol, DME, SNG, gasoline, and diesel.

  19. NREL: Energy Analysis - Sustainability Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Sustainability Analysis The laboratory's Sustainability Analysis looks at the environmental, life-cycle, climate, and other impacts of renewable energy technologies. Our energy choices have global implications that affect greenhouse gas emissions, water resource distribution, mineral consumption, and equipment manufacturing and transportation. The prevailing view is that renewable energy technologies are more sustainable than many current sources of energy. However, we need to verify that this

  20. ACCEPTABILITY ENVELOPE FOR METAL HYDRIDE-BASED HYDROGEN STORAGE SYSTEMS

    SciTech Connect (OSTI)

    Hardy, B.; Corgnale, C.; Tamburello, D.; Garrison, S.; Anton, D.

    2011-07-18

The design and evaluation of media-based hydrogen storage systems require detailed numerical models and experimental studies, which demand a significant investment of time and money. Thus a scoping tool, referred to as the Acceptability Envelope, was developed to screen preliminary candidate media and storage vessel designs, identifying the range of chemical, physical, and geometrical parameters for the coupled media and storage vessel system that allow it to meet performance targets. The model underpinning the analysis simplifies the storage system to a one-input, one-output scheme by grouping selected quantities. Two cases have been analyzed and results are presented here. In the first application, the DOE technical targets (Year 2010, Year 2015, and Ultimate) are used to determine the range of parameters required for the metal hydride media and storage vessel. In the second case, the most promising metal hydrides available are compared, highlighting the potential of storage systems utilizing them to achieve 40% of the 2010 DOE technical target. Results show that systems based on Li-Mg media have the best potential to attain these performance targets.

  1. Expediting Scientific Data Analysis with Reorganization of Data

    SciTech Connect (OSTI)

    Byna, Surendra; Wu, Kesheng

    2013-08-19

Data producers typically optimize the layout of data files to minimize the write time. In most cases, data analysis tasks read these files in access patterns different from the write patterns, causing poor read performance. In this paper, we introduce Scientific Data Services (SDS), a framework for bridging the performance gap between writing and reading scientific data. SDS reorganizes data to match the read patterns of analysis tasks and enables transparent data reads from the reorganized data. We implemented an HDF5 Virtual Object Layer (VOL) plugin to redirect HDF5 dataset read calls to the reorganized data. To demonstrate the effectiveness of SDS, we applied two parallel data organization techniques: a sort-based organization on plasma physics data and a transpose-based organization on mass spectrometry imaging data. We also extended the HDF5 data access API to allow selection of data based on their values through a query interface, called SDS Query. We evaluated the execution time of accessing various subsets of data through the existing HDF5 read API and SDS Query, and showed that reading the reorganized data using SDS is up to 55X faster than reading the original data.
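The core idea — reorganizing data once so that later reads become cheap — can be sketched with a toy in-memory example (the field name and sizes are hypothetical; the real SDS operates on HDF5 files through a VOL plugin, not on NumPy arrays):

```python
import numpy as np

# Hypothetical record field queried by range during analysis.
rng = np.random.default_rng(0)
energy = rng.uniform(0.0, 100.0, size=1_000_000)

# Write-optimized layout: arrival order. A range query must scan everything.
def query_scan(data, lo, hi):
    return data[(data >= lo) & (data <= hi)]

# Read-optimized layout: sorted once, reused by every later query.
order = np.argsort(energy)
energy_sorted = energy[order]

def query_sorted(data_sorted, lo, hi):
    # binary search for the range bounds, then one contiguous read
    i = np.searchsorted(data_sorted, lo, side="left")
    j = np.searchsorted(data_sorted, hi, side="right")
    return data_sorted[i:j]

a = query_scan(energy, 10.0, 10.5)
b = query_sorted(energy_sorted, 10.0, 10.5)
assert a.size == b.size  # same records, far cheaper access pattern
```

The one-time sort cost is amortized across all subsequent value-based queries, which is the trade SDS makes between the writer's layout and the readers' access patterns.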

  2. Towards Effective Clustering Techniques for the Analysis of Electric Power Grids

    SciTech Connect (OSTI)

    Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh; Wang, Shaobu; Mackey, Patrick S.; Hines, Paul; Huang, Zhenyu

    2013-11-30

    Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids we show that the solutions are related and therefore one could leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques we make a case for exploiting structure inherent in the data with implications for several domains including power systems.
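A minimal, NumPy-only sketch of the spectral technique on a toy network (the adjacency matrix and the two-cluster split are illustrative assumptions, not the paper's data or code): the sign of the Fiedler vector of the graph Laplacian bipartitions the nodes, and edge weights could encode electrical distance rather than mere connectivity.

```python
import numpy as np

# Toy bus network: two triangles joined by a single weak tie (edge 2-3).
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = np.diag(W.sum(axis=1))
L = D - W                       # combinatorial graph Laplacian

# eigh returns eigenvalues in ascending order; column 1 is the Fiedler vector.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]
labels = (fiedler > 0).astype(int)

# The two triangles land in different clusters (sign of the vector is
# arbitrary, so only the grouping is checked, not which label is which).
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```

For k > 2 clusters one would take the first k eigenvectors and run k-means on the rows, which is the standard spectral-clustering recipe.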

  3. MS Based Metabonomics

    SciTech Connect (OSTI)

    Want, Elizabeth J.; Metz, Thomas O.

    2010-03-01

Metabonomics is the latest and least mature of the systems biology triad, which also includes genomics and proteomics, and has its origins in the early orthomolecular medicine work pioneered by Linus Pauling and Arthur Robinson. It was defined by Nicholson and colleagues in 1999 as the quantitative measurement of perturbations in the metabolite complement of an integrated biological system in response to internal or external stimuli, and is often used today to describe many non-global types of metabolite analyses. Applications of metabonomics are extensive and include toxicology, nutrition, pharmaceutical research and development, physiological monitoring and disease diagnosis. For example, blood samples from millions of neonates are tested routinely by mass spectrometry (MS) as a diagnostic tool for inborn errors of metabolism. The metabonome encompasses a wide range of structurally diverse metabolites; therefore, no single analytical platform will be sufficient. Specialized sample preparation and detection techniques are required, and advances in NMR and MS technologies have led to enhanced metabonome coverage, which in turn demands improved data analysis approaches. The role of MS in metabonomics is still evolving as instrumentation and software become more sophisticated and as researchers realize the strengths and limitations of current technology. MS offers a wide dynamic range, high sensitivity, and reproducible, quantitative analysis. These attributes are essential for addressing the challenges of metabonomics, as the range of metabolite concentrations easily exceeds nine orders of magnitude in biofluids, and the diversity of molecular species ranges from simple amino and organic acids to lipids and complex carbohydrates. Additional challenges arise in generating a comprehensive metabolite profile, downstream data processing and analysis, and structural characterization of important metabolites.
A typical workflow of MS-based metabonomics is shown in Figure 1. Gas chromatography (GC)-MS was the most commonly used MS-based method for small molecule analysis in the 1970s and 1980s. It is still used today for the detection of many metabolic disorders and plays a strong role in plant metabonomics. Liquid chromatography (LC)-MS approaches have grown in popularity for metabolite studies, due to simpler sample preparation, reduced analysis times through the introduction of ultra-high performance liquid chromatography (UPLC)-MS, and the ability to observe a wider range of metabolites. This chapter will discuss the role of MS in metabonomics, the techniques involved in this exciting area, and the current and future applications of the field. The various bioinformatics tools and multivariate analysis techniques used to maximize information recovery and to aid in the interpretation of the very large data sets typically obtained in metabonomics studies will also be discussed. While there are many different MS-based approaches utilized in metabonomics studies, emphasis will be placed on more established methods.

  4. Analyzing Dynamic Probabilistic Risk Assessment Data through Topology-Based Clustering

    SciTech Connect (OSTI)

    Diego Mandelli; Dan Maljovec; BeiWang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

We investigate the use of a topology-based clustering technique on the data generated by dynamic event tree methodologies. The clustering technique we utilize is a domain-partitioning algorithm based on topological structures known as the Morse-Smale complex, which partitions the data points into clusters based on their uniform gradient flow behavior. We perform both end state analysis and transient analysis to classify the set of nuclear scenarios. We demonstrate our methodology on a dataset generated for a sodium-cooled fast reactor during an aircraft crash scenario. The simulation tracks the temperature of the reactor as well as the time for a recovery team to fix the passive cooling system. Combined with clustering results obtained previously through mean shift methodology, we present the user with complementary views of the data that help illuminate key features that may be otherwise hidden using a single methodology. By clustering the data, the number of relevant test cases to be selected for further analysis can be drastically reduced by selecting a representative from each cluster. Identifying the similarities of simulations within a cluster can also aid in the drawing of important conclusions with respect to safety analysis.
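The gradient-flow partitioning idea can be illustrated with a simplified, ascent-only sketch (the toy data, neighbor count, and restriction to ascending flow are all assumptions made here; the full Morse-Smale complex partitions by both ascent to maxima and descent to minima):

```python
import numpy as np

def ascent_clusters(points, values, k=3):
    """Cluster samples by which local maximum their steepest-ascent
    path through a k-nearest-neighbor graph terminates at."""
    n = len(points)
    # pairwise distances, then the k nearest neighbors of each point
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]

    # next-step pointer: the highest-valued neighbor, if it improves on us
    step = np.arange(n)
    for i in range(n):
        j = nbrs[i][np.argmax(values[nbrs[i]])]
        if values[j] > values[i]:
            step[i] = j

    # follow pointers to a fixed point (a local maximum = cluster label)
    labels = np.empty(n, dtype=int)
    for i in range(n):
        cur = i
        while step[cur] != cur:
            cur = step[cur]
        labels[i] = cur
    return labels

# Two bumps in 1D: the response rises toward x=0.2 and x=0.8.
x = np.linspace(0, 1, 20)[:, None]
v = (np.exp(-((x[:, 0] - 0.2) ** 2) / 0.01)
     + np.exp(-((x[:, 0] - 0.8) ** 2) / 0.01))
labels = ascent_clusters(x, v, k=2)
assert len(set(labels.tolist())) == 2  # one cluster per mode
```

Samples that flow to the same maximum form one cluster, which mirrors how scenarios with uniform gradient behavior are grouped for selecting representatives.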

  5. International Energy Agency Building Energy Simulation Test and Diagnostic Method (IEA BESTEST) Multi-Zone Non-Airflow In-Depth Diagnostic Cases: MZ320 -- MZ360

    SciTech Connect (OSTI)

    Neymark, J.; Judkoff, R.; Alexander, D.; Felsmann, C.; Strachan, P.; Wijsman, A.

    2008-09-01

    This report documents a set of diagnostic test cases for multi-zone heat transfer models. The methodology combines empirical validation, analytical verification, and comparative analysis techniques.

  6. Initial Decision and Risk Analysis

    SciTech Connect (OSTI)

    Engel, David W.

    2012-02-29

    Decision and Risk Analysis capabilities will be developed for industry consideration and possible adoption within Year 1. These tools will provide a methodology for merging qualitative ranking of technology maturity and acknowledged risk contributors with quantitative metrics that drive investment decision processes. Methods and tools will be initially introduced as applications to the A650.1 case study, but modular spreadsheets and analysis routines will be offered to industry collaborators as soon as possible to stimulate user feedback and co-development opportunities.

  7. Fuel Cell Case Study | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Fuel Cell Case Study, presented at the Clean Energy States Alliance and U.S. Department of Energy webinar "Fuel Cells for Supermarkets," April 4, 2011 (PDF: infocallapr11_loftus.pdf). More Documents & Publications: The Business Case for Fuel Cells 2011: Energizing America's Top Companies; The Business Case for Fuel Cells 2010: Why Top Companies are Purchasing Fuel Cells Today; DOE Zero Energy Ready Home Case Study: Glastonbury Housesmith, Hickory Drive, South Glastonbury, CT

  8. Analysis Repository

    SciTech Connect (OSTI)

    DOE

    2012-03-16

    The Analysis Repository is a compilation of analyses and analytical models relevant to assessing hydrogen fuel and fuel cell issues. Projects in the repository relate to: hydrogen production, delivery, storage, fuel cells, and hydrogen vehicle technology; hydrogen production feedstock cost and availability; electricity production, central and distributed; energy resource estimation and forecasting.

  9. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    U.S. Energy Information Administration | International Energy Outlook 2014 High Oil Price case projections Table B3. World petroleum and other liquids consumption by region and end-use sector, High Oil Price case, 2010-40 (quadrillion Btu) Region History Projections Average annual percent change, 2010-40 2010 2020 2025 2030 2035 2040 OECD United States Residential 1.1 0.9 0.8 0.7 0.7 0.6 -1.9 Commercial 0.7 0.6 0.6 0.6 0.6 0.6 -0.3 Industrial 8.1 9.4 9.9 9.9 10.0 10.0 0.7 Transportation 26.9

  10. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

U.S. Energy Information Administration | International Energy Outlook 2014 High Oil Price case projections Table B4. World petroleum and other liquids production by region and country, High Oil Price case, 2009-40 (million barrels per day) Region History Projections Average annual percent change, 2010-40 2009 2010 2011 2020 2025 2030 2035 2040 OPEC a 34.1 35.4 35.7 33.1 34.5 37.8 41.0 43.7 0.7 Middle East 23.2 24.3 25.9 22.6 23.6 26.6 29.4 31.8 0.9 North Africa 3.8 3.7 2.4 3.2 3.3 3.4 3.6 3.7

  11. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

U.S. Energy Information Administration | International Energy Outlook 2014 High Oil Price case projections Table B6. World other liquid fuels a production by region and country, High Oil Price case, 2009-40 (million barrels per day) Region History Projections Average annual percent change, 2010-40 2009 2010 2011 2020 2025 2030 2035 2040 OPEC b 3.1 3.3 3.5 4.6 4.9 5.3 5.8 5.9 1.9 Natural gas plant liquids 3.1 3.3 3.4 4.3 4.6 4.9 5.3 5.3 1.6 Biofuels c 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -

  12. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

U.S. Energy Information Administration | International Energy Outlook 2014 Low Oil Price case projections Table C4. World petroleum and other liquids production by region and country, Low Oil Price case, 2009-40 (million barrels per day) Region History Projections Average annual percent change, 2010-40 2009 2010 2011 2020 2025 2030 2035 2040 OPEC a 34.1 35.4 35.7 43.3 48.7 54.6 59.9 65.3 2.1 Middle East 23.2 24.3 25.9 30.4 34.5 38.9 43.0 47.3 2.2 North Africa 3.8 3.7 2.4 3.7 4.0 4.3 4.7 4.9

  13. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

U.S. Energy Information Administration | International Energy Outlook 2014 Low Oil Price case projections Table C6. World other liquid fuels a production by region and country, Low Oil Price case, 2009-40 (million barrels per day) Region History Projections Average annual percent change, 2010-40 2009 2010 2011 2020 2025 2030 2035 2040 OPEC b 3.1 3.3 3.5 4.3 4.5 4.8 5.1 5.0 1.4 Natural gas plant liquids 3.1 3.3 3.4 4.0 4.2 4.4 4.8 4.7 1.2 Biofuels c 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -

  14. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    U.S. Energy Information Administration | International Energy Outlook 2014 Reference case projections Table A3. World petroleum and other liquids consumption by region and end-use sector, Reference case, 2010-40 (quadrillion Btu) Region History Projections Average annual percent change, 2010-40 2010 2020 2025 2030 2035 2040 OECD United States Residential 1.1 0.9 0.8 0.8 0.7 0.7 -1.8 Commercial 0.7 0.7 0.7 0.7 0.7 0.7 0.1 Industrial 8.1 9.6 9.9 10.1 10.1 10.1 0.7 Transportation 26.9 25.6 24.7

  15. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    U.S. Energy Information Administration | International Energy Outlook 2014 Reference case projections Table A4. World petroleum and other liquids production by region and country, Reference case, 2009-40 (million barrels per day) Region History Projections Average annual percent change, 2010-40 2009 2010 2011 2020 2025 2030 2035 2040 OPEC a 34.1 35.4 35.7 38.7 40.7 44.4 48.2 52.1 1.3 Middle East 23.2 24.3 25.9 27.1 28.8 32.2 35.5 38.8 1.6 North Africa 3.8 3.7 2.4 3.5 3.6 3.7 3.8 4.1 0.3 West

  16. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

U.S. Energy Information Administration | International Energy Outlook 2014 Reference case projections Table A6. World other liquid fuels a production by region and country, Reference case, 2009-40 (million barrels per day) Region History Projections Average annual percent change, 2010-40 2009 2010 2011 2020 2025 2030 2035 2040 OPEC b 3.1 3.3 3.5 4.3 4.6 4.9 5.3 5.9 1.9 Natural gas plant liquids 3.1 3.3 3.4 4.0 4.2 4.5 4.9 5.4 1.7 Biofuels c 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 - Coal-to-liquids 0.0

  17. Orange and Rockland Case Study

    Energy Savers [EERE]

2012 Orange and Rockland Case Study: A "Model-Centric" Approach to Smarter Electric Distribution Systems. (Figure: voltage control device.) Orange and Rockland Utilities (ORU) is an investor-owned utility and a subsidiary of Consolidated Edison Incorporated (Con Edison), located in suburban New York, New Jersey, and Pennsylvania, west of New York City. ORU is a key participant in Con Edison's $272 million Smart Grid Investment Grant (SGIG) project to modernize electric distribution

  18. West Virginia: A Compelling Case

    Broader source: Energy.gov (indexed) [DOE]

West Virginia: A Compelling Case. Rich energy history; solid energy expertise. West Virginia is an energy state. With a population of just 1.8 million, the state contributes significantly to the energy needs of the eastern United States: it is No. 2 in coal production behind Wyoming; No. 4 in net electricity exports behind Pennsylvania, Alabama, and Illinois, exporting 60 percent of the electricity it generates; and No. 10 in natural gas production

  19. Case Study - Liquefied Natural Gas

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]

(Photo: Enviro Express Kenworth LNG tractor.) Connecticut Clean Cities Future Fuels Project Case Study: Liquefied Natural Gas. As part of the U.S. Department of Energy's broad effort to develop cleaner transportation technologies that reduce U.S. dependence on imported oil, this study examines advanced 2011 natural-gas-fueled trucks using liquefied natural gas (LNG) that replace older diesel-fueled trucks. The trucks are used 6 days per week in regional city-to-landfill long hauls of

  20. MGR External Events Hazards Analysis

    SciTech Connect (OSTI)

    L. Booth

    1999-11-06

The purpose and objective of this analysis is to apply an external events Hazards Analysis (HA) to the License Application Design Selection Enhanced Design Alternative II (LADS EDA II design, Reference 8.32). The output of the HA is called a Hazards List (HL). This analysis supersedes the external hazards portion of Rev. 00 of the PHA (Reference 8.1). The PHA for internal events will also be updated to the LADS EDA II design, but under a separate analysis. Like the PHA methodology, the HA methodology provides a systematic method to identify potential hazards during the 100-year Monitored Geologic Repository (MGR) operating period, updated to reflect the EDA II design. The resulting events on the HL are candidates that may have potential radiological consequences, as determined during Design Basis Events (DBEs) analyses. Therefore, the HL that results from this analysis will undergo further screening and analysis based on the criteria that apply during the performance of DBE analyses.

  1. Policy and Analysis Publications | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Policy and Analysis conducts cross-cutting and portfolio-based analyses of EERE technologies and the interrelationships among technologies, markets, and policies, and provides quantified impacts of EERE investments in clean energy technology innovation and deployment. For more energy data and analysis resources, visit OpenEI.

  2. Energy Market Analysis

    Broader source: Energy.gov [DOE]

    Energy Market Analysis synthesizes all analysis efforts in the analysis spectrum. Scenario analyses, in the context of market analysis, are used to answer several questions:

  3. DOE Zero Energy Ready Home Case Study: Palo Duro Homes, Albuquerque, NM

    Broader source: Energy.gov [DOE]

Case study of a New Mexico-based home builder who has built more DOE Zero Energy Ready certified homes than any other builder in the nation. One example home achieved a HERS score of 55 without PV...

  4. Cloud-Based Model Calibration Using OpenStudio: Preprint

    SciTech Connect (OSTI)

    Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.

    2014-03-01

OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.
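The calibration loop itself can be sketched in miniature (a hedged illustration only: the two measure parameters, the utility data, and the `simulate` stand-in are all hypothetical; in the real workflow each candidate is a full OpenStudio model run in the cloud, not a formula):

```python
import numpy as np

# Hypothetical actual monthly utility data (kWh).
actual_kwh = np.array([910, 820, 760, 640, 580, 700,
                       880, 900, 720, 620, 690, 850], dtype=float)

def simulate(lighting_w_per_m2, cooling_cop):
    """Stand-in for a cloud-submitted simulation of the baseline model
    with two hypothetical measure parameters applied."""
    base = np.array([800, 740, 690, 600, 560, 680,
                     840, 860, 680, 590, 640, 780], dtype=float)
    return base * (lighting_w_per_m2 / 10.0) * (3.0 / cooling_cop) ** 0.3

def cvrmse(sim, act):
    # coefficient of variation of the RMSE, a common calibration metric
    return np.sqrt(np.mean((sim - act) ** 2)) / np.mean(act)

# Parametric sweep: keep the parameter pair with the smallest CV(RMSE).
best = min(
    ((cvrmse(simulate(lw, cop), actual_kwh), lw, cop)
     for lw in np.linspace(8.0, 14.0, 13)
     for cop in np.linspace(2.0, 4.0, 9)),
    key=lambda t: t[0],
)
err, lw, cop = best
assert err < cvrmse(simulate(10.0, 3.0), actual_kwh)  # beats the baseline
```

The independence of the candidate runs is what makes the sweep embarrassingly parallel, and hence a natural fit for cloud execution.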

  5. Financial Analysis

    Energy Science and Technology Software Center (OSTI)

    2010-12-31

    This tool takes into account the net cost savings, implementation costs, and operations and maintenance costs of an energy conservation measure, as well as typical project lifetime and the relating discount and escalation rates. The result is a cash flow analysis over the project lifetime with calculations for simple payback, discounted payback, net present value, and savings to investment ratio. The tool also displays the results graphically.
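The four reported metrics can be sketched as follows (an illustrative re-implementation with hypothetical inputs and simplifying assumptions — a single escalation rate applied to net savings, end-of-year cash flows — not the OSTI tool's actual code):

```python
def cash_flow_metrics(impl_cost, annual_savings, annual_om,
                      lifetime_yrs, discount_rate, escalation_rate):
    """Cash flow analysis for an energy conservation measure."""
    npv = -impl_cost
    cum = disc_cum = 0.0
    simple_pb = disc_pb = None
    for yr in range(1, lifetime_yrs + 1):
        # net savings in year yr, escalated from year 1
        net = (annual_savings - annual_om) * (1 + escalation_rate) ** (yr - 1)
        disc = net / (1 + discount_rate) ** yr
        cum += net
        disc_cum += disc
        npv += disc
        if simple_pb is None and cum >= impl_cost:
            simple_pb = yr            # first year undiscounted savings recover cost
        if disc_pb is None and disc_cum >= impl_cost:
            disc_pb = yr              # same, with discounted savings
    sir = (npv + impl_cost) / impl_cost   # PV of net savings / investment
    return {"npv": npv, "sir": sir,
            "simple_payback_yrs": simple_pb, "discounted_payback_yrs": disc_pb}

m = cash_flow_metrics(impl_cost=10_000, annual_savings=2_500, annual_om=300,
                      lifetime_yrs=15, discount_rate=0.03, escalation_rate=0.02)
assert m["npv"] > 0 and m["sir"] > 1.0
```

A measure is typically considered cost-effective when SIR exceeds 1.0, i.e. when NPV is positive.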

  6. Uncertainty Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  7. Polyculture Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Polyculture Analysis, 24 March 2015. Deborah Newby, Ph.D., Idaho National Laboratory. This presentation does not contain any proprietary, confidential, or otherwise restricted information. Algal Feedstocks, DOE Bioenergy Technologies Office (BETO) 2015 Project Peer Review. Hub: Sandia National Laboratories, Oak Ridge National Laboratory, Pacific Northwest National Laboratory. Goal Statement (Hub) Challenge: To contribute to meeting the Renewable Fuel Standard, DOE

  8. systems analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  9. Microalgae Analysis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Microalgae Analysis, March 24, 2015, Algal Feedstocks Review. Mark Wigmosta, Pacific Northwest National Laboratory. This presentation does not contain any proprietary, confidential, or otherwise restricted information. Goal: Enable economically feasible and sustainable algal biofuels to achieve advanced biofuel targets. Key challenge: "A national assessment of land requirements for algae cultivation that takes into account climatic conditions; fresh water, inland and coastal saline water, and

  10. Enhanced AFCI Sampling, Analysis, and Safeguards Technology Review

    SciTech Connect (OSTI)

    John Svoboda

    2009-09-01

The focus of this study includes the investigation of sampling technologies used in industry and their potential application to nuclear fuel processing. The goal is to identify innovative sampling methods using state-of-the-art techniques that could evolve into the next generation sampling and analysis system for metallic elements. Sampling and analysis of nuclear fuel recycling plant processes is required both to monitor the operations and to ensure Safeguards and Security goals are met. In addition, environmental regulations lead to additional samples and analyses to meet licensing requirements. The volume of samples taken by conventional means can restrain productivity while samples are analyzed, require process holding tanks sized to meet analytical rather than process needs (creating a larger facility footprint), or, in some cases, simply overwhelm analytical laboratory capabilities. These issues only grow when process flowsheets propose new separations systems and new byproduct material for transmutation purposes. Novel means of streamlining both sampling and analysis are being evaluated to increase efficiency while meeting all requirements for information. This report addresses just a part of the effort to develop and study novel methods by focusing on the sampling and analysis of aqueous samples for metallic elements. It presents an overview of the sampling requirements, including frequency, sensitivity, accuracy, and programmatic drivers, to demonstrate the magnitude of the task. The sampling and analysis system needed for metallic element measurements is then discussed, and novel options being applied to other industrial analytical needs are presented. Inductively coupled mass spectrometry instruments are the most versatile for metallic element analyses and are thus chosen as the focus for the study.
Candidate novel means of process sampling, as well as the modifications necessary to couple such instruments to introduce these samples, are discussed. A suggested path forward, based on an automated microchip capillary-based sampling system interfaced to the analysis spectrometer, is presented. The ability to obtain microliter-volume samples, coupled with remote automated means of sample tracking and transport to the instrument, would greatly improve analytical efficiency while reducing both personnel exposure and radioactive waste. Application of this sampling technique to new types of mass spectrometers for selective elemental isotopic analysis could also provide significant improvements in safeguards and security analyses.

  11. Analyzing and Comparing Biomass Feedstock Supply Systems in China: Corn Stover and Sweet Sorghum Case Studies

    SciTech Connect (OSTI)

    Ren, Lantian; Cafferty, Kara; Roni, Mohammad; Jacobson, Jacob; Xie, Guanghui; Ovard, Leslie; Wright, Christopher

    2015-06-11

This paper analyzes the rural Chinese biomass supply system and models supply chain operations according to U.S. concepts of logistical unit operations: harvest and collection, storage, transportation, preprocessing, and handling and queuing. In this paper, we quantify the logistics cost of corn stover and sweet sorghum in China under different scenarios. We analyze three scenarios of corn stover logistics from northeast China and three scenarios of sweet sorghum stalk logistics from Inner Mongolia in China. The case study estimates the logistics costs of corn stover and sweet sorghum stalk to be $52.95/dry metric ton and $52.64/dry metric ton, respectively, for the current labor-based biomass logistics system. However, if the feedstock logistics operation is mechanized, the cost of corn stover and sweet sorghum stalk decreases to $36.01/dry metric ton and $35.76/dry metric ton, respectively. The study also includes a sensitivity analysis to identify the cost factors that cause logistics cost variation. Results of the sensitivity analysis show that labor price has the most influence on the logistics cost of corn stover and sweet sorghum stalk, with a variation of $6 to $12/dry metric ton.
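The reported labor-price sensitivity can be illustrated with a one-factor toy model (the labor hours, labor rate, and fixed cost share below are hypothetical assumptions chosen only to reproduce the $52.95/dry metric ton base figure, not the paper's actual cost breakdown):

```python
# Hypothetical labor-based logistics cost model for corn stover.
def stover_cost(labor_rate):
    labor_hours_per_ton = 1.8        # assumed: harvest + collection + handling
    equipment_and_fuel = 30.0        # assumed fixed share, $/dry metric ton
    return labor_hours_per_ton * labor_rate + equipment_and_fuel

base_rate = 12.75                    # assumed labor rate, $/h
base = stover_cost(base_rate)        # = 52.95 $/dry metric ton by construction

# One-factor sensitivity: vary labor price +/- 25% and observe the swing.
low = stover_cost(base_rate * 0.75)
high = stover_cost(base_rate * 1.25)
swing = high - low

assert abs(base - 52.95) < 1e-9
assert low < base < high
```

Because labor is the largest variable share in this toy model, the cost swing is dominated by the labor rate, mirroring the paper's finding that labor price drives the logistics cost variation.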

  12. Analyzing and Comparing Biomass Feedstock Supply Systems in China: Corn Stover and Sweet Sorghum Case Studies

    SciTech Connect (OSTI)

    Mohammad S. Roni; Kara G. Cafferty; Christopher T Wright; Lantian Ren

    2015-06-01

China has abundant biomass resources, which can be used as a potential source of bioenergy. However, China faces challenges implementing biomass as an energy source, because it has not yet developed highly networked, high-volume biomass logistics systems and infrastructure. This paper analyzes the rural Chinese biomass supply system and models supply chain operations according to the U.S. concepts of logistical unit operations: harvest and collection, storage, transportation, preprocessing, and handling and queuing. In this paper, we quantify the logistics cost of corn stover and sweet sorghum under different scenarios in China. We analyze three scenarios of corn stover logistics from northeast China and three scenarios of sweet sorghum stalk logistics from Inner Mongolia in China. The case study shows that the logistics costs of corn stover and sweet sorghum stalk will be $52.95/dry metric ton and $52.64/dry metric ton, respectively, for the current labor-based biomass logistics system. However, if the feedstock logistics operation is mechanized, the costs of corn stover and sweet sorghum stalk fall to $36.01/dry metric ton and $35.76/dry metric ton, respectively. The study also performed a sensitivity analysis to identify the cost factors that cause logistics cost variation. The sensitivity analysis shows that labor price has the most influence on the logistics cost of corn stover and sweet sorghum stalk, causing a variation of $6 to $12/dry metric ton.

  13. Analyzing and Comparing Biomass Feedstock Supply Systems in China: Corn Stover and Sweet Sorghum Case Studies

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Ren, Lantian; Cafferty, Kara; Roni, Mohammad; Jacobson, Jacob; Xie, Guanghui; Ovard, Leslie; Wright, Christopher

    2015-06-11

This paper analyzes the rural Chinese biomass supply system and models supply chain operations according to U.S. concepts of logistical unit operations: harvest and collection, storage, transportation, preprocessing, and handling and queuing. In this paper, we quantify the logistics cost of corn stover and sweet sorghum in China under different scenarios. We analyze three scenarios of corn stover logistics from northeast China and three scenarios of sweet sorghum stalk logistics from Inner Mongolia in China. The case study estimates the logistics costs of corn stover and sweet sorghum stalk to be $52.95/dry metric ton and $52.64/dry metric ton, respectively, for the current labor-based biomass logistics system. However, if the feedstock logistics operation is mechanized, the cost of corn stover and sweet sorghum stalk decreases to $36.01/dry metric ton and $35.76/dry metric ton, respectively. The study also includes a sensitivity analysis to identify the cost factors that cause logistics cost variation. Results of the sensitivity analysis show that labor price has the most influence on the logistics cost of corn stover and sweet sorghum stalk, with a variation of $6 to $12/dry metric ton.

  14. A Comparison of the Prognostic Value of Early PSA Test-Based Variables Following External Beam Radiotherapy, With or Without Preceding Androgen Deprivation: Analysis of Data From the TROG 96.01 Randomized Trial

    SciTech Connect (OSTI)

    Lamb, David S.; Denham, James W.; Joseph, David; Matthews, John; Atkinson, Chris; Spry, Nigel A.; Duchesne, Gillian; Ebert, Martin; Steigler, Allison; Delahunt, Brett; D'Este, Catherine

    2011-02-01

    Purpose: We sought to compare the prognostic value of early prostate-specific antigen (PSA) test-based variables for the 802 eligible patients treated in the Trans-Tasman Radiation Oncology Group 96.01 randomized trial. Methods and Materials: Patients in this trial had T2b, T2c, T3, and T4 N0 prostate cancer and were randomized to 0, 3, or 6 months of neoadjuvant androgen deprivation therapy (NADT) prior to and during radiation treatment at 66 Gy to the prostate and seminal vesicles. The early PSA test-based variables evaluated were the pretreatment initial PSA (iPSA) value, PSA values at 2 and 4 months into NADT, the PSA nadir (nPSA) value after radiation in all patients, and PSA response signatures in men receiving radiation. Comparisons of endpoints were made using Cox models of local progression-free survival, distant failure-free survival, biochemical failure-free survival, and prostate cancer-specific survival. Results: The nPSA value was a powerful predictor of all endpoints regardless of whether NADT was given before radiation. PSA response signatures also predicted all endpoints in men treated by radiation alone. iPSA and PSA results at 2 and 4 months into NADT predicted biochemical failure-free survival but not any of the clinical endpoints. nPSA values correlated with those of iPSA, Gleason grade, and T stage and were significantly higher in men receiving radiation alone than in those receiving NADT. Conclusions: The postradiation nPSA value is the strongest prognostic indicator of all early PSA-based variables. However, its use as a surrogate endpoint needs to take into account its dependence on pretreatment variables and treatment method.

  15. DNA analysis conference in Santa Fe

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Los Alamos National Laboratory is hosting a DNA sequence analysis and bioinformatics event, the 10th annual Sequencing, Finishing and Analysis in the Future (SFAF) workshop. May 27, 2015. DNA extracted from a soil sample is stored in a small vial of clear liquid. In general, living cells function by using the sequences of bases in their DNA as a blueprint for assembling proteins. A particularly important type of protein is

  16. A hybrid method for quasi-three-dimensional slope stability analysis in a municipal solid waste landfill

    SciTech Connect (OSTI)

    Yu, L.; Batlle, F.

    2011-12-15

    Highlights: > A quasi-three-dimensional slope stability analysis method was proposed. > The proposed method is a good engineering tool for 3D slope stability analysis. > Factor of safety from 3D analysis is higher than from 2D analysis. > 3D analysis results are more sensitive to cohesion than 2D analysis. - Abstract: Limited space for accommodating the ever-increasing mounds of municipal solid waste (MSW) demands that the capacity of MSW landfills be maximized by building landfills to greater heights with steeper slopes. This situation has raised concerns regarding the stability of high MSW landfills. A hybrid method for quasi-three-dimensional slope stability analysis based on finite element stress analysis was applied in a case study at an MSW landfill in northeast Spain. Potential slides can be assumed to be located within the waste mass because of the lack of weak foundation soils and geosynthetic membranes at the landfill base. The only triggering factors of deep-seated slope failure are the high leachate level and the relatively high, steep slope at the front. The valley-shaped geometry and layered construction procedure at the site make three-dimensional slope stability analyses necessary for this landfill. In the finite element stress analysis, variations of leachate level during construction and continuous settlement of the landfill were taken into account. The 'equivalent' three-dimensional factor of safety (FoS) was computed from the individual results of two-dimensional analyses for a series of evenly spaced cross sections within the potential sliding body. Results indicate that the hybrid method for quasi-three-dimensional slope stability analysis adopted in this paper is capable of roughly locating the spatial position of the potential sliding mass. This easy-to-manipulate method can serve as an engineering tool for a preliminary estimate of the FoS as well as of the approximate position and extent of the potential sliding mass. The finding that the FoS obtained from three-dimensional analysis is as much as 50% higher than that from two-dimensional analysis demonstrates the significance of the three-dimensional effect for this case study. Influences of shear parameters, time elapsed after landfill closure, leachate level, and unit weight of waste on the FoS were also investigated. These sensitivity analyses serve as guidelines for construction practices and operating procedures at the MSW landfill under study.
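    Combining per-section 2D factors of safety into an "equivalent" 3D value, as described above, is often done as a weighted average over the evenly spaced cross sections. The area weighting and all numbers below are illustrative assumptions; the abstract does not state the exact combination rule used in the paper.

```python
# Sketch of an area-weighted "equivalent" 3D factor of safety built
# from per-section 2D results (weighting rule and values assumed).

def equivalent_fos_3d(fos_2d, areas):
    """Area-weighted equivalent 3D factor of safety."""
    assert len(fos_2d) == len(areas)
    return sum(f * a for f, a in zip(fos_2d, areas)) / sum(areas)

fos_2d = [1.10, 1.25, 1.40, 1.60]     # 2D FoS for each cross section
areas = [120.0, 200.0, 180.0, 90.0]   # sliding-mass section areas, m^2

# Sections with larger sliding-mass area contribute more to the result.
print(round(equivalent_fos_3d(fos_2d, areas), 3))
```

    A uniform set of section results collapses to the common 2D value, while end sections with small sliding areas contribute little, which is one way the 3D value can exceed the critical 2D value.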

  17. Budget Risk & Prioritization Analysis Tool

    Energy Science and Technology Software Center (OSTI)

    2010-12-31

    BRPAtool performs the following: • Assists managers in making solid decisions on what scope/activities to reduce and/or eliminate to meet constrained budgets, based on multiple risk factors • Enables analysis of different budget scenarios • Can analyze risks and cost for each activity based on technical, quantifiable risk criteria and management-determined risks • Real-time analysis • Enables managers to determine the multipliers and where funding is best applied • Promotes solid budget defense

  18. Hydrogen Production: Fundamentals and Case Study Summaries (Presentation)

    SciTech Connect (OSTI)

    Harrison, K.; Remick, R.; Hoskin, A.; Martin, G.

    2010-05-19

    This presentation summarizes hydrogen production fundamentals and case studies, including hydrogen to wind case studies.

  19. Are shorted pipeline casings a problem

    SciTech Connect (OSTI)

    Gibson, W.F.

    1994-11-01

    The pipeline industry has many road and railroad crossings with casings that have been in service for more than 50 years without exhibiting any major problems, regardless of whether the casing is shorted to or isolated from the carrier pipe. The use of smart pigging and continual visual inspection when retrieving a cased pipeline segment has shown that, whether shorted or isolated, casings have no significant bearing on the presence or absence of corrosion on the carrier pipe.

  20. Federal Utility Energy Service Contract Case Studies

    Broader source: Energy.gov [DOE]

    These case studies feature examples of federal projects made possible by the use of utility energy service contracts (UESCs).

  1. Federal Energy Savings Performance Contract Case Studies

    Broader source: Energy.gov [DOE]

    These case studies feature examples of projects made possible by the use of energy savings performance contracts (ESPCs).

  2. New tools for the analysis and design of building envelopes

    SciTech Connect (OSTI)

    Papamichael, K.; Winkelmann, F.C.; Buhl, W.F.; Chauvet, H.

    1994-08-01

    We describe the integrated development of PowerDOE, a new version of the DOE-2 building energy analysis program, and the Building Design Advisor (BDA), a multimedia-based design tool that assists building designers with the concurrent consideration of multiple design solutions with respect to multiple design criteria. PowerDOE has a Windows-based Graphical User Interface (GUI) that makes it easier to use than DOE-2, while retaining DOE-2's calculation power and accuracy. BDA, with a similar GUI, is designed to link to multiple analytical models and databases. In its first release it is linked to PowerDOE and a Daylighting Analysis Module, as well as to a Case Studies Database and a Schematic Graphic Editor. These allow building designers to set performance goals and address key building envelope parameters from the initial, schematic phases of building design to the detailed specification of building components and systems required by PowerDOE. The consideration of the thermal performance of building envelopes through PowerDOE and BDA is integrated with non-thermal envelope performance aspects, such as daylighting, as well as with the performance of non-envelope building components and systems, such as electric lighting and HVAC. Future versions of BDA will support links to CAD and electronic product catalogs, as well as provide context-dependent design advice to improve performance.

  3. Estimating the greenhouse gas benefits of forestry projects: A Costa Rican Case Study

    SciTech Connect (OSTI)

    Busch, Christopher; Sathaye, Jayant; Sanchez Azofeifa, G. Arturo

    2000-09-01

    If the Clean Development Mechanism proposed under the Kyoto Protocol is to serve as an effective means for combating global climate change, it will depend upon reliable estimates of greenhouse gas benefits. This paper sketches the theoretical basis for estimating the greenhouse gas benefits of forestry projects and suggests lessons learned based on a case study of Costa Rica's Protected Areas Project, a 500,000 hectare effort to reduce deforestation and enhance reforestation. The Protected Areas Project in many senses advances the state of the art for Clean Development Mechanism-type forestry projects, as does the third-party verification work of SGS International Certification Services on the project. Nonetheless, sensitivity analysis shows that carbon benefit estimates for the project vary widely based on the imputed deforestation rate in the baseline scenario, i.e. the deforestation rate expected if the project were not implemented. This, along with a newly available national dataset that confirms other research showing a slower rate of deforestation in Costa Rica, suggests that the 1979-1992 forest cover data originally used as the basis for estimating carbon savings should be reconsidered. When the newly available data are substituted, carbon savings amount to 8.9 Mt (million tonnes) of carbon, down from the original estimate of 15.7 Mt. The primary general conclusion is that project developers should give more attention to forecasting the land use and land cover change scenarios underlying estimates of greenhouse gas benefits.
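    The dependence of the carbon-benefit estimate on the imputed baseline deforestation rate can be illustrated with back-of-envelope arithmetic. The rates, carbon density, and time horizon below are illustrative assumptions, not the study's actual inputs.

```python
# Illustrative sketch of why the baseline deforestation rate drives
# avoided-deforestation carbon estimates (all inputs assumed).

def avoided_emissions_mt(area_ha, annual_deforestation_rate,
                         carbon_t_per_ha, years):
    """Carbon (Mt) saved by preventing baseline deforestation."""
    deforested_ha = area_ha * annual_deforestation_rate * years
    return deforested_ha * carbon_t_per_ha / 1e6

# Same 500,000 ha project, two assumed baseline rates: roughly halving
# the imputed rate roughly halves the estimated benefit.
hi = avoided_emissions_mt(500_000, 0.030, 100.0, 10)
lo = avoided_emissions_mt(500_000, 0.017, 100.0, 10)
print(hi, lo)
```

    This proportionality is the mechanism behind the abstract's drop from 15.7 Mt to 8.9 Mt when a slower observed deforestation rate replaces the 1979-1992 figure.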

  4. Synthesis and X-ray structure analysis of a new binuclear Schiff base Co(II) complex with the ligand N,N'-bis(3-methoxysalicylidene)-1,4-butanediamine

    SciTech Connect (OSTI)

    Nasr-Esfahani, M.

    2009-12-15

    The title binuclear complex, tris[N,N'-bis(3-methoxysalicylidene)-1,4-diaminobutane]dicobalt(II), C{sub 60}H{sub 70}Co{sub 2}N{sub 6}O{sub 15}, was prepared by the reaction of the tetradentate Schiff base ligand bis(3-methoxysalicylidene)-1,4-diaminobutane with Co(CH{sub 3}COO){sub 2} . 4H{sub 2}O in an ethanol solution and structurally characterized by single-crystal X-ray diffraction. The complex has a dinuclear structure in which the two Co(II) ions are bridged by one N,N'-bis(3-methoxysalicylidene)-1,4-diaminobutane ligand. The two Co(II) ions each have a distorted octahedral coordination involving two O and two N atoms.

  5. Analysis of PWR RCS Injection Strategy During Severe Accident

    SciTech Connect (OSTI)

    Wang, S.-J. [Institute of Nuclear Energy Research, Taiwan (China); Chiang, K.-S. [Institute of Nuclear Energy Research, Taiwan (China); Chiang, S.-C. [Taiwan Power Company, Taiwan (China)

    2004-05-15

    Reactor coolant system (RCS) injection is an important strategy for severe accident management of a pressurized water reactor (PWR) system. Maanshan is a typical Westinghouse PWR nuclear power plant (NPP) with a large, dry containment. The severe accident management guideline (SAMG) of Maanshan NPP was developed based on the Westinghouse Owners Group (WOG) SAMG. The purpose of this work is to analyze the RCS injection strategy of a PWR system in an overheated core condition. Power is assumed to be recovered as the vessel water level drops to the bottom of the active fuel. The Modular Accident Analysis Program version 4.0.4 (MAAP4) code is chosen as the analysis tool. A postulated station blackout sequence for Maanshan NPP is cited as a reference case for this analysis. Hot leg creep rupture occurs during the mitigation action with immediate injection after power recovery according to the WOG SAMG, which is not desired. This phenomenon was not considered while developing the WOG SAMG. Two other RCS injection methods are analyzed using MAAP4. The RCS injection strategy is modified in the Maanshan SAMG. These results can be applied to typical PWR NPPs.

  6. Accident analysis and DOE criteria

    SciTech Connect (OSTI)

    Graf, J.M.; Elder, J.C.

    1982-01-01

    In analyzing the radiological consequences of major accidents at DOE facilities, one finds that many facilities fall so far below the limits of DOE Order 6430 that compliance is easily demonstrated by simple analysis. For those cases where the amount of radioactive material and the dispersive energy available are enough for accident consequences to approach the limits, the models and assumptions used become critical. In some cases the models themselves are the difference between meeting the criteria or not meeting them. Further, in one case, we found that not only did the selection of models determine compliance, but the selection of applicable criteria from different chapters of Order 6430 also made the difference. DOE has recognized the problem of different criteria in different chapters applying to one facility and has proceeded to make changes for the sake of consistency. We have proposed to outline the specific steps needed in an accident analysis and suggest appropriate models, parameters, and assumptions. As a result, we feel DOE siting and design criteria will be more fairly and consistently applied.

  7. Building Energy Consumption Analysis

    Energy Science and Technology Software Center (OSTI)

    2005-03-02

    DOE2.1E-121SUNOS is a set of modules for energy analysis in buildings. Modules are included to calculate the heating and cooling loads for each space in a building for each hour of a year (LOADS), to simulate the operation and response of the equipment and systems that control temperature and humidity and distribute heating, cooling and ventilation to the building (SYSTEMS), to model energy conversion equipment that uses fuel or electricity to provide the required heating, cooling and electricity (PLANT), and to compute the cost of energy and building operation based on utility rate schedule and economic parameters (ECONOMICS).

  8. Tiling Microarray Analysis Tools

    SciTech Connect (OSTI)

    2005-05-04

    TiMAT is a package of 23 command line Java applications for use in the analysis of Affymetrix tiled genomic microarray data. TiMAT enables: 1) Rebuilding the genome annotation for entire tiled arrays (repeat filtering, chromosomal coordinate assignment). 2) Post processing of oligo intensity values (quantile normalization, median scaling, PMMM transformation), 3) Significance testing (Wilcoxon rank sum and signed rank tests, intensity difference and ratio tests) and Interval refinement (filtering based on multiple statistics, overlap comparisons), 4) Data visualization (detailed thumbnail/zoomed view with Interval Plots and data export to Affymetrix's Integrated Genome Browser) and Data reports (spreadsheet summaries and detailed profiles)
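    Quantile normalization, one of the oligo-intensity post-processing steps listed above, can be sketched in a few lines: every array is forced to share the same intensity distribution by replacing each value with the mean of the values holding the same rank across arrays. This is a generic textbook sketch, not TiMAT's Java implementation, and the arrays are toy data.

```python
# Minimal quantile-normalization sketch (toy data, not Affymetrix output).

def quantile_normalize(arrays):
    """Replace each value by the cross-array mean of its rank."""
    n = len(arrays[0])
    assert all(len(a) == n for a in arrays)
    # Sort each array, then average across arrays at each rank.
    sorted_cols = [sorted(a) for a in arrays]
    rank_means = [sum(col[i] for col in sorted_cols) / len(arrays)
                  for i in range(n)]
    # Map each original value back through its rank in its own array.
    result = []
    for a in arrays:
        order = sorted(range(n), key=lambda i: a[i])
        out = [0.0] * n
        for rank, idx in enumerate(order):
            out[idx] = rank_means[rank]
        result.append(out)
    return result

a = [5.0, 2.0, 3.0]
b = [4.0, 1.0, 6.0]
na, nb = quantile_normalize([a, b])
print(na)  # [5.5, 1.5, 3.5] -- a's values mapped to shared rank means
```

    After normalization both arrays contain exactly the same set of values {1.5, 3.5, 5.5}, differing only in order, which is what makes downstream rank-based significance tests comparable across arrays.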

  9. Tiling Microarray Analysis Tools

    Energy Science and Technology Software Center (OSTI)

    2005-05-04

    TiMAT is a package of 23 command line Java applications for use in the analysis of Affymetrix tiled genomic microarray data. TiMAT enables: 1) Rebuilding the genome annotation for entire tiled arrays (repeat filtering, chromosomal coordinate assignment). 2) Post processing of oligo intensity values (quantile normalization, median scaling, PMMM transformation), 3) Significance testing (Wilcoxon rank sum and signed rank tests, intensity difference and ratio tests) and Interval refinement (filtering based on multiple statistics, overlap comparisons), 4) Data visualization (detailed thumbnail/zoomed view with Interval Plots and data export to Affymetrix's Integrated Genome Browser) and Data reports (spreadsheet summaries and detailed profiles)

  10. Data Analysis from Ground Source Heat Pump Demonstration Projects

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Comparison of building energy use before and after GSHP retrofit (result from the case study for one of the ARRA-funded GSHP demonstration projects). Credit: Oak Ridge National Laboratory.

  11. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    U.S. Energy Information Administration | International Energy Outlook 2014, Appendix B
    Table B2. World liquids consumption by region, High Oil Price case, 2009-40 (million barrels per day)

    Region             2009   2010   2020   2025   2030   2035   2040   Avg. annual % change, 2010-40
    OECD
      OECD Americas    23.1   23.5   23.4   22.9   22.3   22.0   22.0   -0.2
      United States a  18.6   18.9   18.6   18.0   17.4   17.2   17.2   -0.3
      Canada            2.2    2.2    2.2    2.1    2.0    2.0    1.9   -0.5
      Mexico/Chile      2.4    2.4    2.6    2.7    2.8    2.8    2.9    0.6
      OECD
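    The "Average annual percent change, 2010-40" column in snippets like the one above is conventionally a compound annual growth rate over the 30-year span. The sketch below checks that convention against the High Oil Price case United States row (18.9 in 2010, 17.2 in 2040); the formula is standard, though the table's exact rounding is assumed.

```python
# Compound-annual-growth-rate sketch for EIA "average annual percent
# change" columns (assumes the standard CAGR convention).

def avg_annual_pct_change(v_start, v_end, years):
    """Compound annual growth rate, in percent per year."""
    return ((v_end / v_start) ** (1.0 / years) - 1.0) * 100.0

# United States, High Oil Price case, 2010 -> 2040 (million bbl/day):
print(round(avg_annual_pct_change(18.9, 17.2, 30), 1))  # -0.3
```

    The same formula reproduces the Canada row (2.2 to 1.9 gives about -0.5% per year), supporting the assumed convention.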

  12. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    U.S. Energy Information Administration | International Energy Outlook 2014, Appendix C
    Table C2. World liquids consumption by region, Low Oil Price case, 2009-40 (million barrels per day)

    Region             2009   2010   2020   2025   2030   2035   2040   Avg. annual % change, 2010-40
    OECD
      OECD Americas    23.1   23.5   24.8   24.8   24.7   24.9   25.5    0.3
      United States a  18.6   18.9   19.6   19.5   19.4   19.4   19.8    0.1
      Canada            2.2    2.2    2.4    2.4    2.4    2.5    2.6    0.5
      Mexico/Chile      2.4    2.4    2.8    2.9    3.0    3.0    3.2    1.0
      OECD Europe

  13. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (OSTI)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis of the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an algorithm by Michael McKay to compute variable correlations. DDACE can also be used to carry out a main-effects analysis to calculate the sensitivity of an output variable to each of the varied inputs taken individually.
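    One of the sampling schemes a library like DDACE offers is Latin hypercube sampling: each variable's range is split into n equal-probability bins and exactly one point is drawn per bin, so even small samples cover every range. The sketch below is a generic textbook LHS in Python, not DDACE's C++ implementation, and the temperature/material bounds are hypothetical.

```python
# Generic Latin hypercube sampling sketch (not DDACE's implementation).
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """One stratified draw per equal-width bin, per variable."""
    rng = rng or random.Random(0)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        # Assign each sample a distinct bin, in shuffled order, then
        # pick a uniform point inside that bin.
        bins = list(range(n_samples))
        rng.shuffle(bins)
        for i, b in enumerate(bins):
            u = (b + rng.random()) / n_samples  # position in [0, 1)
            samples[i][d] = lo + u * (hi - lo)
    return samples

# Hypothetical study: a temperature variable and one material variable.
pts = latin_hypercube(5, [(300.0, 400.0), (0.1, 0.9)])
print(len(pts), len(pts[0]))  # 5 2
```

    Feeding each row of `pts` to the application code and regressing outputs on inputs is exactly the design-and-analysis loop the abstract describes.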

  14. Objective analysis of toolmarks in forensics

    SciTech Connect (OSTI)

    Grieve, Taylor N.

    2013-03-01

    Since the 1993 court case of Daubert v. Merrell Dow Pharmaceuticals, Inc., the subjective nature of toolmark comparison has been questioned by attorneys and law enforcement agencies alike. This has led to an increased drive to establish objective comparison techniques with known error rates, much like those that DNA analysis is able to provide. This push has created research in which the 3-D surface profiles of two different marks are characterized and the marks' cross-sections are run through a comparative statistical algorithm to acquire a value that is intended to indicate the likelihood of a match between the marks. The aforementioned algorithm has been developed and extensively tested through comparison of evenly striated marks made by screwdrivers. However, this algorithm has yet to be applied to quasi-striated marks such as those made by the shear edge of slip-joint pliers. The results of this algorithm's application to the surface of copper wire will be presented. Objective mark comparison also extends to comparison of toolmarks made by firearms. In an effort to create objective comparisons, microstamping of firing pins and breech faces has been introduced. This process involves placing unique alphanumeric identifiers surrounded by a radial code on the surface of firing pins, which transfer to the cartridge's primer upon firing. Three different guns equipped with microstamped firing pins were used to fire 3000 cartridges. These cartridges are evaluated based on the clarity of their alphanumeric transfers and the clarity of the radial code surrounding the alphanumerics.
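    A core building block of the comparative statistic described above is a similarity score between two aligned cross-section profiles. The sketch below uses a plain Pearson correlation as a stand-in; the published algorithm adds alignment, windowing, and error-rate modeling, and the profile values here are toy data, not measured marks.

```python
# Toy profile-similarity sketch: Pearson correlation of two aligned
# mark cross-sections (a stand-in for the published statistic).
import math

def normalized_correlation(x, y):
    """Pearson correlation of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

mark1 = [0.1, 0.4, 0.35, 0.8, 0.6]
mark2 = [0.12, 0.38, 0.33, 0.79, 0.61]  # near-replicate of mark1
mark3 = [0.7, 0.1, 0.5, 0.2, 0.9]       # unrelated profile

# The replicate pair scores higher than the unrelated pair.
print(normalized_correlation(mark1, mark2) >
      normalized_correlation(mark1, mark3))  # True
```

    Turning such a score into a match likelihood with a known error rate requires calibrating its distribution on known-match and known-non-match pairs, which is the harder part of the research the abstract describes.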

  15. Phosphonium-based ionic liquids and uses

    DOE Patents [OSTI]

    Del Sesto, Rico E; Koppisch, Andrew T; Lovejoy, Katherine S; Purdy, Geraldine M

    2014-12-30

    Phosphonium-based room temperature ionic liquids ("RTILs") were prepared. They were used as matrices for Matrix-Assisted Laser Desorption Ionization (MALDI) mass spectrometry and also for preparing samples of dyes for analysis.

  16. NID Copper Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Zhu, Zihua

    2011-09-12

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology, possibly one under development at Nonlinear Ion Dynamics (NID), will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences Laboratory (EMSL), a DOE user facility at PNNL, has the required mass spectrometry instruments for making isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL in January 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are reported here. A second sample of isotopically separated copper was provided by NID to PNNL in August 2011 for isotopic analysis as a test of the NID technology; the results of that analysis are also reported here.

  17. Breckinridge Project, initial effort. Report X. Economic analysis and financial plan

    SciTech Connect (OSTI)

    1982-01-01

    The economic evaluation presented in this volume is based upon the cost estimates developed in the Phase Zero effort and an evaluation of product market values developed by the PACE Company Engineers and Consultants, Inc. All costs and revenues have been adjusted to reflect the impact of inflation, consistent with the forecast shown in Table 2.1, Page 2-19. Tax treatment reflects expert interpretation of the tax law in effect January 1981. The Marketing Analysis section is an abstract of a detailed report prepared by the PACE Company for the Breckinridge Project. It provides the reader with an understanding of the methodology used to establish product values, and identifies and interprets the effects of key variables that impact market prices. The base case economic scenario, considered the most likely to occur, anticipates that world economic growth, as well as that of the United States, will be substantially less than that experienced during the previous twenty years. Under this scenario, major disruptions in crude oil supply will not occur. Therefore, prices in real terms at the end of this century are projected to be slightly higher than the peak price of 1981. Domestic natural gas supplies are expected to expand as a result of deregulation and increased importation of LNG. Two alternate economic scenarios are also considered. Sensitivity analysis of both alternate economic scenarios and key project variables clearly points to the market price of crude oil as the dominant economic factor determining this project's soundness. The base case forecast is considered to be not only the most likely case but also one unlikely to prove optimistic. The Financial Plan section outlines provisions and presents a plan for financial management of the project.

  18. Analysis of the Climate Change Technology Initiative

    Reports and Publications (EIA)

    1999-01-01

    Analysis of the impact of specific policies on the reduction of carbon emissions and their impact on U.S. energy use and prices in the 2008-2012 time frame. Also, analyzes the impact of the President's Climate Change Technology Initiative, as defined for the 2000 budget, on reducing carbon emissions from the levels forecast in the Annual Energy Outlook 1999 reference case.

  19. Annual Energy Outlook Retrospective Review: Evaluation of 2014 and Prior Reference Case Projections

    Gasoline and Diesel Fuel Update (EIA)

    Annual Energy Outlook Retrospective Review: Evaluation of 2014 and Prior Reference Case Projections, March 2015. This report was prepared by the U.S. Energy Information Administration (EIA), the statistical and analytical agency within the U.S. Department of Energy. By law, EIA's

  20. Resource Analysis | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Resource Analysis | Technological Feasibility & Cost Analysis | Environmental Analysis | Delivery Analysis | Infrastructure Development & Financial Analysis | Energy Market Analysis | DOE H2A ...

  1. Economic Analysis of Policy Effects Analysis Platform

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Economic Analysis of Policy Effects Analysis Platform March 24, 2015 Jason Hansen, PhD ... * Annual Milestone (93015): Identify economic benefits of co-products on biorefinery ...

  2. An Updated Annual Energy Outlook 2009 Reference Case Reflecting Provisions of the American Recovery and Reinvestment Act and Recent Changes in the Economic Outlook

    Reports and Publications (EIA)

    2009-01-01

    This report updates the Reference Case presented in the Annual Energy Outlook 2009 based on recently enacted legislation and the changing macroeconomic environment.

  3. Analysis of embedded waste storage tanks subjected to seismic loading

    SciTech Connect (OSTI)

    Zaslawsky, M.; Sammaddar, S.; Kennedy, W.N.

    1991-01-01

    At the Savannah River Site, High Activity Wastes are stored in carbon steel tanks within reinforced concrete vaults. These soil-embedded tank/vault structures are approximately 80 ft. in diameter and 40 ft. deep. The tanks were studied to determine the essential governing variables and to reduce the problem to the smallest number of governing cases, optimizing the analysis effort without introducing excessive conservatism. The problem reduced to a limited number of cases of soil-structure interaction and fluid (tank contents)-structure interaction problems. It was theorized that substantially reduced input would be realized from soil-structure interaction (SSI), but that it was also possible that tank-to-tank proximity would result in (re)amplification of the input. To determine the governing seismic input motion, the three-dimensional SSI code SASSI was used. Significant among the issues relative to waste tanks is the determination of fluid response and tank behavior as a function of tank contents viscosity. Tank seismic analyses and studies have been based on low viscosity fluids (water), and that behavior is quite well understood. Typical wastes (salts, sludge), which are highly viscous, have not been the subject of studies to understand the effect of viscosity on seismic response. The computer code DYNA3D was used to study how viscosity alters tank wall pressure distribution and tank base shear and overturning moments. A parallel hand calculation was performed using standard procedures. Conclusions based on the study provide insight into the quantification of the reduction of seismic inputs for soil-structure interaction at a 'soft' soil site.

  4. Analysis of embedded waste storage tanks subjected to seismic loading

    SciTech Connect (OSTI)

    Zaslawsky, M.; Sammaddar, S.; Kennedy, W.N.

    1991-12-31

    At the Savannah River Site, High Activity Wastes are stored in carbon steel tanks within reinforced concrete vaults. These soil-embedded tank/vault structures are approximately 80 ft. in diameter and 40 ft. deep. The tanks were studied to determine the essential governing variables and to reduce the problem to the smallest number of governing cases, optimizing the analysis effort without introducing excessive conservatism. The problem reduced to a limited number of cases of soil-structure interaction and fluid (tank contents)-structure interaction problems. It was theorized that substantially reduced input would be realized from soil-structure interaction (SSI), but that it was also possible that tank-to-tank proximity would result in (re)amplification of the input. To determine the governing seismic input motion, the three-dimensional SSI code SASSI was used. Significant among the issues relative to waste tanks is the determination of fluid response and tank behavior as a function of tank contents viscosity. Tank seismic analyses and studies have been based on low viscosity fluids (water), and that behavior is quite well understood. Typical wastes (salts, sludge), which are highly viscous, have not been the subject of studies to understand the effect of viscosity on seismic response. The computer code DYNA3D was used to study how viscosity alters tank wall pressure distribution and tank base shear and overturning moments. A parallel hand calculation was performed using standard procedures. Conclusions based on the study provide insight into the quantification of the reduction of seismic inputs for soil-structure interaction at a 'soft' soil site.

  5. DOE H2A Analysis | Department of Energy

    Office of Environmental Management (EM)

    DOE H2A Analysis: Realistic assumptions, both market- and technology-based, are critical to an accurate analytical study. DOE's H2A Analysis Group develops the building blocks and frameworks needed to conduct rigorous and consistent analyses of a wide range of hydrogen technologies. Established in FY 2003, H2A (which stands for hydrogen analysis) brings together analysis expertise in the hydrogen community, drawing from industry, academia, and DOE's

  6. Risk Analysis Virtual ENvironment

    Energy Science and Technology Software Center (OSTI)

    2014-02-10

    RAVEN has three major functionalities: 1. It provides a graphical user interface for pre- and post-processing of RELAP-7 input and output. 2. It provides the capability to model nuclear power plant control logic for the RELAP-7 code and dynamic control of the accident scenario evolution. This capability is based on a software structure that realizes a direct connection between the RELAP-7 solver engine (MOOSE) and a Python environment where the variables describing the plant status are accessible in a scripting environment. RAVEN supports the generation of probabilistic scenario control by supplying a wide range of probability and cumulative distribution functions and their inverses. 3. It provides a general environment for performing probabilistic risk analysis for RELAP-7, RELAP-5, and any generic MOOSE-based application. The probabilistic analysis is performed by sampling the input space of the coupled code parameters and is enhanced by modern artificial-intelligence algorithms that accelerate identification of the areas of major risk in the input parameter space. This environment also provides a graphical visualization capability for analyzing the outcomes. Among other approaches, the classical Monte Carlo and Latin Hypercube sampling algorithms are available. To accelerate the convergence of the sampling methodologies, support vector machines, Bayesian regression, and stochastic polynomial chaos collocation are implemented. The same methodologies could also be used to solve optimization and uncertainty-propagation problems within the RAVEN framework.
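    The Latin Hypercube sampling mentioned above stratifies each input dimension into equal-probability bins and draws exactly one sample per bin; a minimal sketch of the idea (independent of RAVEN's actual interfaces):

```python
import random

def latin_hypercube(n_samples: int, n_dims: int, rng: random.Random) -> list[list[float]]:
    """Return n_samples points in [0, 1)^n_dims with exactly one point per
    stratum in every dimension (the defining property of Latin Hypercube
    sampling), giving better space coverage than plain Monte Carlo."""
    points = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))   # one equal-probability bin per sample
        rng.shuffle(strata)               # random pairing of bins to points
        for i, s in enumerate(strata):
            # Jitter uniformly within the assigned bin [s/n, (s+1)/n).
            points[i][d] = (s + rng.random()) / n_samples
    return points

pts = latin_hypercube(10, 2, random.Random(42))
```

    For code-parameter sampling, each unit-interval coordinate would then be mapped through the inverse CDF of that parameter's distribution.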

  7. GIS-Based Infrastructure Modeling | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    GIS-Based Infrastructure Modeling: Presentation by NREL's Keith Parks at the 2010-2025 Scenario Analysis for Hydrogen Fuel Cell Vehicles and Infrastructure Meeting, August 9-10, 2006, Washington, D.C. (parks_gis_infrastructure_modeling.pdf)

  8. Radiation analysis devices, radiation analysis methods, and articles of manufacture

    DOE Patents [OSTI]

    Roybal, Lyle Gene

    2010-06-08

    Radiation analysis devices include circuitry configured to determine respective radiation count data for a plurality of sections of an area of interest and to combine the radiation count data of individual sections to determine whether a selected radioactive material is present in the area of interest. The amount of radiation count data for an individual section is insufficient to determine whether the selected radioactive material is present in that section. An article of manufacture includes media comprising programming configured to cause processing circuitry to perform processing comprising: determining one or more correction factors based on a calibration of a radiation analysis device, measuring radiation received by the radiation analysis device using the one or more correction factors, and presenting information relating to an amount of measured radiation having one of a plurality of specified radiation energy levels within a range of interest.
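    The combining step described in the abstract rests on counting statistics: a signal too weak to detect in any single section can become statistically significant once sections are summed. A hedged sketch under a simple Poisson-background assumption (the counts and background rate are invented for illustration, not taken from the patent):

```python
import math

def summed_significance(section_counts: list[int], bkg_per_section: float) -> float:
    """Approximate significance (in sigma) of the total counts over the
    expected background, treating the background as Poisson. Illustrative
    only -- a real detector pipeline applies calibration corrections first."""
    total = sum(section_counts)
    expected = bkg_per_section * len(section_counts)
    return (total - expected) / math.sqrt(expected)

sections = [12, 9, 14, 11, 13, 10]          # hypothetical per-section gross counts
sigma = summed_significance(sections, 8.0)  # each section alone is inconclusive
```

    Here no single section exceeds its background by more than ~2 counts, yet the six sections together sit about three standard deviations above the expected background.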

  9. NID Copper Sample Analysis

    SciTech Connect (OSTI)

    Kouzes, Richard T.; Zhu, Zihua

    2011-02-01

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low-background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology under development at Nonlinear Ion Dynamics (NID) will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. The MAJORANA DEMONSTRATOR is a DOE- and NSF-funded project with a major science impact. DOE is funding NID through an SBIR grant to develop its separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences Laboratory (EMSL), a DOE user facility at PNNL, has the mass spectrometry instruments required for these isotopic measurements, which are essential to quality assurance for the MAJORANA DEMONSTRATOR and to the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL for isotopic analysis as a test of the NID technology. The results of that analysis are reported here.
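    At its simplest, the isotopic-distribution measurement described reduces to normalizing per-isotope ion counts; a hedged sketch that omits the mass-bias and interference corrections a real mass-spectrometry analysis applies (the counts below are invented, not NID or EMSL data):

```python
def isotopic_abundances(counts: dict[str, float]) -> dict[str, float]:
    """Atom fractions from raw ion counts (no mass-bias correction applied)."""
    total = sum(counts.values())
    return {isotope: c / total for isotope, c in counts.items()}

# Invented counts for a notionally 76Ge-enriched germanium sample.
ge = isotopic_abundances({
    "Ge-70": 500, "Ge-72": 700, "Ge-73": 300, "Ge-74": 1500, "Ge-76": 97000,
})
# ge["Ge-76"] == 0.97, i.e. the notional sample is 97% enriched in 76Ge.
```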

  10. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    Appendix B, Table B5. World crude and lease condensate production by region and country, High Oil Price case, 2009-40 (million barrels per day):

    Region          2009  2010  2011  2020  2025  2030  2035  2040  Avg. annual % change, 2010-40
    OPEC            31.0  32.0  32.2  28.5  29.6  32.5  35.2  37.9   0.6
    Middle East     20.8  21.7  23.0  19.1  19.9  22.6  25.2  27.5   0.8
    North Africa     3.3   3.2   2.0   2.6   2.6   2.7   2.7   2.7  -0.6
    West Africa      4.1   4.4   4.3   4.3   4.5   4.6   4.8   4.8   0.3
    South America    2.8   2.7   2.8   2.5   2.5   2.5   2.6   ...

  11. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    Appendix C, Table C5. World crude and lease condensate production by region and country, Low Oil Price case, 2009-40 (million barrels per day):

    Region          2009  2010  2011  2020  2025  2030  2035  2040  Avg. annual % change, 2010-40
    OPEC            31.0  32.0  32.2  39.0  44.2  49.9  54.8  60.2   2.1
    Middle East     20.8  21.7  23.0  27.1  31.0  35.3  39.3  43.6   2.3
    North Africa     3.3   3.2   2.0   3.2   3.4   3.6   3.8   4.1   0.8
    West Africa      4.1   4.4   4.3   5.4   6.0   6.7   7.0   7.3   1.7
    South America    2.8   2.7   2.8   3.3   3.7   4.2   4.6   ...

  12. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    Appendix A, Table A2. World liquids consumption by region, Reference case, 2009-40 (million barrels per day):

    Region          2009  2010  2020  2025  2030  2035  2040  Avg. annual % change, 2010-40
    OECD:
    OECD Americas   23.1  23.5  24.3  24.0  23.6  23.4  23.5   0.0
    United States   18.6  18.9  19.2  19.0  18.6  18.5  18.4  -0.1
    Canada           2.2   2.2   2.3   2.2   2.2   2.2   2.1  -0.1
    Mexico/Chile     2.4   2.4   2.7   2.8   2.8   2.8   2.9   0.7
    OECD Europe     15.0  14.8  14.1  14.1  14.0  13.9  14.0  -0.2
    OECD Asia        7.7   7.7   8.0   7.9   7.7   7.4   7.2   ...

  13. Appendix A. Reference case projections

    Gasoline and Diesel Fuel Update (EIA)

    Appendix A, Table A5. World crude and lease condensate production by region and country, Reference case, 2009-40 (million barrels per day):

    Region          2009  2010  2011  2020  2025  2030  2035  2040  Avg. annual % change, 2010-40
    OPEC            31.0  32.0  32.2  34.4  36.1  39.5  42.9  46.2   1.2
    Middle East     20.8  21.7  23.0  23.8  25.2  28.4  31.5  34.5   1.6
    North Africa     3.3   3.2   2.0   2.9   2.9   2.9   2.9   3.0  -0.3
    West Africa      4.1   4.4   4.3   4.9   5.0   5.1   5.2   5.3   0.6
    South America    2.8   2.7   2.8   2.9   2.9   3.0   3.2   3.5   ...
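    The "average annual percent change, 2010-40" column in these EIA table snippets is a compound annual growth rate between the 2010 and 2040 values, which can be checked directly against the figures quoted above (here the OPEC Reference-case row):

```python
def avg_annual_pct_change(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values, in percent."""
    return ((end / start) ** (1.0 / years) - 1.0) * 100.0

# OPEC crude and lease condensate, Reference case: 32.0 (2010) -> 46.2 (2040).
rate = avg_annual_pct_change(32.0, 46.2, 30)
# round(rate, 1) == 1.2, matching the 1.2 reported in the table
```

    The same check reproduces the High Oil Price OPEC figure: 32.0 to 37.9 over 30 years gives roughly 0.6 percent per year.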

  14. Twenty Years On!: Updating the IEA BESTEST Building Thermal Fabric Test Cases for ASHRAE Standard 140

    SciTech Connect (OSTI)

    Judkoff, R.; Neymark, J.

    2013-07-01

    ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, applies the IEA BESTEST building thermal fabric test cases and example simulation results originally published in 1995. These software accuracy test cases and their example simulation results, which comprise the first test suite adapted for the initial 2001 version of Standard 140, are approaching their 20th anniversary. In response to the evolution of the state of the art in building thermal fabric modeling since the test cases and example results were developed, work is commencing to update the normative test specification and the informative example results.

  15. Fuel Cell Tri-Generation System Case Study using the H2A Stationary Model | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Fuel Cell Tri-Generation System Case Study using the H2A Stationary Model: Overview of the H2A stationary model concept, results, strategy for analysis, Federal incentives for fuel cells, and a summary of next steps. (tspi_steward.pdf)

  16. Evaluating the Potential for Marine and Hydrokinetic Devices to Act as Artificial Reefs or Fish Aggregating Devices. Based on Analysis of Surrogates in Tropical, Subtropical, and Temperate U.S. West Coast and Hawaiian Coastal Waters

    SciTech Connect (OSTI)

    Kramer, Sharon H.; Hamilton, Christine D.; Spencer, Gregory C.; Ogston, Heather O.

    2015-05-12

    Wave energy converters (WECs) and tidal energy converters (TECs) are only beginning to be deployed along the U.S. West Coast and in Hawai‘i, and a better understanding of their ecological effects on fish, particularly on special-status fish (e.g., threatened and endangered), is needed to facilitate project design and environmental permitting. The structures of WECs and TECs placed on the seabed, such as anchors and foundations, may function as artificial reefs that attract reef-associated fishes, while the midwater and surface structures, such as mooring lines, buoys, and wave or tidal power devices, may function as fish aggregating devices (FADs), forming the nuclei for groups of fishes. Little is known about the potential for WECs and TECs to function as artificial reefs and FADs in coastal waters of the U.S. West Coast and Hawai‘i. We evaluated these potential ecological interactions by reviewing relevant information about fish associations with surrogate structures, such as artificial reefs, natural reefs, kelps, floating debris, oil and gas platforms, marine debris, anchored FADs deployed to enhance fishing opportunities, net-cages used for mariculture, and piers and docks. Based on our review, we postulate that the structures of WECs and TECs placed on or near the seabed in coastal waters of the U.S. West Coast and Hawai‘i likely will function as small-scale artificial reefs and attract potentially high densities of reef-associated fishes (including special-status rockfish species [Sebastes spp.] along the mainland), and that the midwater and surface structures of WECs placed in the tropical waters of Hawai‘i likely will function as de facto FADs, with species assemblages varying by distance from shore and deployment depth. Along the U.S. West Coast, frequent associations with midwater and surface structures may be less likely: juvenile, semipelagic, kelp-associated rockfishes may occur at midwater and surface structures of WECs in coastal waters of southern California to Washington, and occasional, seasonal, or transitory associations of coastal pelagic fishes such as jack mackerel (Trachurus symmetricus) may also occur at WECs in these waters. Importantly, our review indicated that negative effects of WEC structures on special-status fish species, such as increased predation of juvenile salmonids or rockfishes, are not likely. In addition, WECs installed in coastal California, especially in southern California waters, have the potential to attract high densities of reef-associated fishes and may even contribute to rockfish productivity, if fish respond to the WECs similarly to oil and gas platforms, which have some of the highest secondary production per unit area of seafloor of any marine habitat studied globally (Claisse et al. 2014). We encountered some information gaps, owing to the paucity or lack, in key locations, of comparable surrogate structures in which fish assemblages and ecological interactions have been studied. TECs are most likely to be used in the Puget Sound area, but suitable surrogates are lacking there. However, in similarly cold-temperate waters of Europe and Maine, benthopelagic fish occurred around tidal turbines during lower tidal velocities, and this type of interaction may be expected of similar species at TECs in Puget Sound. To address information gaps in the near term, such as whether WECs would function as FADs in temperate waters, studies of navigation buoys using hydroacoustics are recommended.

  17. Water Use Reduction Case Studies | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Water Use Reduction Case Studies: These case studies offer examples of water use reduction projects implemented...

  18. WP-07 Power Rate Case (rates/ratecases)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  19. Hydrothermal Liquefaction Treatment Preliminary Hazard Analysis Report

    SciTech Connect (OSTI)

    Lowry, Peter P.; Wagner, Katie A.

    2015-08-31

    A preliminary hazard assessment was completed during February 2015 to evaluate the conceptual design of the modular hydrothermal liquefaction treatment system. The hazard assessment was performed in two stages. An initial assessment, using Hazard Identification and Preliminary Hazards Analysis (PHA) techniques, identified areas with significant or unique hazards (process safety-related hazards) that fall outside the normal operating envelope of PNNL and warranted additional analysis. The subsequent assessment was based on a qualitative What-If analysis, augmented as necessary by quantitative analysis for scenarios involving a release of hazardous material or energy with the potential to affect the public.
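    A qualitative What-If analysis like the one described is usually captured as a structured worksheet of questions, consequences, and safeguards, with scenarios flagged for quantitative follow-up; a minimal sketch of such a record (the schema and the example row are hypothetical, not taken from the PNNL report):

```python
from dataclasses import dataclass, field

@dataclass
class WhatIfEntry:
    """One row of a qualitative What-If hazard worksheet (hypothetical schema)."""
    question: str
    consequence: str
    safeguards: list[str] = field(default_factory=list)
    quantitative_followup: bool = False  # flags scenarios needing further analysis

worksheet = [
    WhatIfEntry(
        question="What if the reactor section over-pressurizes?",
        consequence="Potential release of hot, pressurized process fluid",
        safeguards=["relief valve", "high-pressure interlock"],
        quantitative_followup=True,
    ),
]

# Scenarios flagged for the quantitative second stage of the assessment.
open_items = [e for e in worksheet if e.quantitative_followup]
```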

  20. Preliminary Reference Case Results for Oil and Natural Gas

    U.S. Energy Information Administration (EIA) Indexed Site

    Preliminary Reference Case Results for Oil and Natural Gas. AEO2014 Oil and Gas Supply Working Group Meeting, Office of Petroleum, Gas, and Biofuels Analysis, September 26, 2013, Washington, DC. (Working-group presentation for discussion purposes; do not quote or cite, as results are subject to change.) AEO2014P uses ref2014.d092413a; AEO2013 uses ref2013.d102312a. Changes for AEO2014: revised shale and tight play resources (EURs, type curves); updated classification of shale gas, tight gas, &