National Library of Energy BETA

Sample records for metrics benchmarks actions

  1. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect (OSTI)

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.

  2. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect (OSTI)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research from the Laboratories for the 21st Century (Labs21) program, sponsored by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including cleanroom designers and energy managers.

  3. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect (OSTI)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research from the Laboratories for the 21st Century (Labs21) program, sponsored by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  4. Metrics and Benchmarks for Energy Efficiency in Laboratories

    SciTech Connect (OSTI)

    Mathew, Paul

    2007-10-26

A wide spectrum of laboratory owners, ranging from universities to federal agencies, has explicit goals for energy efficiency in their facilities. For example, the Energy Policy Act of 2005 (EPACT 2005) requires all new federal buildings to exceed ASHRAE 90.1-2004 by at least 30 percent. The University of California Regents Policy requires all new construction to exceed California Title 24 by at least 20 percent. A new laboratory is much more likely to meet energy efficiency goals if quantitative metrics and targets are explicitly specified in programming documents and tracked during the course of the delivery process. If efficiency targets are not explicitly and properly defined, any additional capital costs or design time associated with attaining higher efficiencies can be difficult to justify. The purpose of this guide is to provide guidance on how to specify and compute energy efficiency metrics and benchmarks for laboratories, at the whole-building as well as the system level. The information in this guide can be used to incorporate quantitative metrics and targets into the programming of new laboratory facilities. Many of these metrics can also be applied to evaluate existing facilities. For information on strategies and technologies to achieve energy efficiency, the reader is referred to Labs21 resources, including technology best practice guides, case studies, and the design guide (available at www.labs21century.gov/toolkit).
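The whole-building metric at the center of such goals is energy use intensity (EUI), checked against a target defined relative to a baseline standard. The sketch below is illustrative only; the function names and figures are hypothetical, not drawn from the guide:

```python
def eui(annual_site_energy_kbtu: float, gross_area_sf: float) -> float:
    """Whole-building energy use intensity in kBtu per square foot per year."""
    return annual_site_energy_kbtu / gross_area_sf

def meets_target(actual_eui: float, baseline_eui: float, pct_better: float) -> bool:
    """True if the actual EUI beats the baseline by at least pct_better percent,
    mirroring goals such as 'exceed the baseline standard by 30 percent'."""
    return actual_eui <= baseline_eui * (1 - pct_better / 100)

# Hypothetical laboratory: 12,000,000 kBtu/yr over 50,000 sf
lab_eui = eui(annual_site_energy_kbtu=12_000_000, gross_area_sf=50_000)
print(lab_eui)                                    # 240.0 kBtu/sf-yr
print(meets_target(lab_eui, baseline_eui=400, pct_better=30))  # True
```

Specifying the target this way in programming documents makes it directly checkable at each stage of design.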

  5. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    SciTech Connect (OSTI)

Mosey, G.; Doris, E.; Coggeshall, C.; Antes, M.; Ruch, J.; Mortensen, J.

    2009-01-01

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  6. Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California

    SciTech Connect (OSTI)

    Mathew, Paul; Mills, Evan; Bourassa, Norman; Brook, Martha

    2008-02-01

    The 2006 Commercial End Use Survey (CEUS) database developed by the California Energy Commission is a far richer source of energy end-use data for non-residential buildings than has previously been available and opens the possibility of creating new and more powerful energy benchmarking processes and tools. In this article--Part 2 of a two-part series--we describe the methodology and selected results from an action-oriented benchmarking approach using the new CEUS database. This approach goes beyond whole-building energy benchmarking to more advanced end-use and component-level benchmarking that enables users to identify and prioritize specific energy efficiency opportunities - an improvement on benchmarking tools typically in use today.

  7. How Does Your Data Center Measure Up? Energy Efficiency Metrics and Benchmarks for Data Center Infrastructure Systems

    SciTech Connect (OSTI)

    Mathew, Paul; Greenberg, Steve; Ganguly, Srirupa; Sartor, Dale; Tschudi, William

    2009-04-01

    Data centers are among the most energy intensive types of facilities, and they are growing dramatically in terms of size and intensity [EPA 2007]. As a result, in the last few years there has been increasing interest from stakeholders - ranging from data center managers to policy makers - to improve the energy efficiency of data centers, and there are several industry and government organizations that have developed tools, guidelines, and training programs. There are many opportunities to reduce energy use in data centers and benchmarking studies reveal a wide range of efficiency practices. Data center operators may not be aware of how efficient their facility may be relative to their peers, even for the same levels of service. Benchmarking is an effective way to compare one facility to another, and also to track the performance of a given facility over time. Toward that end, this article presents the key metrics that facility managers can use to assess, track, and manage the efficiency of the infrastructure systems in data centers, and thereby identify potential efficiency actions. Most of the benchmarking data presented in this article are drawn from the data center benchmarking database at Lawrence Berkeley National Laboratory (LBNL). The database was developed from studies commissioned by the California Energy Commission, Pacific Gas and Electric Co., the U.S. Department of Energy and the New York State Energy Research and Development Authority.
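The most widely tracked of these infrastructure metrics is Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy. A minimal sketch of the computation; the readings are hypothetical, not drawn from the LBNL database:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    1.0 is the theoretical ideal; benchmarked facilities commonly range
    from roughly 1.1 (best practice) to well over 2.0.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual meter readings (kWh)
print(round(pue(total_facility_kwh=3_500_000, it_equipment_kwh=2_000_000), 2))  # 1.75
```

Tracking the same ratio over time, as the article recommends, exposes infrastructure drift even when IT load is stable.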

  8. Metrics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Los Alamos expands its innovation network by engaging in sponsored research and licensing across technical disciplines. These agreements are the basis of a working relationship with industry and other research institutions and highlight the diversity of our collaborations. Los Alamos has a remarkable 70-year legacy of creating entirely new technologies that have revolutionized the country's understanding of science and engineering. Collaboration data are from Fiscal Year 2014 (FY14).

  9. Evaluation of metrics and baselines for tracking greenhouse gas emissions trends: Recommendations for the California climate action registry

    SciTech Connect (OSTI)

    Price, Lynn; Murtishaw, Scott; Worrell, Ernst

    2003-06-01

Energy Commission (Energy Commission) related to the Registry in three areas: (1) assessing the availability and usefulness of industry-specific metrics, (2) evaluating various methods for establishing baselines for calculating GHG emissions reductions related to specific actions taken by Registry participants, and (3) establishing methods for calculating electricity CO2 emission factors. The third area of research was completed in 2002 and is documented in Estimating Carbon Dioxide Emissions Factors for the California Electric Power Sector (Marnay et al., 2002). This report documents our findings related to the first two areas of research. For the first area of research, the overall objective was to evaluate the metrics, such as emissions per economic unit or emissions per unit of production, that can be used to report GHG emissions trends for potential Registry participants. This research began with an effort to identify methodologies, benchmarking programs, inventories, protocols, and registries that use industry-specific metrics to track trends in energy use or GHG emissions, in order to determine what types of metrics have already been developed. The next step in developing industry-specific metrics was to assess the availability of data needed to determine metric development priorities. Berkeley Lab also determined the relative importance of different potential Registry participant categories in order to assess the availability of sectoral or industry-specific metrics, and then identified industry-specific metrics in use around the world. While a plethora of metrics was identified, none adequately tracks trends in GHG emissions while maintaining confidentiality of data.
As a result of this review, Berkeley Lab recommends the development of a GHG intensity index as a new metric for reporting and tracking GHG emissions trends. Such an index could provide an industry-specific metric for reporting and tracking GHG emissions trends to accurately
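Such an intensity index could take the form of reporting-year emissions per unit of production relative to a base year, scaled so the base year equals 100. The sketch below is one plausible formulation for illustration, not the report's prescribed method; all figures are hypothetical:

```python
def ghg_intensity_index(emissions_t: float, production_units: float,
                        base_emissions_t: float, base_production_units: float) -> float:
    """GHG intensity relative to a base year, indexed to 100.

    Intensity = emissions per unit of production. The reporting-year
    intensity is divided by the base-year intensity and scaled so the
    base year = 100; values below 100 indicate declining intensity.
    """
    intensity = emissions_t / production_units
    base_intensity = base_emissions_t / base_production_units
    return 100 * intensity / base_intensity

# Hypothetical participant whose output grew faster than its emissions
print(round(ghg_intensity_index(105_000, 1_200_000, 100_000, 1_000_000), 1))  # 87.5
```

Because only the ratio is reported, absolute production and emissions figures stay confidential, which addresses the data-confidentiality concern the review raises.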

  10. K-12 Schools Project Performance Benchmarks | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Reports five major performance metrics that can be used to benchmark proposed energy service ...

  11. En route to Background Independence: Broken split-symmetry, and how to restore it with bi-metric average actions

    SciTech Connect (OSTI)

Becker, D.; Reuter, M.

    2014-11-15

The most momentous requirement a quantum theory of gravity must satisfy is Background Independence, necessitating in particular an ab initio derivation of the arena all non-gravitational physics takes place in, namely spacetime. Using the background field technique, this requirement translates into the condition of an unbroken split-symmetry connecting the (quantized) metric fluctuations to the (classical) background metric. If the regularization scheme used violates split-symmetry during the quantization process it is mandatory to restore it in the end at the level of observable physics. In this paper we present a detailed investigation of split-symmetry breaking and restoration within the Effective Average Action (EAA) approach to Quantum Einstein Gravity (QEG) with a special emphasis on the Asymptotic Safety conjecture. In particular we demonstrate for the first time in a non-trivial setting that the two key requirements of Background Independence and Asymptotic Safety can be satisfied simultaneously. Carefully disentangling fluctuation and background fields, we employ a ‘bi-metric’ ansatz for the EAA and project the flow generated by its functional renormalization group equation on a truncated theory space spanned by two separate Einstein–Hilbert actions for the dynamical and the background metric, respectively. A new powerful method is used to derive the corresponding renormalization group (RG) equations for the Newton- and cosmological constant, both in the dynamical and the background sector. We classify and analyze their solutions in detail, determine their fixed point structure, and identify an attractor mechanism which turns out instrumental in the split-symmetry restoration. We show that there exists a subset of RG trajectories which are both asymptotically safe and split-symmetry restoring: In the ultraviolet they emanate from a non-Gaussian fixed point, and in the infrared they lose all symmetry violating contributions inflicted on them by the

  12. Healthcare Project Performance Benchmarks | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Reports five major performance metrics that can be used to benchmark proposed energy service company projects within the healthcare industry, disaggregated and reported by major retrofit strategy. Author: U.S. Department of Energy. healthcareprojectperformancebenchmarks.pdf (1.76 MB)

  13. Thermal Performance Benchmarking (Presentation)

    SciTech Connect (OSTI)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  14. Guide to Benchmarking Residential Program Progress - CALL FOR...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

... BBNP Peer Group Benchmarking Examples Step 7 Recommended Benchmarking Metrics ... should review and approve the methodology used by contractors to estimate ...

  15. Benchmark Distribution & Run Rules

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Applications and micro-benchmarks for the Crossroads/NERSC-9 procurement. You can find more information by clicking on the header for each of the topics listed below. Change Log: change and update notes for the benchmark suite. Application Benchmarks: the following applications will be used by the Sustained System Improvement metric in measuring the performance improvement of proposed systems relative to NERSC's Edison platform. General Run Rules

  16. Buildings Performance Metrics Terminology | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

This document provides the terms and definitions used in the Department of Energy's Performance Metrics Research Project. metrics_terminology_20090203.pdf (152.35 KB)

  17. CBEI Localized Benchmarking: Users and Analytics

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

... Real Estate Life Cycle Data, metrics, and content: * Benchmarking * DOE SEED * DOE BPD * ... Budget History: CBEI BP3 (past) 2/1/2013 - 4/30/2014; CBEI BP4 (current) 5/1/2014 - 4/30...

  18. Benchmarks used

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Using the set of benchmarks described below, we compare different optimization options for the different compilers on Edison. The compilers are also compared against one another on the benchmarks. NERSC6 Benchmarks: we used these MPI benchmarks from the NERSC6 procurement:

        Benchmark   Science Area          Algorithms               Concurrency            Languages
        GTC         Fusion                PIC, finite difference   2048 (weak scaling)    f90
        IMPACT-T    Accelerator Physics   PIC, FFT                 1024 (strong scaling)  f90

  19. Public Housing Project Performance Benchmarks

    Broader source: Energy.gov [DOE]

    Reports five major performance metrics that can be used to benchmark proposed energy service company projects within public housing, disaggregated and reported by major retrofit strategy. Author: U.S. Department of Energy

  20. Federal Government Project Performance Benchmarks

    Broader source: Energy.gov [DOE]

    Reports five major performance metrics that can be used to benchmark proposed energy service company projects within the federal government, disaggregated and reported by major retrofit strategy. Author: U.S. Department of Energy

  1. Post Secondary Project Performance Benchmarks

    Broader source: Energy.gov [DOE]

    Reports five major performance metrics that can be used to benchmark proposed energy service company projects within post secondary education facilities, disaggregated and reported by major retrofit strategy. Author: U.S. Department of Energy

  2. MPI Benchmarks

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

The APEX RFP calls out several MPI-related requirements that can be categorized as two-sided or one-sided, with the respective measures of message rate, bandwidth, and latency for each. In addition, collective operations are called out. The general philosophy for MPI benchmarking is to use publicly available micro-benchmarks where appropriate and to develop new micro-benchmarks where there are gaps in the public benchmark suites. Unless a benchmark is explicitly called

  3. Benchmarks used

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Benchmarks were run, all at a concurrency of 1024 processes. They are all written in Fortran. NAS PARALLEL MPI BENCHMARKS - VERSION 3.3.1 Benchmark Full Name Description Level BT...

  4. Hospital Energy Benchmarking Guidance - Version 1.0

    SciTech Connect (OSTI)

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.
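A normalizing factor of the kind this framework calls for can be as simple as scaling energy use intensity by operating hours. The formula below is illustrative only, not taken from the guide; the inputs are hypothetical:

```python
def normalized_eui(annual_energy_kbtu: float, floor_area_sf: float,
                   op_hours_per_week: float, ref_hours_per_week: float = 168.0) -> float:
    """EUI normalized to continuous (168 h/week) operation.

    Hospital towers run 24/7, but support buildings often do not;
    scaling raw EUI by operating hours makes the two comparable.
    This is one simple illustrative normalization, not the guide's.
    """
    raw_eui = annual_energy_kbtu / floor_area_sf
    return raw_eui * ref_hours_per_week / op_hours_per_week

# Hypothetical support building occupied 84 h/week
print(normalized_eui(10_000_000, 40_000, op_hours_per_week=84))  # 500.0
```

Without such normalization, a part-time building would appear misleadingly efficient next to a 24/7 facility.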

  5. Verification and validation benchmarks.

    SciTech Connect (OSTI)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
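Code verification against a benchmark with a known exact solution (for example, a manufactured solution) typically reduces to checking the observed order of accuracy against the scheme's formal order. A minimal sketch, with hypothetical error values:

```python
import math

def observed_order(error_coarse: float, error_fine: float,
                   refinement_ratio: float = 2.0) -> float:
    """Observed order of accuracy from discretization errors on two grids.

    With exact-solution errors e_h and e_{h/r} on successively refined
    grids, p = ln(e_h / e_{h/r}) / ln(r). Agreement with the scheme's
    formal order is the usual code-verification pass criterion.
    """
    return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

# Hypothetical errors from halving the grid spacing of a 2nd-order scheme
print(round(observed_order(4.0e-3, 1.0e-3), 2))  # 2.0
```

An observed order well below the formal order is the signal that a coding or consistency error is present, which is exactly what such benchmarks exist to detect.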

  6. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect (OSTI)

Nelms, Benjamin E.; Chan, Maria F.; Jarry, Geneviève; Lemire, Matthieu; Lowden, John; Hampton, Carnell

    2013-11-15

Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS.
Most of the errors were
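The gamma-index comparison underlying these passing rates can be illustrated with a toy computation. The sketch below is a simplified 1D, global-normalization version (clinical QA systems work on 2D/3D dose grids and interpolate the reference distribution finely); the dose profiles are hypothetical:

```python
import math

def gamma_1d(ref_pos, ref_dose, meas_pos, meas_dose,
             dose_tol_pct=3.0, dta_mm=3.0):
    """Simplified 1D gamma index per measured point (global normalization).

    For each measured point, search all reference points for the minimum
    combined dose-difference / distance-to-agreement metric; a point
    passes when gamma <= 1. Discrete search only - no interpolation.
    """
    dmax = max(ref_dose)  # global normalization dose
    gammas = []
    for xm, dm in zip(meas_pos, meas_dose):
        g = min(
            math.sqrt(((xr - xm) / dta_mm) ** 2
                      + (100 * (dr - dm) / dmax / dose_tol_pct) ** 2)
            for xr, dr in zip(ref_pos, ref_dose)
        )
        gammas.append(g)
    return gammas

# Toy profiles sampled every 1 mm; measured dose perturbed slightly
ref_x,  ref_d  = [0.0, 1.0, 2.0, 3.0], [10.0, 50.0, 100.0, 50.0]
meas_x, meas_d = [0.0, 1.0, 2.0, 3.0], [10.0, 52.0,  99.0, 50.0]
print([round(g, 2) for g in gamma_1d(ref_x, ref_d, meas_x, meas_d)])  # [0.0, 0.67, 0.33, 0.0]
```

Note how tolerant 3%/3 mm global normalization is here: tightening to 2% local normalization, as the authors advocate, is what makes low-dose-region deviations visible.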

  7. STAR METRICS

    Broader source: Energy.gov [DOE]

    Energy continues to define Phase II of the STAR METRICS program, a collaborative initiative to track Research and Development expenditures and their outcomes. Visit the STAR METRICS website for...

  8. Resilience Metrics

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    for Quadrennial Energy Review Technical Workshop on Resilience Metrics for Energy Transmission and Distribution Infrastructure April 28, 2014 Infrastructure Assurance Center ...

  9. Metric Presentation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

... MODERN GRID STRATEGY Value Metrics - Work to date: Reliability (outage duration and frequency, momentary outages), Power Quality measures, Security (ratio of distributed ...

  10. Acquisition Letter 07 - Benchmark Compensation Amount for Individual...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

The purpose of ...

  11. State/Local Government Project Performance Benchmarks | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Reports five major performance metrics that can be used to benchmark proposed energy service company projects within state and local government facilities, disaggregated and reported by major retrofit strategy. Author: U.S. Department of Energy. State/Local Government Project Performance Benchmarks (1.96 MB)

  12. Policy Flash 2014-29 Acquisition Letter 2014-07 - Benchmark Compensati...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Policy Flash 2014-29, Acquisition Letter 2014-07 - Benchmark Compensation Amount for Individual Executive Salary Actions ...

  13. performance metrics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

performance metrics - Sandia Energy

  14. Metric Presentation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

MODERN GRID STRATEGY - Smart Grid Metrics: Monitoring our Progress. Smart Grid Implementation Workshop, Joe Miller, Modern Grid Team, June 19, 2008. Conducted by the National Energy Technology Laboratory; funded by the U.S. Department of Energy, Office of Electricity Delivery and Energy Reliability. Many are working on the Smart Grid: FERC, DOE-OE Grid 2030, GridWise Alliance, EEI, NERC (FM), DOE/NETL Modern Grid

  15. A framework for benchmarking land models

    SciTech Connect (OSTI)

    Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-09

Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models
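The scoring system mentioned in point (2) could, for example, map each normalized data-model mismatch onto a 0-1 skill score and combine scores with per-process weights. The sketch below is one plausible formulation, not the paper's specific system; all numbers are hypothetical:

```python
import math

def nrmse(model, obs):
    """Root-mean-square error normalized by the observed mean."""
    mean_obs = sum(obs) / len(obs)
    mse = sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)
    return math.sqrt(mse) / mean_obs

def benchmark_score(mismatches, weights):
    """Combine per-process mismatches into a single 0-1 skill score.

    Each normalized error maps to exp(-error), so a perfect match
    scores 1; scores are then averaged with process weights.
    """
    total_w = sum(weights)
    return sum(w * math.exp(-e) for e, w in zip(mismatches, weights)) / total_w

# Hypothetical mismatches for carbon flux, energy flux, and soil moisture
errors = [nrmse([2.1, 2.4], [2.0, 2.5]), 0.30, 0.50]
print(round(benchmark_score(errors, weights=[0.5, 0.3, 0.2]), 3))
```

Whatever the exact mapping, the value of a single weighted score is that models can be ranked consistently across processes and scales while the per-process mismatches remain inspectable.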

  16. A framework for benchmarking land models

    SciTech Connect (OSTI)

    Luo, Yiqi; Randerson, James T.; Hoffman, Forrest; Norby, Richard J

    2012-01-01

Land models, developed by the modeling community over the past few decades to predict future states of ecosystems and climate, must be critically evaluated for their skill in simulating ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure for measuring the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluating land model performance and highlights major challenges at this early stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon, and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks that effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should focus on developing a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models.
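The scoring idea in the abstract can be illustrated with a minimal sketch. This is not code from the paper or any published framework; the `nrmse` normalization, the exponential skill score, and the default threshold are all illustrative choices, showing only how a data-model mismatch might be combined with an a priori acceptability threshold:

```python
import math

def nrmse(model, obs):
    """Root-mean-square error normalized by the mean of the observations
    (the benchmark data set)."""
    n = len(obs)
    mse = sum((m - o) ** 2 for m, o in zip(model, obs)) / n
    return math.sqrt(mse) / abs(sum(obs) / n)

def score(model, obs, threshold=0.5):
    """Map the data-model mismatch onto a 0-1 skill score and test it
    against an a priori threshold of acceptable performance."""
    e = nrmse(model, obs)
    return math.exp(-e), e <= threshold

# Example: a simulated variable vs. a benchmark data set
skill, acceptable = score([2.1, 2.4, 2.0], [2.0, 2.5, 2.2])
```

Scores from several processes and scales could then be averaged or weighted into an overall model skill, which is the kind of scoring system the abstract describes.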

  17. Enclosure - FY 2015 Q4 Metrics Report 2015-11-02.xlsx

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Fourth Quarter Overall Root Cause Analysis (RCA)/Corrective Action Plan (CAP) Performance Metrics No. Contract/Project Management Performance Metrics FY 2015 Target Comment No. 2 3 ...

  18. Microsoft Word - 2014-5-27 RCA Qtr 2 Metrics Attachment_R1

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Second Quarter Overall Root Cause Analysis (RCA)/Corrective Action Plan (CAP) Performance Metrics 1 Contract/Project Management Performance Metric FY 2014 Target FY 2014 Projected ...

  19. Introduction to Benchmarking: Starting a Benchmarking Plan

    Broader source: Energy.gov [DOE]

    Presentation for the Introduction to Benchmarking: Starting a Benchmarking Plan webinar, presented on February 21, 2013 as part of the U.S. Department of Energy's Technical Assistance Program (TAP).

  20. The NERSC GTC Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    GTC The NERSC GTC Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2015-01-06 15:04:18...

  1. The NERSC MADBench Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    MADBench The NERSC MADBench Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2015-01-06 15:10:14...

  2. The NERSC GAMESS Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    GAMESS The NERSC GAMESS Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2015-01-06 14:48:10...

  3. The NERSC CAM Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CAM The NERSC CAM Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2015-01-06 14:32:44...

  4. The NERSC PMEMD Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    PMEMD The NERSC PMEMD Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2015-01-06 15:55:50...

  5. The NERSC PARATEC Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    PARATEC The NERSC PARATEC Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2015-01-06 15:16:06...

  6. The NERSC MILC Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    MILC The NERSC MILC Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2015-01-06 15:12:32...

  7. The NERSC CAM Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CAM The NERSC CAM Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2015-01-06 14:32:44

  8. The NERSC GAMESS Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    GAMESS The NERSC GAMESS Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2016-04-29 11:35:09

  9. The NERSC GTC Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    GTC The NERSC GTC Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2016-04-29 11:34:23

  10. The NERSC MADBench Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    MADBench The NERSC MADBench Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2016-04-29 11:35:18

  11. The NERSC MILC Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    MILC The NERSC MILC Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2015-01-06 15:12:32

  12. The NERSC PARATEC Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    PARATEC The NERSC PARATEC Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2016-04-29 11:35:25

  13. The NERSC PMEMD Benchmark

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    PMEMD The NERSC PMEMD Benchmark Complete Readme Overview Building and Optimization Running and Timing Performance Data Download Benchmark Last edited: 2016-04-29 11:35:01

  14. STEP Program Benchmark Report

    Broader source: Energy.gov [DOE]

    STEP Program Benchmark Report, from the Tool Kit Framework: Small Town University Energy Program (STEP).

  15. NERSC-8 Benchmarks

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

8 Benchmarks NERSC-8 Benchmarks The NERSC-8 micro- and application benchmarks were used in the acquisition process that resulted in the NERSC Cray XC40 ("Cori") system. All of the benchmarks used for this procurement may be found here. Last edited: 2016-04-29 11:34:31

  16. Benchmarking Data Cleansing: A Rite of Passage Along the Benchmarking...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Data Cleansing: A Rite of Passage Along the Benchmarking Journey Benchmarking Data Cleansing: A Rite of Passage Along the Benchmarking Journey Hosted by the Technical Assistance ...

  17. NERSC-8 / Trinity Benchmarks

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Benchmarks NERSC-8 / Trinity Benchmarks These benchmark programs are for use as part of the joint NERSC / ACES NERSC-8/Trinity system procurement. There are two basic kinds of benchmarks: MiniApplications: miniFE, miniGhost, AMG, UMT, GTC, MILC, SNAP, and miniDFT MicroBenchmarks: Pynamic, STREAM, OMB, SMB, ZiaTest, IOR, Metabench, PSNAP, FSTest, mpimemu, and UPC_FT The SSP is an aggregate measure based on selected runs of the MiniApplications. The benchmark run rules are available here (PDF,

  18. CBEI - Improving Benchmarking Data Quality

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Improving Benchmarking Data Quality 2015 Building Technologies Office Peer Review Scott ... Analysis of 2013 Philadelphia benchmarking data; Evaluation of Proficiency in Benchmarking ...

  19. NERSC-5 Benchmarks

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    5 Benchmarks NERSC-5 Benchmarks The NERSC-5 application benchmarks were used in the acquisition process that resulted in the NERSC Cray XT4 system ("Franklin"). CAM: CCSM Community Climate Model GAMESS: Computational Chemistry GTC: 3D Gyrokinetic Toroidal Code MADBench: Microwave Anisotropy Dataset Computational Analysis Benchmark MILC: MIMD Lattice Computation PARATEC: Parallel Total Energy Code PMEMD: Particle Mesh Ewald Molecular Dynamics Last edited: 2016-04-29 11:34:58

  20. Benchmarking for Cost Improvement. Final report

    SciTech Connect (OSTI)

    Not Available

    1993-09-01

The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  1. Acquisition Letter 07 - Benchmark Compensation Amount for Individual

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Executive Salary Actions | Department of Energy 7 - Benchmark Compensation Amount for Individual Executive Salary Actions Acquisition Letter 07 - Benchmark Compensation Amount for Individual Executive Salary Actions The purpose of Acquisition Letter (AL) 2014-07 is to establish the form, "Compensation Subject to the Executive CAP (OFPP Limitation)" as the minimum required documentation to support DOE/NNSA Contracting Officers (CO) and Contractor Human Resources Specialists'

  2. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

Collier, Nathaniel; Hoffman, Forrest M. [Climate Modeling.org; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  3. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Climate Modeling.org; Oak Ridge National Laboratory

    2016-05-02

As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  4. Benchmarking Database - JCAP

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Benchmarking Database Research Why Solar Fuels Goals & Objectives Thrust 1 Thrust 2 Thrust 3 Thrust 4 Publications Research Highlights Videos Innovations User Facilities Expert Team Benchmarking Database Device Simulation Tool XPS Spectral Database Research Introduction Why Solar Fuels? Goals & Objectives Thrusts Thrust 1 Thrust 2 Thrust 3 Thrust 4 Library Publications Research Highlights Videos Resources User Facilities Expert Team Benchmarking Database Device

  5. Benchmarks & Workflows

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Home » R & D » APEX » Benchmarks & Workflows Benchmarks & Workflows For the Crossroads/NERSC-9 procurement: NERSC conducted a workload analysis on the Hopper and Edison systems analyzing algorithmic diversity, MPI and OpenMP concurrency, memory utilization, and I/O and storage needs. Benchmarks will be used to measure the sustained performance of proposed systems. Example workflows are being provided to give prospective offerors a better understanding of the current I/O usage

  6. Benchmarking & Workload Characterization

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CAM: CCSM Community Climate Model GAMESS: Computational Chemistry GTC: 3D Gyrokinetic Toroidal Code MADBench: Microwave Anisotropy Dataset Computational Analysis Benchmark MILC: ...

  7. NERSC-5 Benchmarks

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CAM: CCSM Community Climate Model GAMESS: Computational Chemistry GTC: 3D Gyrokinetic Toroidal Code MADBench: Microwave Anisotropy Dataset Computational Analysis Benchmark MILC: ...

  8. Optional Residential Program Benchmarking

    Broader source: Energy.gov [DOE]

    Better Buildings Residential Network Data and Evaluation Peer Exchange Call Series: Optional Residential Program Benchmarking, Call Slides and Discussion Summary, January 23, 2014.

  9. Energy Performance Benchmarking and Disclosure Policies for Public and

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Commercial Buildings | Department of Energy Performance Benchmarking and Disclosure Policies for Public and Commercial Buildings Energy Performance Benchmarking and Disclosure Policies for Public and Commercial Buildings This presentation is part of the SEE Action Series and provides information on Energy Performance Benchmarking and Disclosure Policies for Public and Commercial Buildings Presentation (6.87 MB) Transcript (126.5 KB) More Documents & Publications

  10. ARM - 2008 Performance Metrics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Atmospheric System Research (ASR) Earth System Modeling Regional & Global Climate Modeling Terrestrial Ecosystem Science Performance Metrics User Meetings Past ARM Science Team ...

  11. ARM - 2006 Performance Metrics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Atmospheric System Research (ASR) Earth System Modeling Regional & Global Climate Modeling Terrestrial Ecosystem Science Performance Metrics User Meetings Past ARM Science Team ...

  12. ARM - 2007 Performance Metrics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Atmospheric System Research (ASR) Earth System Modeling Regional & Global Climate Modeling Terrestrial Ecosystem Science Performance Metrics User Meetings Past ARM Science Team ...

  13. Surveillance metrics sensitivity study.

    SciTech Connect (OSTI)

    Hamada, Michael S.; Bierbaum, Rene Lynn; Robertson, Alix A.

    2011-09-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
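The power-calculation flavor of such surveillance metrics can be illustrated with a simple attributes-sampling sketch. The function names are hypothetical and the model (independent random sampling, a single defect class) is far simpler than the report's actual tolerance-limit and power analyses; it shows only the level-of-confidence style of question involved:

```python
def detection_power(sample_size, defect_fraction):
    """Probability that testing `sample_size` randomly chosen units
    catches at least one defect, given a true defect fraction."""
    return 1.0 - (1.0 - defect_fraction) ** sample_size

def samples_needed(defect_fraction, confidence):
    """Smallest sample size whose detection power meets the target
    level of confidence."""
    n = 1
    while detection_power(n, defect_fraction) < confidence:
        n += 1
    return n

# e.g. how many units must be surveilled to detect a 10% defect
# rate with 95% confidence
n = samples_needed(0.10, 0.95)  # 29
```

Inverting the calculation this way (sample size as a function of the risk parameters) is exactly how such metrics gauge the adequacy of a surveillance program rather than product performance.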

  14. NIF Target Shot Metrics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

target shot metrics NIF Target Shot Metrics Exp Cap - Experimental Capability Natl Sec Appl - National Security Applications DS - Discovery Science ICF - Inertial Confinement Fusion HED - High Energy Density

  15. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    SciTech Connect (OSTI)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

This document provides an overview of the benchmark, HPGMG, for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background on the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  16. NERSC-8 / Trinity Benchmarks

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    computation. NPB UPC-FT This is the NAS Parallel Benchmark FFT program written in the UPC language. Pynamic Pynamic tests dynamic loading subsystem design and the ability to handle...

  17. Energy Benchmarking and Disclosure

    Broader source: Energy.gov [DOE]

    Energy benchmarking and disclosure is a market-based policy tool used to increase building energy performance awareness and transparency among key stakeholders and create demand for energy efficiency improvements.

  18. Metric Construction | Open Energy Information

    Open Energy Info (EERE)

    Metric Construction Jump to: navigation, search Name: Metric Construction Place: Boston, MA Information About Partnership with NREL Partnership with NREL Yes Partnership Type Test...

  19. Benchmarking: The foundation for performance improvement

    SciTech Connect (OSTI)

    Fogarty, J.; Miller, R.; Dong, C.

    1996-07-01

This paper focuses on the key role that Benchmarking plays in supporting and accelerating all forms of performance improvement. It outlines the continuing need for benchmarking as the industry becomes more competitive, as well as stressing the need for sharing practices both in and outside of the industry. The successful benchmarking program for performance improvement must start by identifying areas that present the most opportunity for improvement. In preparing for external benchmarking one must concentrate on picking partners that complement one's weaknesses, while also addressing confidentiality, data accuracy, validation, and normalization issues. Essential to success is that the project yield actionable results and help set concrete improvement targets. A description of how to use benchmarking findings to identify areas with maximum opportunity and solutions (i.e., identify areas with target gaps and practices to close the gaps) is given. Additionally, an example improvement plan and summary of results from its implementation at Los Angeles Department of Water and Power (LADWP) are provided.

  20. Cyber threat metrics.

    SciTech Connect (OSTI)

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  1. Monte Carlo Benchmark

    Energy Science and Technology Software Center (OSTI)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  2. Designing a Benchmarking Plan

    Office of Energy Efficiency and Renewable Energy (EERE)

    U.S. Department of Energy Office of Energy Efficiency and Renewable Energy (EERE) Weatherization and Intergovernmental Program (WIP) Solution Center document about how state and local governments, Indian tribes, and overseas U.S. territories can design a plan to benchmark the energy consumption in public buildings.

  3. Optional Residential Program Benchmarking | Department of Energy

    Energy Savers [EERE]

    Guide to Benchmarking Residential Program Progress Webcast Slides Lessons Learned: Measuring Program Outcomes and Using Benchmarks Guide for Benchmarking Residential Energy ...

  4. NERSC-6/7 Benchmarks

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    6/7 Benchmarks NERSC-6/7 Benchmarks The NERSC-6/7 application benchmarks were used in the acquisition process that resulted in the NERSC Cray XE6 ("Hopper") system and the follow on Cray XC30 system ("Edison") . A technical report describing the benchmark programs used in the NERSC-6 acquisition and the science drivers behind them is available here. Last edited: 2016-04-29 11:34:40

  5. Metrics for Energy Resilience

    SciTech Connect (OSTI)

    Paul E. Roege; Zachary A. Collier; James Mancillas; John A. McDonagh; Igor Linkov

    2014-09-01

Energy lies at the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However, there is an Achilles heel to today's energy and technology relationship; namely a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth.

  6. Sequoia Messaging Rate Benchmark

    Energy Science and Technology Software Center (OSTI)

    2008-01-22

The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
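The rank layout described in this record is simple arithmetic, sketched below with hypothetical helper names (the actual benchmark assigns ranks via MPI; this only reproduces the counting):

```python
def total_ranks(num_cores, num_nbors):
    """Ranks required: one per core on the node under test, plus
    num_nbors neighbor ranks for each core rank."""
    return num_cores + num_cores * num_nbors

def neighbors_of(core_rank, num_cores, num_nbors):
    """Neighbor ranks of a given core rank (0-indexed), laid out in
    consecutive blocks after the core ranks."""
    start = num_cores + core_rank * num_nbors
    return list(range(start, start + num_nbors))

# The worked example from the record: 8 cores x 4 neighbors
ranks = total_ranks(8, 4)  # 8 + 8 * 4 = 40
```

Here core rank 0 talks to ranks 8-11, core rank 1 to ranks 12-15, and so on, matching the "next set of num_nbors ranks" layout the record describes.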

  7. Operating and Maintaining Energy Smart Schools Action Plan Template - All Action Plans

    SciTech Connect (OSTI)

    none,

    2009-07-01

    EnergySmart Schools action plan templates for benchmarking, lighting, HVAC, water heating, building envelope, transformer, plug loads, kitchen equipment, swimming pool, building automation system, other.

  8. MPI Multicore Linktest Benchmark

    Energy Science and Technology Software Center (OSTI)

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.

  9. Commercial and Multifamily Building Benchmarking and Disclosure...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    and Multifamily Building Benchmarking and Disclosure Commercial and Multifamily Building Benchmarking and Disclosure Better Buildings Residential Network Peer Exchange Call: ...

  10. House Simulation Protocols (Building America Benchmark) - Building...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    House Simulation Protocols (Building America Benchmark) - Building America Top Innovation House Simulation Protocols (Building America Benchmark) - Building America Top Innovation ...

  11. Algebraic Multigrid Benchmark

    Energy Science and Technology Software Center (OSTI)

    2013-05-06

AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  12. Ames Laboratory Metrics | The Ames Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Metrics Document Number: NA Effective Date: 01/2016 File (public): PDF icon ameslab_metrics_01-14-16

  13. Decommissioning Benchmarking Study Final Report

    Office of Energy Efficiency and Renewable Energy (EERE)

    DOE's former Office of Environmental Restoration (EM-40) conducted a benchmarking study of its decommissioning program to analyze physical activities in facility decommissioning and to determine...

  14. Benchmarking foreign electronics technologies

    SciTech Connect (OSTI)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  15. FireHose Streaming Benchmarks

    Energy Science and Technology Software Center (OSTI)

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.

  16. FireHose Streaming Benchmarks

    SciTech Connect (OSTI)

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
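    The two-part generator/analytic structure described above can be sketched in miniature. This is an illustrative sketch only: the datum format, anomaly encoding, and rates below are stand-ins, not the actual FireHose generator formats or benchmark definitions.

```python
import random

def generator(n, anomaly_rate=0.01, seed=0):
    """Emit a reproducible stream of (key, value) datums; a small
    fraction are planted anomalies (value 999)."""
    rng = random.Random(seed)
    for key in range(n):
        if rng.random() < anomaly_rate:
            yield (key, 999)                 # planted anomalous datum
        else:
            yield (key, rng.randint(0, 99))  # ordinary datum

def analytic(stream, threshold=100):
    """Read the stream once and flag the keys of anomalous datums."""
    return [key for key, value in stream if value >= threshold]

anomalies = analytic(generator(10_000))
```

    Because the generator is seeded, an analytic re-implemented in any framework can be checked against the same planted anomalies.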

  17. Metric redefinitions in Einstein-Aether theory

    SciTech Connect (OSTI)

    Foster, Brendan Z.

    2005-08-15

    'Einstein-Aether' theory, in which gravity couples to a dynamical, timelike, unit-norm vector field, provides a means for studying Lorentz violation in a generally covariant setting. Demonstrated here is the effect of a redefinition of the metric and 'aether' fields in terms of the original fields and two free parameters. The net effect is a change of the coupling constants appearing in the action. Using such a redefinition, one of the coupling constants can be set to zero, simplifying studies of solutions of the theory.

  18. Measuring the Impact of Benchmarking & Transparency - Methodologies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Measuring the Impact of Benchmarking & Transparency - Methodologies and the NYC Example ...

  19. Monitoring and Benchmarking for Energy Information Systems |...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Monitoring and Benchmarking for Energy Information Systems. Figure 1: Features of a sample CBERD energy information system ...

  20. Guide for Benchmarking Residential Energy Efficiency Program...

    Energy Savers [EERE]

    Guide for Benchmarking Residential Energy Efficiency Program Progress as ...

  1. California commercial building energy benchmarking

    SciTech Connect (OSTI)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and
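    At its core, whole-building benchmarking of this kind ranks a building's energy use intensity (EUI) against a peer distribution. A minimal sketch, using a small hypothetical peer group rather than the Cal-Arch or CBECS datasets:

```python
def benchmark_percentile(eui, peer_euis):
    """Fraction of peer buildings whose EUI (e.g., kBtu/sqft-yr) is at or
    below the given building's; a high value suggests a retrofit candidate."""
    at_or_below = sum(1 for p in peer_euis if p <= eui)
    return at_or_below / len(peer_euis)

# Hypothetical peer group of commercial buildings (kBtu/sqft-yr):
peers = [45, 52, 60, 61, 70, 75, 80, 95, 110, 130]
print(benchmark_percentile(78, peers))  # → 0.6
```

    Real benchmarking tools refine the peer group by building type, size, and climate zone before computing such a ranking.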

  2. Variable metric conjugate gradient methods

    SciTech Connect (OSTI)

    Barth, T.; Manteuffel, T.

    1994-07-01

    1.1 Motivation. In this paper we present a framework that includes many well known iterative methods for the solution of nonsymmetric linear systems of equations, Ax = b. Section 2 begins with a brief review of the conjugate gradient method. Next, we describe a broader class of methods, known as projection methods, to which the conjugate gradient (CG) method and most conjugate gradient-like methods belong. The concept of a method having either a fixed or a variable metric is introduced. Methods that have a metric are referred to as either fixed or variable metric methods. Some relationships between projection methods and fixed (variable) metric methods are discussed. The main emphasis of the remainder of this paper is on variable metric methods. In Section 3 we show how the biconjugate gradient (BCG), and the quasi-minimal residual (QMR) methods fit into this framework as variable metric methods. By modifying the underlying Lanczos biorthogonalization process used in the implementation of BCG and QMR, we obtain other variable metric methods. These, we refer to as generalizations of BCG and QMR.
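    As a concrete reminder of the baseline the paper builds on, here is a minimal pure-Python sketch of the plain conjugate gradient method for a symmetric positive definite system (the fixed-metric case; the BCG/QMR variable-metric variants discussed in Section 3 are not shown):

```python
def matvec(A, x):
    """Dense matrix-vector product, lists of lists."""
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Plain CG for symmetric positive definite A, solving Ax = b."""
    x = [0.0] * len(b)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]  # residual b - Ax
    p = list(r)                                          # search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

    For this 2x2 system CG converges in two iterations, as expected for a Krylov method on an n-dimensional SPD problem.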

  3. Daylight metrics and energy savings

    SciTech Connect (OSTI)

    Mardaljevic, John; Heschong, Lisa; Lee, Eleanor

    2009-12-31

    The drive towards sustainable, low-energy buildings has increased the need for simple, yet accurate methods to evaluate whether a daylit building meets minimum standards for energy and human comfort performance. Current metrics do not account for the temporal and spatial aspects of daylight, nor of occupants comfort or interventions. This paper reviews the historical basis of current compliance methods for achieving daylit buildings, proposes a technical basis for development of better metrics, and provides two case study examples to stimulate dialogue on how metrics can be applied in a practical, real-world context.

  4. Federal Government Project Performance Benchmarks

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Federal Government Project Performance Benchmarks (All ASHRAE Zones) We define an ESCO as ... ESCOs in a similar climate zone (based on ASHRAE climate zones) or market segment (e.g., ...

  5. Data-Intensive Benchmarking Suite

    Energy Science and Technology Software Center (OSTI)

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.

  6. List of SEP Reporting Metrics

    Broader source: Energy.gov [DOE]

    DOE State Energy Program List of Reporting Metrics, which was produced by the Office of Energy Efficiency and Renewable Energy Weatherization and Intergovernmental Program for SEP and the Energy Efficiency and Conservation Block Grants (EECBG) programs.

  7. Common Carbon Metric | Open Energy Information

    Open Energy Info (EERE)

    Common Carbon Metric. Agency/Company/Organization: United Nations Environment Programme, World...

  8. Benchmarking Help Center Guide | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This guide provides recommendations for establishing a benchmarking help center based on experiences and lessons learned in New York City and Seattle.

  9. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect (OSTI)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  10. Performance Metrics Tiers | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    The performance metrics defined by the Commercial Buildings Integration Program offer different tiers of information to address the needs of various users. On this page you will find information about the various goals users are trying to achieve by using performance metrics and the tiers of metrics. Goals in Measuring Performance: Many individuals and groups are involved with a building over its lifetime, and all have different interests in and

  11. Thermodynamic Metrics and Optimal Paths

    SciTech Connect (OSTI)

    Sivak, David; Crooks, Gavin

    2012-05-08

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.

  12. Real-Time Benchmark Suite

    Energy Science and Technology Software Center (OSTI)

    1992-01-17

    This software provides a portable benchmark suite for real time kernels. It tests the performance of many of the system calls, as well as the interrupt response time and task response time to interrupts. These numbers provide a baseline for comparing various real-time kernels and hardware platforms.

  13. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect (OSTI)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  14. PyMPI Dynamic Benchmark

    Energy Science and Technology Software Center (OSTI)

    2007-02-16

    Pynamic is a benchmark designed to test a system's ability to handle the dynamic linking and loading (DLL) requirements of Python-based scientific applications. The benchmark was developed to add a workload to our testing environment, a workload that represents a newly emerging class of DLL behaviors. Pynamic builds on pyMPI, an MPI extension to Python, adding C-extension dummy codes and a glue layer that facilitates linking and loading of the generated dynamic modules into the resulting pyMPI. Pynamic is configurable, enabling modeling of the static properties of a specific code as described in section 5. It does not, however, model any significant computations of the target and hence is not subject to the same level of control as the target code. In fact, HPC computer vendors and tool developers will be encouraged to add it to their testing suites once the code release is completed. An ability to produce and run this benchmark is an effective test for validating the capability of a compiler and linker/loader as well as the OS kernel and other runtime systems of HPC computer vendors. In addition, the benchmark is designed as a test case for stressing code development tools. Though Python has recently gained popularity in the HPC community, its heavy DLL operations have hindered certain HPC code development tools, notably parallel debuggers, from performing optimally.

  15. Processor Emulator with Benchmark Applications

    Energy Science and Technology Software Center (OSTI)

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  16. Measuring Program Outcomes and Using Benchmarks Webinar

    Broader source: Energy.gov [DOE]

    Measuring Program Outcomes and Using Benchmarks, a webinar from the U.S. Department of Energy's Better Buildings program.

  17. CLEERS Coordination & Joint Development of Benchmark Kinetics...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    More Documents & Publications CLEERS Coordination & Joint Development of Benchmark Kinetics for LNT & SCR CLEERS Coordination & Development of Catalyst Process Kinetic...

  18. Building Energy Use Benchmarking | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Benchmarking is the practice of comparing the measured performance of a device, process, facility, or organization to itself, its peers, or established norms, with the goal of informing and motivating performance improvement. When applied to building energy use, benchmarking serves as a mechanism to measure energy performance of a single building over time, relative to other similar buildings, or to modeled simulations of a

  19. Commercial and Multifamily Building Benchmarking and Disclosure

    Broader source: Energy.gov [DOE]

    Better Buildings Residential Network Peer Exchange Call: Commercial and Multifamily Building Benchmarking and Disclosure, Call Slides, July 25, 2013.

  20. Analytic Methods for Benchmarking Hydrogen and Fuel Cell Technologies (Presentation), NREL (National Renewable Energy Laboratory)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NREL/PR-5400-64420 NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, operated by the Alliance for Sustainable Energy, LLC. Analytic Methods for Benchmarking Hydrogen and Fuel Cell Technologies, 227th ECS Meeting, Chicago, Illinois. Marc Melaina, Genevieve Saur, Todd Ramsden, Joshua Eichman. May 28, 2015. Presentation Overview: Four Metrics. Analysis projects focus on low-carbon and economic transportation and stationary fuel cell

  1. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect (OSTI)

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  2. Benchmarking of Competitive Technologies | Department of Energy

    Broader source: Energy.gov (indexed) [DOE]

    1 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation

  3. Multi-Metric Sustainability Analysis

    SciTech Connect (OSTI)

    Cowlin, S.; Heimiller, D.; Macknick, J.; Mann, M.; Pless, J.; Munoz, D.

    2014-12-01

    A readily accessible framework that allows for evaluating impacts and comparing tradeoffs among factors in energy policy, expansion planning, and investment decision making is lacking. Recognizing this, the Joint Institute for Strategic Energy Analysis (JISEA) funded an exploration of multi-metric sustainability analysis (MMSA) to provide energy decision makers with a means to make more comprehensive comparisons of energy technologies. The resulting MMSA tool lets decision makers simultaneously compare technologies and potential deployment locations.

  4. Comparing Resource Adequacy Metrics: Preprint

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Comparing Resource Adequacy Metrics Preprint E. Ibanez and M. Milligan National Renewable Energy Laboratory To be presented at the 13th International Workshop on Large-Scale Integration of Wind Power into Power Systems as Well as on Transmission Networks for Offshore Wind Power Plants Berlin, Germany November 11-13, 2014 Conference Paper NREL/CP-5D00-62847 September 2014 NOTICE The submitted manuscript has been offered by an employee of the Alliance for Sustainable Energy, LLC (Alliance), a

  5. Geothermal Heat Pump Benchmarking Report

    SciTech Connect (OSTI)

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) Top management marketing commitment; (2) An understanding of the fundamentals of marketing and business development; and (3) An aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  6. MPI Multicore Torus Communication Benchmark

    Energy Science and Technology Software Center (OSTI)

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes: using a static or a random mapping of tasks to torus locations. The former can be used to study optimal mappings, and the latter the aggregate bandwidths that can be achieved with varying node mappings.

  7. Post Secondary Project Performance Benchmarks

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Post Secondary Project Performance Benchmarks (All ASHRAE Zones) continued > We define an ESCO as a company that provides energy efficiency-related and other value-added services and that employs performance contracting as a core part of its energy efficiency services business. 1 For projects with electricity savings, we assume site energy conversion (1 kWh = 3,412 Btu). We did not estimate avoided Btus from gallons of water conserved. In general, we followed the analytical approach

  8. Public Housing Project Performance Benchmarks

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Public Housing Project Performance Benchmarks (All ASHRAE Zones) We define an ESCO as a company that provides energy efficiency-related and other value-added services and that employs performance contracting as a core part of its energy efficiency services business. 1 For projects with electricity savings, we assume site energy conversion (1 kWh = 3,412 Btu). We did not estimate avoided Btus from gallons of water conserved. In general, we followed the analytical approach documented in Hopper et
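    The site energy conversion cited above (1 kWh = 3,412 Btu) is applied as a simple multiplication; the MMBtu reporting unit in this helper is an illustrative assumption, not from the benchmark document.

```python
KWH_TO_BTU = 3412  # site energy conversion used in the benchmarks above

def kwh_to_mmbtu(kwh):
    """Convert electricity savings in kWh to million Btu (MMBtu)."""
    return kwh * KWH_TO_BTU / 1_000_000

print(kwh_to_mmbtu(50_000))  # → 170.6
```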

  9. EECBG SEP Attachment 1 - Process metric list

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    10-07B/SEP 10-006A Attachment 1: Process Metrics List. Columns: Metric Area; Metric; Primary or Optional Metric; Item(s) to Report On. Metric Area 1, Building Retrofits: 1a. Buildings retrofitted, by sector (number of buildings retrofitted; square footage of buildings retrofitted); 1b. Energy management systems installed, by sector (number of energy management systems installed; square footage of buildings under management); 1c. Building roofs retrofitted, by sector (number of building roofs retrofitted; square footage of building

  10. Definition of GPRA08 benefits metrics

    SciTech Connect (OSTI)

    None, None

    2009-01-18

    Background information for the FY 2007 GPRA methodology review on the definitions of GPRA08 benefits metrics.

  11. Module 6- Metrics, Performance Measurements and Forecasting

    Broader source: Energy.gov [DOE]

    This module reviews metrics such as cost and schedule variance along with cost and schedule performance indices.

  12. Comparing Resource Adequacy Metrics: Preprint

    SciTech Connect (OSTI)

    Ibanez, E.; Milligan, M.

    2014-09-01

    As the penetration of variable generation (wind and solar) increases around the world, there is an accompanying growing interest and importance in accurately assessing the contribution that these resources can make toward planning reserve. This contribution, also known as the capacity credit or capacity value of the resource, is best quantified by using a probabilistic measure of overall resource adequacy. In recognizing the variable nature of these renewable resources, there has been interest in exploring the use of reliability metrics other than loss of load expectation. In this paper, we undertake some comparisons using data from the Western Electricity Coordinating Council in the western United States.
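    Loss of load expectation (LOLE), the baseline reliability metric mentioned above, sums over each hour the probability that available capacity falls short of load. The two-unit system below is a hypothetical illustration, not WECC data:

```python
def lole(hourly_load, capacity_outage_table):
    """LOLE in hours per study period.
    capacity_outage_table: (available_MW, probability) pairs summing to 1."""
    return sum(
        prob
        for load in hourly_load
        for cap, prob in capacity_outage_table
        if cap < load  # capacity state that cannot serve this hour's load
    )

# Two identical 100-MW units, each with a 10% forced outage rate:
table = [(200, 0.81), (100, 0.18), (0, 0.01)]
loads = [150, 90, 180, 120]  # hypothetical hourly loads (MW)
print(lole(loads, table))    # ≈ 0.58 hours
```

    The capacity credit of an added resource can then be measured as the extra load the system can carry at the same LOLE.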

  13. Spent Fuel Criticality Benchmark Experiments

    SciTech Connect (OSTI)

    J.M. Scaglione

    2001-07-23

    Characteristics between commercial spent fuel waste packages (WP), Laboratory Critical Experiments (LCEs), and commercial reactor critical (CRC) evaluations are compared in this work. Emphasis is placed upon comparisons of CRC benchmark results and the relative neutron flux spectra in each system. Benchmark evaluations were performed for four different pressurized water reactors using four different sets of isotopes. As expected, as the number of fission products used to represent the burned fuel inventory approached reality, the closer to unity k_eff became. Examination of material and geometry characteristics indicate several fundamental similarities between the WP and CRC systems. In addition, spectral evaluations were performed on a representative pressurized water reactor CRC, a 21-assembly area of the core modeled in a potential WP configuration, and three LCEs considered applicable benchmarks for storage packages. Fission and absorption reaction spectra as well as relative neutron flux spectra are generated and compared for each system. The energy dependent reaction rates are the product of the neutron flux spectrum and the energy dependent total macroscopic cross section. With constant source distribution functions, and the total macroscopic cross sections for the fuel region in the CRCs and WP being composed of nearly the same isotopics, the resulting relative flux spectra in the CRCs and WP are very nearly the same. Differences in the relative neutron flux spectra between WPs and CRCs are evident in the thermal energy range as expected. However, the relative energy distribution of the absorption, fission, and scattering reaction rates in both the CRCs and the WP are essentially the same.

  14. Thermal Performance Benchmarking; NREL (National Renewable Energy Laboratory)

    SciTech Connect (OSTI)

    Moreno, Gilbert

    2015-06-09

    This project proposes to seek out the SOA power electronics and motor technologies to thermally benchmark their performance. The benchmarking will focus on the thermal aspects of the system. System metrics including the junction-to-coolant thermal resistance and the parasitic power consumption (i.e., coolant flow rates and pressure drop performance) of the heat exchanger will be measured. The type of heat exchanger (i.e., channel flow, brazed, folded-fin) and any enhancement features (i.e., enhanced surfaces) will be identified and evaluated to understand their effect on performance. Additionally, the thermal resistance/conductivity of the power module’s passive stack and motor’s laminations and copper winding bundles will also be measured. The research conducted will allow insight into the various cooling strategies to understand which heat exchangers are most effective in terms of thermal performance and efficiency. Modeling analysis and fluid-flow visualization may also be carried out to better understand the heat transfer and fluid dynamics of the systems.

  15. Vehicle Technologies Office: Benchmarking | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Research funded by the Vehicle Technologies Office produces a great deal of valuable data, but it is important to compare those research results with both baseline data and similar work done elsewhere in the world. Through laboratory testing to develop points of reference (known as benchmarking), researchers can compare vehicles and components to validate models, support

  16. Pynamic: the Python Dynamic Benchmark

    SciTech Connect (OSTI)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  17. Preliminary Benchmarking Efforts and MCNP Simulation Results...

    Office of Scientific and Technical Information (OSTI)

    Specifically, a recent measurement made in support of national security at the Nevada Test ... BENCHMARKS; NATIONAL SECURITY; NEVADA TEST SITE; RADIATION PROTECTION; RADIATIONS; ...

  18. Verification and validation benchmarks. (Technical Report) |...

    Office of Scientific and Technical Information (OSTI)

    Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and ...

  19. Method and system for benchmarking computers

    DOE Patents [OSTI]

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
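    The scalable-benchmark idea described above (a fixed benchmarking interval, a set of tasks at ever-increasing resolution, and a rating based on progress) can be sketched in a few lines. This is a hypothetical illustration of the concept, not the patented implementation; the summing workload is an invented stand-in.

    ```python
    import time

    def benchmark_rating(task, budget_seconds):
        """Run a scalable task at ever-finer resolution levels within a fixed
        benchmarking interval; the rating is the degree of progress through
        the task set when time expires."""
        deadline = time.perf_counter() + budget_seconds
        level = 0
        while time.perf_counter() < deadline:
            task(level)   # perform the task at this degree of resolution
            level += 1
        return level      # degree of progress -> benchmark rating

    # Hypothetical workload: each level sums ten times as many integers.
    rating = benchmark_rating(lambda n: sum(range(10 ** n)), budget_seconds=0.1)
    ```

    Because the time budget is fixed, a faster machine simply completes more levels; the rating scales with machine capability instead of tying the benchmark to one fixed problem size.
    
    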

  20. Micro Kernel Benchmark for Evaluating Computer Performance

    Energy Science and Technology Software Center (OSTI)

    2007-04-06

    Crystal_mk is a micro benchmark that LLNL will use to evaluate vendors' software (e.g., compilers) and hardware (e.g., processor speed, memory design).

  1. Comparing Apples to Apples: Benchmarking Electrocatalysts for...

    Office of Science (SC) Website

    Comparing Apples to Apples: Benchmarking Electrocatalysts for Solar Water-Splitting Devices Basic Energy Sciences (BES) BES Home About Research Facilities Science Highlights ...

  2. CLEERS Coordination & Joint Development of Benchmark Kinetics...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Catalyst Process Kinetic Data CLEERS Coordination & Joint Development of Benchmark Kinetics for LNT & SCR Functionality of Commercial NOx Storage-Reduction Catalysts...

  3. Benchmark the Fuel Cost of Steam Generation

    Broader source: Energy.gov [DOE]

    This tip sheet on benchmarking the fuel cost of steam provides how-to advice for improving industrial steam systems using low-cost, proven practices and technologies.

  4. Efficient Synchronization Stability Metrics for Fault Clearing...

    Office of Scientific and Technical Information (OSTI)

    Title: Efficient Synchronization Stability Metrics for Fault Clearing Authors: Backhaus, Scott N. 1 ; Chertkov, Michael 1 ; Bent, Russell Whitford 1 ; Bienstock, Daniel 2...

  5. Module 6 - Metrics, Performance Measurements and Forecasting...

    Broader source: Energy.gov (indexed) [DOE]

    This module reviews metrics such as cost and schedule variance along with cost and schedule performance indices. In addition, this module will outline forecasting tools such as ...

  6. Western Resource Adequacy: Challenges - Approaches - Metrics...

    Energy Savers [EERE]

    Eastern Wind Integration and Transmission Study (EWITS) (Revised) Conceptual Framework for Developing Resilience Metrics for the Electricity, Oil, and Gas Sectors in the United ...

  7. Microsoft Word - QER Resilience Metrics - Technical Workshp ...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Workshop Resilience Metrics for Energy Transmission and Distribution Infrastructure Offices of Electricity Delivery and Energy Reliability (OE) and Energy Policy and Systems ...

  8. Microsoft Word - QER Resilience Metrics - Technical Workshp ...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Quadrennial Energy Review Technical Workshop on Resilience Metrics for Energy Transmission and Distribution Infrastructure April, 29th, 2014 777 North Capitol St NE Ste 300, ...

  9. Guide for Benchmarking Residential Program Progress with Examples |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Program Progress with Examples Guide for Benchmarking Residential Program Progress with Examples Better Buildings Residential Network: Guide for Benchmarking Residential Program Progress with Examples. Guide for Benchmarking Residential Program Progress with Examples (544.53 KB) More Documents & Publications Guide for Benchmarking Residential Energy Efficiency Program Progress Guide to Benchmarking Residential Program Progress Webcast Slides Optional Residential

  10. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    SciTech Connect (OSTI)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  11. Sheet1 Water Availability Metric (Acre-Feet/Yr) Water Cost Metric...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sheet1 Water Availability Metric (Acre-Feet/Yr) Water Cost Metric (Acre-Foot) Current Water Use (Acre-Feet/Yr) Projected Use in 2030 (Acre-Feet/Yr) HUC8 STATE BASIN SUBBASIN ...

  12. Benchmark the Fuel Cost of Steam Generation | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Benchmark the Fuel Cost of Steam Generation Benchmark the Fuel Cost of Steam Generation This tip sheet on benchmarking the fuel cost of steam provides how-to advice for improving...

  13. Smart Grid Status and Metrics Report Appendices

    SciTech Connect (OSTI)

    Balducci, Patrick J.; Antonopoulos, Chrissi A.; Clements, Samuel L.; Gorrissen, Willy J.; Kirkham, Harold; Ruiz, Kathleen A.; Smith, David L.; Weimar, Mark R.; Gardner, Chris; Varney, Jeff

    2014-07-01

    A smart grid uses digital power control and communication technology to improve the reliability, security, flexibility, and efficiency of the electric system, from large generation through the delivery systems to electricity consumers and a growing number of distributed generation and storage resources. To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. The Smart Grid Status and Metrics Report defines and examines 21 metrics that collectively provide insight into the grid’s capacity to embody these characteristics. This appendix presents papers covering each of the 21 metrics identified in Section 2.1 of the Smart Grid Status and Metrics Report. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone.

  14. Metrics for border management systems.

    SciTech Connect (OSTI)

    Duggan, Ruth Ann

    2009-07-01

    There are as many unique and disparate manifestations of border systems as there are borders to protect. Border Security is a highly complex system analysis problem with global, regional, national, sector, and border element dimensions for land, water, and air domains. The complexity increases with the multiple, and sometimes conflicting, missions for regulating the flow of people and goods across borders, while securing them for national security. These systems include frontier border surveillance, immigration management and customs functions that must operate in a variety of weather, terrain, operational conditions, cultural constraints, and geopolitical contexts. As part of a Laboratory Directed Research and Development Project 08-684 (Year 1), the team developed a reference framework to decompose this complex system into international/regional, national, and border elements levels covering customs, immigration, and border policing functions. This generalized architecture is relevant to both domestic and international borders. As part of year two of this project (09-1204), the team determined relevant relative measures to better understand border management performance. This paper describes those relative metrics and how they can be used to improve border management systems.

  15. Guide for Benchmarking Residential Energy Efficiency Program Progress |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Energy Efficiency Program Progress Guide for Benchmarking Residential Energy Efficiency Program Progress Guide for Benchmarking Residential Energy Efficiency Program Progress as part of the DOE Better Buildings Program. Guide for Benchmarking Residential Energy Efficiency Program Progress (1.27 MB) More Documents & Publications Guide for Benchmarking Residential Program Progress with Examples Optional Residential Program Benchmarking Guide to Benchmarking Residential

  16. Energy Performance Benchmarking and Disclosure Policies for Public...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Performance Benchmarking and Disclosure Policies for Public and Commercial Buildings Energy Performance Benchmarking and Disclosure Policies for Public and Commercial Buildings ...

  17. Benchmarking and Disclosure: State and Local Policy Design Guide...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Benchmarking and Disclosure: State and Local Policy Design Guide and Sample Policy Language State and local policy design guide. Benchmarking and Disclosure: State and Local Policy ...

  18. Energy Benchmarking, Rating, and Disclosure for Local Governments...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Local Governments Energy Benchmarking, Rating, and Disclosure for Local Governments Existing Commercial Buildings Working Group fact sheet about energy benchmarking. Energy ...

  19. Federal Building Energy Use Benchmarking Guidance, August 2014...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Use Benchmarking Guidance, August 2014 Update Federal Building Energy Use Benchmarking Guidance, August 2014 Update Guidance describes the Energy Independence and Security ...

  20. Benchmarking and Energy Saving Tool | Open Energy Information

    Open Energy Info (EERE)

    User Interface: Spreadsheet Website: industrial-energy.lbl.gov/node/100 Cost: Free Language: English References: Benchmarking and Energy Saving Tool 1 Logo: Benchmarking and...

  1. Energy Benchmarking, Rating, and Disclosure for State Governments

    SciTech Connect (OSTI)

    Existing Commercial Buildings Working Group

    2012-05-23

    Provides information on how energy use data access can help state governments lead by example through benchmarking and disclosing results and implement benchmarking policies for the private sector.

  2. POLICY FLASH 2014-15 Determination of Benchmark Compensation...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    5 Determination of Benchmark Compensation Amount for Certain Executives and Employees POLICY FLASH 2014-15 Determination of Benchmark Compensation Amount for Certain Executives and...

  3. POLICY FLASH 2014-15 Determination of Benchmark Compensation...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    5 Determination of Benchmark Compensation Amount for Certain Executives and Employees (Update) POLICY FLASH 2014-15 Determination of Benchmark Compensation Amount for Certain...

  4. Transcript of March 28, 2013, TAP webinar titled Internal Benchmarking...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    TAP webinar titled Internal Benchmarking Outreach and Data Collection Techniques Transcript of March 28, 2013, TAP webinar titled Internal Benchmarking Outreach and Data ...

  5. Preliminary Benchmarking and MCNP Simulation Results for Homeland...

    Office of Scientific and Technical Information (OSTI)

    Preliminary Benchmarking and MCNP Simulation Results for Homeland Security Citation Details In-Document Search Title: Preliminary Benchmarking and MCNP Simulation Results for ...

  6. Guide for Benchmarking Residential Program Progress with Examples...

    Office of Environmental Management (EM)

    Guide for Benchmarking Residential Program Progress with Examples (544.53 KB) More Documents & Publications Guide for Benchmarking Residential Energy Efficiency Program Progress ...

  7. DOE Resources Help Measure Building Energy Benchmarking Policy...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Resources Help Measure Building Energy Benchmarking Policy & Program Effectiveness DOE Resources Help Measure Building Energy Benchmarking Policy & Program Effectiveness May 21,...

  8. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect (OSTI)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE, which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  9. Metrics for comparison of crystallographic maps

    SciTech Connect (OSTI)

    Urzhumtsev, Alexandre; Afonine, Pavel V.; Lunin, Vladimir Y.; Terwilliger, Thomas C.; Adams, Paul D.

    2014-10-01

    Numerical comparison of crystallographic contour maps is used extensively in structure solution and model refinement, analysis and validation. However, traditional metrics such as the map correlation coefficient (map CC, real-space CC or RSCC) sometimes contradict the results of visual assessment of the corresponding maps. This article explains such apparent contradictions and suggests new metrics and tools to compare crystallographic contour maps. The key to the new methods is rank scaling of the Fourier syntheses. The new metrics are complementary to the usual map CC and can be more helpful in map comparison, in particular when only some of their aspects, such as regions of high density, are of interest.
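    The rank-scaling idea in the abstract above can be illustrated with a short sketch. This is an assumption-based reading of the method (replace each grid value by its rank before correlating, so any monotone rescaling of a map leaves the metric unchanged), not the authors' code.

    ```python
    import numpy as np

    def rank_scale(grid):
        """Replace each map value by its rank, making the comparison
        insensitive to the absolute scale of the two syntheses."""
        flat = grid.ravel()
        ranks = np.empty(flat.size, dtype=float)
        ranks[np.argsort(flat)] = np.arange(flat.size)
        return ranks.reshape(grid.shape)

    def rank_cc(map_a, map_b):
        """Correlation coefficient of rank-scaled maps
        (a Spearman-style variant of the usual map CC)."""
        a = rank_scale(map_a).ravel()
        b = rank_scale(map_b).ravel()
        return float(np.corrcoef(a, b)[0, 1])
    ```

    With this construction, a map and any monotone transform of it (for example `2 * m + 3`) have identical ranks and therefore a rank CC of exactly 1, which is the property that makes the metric robust to scale differences between syntheses.
    
    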

  10. Internal Benchmarking Outreach and Data Collection Techniques

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy (DOE) Technical Assistance Program (TAP) presentation at a TAP webinar held on April 11, 2013 and dealing with internal benchmarking outreach and data collection techniques.

  11. Advanced Technology Vehicle Lab Benchmarking - Level 1

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Vehicle Lab Benchmarking - Level 1 2014 U.S. DOE Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting Kevin Stutenberg - Principal Investigator Argonne National Laboratory June 17, 2014 Project ID # VSS030 This presentation does not contain any proprietary, confidential, or otherwise restricted information. Overview: Timeline - Benchmarking at ANL started in 1998 - FY13 & FY14 Completed Testing: * 10 vehicles tested in FY13, 4 in FY14 * Thermal impact study *

  12. Clean Cities Annual Metrics Report 2009 (Revised)

    SciTech Connect (OSTI)

    Johnson, C.

    2011-08-01

    Document provides Clean Cities coalition metrics about the use of alternative fuels; the deployment of alternative fuel vehicles, hybrid electric vehicles (HEVs), and idle reduction initiatives; fuel economy activities; and programs to reduce vehicle miles driven.

  13. Technical Workshop: Resilience Metrics for Energy Transmission...

    Broader source: Energy.gov (indexed) [DOE]

    List (55.27 KB) Sandia Report: Conceptual Framework for Developing Resilience Metrics for the Electricity, Oil, and Gas Sectors in the United States (14.49 MB) Sandia ...

  14. Label-invariant Mesh Quality Metrics. (Conference) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Label-invariant Mesh Quality Metrics. Citation Details In-Document Search Title: Label-invariant Mesh Quality Metrics. Abstract not provided. Authors: Knupp, Patrick Publication ...

  15. Business Metrics for High-Performance Homes: A Colorado Springs...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Business Metrics for High-Performance Homes: A Colorado Springs Case Study Citation Details In-Document Search Title: Business Metrics for High-Performance Homes: ...

  16. FY 2014 Q3 Metric Summary | Department of Energy

    Office of Environmental Management (EM)

    FY 2014 Overall Contract and Project Management Improvement Performance Metrics and Targets FY 2015 Overall Contract and Project Management Improvement Performance Metrics and ...

  17. Texas CO2 Capture Demonstration Project Hits Three Million Metric...

    Office of Environmental Management (EM)

    Texas CO2 Capture Demonstration Project Hits Three Million Metric Ton Milestone Texas CO2 Capture Demonstration Project Hits Three Million Metric Ton Milestone June 30, 2016 - ...

  18. CBEI: Improving Benchmarking Data Quality - 2015 Peer Review | Department

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Energy Benchmarking Data Quality - 2015 Peer Review CBEI: Improving Benchmarking Data Quality - 2015 Peer Review Presenter: Scott Wagner, PSU View the Presentation CBEI: Improving Benchmarking Data Quality - 2015 Peer Review (1.49 MB) More Documents & Publications CBEI: Aligning Owners and Service Providers - 2015 Peer Review CBEI: Using DOE Tools - 2015 Peer Review CBEI: Benchmarking Analytics Tools - 2015

  19. Energy Benchmarking, Rating, and Disclosure for State Governments

    Broader source: Energy.gov [DOE]

    Existing Commercial Buildings Working Group fact sheet about energy benchmarking for state governments.

  20. Benchmarking Outreach and Data Collection Techniques for External Portfolios

    Broader source: Energy.gov [DOE]

    This presentation contains information on Benchmarking Outreach and Data Collection Techniques for External Portfolios.

  1. Transmittal Letter for the Statewide Benchmarking Process Evaluation |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Transmittal Letter for the Statewide Benchmarking Process Evaluation Transmittal Letter for the Statewide Benchmarking Process Evaluation This report by the California Public Utilities Commission examines the value of benchmarking as a tool to encourage energy efficiency, including a discussion of analysis tools. Transmittal Letter for the Statewide Benchmarking Process Evaluation (1.58 MB) More Documents & Publications Designing a Benchmarking Plan Efficiency Data

  2. Implementing the Data Center Energy Productivity Metric

    SciTech Connect (OSTI)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew; Cader, Tahir; Fox, Kevin M.; Gustafson, William I.; Mundy, Christopher J.

    2012-10-01

    As data centers proliferate in both size and number, their energy efficiency is becoming increasingly important. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high performance computing data center. We found that DCeP was successful in clearly distinguishing between different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve (or even maximize) energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and among data centers.
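    As defined above, DCeP is a simple ratio of useful work to energy consumed. The sketch below illustrates the computation with hypothetical numbers; the "useful work" units are workload-specific, and the figures here are invented for illustration.

    ```python
    def dcep(useful_work_units, energy_kwh):
        """Data Center Energy Productivity: useful work produced per unit of
        energy consumed performing that work. What counts as 'useful work'
        (jobs, transactions, simulations) is defined by the workload owner."""
        if energy_kwh <= 0:
            raise ValueError("energy consumed must be positive")
        return useful_work_units / energy_kwh

    # Hypothetical assessment window: 1.2 million transactions on 800 kWh.
    productivity = dcep(1_200_000, 800)  # -> 1500 transactions per kWh
    ```

    Because the work units are workload-defined, DCeP is most meaningful for comparing operational states or configurations of the same data center, which is how the study above uses it.
    
    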

  3. Instructions for EM Corporate Performance Metrics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Instructions for EM Corporate Performance Metrics Instructions for EM Corporate Performance Metrics Quality Program Criteria Instructions for EM Corporate Performance Metrics (128.47 KB) More Documents & Publications EM Corporate QA Performance Metrics CPMS Tables QA Corporate Board Meeting - July 2008

  4. Metrics for Evaluating the Accuracy of Solar Power Forecasting (Presentation)

    SciTech Connect (OSTI)

    Zhang, J.; Hodge, B.; Florita, A.; Lu, S.; Hamann, H.; Banunarayanan, V.

    2013-10-01

    This presentation proposes a suite of metrics for evaluating the performance of solar power forecasting.

  5. Benchmark field study of deep neutron penetration

    SciTech Connect (OSTI)

    Morgan, J.F.; Sale, K. ); Gold, R.; Roberts, J.H.; Preston, C.C. )

    1991-06-10

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry. 18 refs.

  6. Metrics for comparison of crystallographic maps

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Urzhumtsev, Alexandre; Afonine, Pavel V.; Lunin, Vladimir Y.; Terwilliger, Thomas C.; Adams, Paul D.

    2014-10-01

    Numerical comparison of crystallographic contour maps is used extensively in structure solution and model refinement, analysis and validation. However, traditional metrics such as the map correlation coefficient (map CC, real-space CC or RSCC) sometimes contradict the results of visual assessment of the corresponding maps. This article explains such apparent contradictions and suggests new metrics and tools to compare crystallographic contour maps. The key to the new methods is rank scaling of the Fourier syntheses. The new metrics are complementary to the usual map CC and can be more helpful in map comparison, in particular when only some of their aspects, such as regions of high density, are of interest.

  7. Enhanced Accident Tolerant LWR Fuels: Metrics Development

    SciTech Connect (OSTI)

    Shannon Bragg-Sitton; Lori Braase; Rose Montgomery; Chris Stanek; Robert Montgomery; Lance Snead; Larry Ott; Mike Billone

    2013-09-01

    The Department of Energy (DOE) Fuel Cycle Research and Development (FCRD) Advanced Fuels Campaign (AFC) is conducting research and development on enhanced Accident Tolerant Fuels (ATF) for light water reactors (LWRs). This mission emphasizes the development of novel fuel and cladding concepts to replace the current zirconium alloy-uranium dioxide (UO2) fuel system. The overall mission of the ATF research is to develop advanced fuels/cladding with improved performance, reliability and safety characteristics during normal operations and accident conditions, while minimizing waste generation. The initial effort will focus on implementation in operating reactors or reactors with design certifications. To initiate the development of quantitative metrics for ATF, a LWR Enhanced Accident Tolerant Fuels Metrics Development Workshop was held in October 2012 in Germantown, MD. This paper summarizes the outcome of that workshop and the current status of metrics development for LWR ATF.

  8. EECBG SEP Attachment 1 - Process metric list | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    SEP Attachment 1 - Process metric list EECBG SEP Attachment 1 - Process metric list Reporting Guidance Process Metric List eecbg_10_07b_sep__10_006a_attachment1_process_metric_list.pdf (93.56 KB) More Documents & Publications EECBG 10-07C/SEP 10-006B Attachment 1: Process Metrics List EECBG Program Notice 10-07A DOE Recovery Act Reporting Requirements for the State Energy Program

  9. Benchmarking density functionals for hydrogen-helium mixtures with quantum Monte Carlo: Energetics, pressures, and forces

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; Morales, Miguel A.

    2016-01-19

    An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT based first principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.

  10. Action Items

    Office of Environmental Management (EM)

    ACTION ITEMS Presentation to the DOE High Level Waste Corporate Board July 29, 2009 Kurt Gerdes Office of Waste Processing DOE-EM Office of Engineering & Technology 2 ACTION ITEMS...

  11. ACTION PLAN

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ACTION PLAN 1.0 INTRODUCTION 1.1 PURPOSE The purpose of this action plan is to establish the overall plan for hazardous waste permitting, meeting closure and postclosure requirements, and remedial action under the Federal Resource Conservation and Recovery Act (RCRA) and Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), and the Washington State Hazardous Waste Management Act. All actions required to be taken pursuant to this Agreement shall be taken in accordance

  12. Performance Metrics Research Project - Final Report

    SciTech Connect (OSTI)

    Deru, M.; Torcellini, P.

    2005-10-01

    NREL began work for DOE on this project to standardize the measurement and characterization of building energy performance. NREL's primary research objectives were to determine which performance metrics have greatest value for determining energy performance and to develop standard definitions and methods of measuring and reporting that performance.

  13. Clean Cities 2011 Annual Metrics Report

    SciTech Connect (OSTI)

    Johnson, C.

    2012-12-01

    This report details the petroleum savings and vehicle emissions reductions achieved by the U.S. Department of Energy's Clean Cities program in 2011. The report also details other performance metrics, including the number of stakeholders in Clean Cities coalitions, outreach activities by coalitions and national laboratories, and alternative fuel vehicles deployed.

  14. Clean Cities 2010 Annual Metrics Report

    SciTech Connect (OSTI)

    Johnson, C.

    2012-10-01

    This report details the petroleum savings and vehicle emissions reductions achieved by the U.S. Department of Energy's Clean Cities program in 2010. The report also details other performance metrics, including the number of stakeholders in Clean Cities coalitions, outreach activities by coalitions and national laboratories, and alternative fuel vehicles deployed.

  15. Benchmarking Data Cleansing: A Rite of Passage Along the Benchmarking Journey

    Broader source: Energy.gov [DOE]

    This webinar will train analysts, energy planners, and community officials on the principles used for identifying potential problems associated with benchmarking data, and a methodology for cleaning the data prior to analysis.

  16. DOE TAP Webinar: Benchmarking Data Cleansing: A Rite of Passage Along the Benchmarking Journey

    Broader source: Energy.gov [DOE]

    A growing number of local governments and states are collecting building benchmarking data from thousands of public and private building owners. Data cleansing is a critical step prior to analysis...

  17. Benchmark the Fuel Cost of Steam Generation, Energy Tips: STEAM...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    5 Benchmark the Fuel Cost of Steam Generation Benchmarking the fuel cost of steam generation, in dollars per 1,000 pounds (1,000 lb) of steam, is an effective way to assess the ...
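    The dollars-per-1,000-pounds metric in the tip sheet above can be computed from a handful of inputs. The sketch below uses one common formulation (fuel price times the steam's enthalpy gain, divided by boiler efficiency); the formula and the example numbers are assumptions for illustration, not figures quoted from the tip sheet.

    ```python
    def steam_fuel_cost(fuel_price_per_mmbtu, enthalpy_gain_btu_per_lb,
                        boiler_efficiency):
        """Fuel cost of generating 1,000 lb of steam, in dollars.

        cost = price ($/MMBtu) * 1000 lb * dH (Btu/lb)
               / (1e6 Btu per MMBtu * boiler efficiency)
        """
        if not 0 < boiler_efficiency <= 1:
            raise ValueError("boiler efficiency must be a fraction in (0, 1]")
        return (fuel_price_per_mmbtu * 1000 * enthalpy_gain_btu_per_lb
                / (1e6 * boiler_efficiency))

    # Hypothetical inputs: $8/MMBtu gas, 1,000 Btu/lb enthalpy gain,
    # 80%-efficient boiler -> $10 per 1,000 lb of steam.
    cost = steam_fuel_cost(8.0, 1000, 0.80)
    ```

    Tracking this number over time, and against peer plants, is what makes it a benchmark: a rising cost per 1,000 lb at constant fuel price points to degrading boiler efficiency or rising blowdown and radiation losses.
    
    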

  18. pMSSM Benchmark Models for Snowmass 2013 (Journal Article) |...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: pMSSM Benchmark Models for Snowmass 2013 Citation Details In-Document Search Title: pMSSM Benchmark Models for Snowmass 2013 Authors: Cahill-Rowley, Matthew W. ; ...

  19. TAP Webinar: Benchmarking Data Cleansing: A Rite of Passage Along the Benchmarking Journey

    Broader source: Energy.gov [DOE]

    This webinar will train analysts, energy planners and community officials on the principles used for identifying potential problems associated with benchmarking data, and a methodology for cleaning the data prior to analysis. This training session is intended for cities, communities, schools, and states that have implemented an internal or community-wide building benchmarking program and are working to better understand energy use trends and design targeted and effective energy efficiency programs.

  20. Vehicle Technologies Office Merit Review 2015: Benchmarking EV and HEV Technologies

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation given by Oak Ridge National Laboratory at the 2015 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about benchmarking EV and HEV technologies. edt006_burress_2015_o.pdf (3.81 MB)

  1. Guide to Benchmarking Residential Program Progress Webcast Slides

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Slides from "Guide to Benchmarking Residential Program Progress - Call for Public Review", a webcast from the U.S. Department of Energy's (DOE's) Better Buildings Neighborhood Program, presented by Dale Hoffmeyer and Cheryl Jenkins. (1.15 MB)

  2. Benchmarking Outreach and Data Collection Techniques for External Portfolios

    Broader source: Energy.gov [DOE]

    This document contains the transcript for the Benchmarking Outreach and Data Collection Techniques webinar, held on April 25, 2013.

  3. Federal Building Energy Use Benchmarking Guidance, August 2014 Update

    Broader source: Energy.gov [DOE]

    Guidance describes the Energy Independence and Security Act of 2007 Section 432 requirement for benchmarking federal facilities.

  4. Vehicle Technologies Office Merit Review 2014: Benchmarking EV and HEV Technologies

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation given by Oak Ridge National Laboratory at the 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about benchmarking EV and HEV technologies. ape006_burress_2014_p.pdf (3.6 MB)

  5. Simulation information regarding Sandia National Laboratories' Trinity capability improvement metric

    SciTech Connect (OSTI)

    Agelastos, Anthony Michael; Lin, Paul T.

    2013-10-01

    Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory each selected a representative simulation code to be used as a performance benchmark for the Trinity Capability Improvement Metric. Sandia selected SIERRA Low Mach Module: Nalu, a fluid dynamics code that solves many variable-density, acoustically incompressible problems of interest spanning laminar to turbulent flow regimes, since it is fairly representative of implicit codes that have been developed under ASC. The simulations for this metric were performed on the Cielo Cray XE6 platform during dedicated application time, and the chosen case utilized 131,072 Cielo cores to perform a canonical turbulent open jet simulation on an approximately 9-billion-element unstructured-hexahedral computational mesh. This report documents some of the results from these simulations and provides instructions for performing them for comparison.

  6. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    SciTech Connect (OSTI)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical
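The two-tier screening logic described in this abstract is a simple decision rule: compare each ambient concentration to the lower and upper benchmarks. A sketch, with the benchmark values in the example being hypothetical placeholders rather than values from the report:

```python
def screen_contaminant(conc, lower_benchmark, upper_benchmark):
    """Two-tier contaminant screening against lower/upper benchmarks."""
    if conc >= upper_benchmark:
        # Exceeds the upper screening benchmark: clearly of concern;
        # remedial actions are likely to be needed.
        return "clearly of concern"
    if conc >= lower_benchmark:
        # Between the benchmarks: of concern unless other information
        # shows the data are unreliable or the comparison inappropriate.
        return "of concern"
    # Below the lower benchmark: not of concern, provided the ambient
    # data are judged to be adequate.
    return "not of concern"

# Hypothetical benchmarks in ug/L, for illustration only.
print(screen_contaminant(conc=25.0, lower_benchmark=3.0, upper_benchmark=13.0))
# -> clearly of concern
```

The report's alternative benchmark sets (NAWQC, SAV, SCV, lowest chronic values, EC20 estimates) would simply supply different `lower_benchmark` and `upper_benchmark` inputs to the same comparison.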

  7. Benchmark West Texas Intermediate crude assayed

    SciTech Connect (OSTI)

    Rhodes, A.K.

    1994-08-15

    The paper gives an assay of West Texas Intermediate, one of the world's market crudes. The price of this crude, known as WTI, is followed by market analysts, investors, traders, and industry managers around the world, and is used as a benchmark for pricing all other U.S. crude oils. The 41° API, <0.34 wt % sulfur crude is gathered in West Texas and moved to Cushing, Okla., for distribution. The WTI posted price is the price paid for the crude at the wellhead in West Texas and is the true benchmark on which other U.S. crudes are priced. The spot price is the negotiated price for short-term trades of the crude, and the New York Mercantile Exchange (Nymex) price is a futures price for barrels delivered at Cushing.

  8. Specification for the VERA Depletion Benchmark Suite

    SciTech Connect (OSTI)

    Kim, Kang Seog

    2015-12-17

    Executive summary (CASL-X-2015-1014-000): The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics (T-H) simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is challenging because measured data are insufficient; one indirect approach is a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed, and a detailed guideline is provided for obtaining meaningful computational outcomes that can be used in the validation of the MPACT depletion capability.

  9. Widget:CrazyEggMetrics | Open Energy Information

    Open Energy Info (EERE)

    CrazyEggMetrics Jump to: navigation, search This widget runs javascript code for the Crazy Egg user experience metrics. This should not be on all pages, but on select pages...

  10. Energy Department Project Captures and Stores One Million Metric...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Department Project Captures and Stores One Million Metric Tons of Carbon. January 8, 2015 - 11:18am. News media contact: 202-586-4940 ...

  11. SEE Action Series: Local Strategies for Whole-Building Energy Savings

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This presentation provides information on local strategies for whole-building energy savings. Presentation (2.25 MB)

  12. Smart Grid Status and Metrics Report

    SciTech Connect (OSTI)

    Balducci, Patrick J.; Weimar, Mark R.; Kirkham, Harold

    2014-07-01

    To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. It measures 21 metrics to provide insight into the grid’s capacity to embody these characteristics. This report looks across a spectrum of smart grid concerns to measure the status of smart grid deployment and impacts.

  13. Collection of Neutronic VVER Reactor Benchmarks.

    Energy Science and Technology Software Center (OSTI)

    2002-01-30

    Version 00 A system of computational neutronic benchmarks has been developed. In this CD-ROM report, the data generated in the course of the project are reproduced in their integrity with minor corrections. The editing that was performed on the various documents comprising this report was primarily meant to facilitate the production of the CD-ROM and to enable electronic retrieval of the information. The files are electronically navigable.

  14. State/Local Government Project Performance Benchmarks

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    State/Local Government Project Performance Benchmarks (All ASHRAE Zones). We define an ESCO as a company that provides energy efficiency-related and other value-added services and that employs performance contracting as a core part of its energy efficiency services business. For projects with electricity savings, we assume site energy conversion (1 kWh = 3,412 Btu). We did not estimate avoided Btus from gallons of water conserved. In general, we followed the analytical approach documented in

  15. Introduction to the HPC Challenge Benchmark Suite

    SciTech Connect (OSTI)

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b{sub eff}) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable with the size of data sets being a function of the largest HPL matrix for the tested system.
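Of the kernels listed above, STREAM measures sustained memory bandwidth with simple vector operations. The real suite is written in C with MPI, so the following NumPy sketch of the "triad" kernel (a[i] = b[i] + s*c[i]) only illustrates the idea of turning a timed kernel into a bandwidth figure; array sizes and trial counts are arbitrary choices:

```python
import time
import numpy as np

def stream_triad(n=10_000_000, scalar=3.0, trials=5):
    """Time the STREAM 'triad' kernel and report sustained bandwidth
    in GB/s. Three arrays of 8-byte floats are touched per element:
    read b, read c, write a -> 24 bytes moved per element."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        a = b + scalar * c          # the triad kernel itself
        best = min(best, time.perf_counter() - t0)
    return 3 * 8 * n / best / 1e9

print(f"triad bandwidth: {stream_triad():.1f} GB/s")
```

In the HPC Challenge framing, kernels like this one probe the low-temporal-locality, high-spatial-locality corner of the memory-access space, while HPL and DGEMM probe the compute-bound corner.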

  16. Financial Metrics Data Collection Protocol, Version 1.0

    SciTech Connect (OSTI)

    Fowler, Kimberly M.; Gorrissen, Willy J.; Wang, Na

    2010-04-30

    Brief description of data collection process and plan that will be used to collect financial metrics associated with sustainable design.

  17. Nonmaximality of known extremal metrics on torus and Klein bottle

    SciTech Connect (OSTI)

    Karpukhin, M A

    2013-12-31

    The El Soufi-Ilias theorem establishes a connection between minimal submanifolds of spheres and extremal metrics for eigenvalues of the Laplace-Beltrami operator. Recently, this connection was used to provide several explicit examples of extremal metrics. We investigate the properties of these metrics and prove that none of them is maximal. Bibliography: 24 titles.

  18. Annex A Metrics for the Smart Grid System Report

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Annex A - Metrics for the Smart Grid System Report. Table of contents (excerpt): Introduction; Metric #1: The Fraction of Customers and Total Load Served by Real-Time Pricing, Critical Peak Pricing, and Time-of-Use Pricing; Metric #2: Real-Time System Operations Data

  19. Metrics For Comparing Plasma Mass Filters

    SciTech Connect (OSTI)

    Abraham J. Fetterman and Nathaniel J. Fisch

    2012-08-15

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  20. Metrics for comparing plasma mass filters

    SciTech Connect (OSTI)

    Fetterman, Abraham J.; Fisch, Nathaniel J.

    2011-10-15

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  1. Clean Cities 2013 Annual Metrics Report

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]

    Clean Cities 2013 Annual Metrics Report. Caley Johnson and Mark Singer, National Renewable Energy Laboratory, Technical Report NREL/TP-5400-62838, October 2014. NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency & Renewable Energy, operated by the Alliance for Sustainable Energy, LLC. This report is available at no cost from the National Renewable Energy Laboratory (NREL) at www.nrel.gov/publications. Contract No. DE-AC36-08GO28308

  2. Clean Cities 2014 Annual Metrics Report

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]

    Clean Cities 2014 Annual Metrics Report. Caley Johnson and Mark Singer, National Renewable Energy Laboratory, Technical Report NREL/TP-5400-65265, December 2015. NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency & Renewable Energy, operated by the Alliance for Sustainable Energy, LLC. This report is available at no cost from the National Renewable Energy Laboratory (NREL) at www.nrel.gov/publications. Contract No. DE-AC36-08GO28308

  3. A Uranium Bioremediation Reactive Transport Benchmark

    SciTech Connect (OSTI)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
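The Monod-type rate laws mentioned above for the terminal electron accepting processes all share a standard saturation form. A sketch of the single- and dual-substrate forms; the function names and the parameter values in the example are hypothetical illustrations, not quantities taken from the benchmark specification:

```python
def monod_rate(r_max, conc, k_half):
    """Monod-type rate law: r = r_max * C / (K + C).
    The rate approaches r_max when C >> K and becomes roughly
    linear in C (r_max * C / K) when C << K."""
    return r_max * conc / (k_half + conc)

def dual_monod_rate(r_max, donor, k_donor, acceptor, k_acceptor):
    """Dual-Monod form sometimes used for terminal electron accepting
    processes: the rate is limited by both the electron donor
    (e.g., acetate) and the terminal electron acceptor (e.g., U(VI))."""
    return (r_max * donor / (k_donor + donor)
            * acceptor / (k_acceptor + acceptor))

# At C == K the single-substrate rate is exactly half of r_max.
print(monod_rate(r_max=1.0, conc=0.5, k_half=0.5))  # 0.5
```

This saturation behavior is what couples the simulated bioreduction rates to the acetate pulses: rates rise toward their maximum during amendment and fall off as acetate is depleted.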

  4. Corrective Action

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Corrective Action Complete is demonstrated by one of the following: Eliminate Exposure (11 SMAs, 16 sites). Table columns: SMA, Site, Submittal Date, Document; first entry: 2M-SMA-2.2, 03-003(k), September...

  5. Benchmarking ESCO Projects in Public Sector Markets

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Chuck Goldman (CAGoldman@lbl.gov), Lawrence Berkeley National Laboratory. State Energy Advisory Board (STEAB) visit, February 22, 2011. Topics: U.S. ESCO industry and market trends; ESCO project performance: new results from the LBNL/NAESCO database; benchmarking tools and information to assist state/local governments. U.S. ESCO industry revenues were $4.1B in 2008, with 7% annual growth from 2006 to 2008 despite the general economic slowdown. In 2008, MUSH (i.e., municipal/state govt, universities/colleges,

  6. K-12 Schools Project Performance Benchmarks

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    K-12 Schools Project Performance Benchmarks (All ASHRAE Zones). We define an ESCO as a company that provides energy efficiency-related and other value-added services and that employs performance contracting as a core part of its energy efficiency services business. For projects with electricity savings, we assume site energy conversion (1 kWh = 3,412 Btu). We did not estimate avoided Btus from gallons of water conserved. In general, we followed the analytical approach documented in Hopper et

  7. Defining a Standard Metric for Electricity Savings

    SciTech Connect (OSTI)

    Brown, Marilyn; Akbari, Hashem; Blumstein, Carl; Koomey, Jonathan; Brown, Richard; Calwell, Chris; Carter, Sheryl; Cavanagh, Ralph; Chang, Audrey; Claridge, David; Craig, Paul; Diamond, Rick; Eto, Joseph H.; Fulkerson, William; Gadgil, Ashok; Geller, Howard; Goldemberg, Jose; Goldman, Chuck; Goldstein, David B.; Greenberg, Steve; Hafemeister, David; Harris, Jeff; Harvey, Hal; Heitz, Eric; Hirst, Eric; Hummel, Holmes; Kammen, Dan; Kelly, Henry; Laitner, Skip; Levine, Mark; Lovins, Amory; Masters, Gil; McMahon, James E.; Meier, Alan; Messenger, Michael; Millhone, John; Mills, Evan; Nadel, Steve; Nordman, Bruce; Price, Lynn; Romm, Joe; Ross, Marc; Rufo, Michael; Sathaye, Jayant; Schipper, Lee; Schneider, Stephen H; Sweeney, James L; Verdict, Malcolm; Vorsatz, Diana; Wang, Devra; Weinberg, Carl; Wilk, Richard; Wilson, John; Worrell, Ernst

    2009-03-01

    The growing investment by governments and electric utilities in energy efficiency programs highlights the need for simple tools to help assess and explain the size of the potential resource. One technique that is commonly used in this effort is to characterize electricity savings in terms of avoided power plants, because it is easier for people to visualize a power plant than it is to understand an abstraction such as billions of kilowatt-hours. Unfortunately, there is no standardization around the characteristics of such power plants. In this letter we define parameters for a standard avoided power plant that have physical meaning and intuitive plausibility, for use in back-of-the-envelope calculations. For the prototypical plant this article settles on a 500 MW existing coal plant operating at a 70% capacity factor with 7% T&D losses. Displacing such a plant for one year would save 3 billion kWh per year at the meter and reduce emissions by 3 million metric tons of CO2 per year. The proposed name for this metric is the Rosenfeld, in keeping with the tradition among scientists of naming units in honor of the person most responsible for the discovery and widespread adoption of the underlying scientific principle in question--Dr. Arthur H. Rosenfeld.
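The arithmetic behind the Rosenfeld unit is simple enough to check directly. This sketch just reproduces the figures quoted in the abstract (capacity times capacity factor times hours per year, reduced by T&D losses):

```python
HOURS_PER_YEAR = 8760

def rosenfeld_savings_kwh(capacity_mw=500, capacity_factor=0.70, td_losses=0.07):
    """Annual at-the-meter electricity saved by displacing the
    prototypical 500 MW coal plant for one year: gross generation
    reduced by transmission and distribution losses."""
    generation_kwh = capacity_mw * 1000 * capacity_factor * HOURS_PER_YEAR
    return generation_kwh * (1 - td_losses)

savings = rosenfeld_savings_kwh()
print(f"{savings / 1e9:.2f} billion kWh/year at the meter")
```

The result is about 2.85 billion kWh/year, which the letter rounds to the quoted 3 billion kWh per year at the meter.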

  8. CBEI: Benchmarking Analytics Tools - 2015 Peer Review

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presenter: Clinton Andrews, Rutgers University. View the presentation: CBEI: Benchmarking Analytics Tools - 2015 Peer Review (1.44 MB)

  9. Guide for Benchmarking Residential Energy Efficiency Program Progress

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DRAFT - November 14, 2014. Guide for Benchmarking Residential Energy Efficiency Program Progress, prepared for the Building Technologies Office, Energy Efficiency and Renewable Energy, U.S. Department of Energy, by Vermont Energy Investment Corporation under contract to Eastern Research Group. Review opportunity: home energy upgrade programs are being sought to review this draft Guide.

  10. Energy Data Management Webinar Series - Part 2: Benchmarking Energy Data Analysis

    Broader source: Energy.gov (indexed) [DOE]

    7, 2016, 3:00PM to 4:30PM MDT. Benchmarking is a proactive approach to energy management that emphasizes continuous improvement. It helps organizations manage their energy use rather than react to it, and can also help assess the effectiveness of current operations. Intended for personnel from cities, communities, and states that have implemented or are considering implementing a benchmarking and/or disclosure program or policy and are preparing their

  11. DOE Lab Releases Wind Turbine Reliability Benchmark Report

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    October 1, 2012 - 1:17pm. This is an excerpt from the Third Quarter 2012 edition of the Wind Program R&D Newsletter. The U.S. Department of Energy's Sandia National Laboratories (SNL) recently released its second annual public benchmark report for the Continuous Reliability Enhancement for Wind (CREW) database. CREW is a national reliability database that

  12. Microsoft Word - Sandia CREW 2013 Wind Plant Reliability Benchmark...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Abstract To benchmark the current U.S. wind turbine fleet reliability performance and ... ideas and perspectives that can help us address the nation's most daunting challenges. ...

  13. Advanced Technology Vehicle Lab Benchmarking - Level 2 (in-depth...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    More Documents & Publications Advanced Technology Vehicle Lab Benchmarking - Level 2 (in-depth) Data Collection for Improved Cold Temperature Thermal Modeling Advanced Technology ...

  14. Lessons Learned: Measuring Program Outcomes and Using Benchmarks...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Lessons Learned presentation. Related: Better Buildings Residential Network Orientation; How Can the Network Meet Your Needs?; Optional Residential Program Benchmarking

  15. Microsoft Word - Sandia CREW 2012 Wind Plant Reliability Benchmark...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2-7328 Unlimited Release September 2012 Continuous Reliability Enhancement for Wind (CREW) Database: Wind Plant Reliability Benchmark Valerie A. Peters, Alistair B. Ogilvie, Cody...

  16. Microsoft Word - Sandia CREW 2013 Wind Plant Reliability Benchmark...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    3-7288 Unlimited Release September 2013 Continuous Reliability Enhancement for Wind (CREW) Database: Wind Plant Reliability Benchmark Valerie A. Hines, Alistair B. Ogilvie, Cody R....

  17. ENERGY STAR Portfolio Manager and Utility Benchmarking Programs...

    Broader source: Energy.gov (indexed) [DOE]

    ENERGY STAR Portfolio Manager and Utility Benchmarking Programs: Effectiveness as a Conduit to Utility Energy Efficiency Programs. Rohit Vaidya, Nexus Market Research; Arlis Reynolds, National Grid...

  18. Asking the right questions: benchmarking fault-tolerant extreme...

    Office of Scientific and Technical Information (OSTI)

    Title: Asking the right questions: benchmarking fault-tolerant extreme-scale systems. Abstract not provided. Authors: Widener, Patrick; Ferreira, Kurt Brian; Levy, Scott N.; ...

  19. Preliminary Benchmarking and MCNP Simulation Results for Homeland...

    Office of Scientific and Technical Information (OSTI)

    Citation details for Preliminary Benchmarking and MCNP Simulation Results for Homeland Security.

  20. Lessons Learned: Measuring Program Outcomes and Using Benchmarks

    Broader source: Energy.gov [DOE]

    Lessons Learned: Measuring Program Outcomes and Using Benchmarks, a presentation on August 21, 2013 by Dale Hoffmeyer, U.S. Department of Energy.

  1. The MCNP6 Analytic Criticality Benchmark Suite (Technical Report...

    Office of Scientific and Technical Information (OSTI)

    Title: The MCNP6 Analytic Criticality Benchmark Suite. Authors: Brown, Forrest B. (Los Alamos National Laboratory) ...

  2. Energy Benchmarking, Rating, and Disclosure for Local Governments

    SciTech Connect (OSTI)

    Existing Commercial Buildings Working Group

    2012-05-23

    Provides information on how access to energy use data can help local governments create policies for benchmarking and disclosing building energy performance for public and private sector buildings.

  3. Benchmarking and Transparency Policy and Program Impact Evaluation...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Benchmarking and Transparency Policy and Program Impact Evaluation Handbook. Prepared by the U.S. Department of Energy, this Handbook provides both a strategic planning ...

  4. EISA Federal Facility Management and Benchmarking Reporting Requiremen...

    Office of Environmental Management (EM)

    ... measured savings and persistence of savings; building benchmarking information. Guidance: the following guidance is available to help ...

  5. State and Local Energy Benchmarking and Disclosure Policy | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... the concept of benchmarking, new requirements, technical tools, and processes. It is especially helpful if government agencies can facilitate enhanced access to energy data by ...

  6. New York City Benchmarking and Transparency Policy Impact Evaluation...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    New York City's benchmarking and transparency policy, Local Law 84, and the results of the application of those methodologies to the early period of the policy's implementation. ...

  7. NASA Benchmarks Lessons Learned Assessment Plan - Developed By...

    Broader source: Energy.gov (indexed) [DOE]

    NASA Benchmarks Lessons Learned Assessment Plan, developed by the NNSA Nevada Site Office Facility Representative Division. Performance objective: management should have an established...

  8. Los Alamos National Lab staff benchmark Y-12 sustainability programs...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Los Alamos National Lab staff benchmark Y-12 sustainability programs Posted: June 27, 2013 ... to learn about its award-winning Sustainability and Stewardship Program. "By ...

  9. Uranium Leasing Program: Lease Tract Metrics

    Broader source: Energy.gov [DOE]

    The Atomic Energy Act and other legislative actions authorized the U.S. Atomic Energy Commission (AEC), predecessor agency to the U.S. Department of Energy (DOE), to withdraw lands from the public...

  10. An international land-biosphere model benchmarking activity for the IPCC Fifth Assessment Report (AR5)

    SciTech Connect (OSTI)

    Hoffman, Forrest M [ORNL]; Randerson, James T [ORNL]; Thornton, Peter E [ORNL]; Bonan, Gordon [National Center for Atmospheric Research (NCAR)]; Erickson III, David J [ORNL]; Fung, Inez [University of California, Berkeley]

    2009-12-01

    The need to capture important climate feedbacks in general circulation models (GCMs) has resulted in efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, called Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results (Friedlingstein et al., 2006). This work suggests that a more rigorous set of global offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, are needed. The Carbon-Land Model Intercomparison Project (C-LAMP) was designed to meet this need by providing a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). Recently, a similar effort in Europe, called the International Land Model Benchmark (ILAMB) Project, was begun to assess the performance of European land surface models. These two projects will now serve as prototypes for a proposed international land-biosphere model benchmarking activity for those models participating in the IPCC Fifth Assessment Report (AR5). Initially used for model validation for terrestrial biogeochemistry models in the NCAR Community Land Model (CLM), C-LAMP incorporates a simulation protocol for both offline and partially coupled simulations using a prescribed historical trajectory of atmospheric CO2 concentrations. Models are confronted with data through comparisons against AmeriFlux site measurements, MODIS satellite observations, NOAA Globalview flask records, TRANSCOM inversions, and Free Air CO2 Enrichment (FACE) site measurements. 
Both sets of experiments have been performed using two different terrestrial biogeochemistry modules coupled to the CLM version 3 in the Community Climate System Model version 3 (CCSM3): the CASA model of Fung, et al., and the carbon