Note: This page contains sample records for the topic "benchmark models version" from the National Library of Energy Beta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


1

DOE Commercial Building Benchmark Models: Preprint  

SciTech Connect

To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

2008-07-01T23:59:59.000Z
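As a minimal sketch of how an analyst might batch-run one of these benchmark models, the Python snippet below drives the EnergyPlus command-line interface through subprocess. The model, weather file, and output directory names are hypothetical, and the -w/-d flags assume a recent EnergyPlus release that ships the energyplus CLI.

```python
import subprocess
from pathlib import Path

# Hypothetical inputs: a DOE benchmark building IDF and a TMY3 weather file.
idf_file = Path("benchmark_medium_office.idf")
weather_file = Path("USA_CO_Golden_TMY3.epw")
out_dir = Path("results_medium_office")
out_dir.mkdir(exist_ok=True)

# Assumes the `energyplus` executable is on PATH; in recent releases
# -w selects the weather file and -d the output directory.
cmd = ["energyplus", "-w", str(weather_file), "-d", str(out_dir), str(idf_file)]
completed = subprocess.run(cmd, capture_output=True, text=True)

print("EnergyPlus exit code:", completed.returncode)
if completed.returncode != 0:
    print(completed.stderr)
```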

2

Hospital Energy Benchmarking Guidance - Version 1.0  

E-Print Network (OSTI)

energy conversion is provided in the documentation for the Energy Star facility-level benchmarking system

Singer, Brett C.

2010-01-01T23:59:59.000Z

3

Hospital Energy Benchmarking Guidance - Version 1.0  

Science Conference Proceedings (OSTI)

This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

Singer, Brett C.

2009-09-08T23:59:59.000Z
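The framework above centers on metrics and normalizing factors. As a rough, illustrative sketch of that idea (not the report's actual metric definitions), the snippet below computes a site energy use intensity and a version normalized by a hypothetical activity factor such as licensed bed count.

```python
def energy_use_intensity(annual_site_energy_kbtu: float, floor_area_sqft: float) -> float:
    """Site EUI in kBtu per square foot per year."""
    return annual_site_energy_kbtu / floor_area_sqft

def normalized_metric(annual_site_energy_kbtu: float, normalizing_factor: float) -> float:
    """Energy per unit of a normalizing factor (e.g., per licensed bed).

    The choice of normalizing factor here is illustrative; the guidance
    document discusses how to identify appropriate factors for hospitals.
    """
    return annual_site_energy_kbtu / normalizing_factor

# Hypothetical hospital: 250,000 sq ft, 120 licensed beds, 60e6 kBtu/yr site energy.
print(energy_use_intensity(60e6, 250_000))   # about 240 kBtu/sqft-yr
print(normalized_metric(60e6, 120))          # kBtu per bed per year
```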

4

Hospital Energy Benchmarking Guidance - Version 1.0  

E-Print Network (OSTI)

Region Benchmarks, Source & notes: HOSPITAL BUILDING ENERGY benchmarks are based on hospital energy end use estimates presented on LBNL's EnergyIQ commercial building

Singer, Brett C.

2010-01-01T23:59:59.000Z

5

Complex version of high performance computing LINPACK benchmark (HPL)  

Science Conference Proceedings (OSTI)

This paper describes our effort to enhance the performance of the AORSA fusion energy simulation program through the use of high-performance LINPACK (HPL) benchmark, commonly used in ranking the top 500 supercomputers. The algorithm used by HPL, enhanced ... Keywords: HPL, parallel dense solver

R. F. Barrett; T. H. F. Chan; E. F. D'Azevedo; E. F. Jaeger; K. Wong; R. Y. Wong

2010-04-01T23:59:59.000Z
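The computational kernel behind such a complex HPL variant is a dense complex LU factorization and solve. A small serial sketch using SciPy's LAPACK wrappers is shown below; the flop count is only an order-of-magnitude approximation and the problem size is far below real HPL runs.

```python
import time
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Illustrative only: factor and solve one dense complex system, the kernel
# that a complex HPL variant exercises at much larger scale and in parallel.
n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

t0 = time.perf_counter()
lu, piv = lu_factor(A)          # LAPACK zgetrf under the hood
x = lu_solve((lu, piv), b)      # LAPACK zgetrs
elapsed = time.perf_counter() - t0

# Rough operation count: a complex LU costs about 8/3 * n**3 real flops,
# roughly 4x the real-valued HPL count of 2/3 * n**3. Treat the result as
# an order-of-magnitude estimate, not a reportable HPL number.
gflops = (8.0 / 3.0) * n**3 / elapsed / 1e9
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(f"n={n}  time={elapsed:.2f}s  ~{gflops:.1f} GF/s  residual={residual:.2e}")
```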

6

A framework for benchmarking land models  

SciTech Connect

Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models to improve their prediction performance skills.

Luo, Yiqi [University of Oklahoma]; Randerson, James T. [University of California, Irvine]; Hoffman, Forrest [ORNL]; Norby, Richard J. [ORNL]

2012-01-01T23:59:59.000Z
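The framework calls for metrics that combine data-model mismatches across variables into performance scores. The snippet below is one illustrative way to do that (a normalized RMSE mapped to a 0-1 score, then weighted across variables); it is not the scoring system proposed in the paper.

```python
import numpy as np

def variable_score(model: np.ndarray, obs: np.ndarray) -> float:
    """Map a normalized RMSE to a 0-1 score (1 = perfect agreement).

    Exponential scoring of a normalized error is only illustrative of the
    kind of metric the framework describes, not its actual definition.
    """
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    nrmse = rmse / (np.std(obs) + 1e-12)
    return float(np.exp(-nrmse))

def overall_score(scores: dict, weights: dict) -> float:
    """Weighted aggregate over variables (e.g., GPP, latent heat, runoff)."""
    total_w = sum(weights[v] for v in scores)
    return sum(weights[v] * s for v, s in scores.items()) / total_w

# Hypothetical benchmark data for two variables.
obs_gpp = np.array([2.1, 2.8, 3.5, 3.0, 2.2])
mod_gpp = np.array([2.0, 2.6, 3.9, 3.1, 2.0])
obs_le = np.array([60.0, 75.0, 90.0, 80.0, 65.0])
mod_le = np.array([55.0, 80.0, 85.0, 95.0, 60.0])

scores = {"gpp": variable_score(mod_gpp, obs_gpp),
          "latent_heat": variable_score(mod_le, obs_le)}
print(overall_score(scores, weights={"gpp": 0.5, "latent_heat": 0.5}))
```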

7

A framework for benchmarking land models  

SciTech Connect

Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models to improve their prediction performance skills.

Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

2012-10-09T23:59:59.000Z

8

International Land Model Benchmarking (ILAMB) Project  

E-Print Network (OSTI)

Develop benchmarks for land model performance, with a focus on carbon cycle, ecosystem, surface energy, and hydrological processes. The benchmarks should be designed and accepted by the community. Apply the benchmarks to global models. Support the design and development of a new, open-source, benchmarking software system for either diagnostic or model intercomparison purposes. Strengthen linkages between experimental, monitoring, remote sensing, and climate modeling communities in the design of new model tests and new measurement programs.

Forrest M. Hoffman; James T. Randerson

2011-01-01T23:59:59.000Z

9

Benchmarking  

NLE Websites -- All DOE Office Websites (Extended Search)

energy and water use, and rating the energy performance of selected building types. The tool enables users to: * Track multiple energy and water meters; * Benchmark facilities...

10

Independent verification and benchmark testing of the UNSAT-H computer code, Version 2.0  

Science Conference Proceedings (OSTI)

Independent testing of the UNSAT-H computer code, Version 2.0, was conducted to establish confidence that the code is ready for general use in performance assessment applications. Verification and benchmark test problems were used to check the correctness of the FORTRAN coding, the computational efficiency and accuracy of the numerical algorithm, and the code's capability to simulate diverse hydrologic conditions. This testing was performed using a structured and quantitative evaluation protocol. The protocol consisted of blind testing, independent applications, maintenance of test equivalence, and use of graduated test cases. Graphical comparisons and calculation of relative root mean square (RRMS) values were used as indicators of accuracy and consistency levels. Four specific ranges of RRMS values were chosen for judging the quality of the comparison. Four verification test problems were used to check the computational accuracy of UNSAT-H in solving the uncoupled fluid flow and heat transport equations. Five benchmark test problems, ranging in complexity, were used to check the code's simulation capability. Some of the benchmark test cases include comparisons with laboratory and field data. The primary finding of this independent testing is that UNSAT-H is fully operational. In general, the test results showed that the computer code produced unsaturated flow simulations with excellent stability, reasonable accuracy, and acceptable speed. This report describes the technical basis, approach, and results of the independent testing. A number of future refinements to the UNSAT-H code are recommended that would improve computational speed and accuracy, code usability, and code portability. Aspects of the code that warrant further testing are outlined.

Baca, R.G.; Magnuson, S.O.

1990-02-01T23:59:59.000Z
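The evaluation protocol grades agreement with relative root mean square (RRMS) values. The sketch below uses one common definition of RRMS (the RMS of pointwise relative differences) with invented pressure-head values; the report's exact formula may differ.

```python
import numpy as np

def rrms(simulated: np.ndarray, observed: np.ndarray) -> float:
    """Relative root mean square of simulated vs. reference values.

    One common definition: the RMS of the pointwise relative differences.
    The UNSAT-H report may define RRMS slightly differently; this is a sketch.
    """
    rel = (simulated - observed) / observed
    return float(np.sqrt(np.mean(rel ** 2)))

# Hypothetical comparison of simulated vs. benchmark pressure heads (cm).
benchmark = np.array([-150.0, -120.0, -95.0, -80.0, -60.0])
unsat_h   = np.array([-155.0, -118.0, -99.0, -78.0, -63.0])
print(f"RRMS = {rrms(unsat_h, benchmark):.3f}")
```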

11

Independent verification and benchmark testing of the UNSAT-H computer code, Version 2.0  

Science Conference Proceedings (OSTI)

Independent testing of the UNSAT-H computer code, Version 2.0, was conducted to establish confidence that the code is ready for general use in performance assessment applications. Verification and benchmark test problems were used to check the correctness of the FORTRAN coding, the computational efficiency and accuracy of the numerical algorithm, and the code's capability to simulate diverse hydrologic conditions. This testing was performed using a structured and quantitative evaluation protocol. The protocol consisted of blind testing, independent applications, maintenance of test equivalence, and use of graduated test cases. Graphical comparisons and calculation of relative root mean square (RRMS) values were used as indicators of accuracy and consistency levels. Four specific ranges of RRMS values were chosen for judging the quality of the comparison. Four verification test problems were used to check the computational accuracy of UNSAT-H in solving the uncoupled fluid flow and heat transport equations. Five benchmark test problems, ranging in complexity, were used to check the code's simulation capability. Some of the benchmark test cases include comparisons with laboratory and field data. The primary finding of this independent testing is that UNSAT-H is fully operational. In general, the test results showed that the computer code produced unsaturated flow simulations with excellent stability, reasonable accuracy, and acceptable speed. This report describes the technical basis, approach, and results of the independent testing. A number of future refinements to the UNSAT-H code are recommended that would improve computational speed and accuracy, code usability, and code portability. Aspects of the code that warrant further testing are outlined.

Baca, R.G.; Magnuson, S.O.

1990-02-01T23:59:59.000Z

12

Benchmarking PDR models against the Horsehead edge  

E-Print Network (OSTI)

To prepare for the unprecedented spatial and spectral resolution provided by ALMA and Herschel/HIFI, chemical models are being benchmarked against each other. It is obvious that chemical models also need well-constrained observations that can serve as references. Photo-dissociation regions (PDRs) are particularly well suited to serve as references because they make the link between diffuse and molecular clouds, thus enabling astronomers to probe a large variety of physical and chemical processes. At a distance of 400 pc (1" corresponding to 0.002 pc), the Horsehead PDR is very close to the prototypical kind of source (i.e. 1D, edge-on) needed to serve as a reference to models.

Jérôme Pety; Javier R. Goicoechea; Maryvonne Gerin; Pierre Hily-Blant; David Teyssier; Evelyne Roueff; Emilie Habart; Alain Abergel

2006-12-20T23:59:59.000Z

13

Benchmarks used  

NLE Websites -- All DOE Office Websites (Extended Search)

Using a set of benchmarks described below, different optimization options for the different compilers on Edison are compared. The compilers are also compared against one another on the benchmarks.

NERSC6 Benchmarks. We used these benchmarks from the NERSC6 procurement:

NERSC 6 PROCUREMENT MPI BENCHMARKS
Benchmark | Science Area | Algorithm | Concurrency | Languages
GTC | Fusion | PIC, finite difference | 2048 | f90
IMPACT-T | Accelerator Physics | PIC, FFT | 1024 | f90
MILC | Materials Science | Conjugate gradient, sparse matrix, FFT | 1024 | c, assembly

NPB 3.3.1 MPI Parallel Benchmarks. The following NPB 3.3 MPI Benchmarks were run, all at a concurrency of 1024 processes. They are all written in Fortran.

NAS PARALLEL MPI BENCHMARKS - VERSION 3.3.1
Benchmark | Full Name | Description | Level

14

Benchmarks used  

NLE Websites -- All DOE Office Websites (Extended Search)

Using a set of benchmarks described below, different optimization options for the different compilers on Edison are compared. The compilers are also compared against one another on the benchmarks.

NERSC6 Benchmarks. We used these benchmarks from the NERSC6 procurement:

NERSC 6 PROCUREMENT MPI BENCHMARKS
Benchmark | Science Area | Algorithm | Concurrency | Languages
GTC | Fusion | PIC, finite difference | 2048 | f90
IMPACT-T | Accelerator Physics | PIC, FFT | 1024 | f90
MILC | Materials Science | Conjugate gradient, sparse matrix, FFT | 1024 | c, assembly

NPB 3.3.1 MPI Parallel Benchmarks. The following NPB 3.3 MPI Benchmarks were run, all at a concurrency of 1024 processes. They are all written in Fortran.

NAS PARALLEL MPI BENCHMARKS - VERSION 3.3.1
Benchmark | Full Name | Description | Level

15

Synthetic benchmark for modeling flow in 3D fractured media  

Science Conference Proceedings (OSTI)

Intensity and localization of flows in fractured media have promoted the development of a large range of different modeling approaches including Discrete Fracture Networks, pipe networks and equivalent continuous media. While benchmarked usually within ... Keywords: Benchmark, Fractured media, Single-phase flow, Stochastic model

Jean-Raynald De Dreuzy; Géraldine Pichot; Baptiste Poirriez; Jocelyne Erhel

2013-01-01T23:59:59.000Z

16

A Benchmark Simulation for Moist Nonhydrostatic Numerical Models  

Science Conference Proceedings (OSTI)

A benchmark solution that facilitates testing the accuracy, efficiency, and efficacy of moist nonhydrostatic numerical model formulations and assumptions is presented. The solution is created from a special configuration of moist model processes ...

George H. Bryan; J. Michael Fritsch

2002-12-01T23:59:59.000Z

17

A Moist Benchmark Calculation for Atmospheric General Circulation Models  

Science Conference Proceedings (OSTI)

A benchmark calculation is designed to compare the climate and climate sensitivity of atmospheric general circulation models (AGCMs). The experimental setup basically follows that of the aquaplanet experiment (APE) proposed by Neale and Hoskins, ...

Myong-In Lee; Max J. Suarez; In-Sik Kang; Isaac M. Held; Daehyun Kim

2008-10-01T23:59:59.000Z

18

A Revised Version of Lettau's Evapoclimatonomy Model  

Science Conference Proceedings (OSTI)

In this paper a revised version of Lettau's evapoclimatonomy model is introduced. Climatonomy is a one-dimensional representation of mean climate, which includes a complete characterization of the surface energy and water balances. The model is ...

Sharon E. Nicholson; Andrew R. Lare; José A. Marengo; Pablo Santos

1996-04-01T23:59:59.000Z

19

The NCAR Climate System Model, Version One  

Science Conference Proceedings (OSTI)

The NCAR Climate System Model, version one, is described. The spinup procedure prior to a fully coupled integration is discussed. The fully coupled model has been run for 300 yr with no surface flux corrections in momentum, heat, or freshwater. ...

Byron A. Boville; Peter R. Gent

1998-06-01T23:59:59.000Z

20

Building America Research Benchmark Definition, Version 3.1, Updated July 14, 2004  

DOE Green Energy (OSTI)

To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, the U.S. Department of Energy (DOE) Residential Buildings Program and the National Renewable Energy Laboratory (NREL) developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. A series of user profiles, intended to represent the behavior of a ''standard'' set of occupants, was created for use in conjunction with the Benchmark.

Hendron, R.

2005-01-01T23:59:59.000Z
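Tracking progress toward the 40-70% whole-house savings goals amounts to comparing a candidate design's annual energy use against the Benchmark's. A minimal sketch of that arithmetic, with made-up end-use totals, follows.

```python
# Hypothetical annual site energy by end use (MMBtu/yr) for the Benchmark
# and for a prototype house; all values are illustrative only.
benchmark = {"space_heating": 45.0, "space_cooling": 12.0,
             "water_heating": 20.0, "lights_appliances_misc": 33.0}
prototype = {"space_heating": 22.0, "space_cooling": 7.0,
             "water_heating": 11.0, "lights_appliances_misc": 26.0}

bench_total = sum(benchmark.values())
proto_total = sum(prototype.values())
savings_fraction = 1.0 - proto_total / bench_total

print(f"Benchmark total: {bench_total:.0f} MMBtu/yr")
print(f"Prototype total: {proto_total:.0f} MMBtu/yr")
print(f"Whole-house savings vs. Benchmark: {savings_fraction:.0%}")
```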



21

The Community Climate System Model, Version 2  

Science Conference Proceedings (OSTI)

The Community Climate System Model, version 2 (CCSM2) is briefly described. A 1000-yr control simulation of the present day climate has been completed without flux adjustments. Minor modifications were made at year 350, which included all five ...

Jeffrey T. Kiehl; Peter R. Gent

2004-10-01T23:59:59.000Z

22

The Community Climate System Model Version 4  

Science Conference Proceedings (OSTI)

The fourth version of the Community Climate System Model (CCSM4) was recently completed and released to the climate community. This paper describes developments to all CCSM components, and documents fully coupled preindustrial control runs ...

Peter R. Gent; Gokhan Danabasoglu; Leo J. Donner; Marika M. Holland; Elizabeth C. Hunke; Steve R. Jayne; David M. Lawrence; Richard B. Neale; Philip J. Rasch; Mariana Vertenstein; Patrick H. Worley; Zong-Liang Yang; Minghua Zhang

2011-10-01T23:59:59.000Z

23

User's Manual for BEST-Dairy: Benchmarking and Energy/water-Saving Tool (BEST) for the Dairy Processing Industry (Version 1.2)  

Science Conference Proceedings (OSTI)

This User's Manual summarizes the background information of the Benchmarking and Energy/water-Saving Tool (BEST) for the Dairy Processing Industry (Version 1.2, 2011), including the 'Read Me' portion of the tool, the Introduction, and the Instructions for the BEST-Dairy tool, which is developed and distributed by Lawrence Berkeley National Laboratory (LBNL).

Xu, T.; Ke, J.; Sathaye, J.

2011-04-20T23:59:59.000Z

24

CPM-3 Validation: A Summary of Version 1.0 Benchmark and Assessment  

Science Conference Proceedings (OSTI)

The longer fuel cycles now used in nuclear power plants require more accurate core models to address higher burnable poison loadings and new lattice designs. This report is a summary of the validation and assessment of the CPM-3 code, a state-of-the-art lattice physics methodology designed to provide increased accuracy and flexibility.

1999-07-15T23:59:59.000Z

25

Model-Based Engineering and Manufacturing CAD/CAM Benchmark  

Science Conference Proceedings (OSTI)

The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire, which was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were considering, either to transport information more easily throughout the corporation or as a conduit for business, as the small firm was doing successfully. Because Pro/Engineer is the de facto CAD standard for the NWC, the benchmark team targeted companies using Parametric Technology Corporation (PTC) software tools. Most of the companies used Pro/Engineer for design to some degree, but found the PTC CAM product, Pro/Manufacture, lacking compared to alternate CAM solutions. All of the companies visited found the data exchange between CAD/CAM systems problematic. It was apparent that these companies were trying to consolidate their software tools to reduce translation but had not been able to do so because no single solution had all the needed capabilities. In regard to organizational structure and human resources, two companies were found to be using product or program teams. These teams consisted of the technical staff capable of completing the entire task and were maintained throughout the project. This same strategy was evident at another of the companies but with more mobility of members. For all companies visited except the small firm, work structure breakdown and responsibility were essentially the same as Y-12's at this time. The functions of numerical control (NC), design, and process planning were separate and distinct. The team made numerous recommendations that are detailed in the report.

Domm, T.D.; Underwood, R.S.

1999-04-26T23:59:59.000Z

26

The Community Climate System Model Version 4  

SciTech Connect

The fourth version of the Community Climate System Model (CCSM4) was recently completed and released to the climate community. This paper describes developments to all the CCSM components, and documents fully coupled pre-industrial control runs compared to the previous version, CCSM3. Using the standard atmosphere and land resolution of 1° results in the sea surface temperature biases in the major upwelling regions being comparable to the 1.4° resolution CCSM3. Two changes to the deep convection scheme in the atmosphere component result in the CCSM4 producing El Niño/Southern Oscillation variability with a much more realistic frequency distribution than the CCSM3, although the amplitude is too large compared to observations. They also improve the representation of the Madden-Julian Oscillation, and the frequency distribution of tropical precipitation. A new overflow parameterization in the ocean component leads to an improved simulation of the deep ocean density structure, especially in the North Atlantic. Changes to the CCSM4 land component lead to a much improved annual cycle of water storage, especially in the tropics. The CCSM4 sea ice component uses much more realistic albedos than the CCSM3, and the Arctic sea ice concentration is improved in the CCSM4. An ensemble of 20th century simulation runs produces an excellent match to the observed September Arctic sea ice extent from 1979 to 2005. The CCSM4 ensemble mean increase in globally averaged surface temperature between 1850 and 2005 is larger than the observed increase by about 0.4°C. This is consistent with the fact that the CCSM4 does not include a representation of the indirect effects of aerosols, although other factors may come into play. The CCSM4 still has significant biases, such as the mean precipitation distribution in the tropical Pacific Ocean, too much low cloud in the Arctic, and the latitudinal distributions of short-wave and long-wave cloud forcings.

Gent, Peter R.; Danabasoglu, Gokhan; Donner, Leo J.; Holland, Marika M.; Hunke, Elizabeth C.; Jayne, Steve R.; Lawrence, David M.; Neale, Richard; Rasch, Philip J.; Vertenstein, Mariana; Worley, Patrick; Yang, Zong-Liang; Zhang, Minghua

2011-10-01T23:59:59.000Z

27

Revised Benchmark Problem for modeling of metal flow and metal ...  

Science Conference Proceedings (OSTI)

The literature is scarce when it comes to benchmark problems for MHD flow in a cell and those cases which are available often suffer from insufficient level of ...

28

Fuel Cell Systems Sensors Air Management Benchmarking Modeling  

NLE Websites -- All DOE Office Websites (Extended Search)

Fuel Cell Systems / Fuel Processor / Sensors / Air Management / Benchmarking / Modeling (Patrick Davis)

Targets and Status: 50 kWe (net) Integrated Fuel Cell Power System

Operating on direct hydrogen:
Characteristic | Units | 2003 status | 2005 | 2010
Durability | hours | 1000 | 2000 | 5000
Cost (including H2 storage) | $/kW | 275 | 125 | 45
Power density (w/o H2 storage) | W/L | 400 | 500 | 650

Operating on Tier 2 gasoline containing 30 ppm sulfur, average:
Characteristic | Units | 2003 status | 2005 | 2010
Durability | hours | 1000 | 2000 | 5000
Cost | $/kW | 325 | 125 | 45
Power density | W/L | 140 | 250 | 325

Projects (performers include ANL, NREL, TIAX, and Directed Technologies, Inc.): Fuel Cell Systems Analysis; Fuel Cell Vehicle Systems Analysis; Cost Analyses of Fuel Cell Stacks/Systems; DFMA Cost Estimates of Fuel Cell/Reformer Systems at Low, Medium, & High Production Rates; Assessment of Fuel Cell Auxiliary

29

Development of whole-building energy performance models as benchmarks for retrofit projects  

Science Conference Proceedings (OSTI)

This paper presents a systematic development process of whole-building energy models as performance benchmarks for retrofit projects. Statistical regression-based models and computational performance models are being used for retrofit projects in industry ...

Omer Tugrul Karaguzel; Khee Poh Lam

2011-12-01T23:59:59.000Z

30

Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC  

E-Print Network (OSTI)

We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

S. S. AbdusSalam; B. C. Allanach; H. K. Dreiner; J. Ellis; U. Ellwanger; J. Gunion; S. Heinemeyer; M. Kraemer; M. L. Mangano; K. A. Olive; S. Rogerson; L. Roszkowski; M. Schlaffer; G. Weiglein

2011-09-18T23:59:59.000Z
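The paper proposes benchmark lines within model parameter planes and algorithms for placing points along them. Purely as an illustration of that bookkeeping (the plane, endpoints, and spacing below are hypothetical, not the paper's benchmarks), evenly spaced points along a line in a two-parameter plane can be generated as follows.

```python
import numpy as np

def points_along_line(start, end, n_points):
    """Evenly spaced benchmark points between two locations in a parameter plane."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    return [tuple(start + t * (end - start)) for t in np.linspace(0.0, 1.0, n_points)]

# Hypothetical line in an (m0, m1/2) plane, in GeV; other parameters such as
# tan(beta) and sign(mu) would be held fixed along the line.
line = points_along_line(start=(300.0, 400.0), end=(1500.0, 1000.0), n_points=5)
for m0, m12 in line:
    print(f"benchmark point: m0 = {m0:6.0f} GeV, m1/2 = {m12:6.0f} GeV")
```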

31

The Community Climate System Model Version 3 (CCSM3)  

Science Conference Proceedings (OSTI)

The Community Climate System Model version 3 (CCSM3) has recently been developed and released to the climate community. CCSM3 is a coupled climate model with components representing the atmosphere, ocean, sea ice, and land surface connected by a ...

William D. Collins; Cecilia M. Bitz; Maurice L. Blackmon; Gordon B. Bonan; Christopher S. Bretherton; James A. Carton; Ping Chang; Scott C. Doney; James J. Hack; Thomas B. Henderson; Jeffrey T. Kiehl; William G. Large; Daniel S. McKenna; Benjamin D. Santer; Richard D. Smith

2006-06-01T23:59:59.000Z

32

A Global Version of the PSU–NCAR Mesoscale Model  

Science Conference Proceedings (OSTI)

A global version of the fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (PSU–NCAR MM5) is described. The new model employs two polar stereographic projection domains centered on each pole. ...

Jimy Dudhia; James F. Bresch

2002-12-01T23:59:59.000Z

33

Climate Sensitivity of the Community Climate System Model, Version 4  

Science Conference Proceedings (OSTI)

Equilibrium climate sensitivity of the Community Climate System Model, version 4 (CCSM4) is 3.20°C for 1° horizontal resolution in each component. This is about a half degree Celsius higher than in the previous version (CCSM3). The transient ...

C. M. Bitz; K. M. Shell; P. R. Gent; D. A. Bailey; G. Danabasoglu; K. C. Armour; M. M. Holland; J. T. Kiehl

2012-05-01T23:59:59.000Z

34

Job benchmarks  

NLE Websites -- All DOE Office Websites (Extended Search)

Job Benchmarks; PDSF Benchmarks. Select benchmark to view output of: ATLAS Fragmentation; Alice EMC Simulation; Dayabay Analysis; STAR pp500 reconstruction; STAR AuAu200...

35

GCFM Users Guide Revision for Model Version 5.0  

DOE Green Energy (OSTI)

This paper documents alterations made to the MITRE/DOE Geothermal Cash Flow Model (GCFM) in the period of September 1980 through September 1981. Version 4.0 of GCFM was installed on the computer at the DOE San Francisco Operations Office in August 1980. This Version has also been distributed to about a dozen geothermal industry firms, for examination and potential use. During late 1980 and 1981, a few errors detected in the Version 4.0 code were corrected, resulting in Version 4.1. If you are currently using GCFM Version 4.0, it is suggested that you make the changes to your code that are described in Section 2.0. User's manual changes listed in Section 3.0 and Section 4.0 should then also be made.

Keimig, Mark A.; Blake, Coleman

1981-08-10T23:59:59.000Z

36

Solar Advisor Model User Guide for Version 2.0  

Science Conference Proceedings (OSTI)

The Solar Advisor Model (SAM) provides a consistent framework for analyzing and comparing power system costs and performance across the range of solar technologies and markets, from photovoltaic systems for residential and commercial markets to concentrating solar power and large photovoltaic systems for utility markets. This manual describes Version 2.0 of the software, which can model photovoltaic and concentrating solar power technologies for electric applications for several markets. The current version of the Solar Advisor Model does not model solar heating and lighting technologies.

Gilman, P.; Blair, N.; Mehos, M.; Christensen, C.; Janzou, S.; Cameron, C.

2008-08-01T23:59:59.000Z

37

Studying performance of DEVS modeling and simulation environments using the DEVStone benchmark  

Science Conference Proceedings (OSTI)

The Discrete Event System Specification (DEVS) formal modeling and simulation (M&S) framework (which supports hierarchical and modular model composition) has been widely used to understand, analyze and develop a variety of systems. Numerous DEVS simulators ... Keywords: DEVS, modeling and simulation tools, simulator performance evaluation, synthetic benchmarks

Gabriel Wainer; Ezequiel Glinsky; Marcelo Gutierrez-Alcaraz

2011-07-01T23:59:59.000Z

38

User's Manual for BEST-Dairy: Benchmarking and Energy/water-Saving Tool (BEST) for the Dairy Processing Industry (Version 1.2)  

E-Print Network (OSTI)

plants. Publication of this manual benefits from Barbara ... LBNL Report User's Manual for BEST-Dairy: Benchmarking and ... Sathaye (2011). User's Manual for BEST-Dairy: Benchmarking

Xu, T.

2011-01-01T23:59:59.000Z

39

Smart Grid Interoperability Maturity Model Beta Version  

SciTech Connect

The GridWise Architecture Council was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the electric power system. This balanced team of industry representatives proposes principles for the development of interoperability concepts and standards. The Council provides industry guidance and tools that make it an available resource for smart grid implementations. In the spirit of advancing interoperability of an ecosystem of smart grid devices and systems, this document presents a model for evaluating the maturity of the artifacts and processes that specify the agreement of parties to collaborate across an information exchange interface. You are expected to have a solid understanding of large, complex system integration concepts and experience in dealing with software component interoperation. Those without this technical background should read the Executive Summary for a description of the purpose and contents of the document. Other documents, such as checklists, guides, and whitepapers, exist for targeted purposes and audiences. Please see the www.gridwiseac.org website for more products of the Council that may be of interest to you.

Widergren, Steven E.; Drummond, R.; Giroti, Tony; Houseman, Doug; Knight, Mark; Levinson, Alex; Longcore, Wayne; Lowe, Randy; Mater, J.; Oliver, Terry V.; Slack, Phil; Tolk, Andreas; Montgomery, Austin

2011-12-02T23:59:59.000Z

40

Development of a California commercial building benchmarking database  

SciTech Connect

Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database.

Kinney, Satkartar; Piette, Mary Ann

2002-05-17T23:59:59.000Z
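Benchmarking against a regional survey essentially locates a building within a peer distribution. The sketch below ranks a building's energy use intensity against a sample of peer EUIs; the peer values are invented, and Cal-Arch's actual binning and filtering are not reproduced.

```python
import numpy as np

def eui_percentile(building_eui: float, peer_euis: np.ndarray) -> float:
    """Percent of peer buildings whose EUI is below the given building's EUI."""
    peer_euis = np.asarray(peer_euis, float)
    return 100.0 * np.mean(peer_euis < building_eui)

# Hypothetical peer group drawn from a survey such as CEUS (values invented).
peers = np.array([38.0, 45.0, 52.0, 61.0, 68.0, 75.0, 83.0, 95.0, 110.0, 130.0])
print(f"This building sits at the {eui_percentile(72.0, peers):.0f}th percentile "
      "of peer EUI (kBtu/sqft-yr); higher means more energy-intensive.")
```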



41

Energy Benchmarking Database  

E-Print Network (OSTI)

Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional

Satkartar Kinney; Mary Ann Piette

2002-01-01T23:59:59.000Z

42

Snow Model Verification Using Ensemble Prediction and Operational Benchmarks  

Science Conference Proceedings (OSTI)

Hydrologic model evaluations have traditionally focused on measuring how closely the model can simulate various characteristics of historical observations. Although advancing hydrologic forecasting is an often-stated goal of numerous modeling ...

Kristie J. Franz; Terri S. Hogue; Soroosh Sorooshian

2008-12-01T23:59:59.000Z

43

Development of a HEX-Z Partially Homogenized Benchmark Model for the FFTF Isothermal Physics Measurements  

SciTech Connect

A series of isothermal physics measurements were performed as part of an acceptance testing program for the Fast Flux Test Facility (FFTF). A HEX-Z partially-homogenized benchmark model of the FFTF fully-loaded core configuration was developed for evaluation of these measurements. Evaluated measurements include the critical eigenvalue of the fully-loaded core, two neutron spectra, 32 reactivity effects measurements, an isothermal temperature coefficient, and low-energy gamma and electron spectra. Dominant uncertainties in the critical configuration include the placement of radial shielding around the core, reactor core assembly pitch, composition of the stainless steel components, plutonium content in the fuel pellets, and boron content in the absorber pellets. Calculations of criticality, reactivity effects measurements, and the isothermal temperature coefficient using MCNP5 and ENDF/B-VII.0 cross sections with the benchmark model are in good agreement with the benchmark experiment measurements. There is only some correlation between calculated and measured spectral measurements; homogenization of many of the core components may have impacted computational assessment of these measurements. This benchmark evaluation has been added to the IRPhEP Handbook.

John D. Bess

2012-05-01T23:59:59.000Z

44

Towards a benchmark for model checkers of asynchronous concurrent systems  

E-Print Network (OSTI)

Model checking is an established field, with a wealth of publications and plenty of tools. But there are few empirical results, and a lack of analytical techniques for evaluating tools...

Diyaa-addein Atiya; Néstor Cataño; Geral Lüttgen

2005-01-01T23:59:59.000Z

45

Solid Waste Projection Model: Database (Version 1. 3)  

SciTech Connect

The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement.

Blackburn, C.L.

1991-11-01T23:59:59.000Z

46

H2A Production Model, Version 2 User Guide  

DOE Green Energy (OSTI)

The H2A Production Model analyzes the technical and economic aspects of central and forecourt hydrogen production technologies. Using a standard discounted cash flow rate of return methodology, it determines the minimum hydrogen selling price, including a specified after-tax internal rate of return from the production technology. Users have the option of accepting default technology input values--such as capital costs, operating costs, and capacity factor--from established H2A production technology cases or entering custom values. Users can also modify the model's financial inputs. This new version of the H2A Production Model features enhanced usability and functionality. Input fields are consolidated and simplified. New capabilities include performing sensitivity analyses and scaling analyses to various plant sizes. This User Guide helps users already familiar with the basic tenets of H2A hydrogen production cost analysis get started using the new version of the model. It introduces the basic elements of the model then describes the function and use of each of its worksheets.

Steward, D.; Ramsden, T.; Zuboy, J.

2008-09-01T23:59:59.000Z
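The model's core calculation finds the hydrogen price at which the discounted cash flow returns exactly the specified after-tax IRR. Stripped of the taxes, depreciation, and financing detail that the real H2A model handles, this reduces to a levelized-cost calculation like the sketch below; all inputs are placeholders.

```python
def minimum_selling_price(capital_cost, annual_om_cost, annual_kg_h2,
                          discount_rate, lifetime_years):
    """Hydrogen price ($/kg) at which the project NPV is zero.

    Simplified levelized-cost version of the discounted-cash-flow idea behind
    H2A; the real model also treats taxes, depreciation, and inflation.
    """
    pv_costs = capital_cost
    pv_kg = 0.0
    for year in range(1, lifetime_years + 1):
        df = 1.0 / (1.0 + discount_rate) ** year
        pv_costs += annual_om_cost * df
        pv_kg += annual_kg_h2 * df
    return pv_costs / pv_kg

# Placeholder inputs: $120M plant, $8M/yr O&M, 18 million kg/yr H2, 10% IRR, 20 years.
price = minimum_selling_price(120e6, 8e6, 18e6, 0.10, 20)
print(f"Minimum hydrogen selling price: ${price:.2f}/kg")
```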

47

Benchmark hydrogeophysical data from a physical seismic model  

Science Conference Proceedings (OSTI)

Theoretical fluid flow models are used regularly to predict and analyze porous media flow but require verification against natural systems. Seismic monitoring in a controlled laboratory setting at a nominal scale of 1:1000 in the acoustic frequency range ... Keywords: Gassmann, Hertz-Mindlin, Saturation, Sensors, Soil

Juan M. Lorenzo; David E. Smolkin; Christopher White; Shannon R. Chollett; Ting Sun

2013-01-01T23:59:59.000Z
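The keywords point to Gassmann and Hertz-Mindlin rock-physics relations. For reference, a straightforward implementation of standard Gassmann fluid substitution is sketched below; the paper's own workflow may differ, and the example rock properties are hypothetical.

```python
def gassmann_saturated_bulk_modulus(k_dry, k_mineral, k_fluid, porosity):
    """Saturated-rock bulk modulus from Gassmann's equation (all moduli in Pa).

    K_sat = K_dry + (1 - K_dry/K_min)^2 /
            (phi/K_fl + (1 - phi)/K_min - K_dry/K_min^2)
    Assumes the usual Gassmann conditions (isotropic rock, connected pores,
    low frequency); the shear modulus is unchanged by the pore fluid.
    """
    num = (1.0 - k_dry / k_mineral) ** 2
    den = (porosity / k_fluid
           + (1.0 - porosity) / k_mineral
           - k_dry / k_mineral ** 2)
    return k_dry + num / den

# Hypothetical unconsolidated sand: dry frame 3 GPa, quartz 36 GPa, water 2.2 GPa.
k_sat = gassmann_saturated_bulk_modulus(3e9, 36e9, 2.2e9, porosity=0.35)
print(f"K_sat = {k_sat / 1e9:.2f} GPa")
```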

48

Fukushima Radiological Assessment Tool: Benchmarking Radiological Assessment and Dose Models using Fukushima Dataset  

Science Conference Proceedings (OSTI)

The Electric Power Research Institute (EPRI) is developing the Fukushima Radiological Assessment Tool (FRAT), a comprehensive database and software application for accessing, analyzing, and interpreting data related to radiological releases from the Fukushima Daiichi Nuclear Power Plant (NPP). This report documents the development of the FRAT to support the benchmarking of emergency response and dose modeling codes used by nuclear power plants, using radiological data from the Fukushima ...

2013-07-31T23:59:59.000Z

49

Lattice Wess-Zumino model with Ginsparg-Wilson fermions: One-loop results and GPU benchmarks  

E-Print Network (OSTI)

We numerically evaluate the one-loop counterterms for the four-dimensional Wess-Zumino model formulated on the lattice using Ginsparg-Wilson fermions of the overlap (Neuberger) variety, together with an auxiliary fermion (plus superpartners), such that a lattice version of $U(1)_R$ symmetry is exactly preserved in the limit of vanishing bare mass. We confirm previous findings by other authors that at one loop there is no renormalization of the superpotential in the lattice theory, but that there is a mismatch in the wavefunction renormalization of the auxiliary field. We study the range of the Dirac operator that results when the auxiliary fermion is integrated out, and show that localization does occur, but that it is less pronounced than the exponential localization of the overlap operator. We also present preliminary simulation results for this model, and outline a strategy for nonperturbative improvement of the lattice supercurrent through measurements of supersymmetry Ward identities. Related to this, some benchmarks for our graphics processing unit code are provided. Our simulation results find a nearly vanishing vacuum expectation value for the auxiliary field, consistent with approximate supersymmetry at weak coupling.

Chen Chen; Eric Dzienkowski; Joel Giedt

2010-05-18T23:59:59.000Z

50

Lattice Wess-Zumino model with Ginsparg-Wilson fermions: One-loop results and GPU benchmarks  

Science Conference Proceedings (OSTI)

We numerically evaluate the one-loop counterterms for the four-dimensional Wess-Zumino model formulated on the lattice using Ginsparg-Wilson fermions of the overlap (Neuberger) variety, together with an auxiliary fermion (plus superpartners), such that a lattice version of U(1)_R symmetry is exactly preserved in the limit of vanishing bare mass. We confirm previous findings by other authors that at one loop there is no renormalization of the superpotential in the lattice theory, but that there is a mismatch in the wave-function renormalization of the auxiliary field. We study the range of the Dirac operator that results when the auxiliary fermion is integrated out, and show that localization does occur, but that it is less pronounced than the exponential localization of the overlap operator. We also present preliminary simulation results for this model, and outline a strategy for nonperturbative improvement of the lattice supercurrent through measurements of supersymmetry Ward identities. Related to this, some benchmarks for our graphics processing unit code are provided. Our simulation results find a nearly vanishing vacuum expectation value for the auxiliary field, consistent with approximate supersymmetry at weak coupling.

Chen Chen; Dzienkowski, Eric; Giedt, Joel [Department of Physics, Applied Physics and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy New York 12065 (United States)

2010-10-15T23:59:59.000Z

51

Solid-State Lighting: Text-Alternative Version: Model Specification for LED Roadway Luminaires Webcast  

NLE Websites -- All DOE Office Websites (Extended Search)


52

UNSAT-H Version 3.0: Unsaturated Soil Water and Heat Flow Model Theory, User Manual, and Examples  

Science Conference Proceedings (OSTI)

The UNSAT-H model was developed at Pacific Northwest National Laboratory (PNNL) to assess the water dynamics of arid sites and, in particular, estimate recharge fluxes for scenarios pertinent to waste disposal facilities. During the last 4 years, the UNSAT-H model received support from the Immobilized Waste Program (IWP) of the Hanford Site's River Protection Project. This program is designing and assessing the performance of on-site disposal facilities to receive radioactive wastes that are currently stored in single- and double-shell tanks at the Hanford Site (LMHC 1999). The IWP is interested in estimates of recharge rates for current conditions and long-term scenarios involving the vadose zone disposal of tank wastes. Simulation modeling with UNSAT-H is one of the methods being used to provide those estimates (e.g., Rockhold et al. 1995; Fayer et al. 1999). To achieve the above goals for assessing water dynamics and estimating recharge rates, the UNSAT-H model addresses soil water infiltration, redistribution, evaporation, plant transpiration, deep drainage, and soil heat flow as one-dimensional processes. The UNSAT-H model simulates liquid water flow using Richards' equation (Richards 1931), water vapor diffusion using Fick's law, and sensible heat flow using the Fourier equation. This report documents UNSAT-H Version 3.0. The report includes the bases for the conceptual model and its numerical implementation, benchmark test cases, example simulations involving layered soils and plants, and the code manual. Version 3.0 is an enhanced-capability update of UNSAT-H Version 2.0 (Fayer and Jones 1990). New features include hysteresis, an iterative solution of head and temperature, an energy balance check, the modified Picard solution technique, additional hydraulic functions, multiple-year simulation capability, and general enhancements.

MJ Fayer

2000-06-12T23:59:59.000Z
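For reference, the pressure-head form of Richards' equation that this class of model solves for one-dimensional liquid flow can be written as follows. This is a standard textbook form; UNSAT-H's exact formulation, including the vapor diffusion and heat flow coupling noted above, is given in the report.

```latex
C(h)\,\frac{\partial h}{\partial t}
  = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right] - S(z,t)
```

Here h is the matric pressure head, C(h) = dθ/dh the specific moisture capacity, K(h) the unsaturated hydraulic conductivity, z the vertical coordinate (positive upward), and S a sink term representing plant water uptake.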

53

A Forward Looking Version of the MIT Emissions Prediction and Policy Analysis (EPPA) Model  

E-Print Network (OSTI)

This paper documents a forward looking multi-regional general equilibrium model developed from the latest version of the recursive-dynamic MIT Emissions Prediction and Policy Analysis (EPPA) model. The model represents ...

Babiker, Mustafa M.H.

54

Fast 2D non-LTE radiative modelling of prominences I. Numerical methods and benchmark results  

E-Print Network (OSTI)

New high-resolution spectropolarimetric observations of solar prominences require improved radiative modelling capabilities in order to take into account both multi-dimensional - at least 2D - geometry and complex atomic models. This makes necessary the use of very fast numerical schemes for the resolution of 2D non-LTE radiative transfer problems considering freestanding and illuminated slabs. The implementation of Gauss-Seidel and successive over-relaxation iterative schemes in 2D, together with a multi-grid algorithm, is thoroughly described in the frame of the short characteristics method for the computation of the formal solution of the radiative transfer equation in cartesian geometry. We propose a new test for multidimensional radiative transfer codes and we also provide original benchmark results for simple 2D multilevel atom cases which should be helpful for the further development of such radiative transfer codes, in general.

L. Leger; L. Chevallier; F. Paletou

2007-03-27T23:59:59.000Z

55

Technical Basis and Benchmarking of the Crud Deposition Risk Assessment Model (CORAL)  

Science Conference Proceedings (OSTI)

Deposition of boiling water reactor (BWR) system corrosion products (crud) on operating fuel rods has resulted in performance-limiting conditions in a number of plants. To facilitate improved management of any crud-related fuel performance risk, EPRI has developed the Crud DepOsition Risk Assessment ModeL (CORAL). CORAL incorporates a modified version of the Versatile Internals and Component Program for Reactors ...

2012-08-27T23:59:59.000Z

56

Verification and validation benchmarks.  

SciTech Connect

Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of achievement in V&V activities, how closely related the V&V benchmarks are to the actual application of interest, and the quantification of uncertainties related to the application of interest.

Oberkampf, William Louis; Trucano, Timothy Guy

2007-02-01T23:59:59.000Z
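Code verification by manufactured solutions, mentioned above, starts from an arbitrarily chosen analytic field and derives the source term that forces the governing equation to reproduce it. A minimal symbolic sketch for a 1D unsteady diffusion equation (an illustrative equation, not one from the paper) follows.

```python
import sympy as sp

x, t, nu = sp.symbols("x t nu", positive=True)

# Manufactured solution: pick any smooth field (the choice is arbitrary).
u_m = sp.exp(-t) * sp.sin(sp.pi * x)

# Governing equation: u_t - nu * u_xx = f. Substituting u_m yields the source
# term f the code must be run with; the code's output is then compared to u_m
# to measure the observed order of accuracy.
f = sp.diff(u_m, t) - nu * sp.diff(u_m, x, 2)
print("manufactured source term f(x, t) =", sp.simplify(f))
```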

57

2D and 3D Numerical Modeling of Solidification Benchmark of Sn-3 ...  

Science Conference Proceedings (OSTI)

The benchmark experiment consists of solidifying a rectangular ingot of Sn-3 wt.% Pb alloy, using two lateral heat exchangers which allow extracting heat ...

58

Physical Model Development and Benchmarking for MHD Flows in Blanket Design  

SciTech Connect

An advanced simulation environment to model incompressible MHD flows relevant to blanket conditions in fusion reactors has been developed at HyPerComp in research collaboration with TEXCEL. The goals of this Phase-II project are two-fold: the first is the incorporation of crucial physical phenomena such as induced magnetic field modeling, and the extension of the capabilities beyond fluid flow prediction to model heat transfer with natural convection and mass transfer, including tritium transport and permeation. The second is the design of a sequence of benchmark tests to establish code competence for several classes of physical phenomena in isolation as well as in select combinations (termed here "canonical"). No previous attempts to develop such a comprehensive MHD modeling capability exist in the literature, and this study represents essentially uncharted territory. During the course of this Phase-II project, a significant breakthrough was achieved in modeling liquid metal flows at high Hartmann numbers. We developed a unique mathematical technique to accurately compute the fluid flow in complex geometries at extremely high Hartmann numbers (10,000 and greater), thus extending the state of the art of liquid metal MHD modeling relevant to fusion reactors at the present time. These developments have been published in noted international journals. A sequence of theoretical and experimental results was used to verify and validate the results obtained. The code was applied to a complete DCLL module simulation study with promising results.

Ramakanth Munipalli; P.-Y. Huang; C. Chandler; C. Rowell; M.-J. Ni; N. Morley; S. Smolentsev; M. Abdou

2008-06-05T23:59:59.000Z
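
For orientation on the "high Hartmann number" regime mentioned in the record above, here is a small sketch that evaluates the Hartmann number Ha = B·L·sqrt(σ/μ); the field strength, length scale, and lead-lithium-like fluid properties are assumed for illustration and are not taken from the report.

```python
import math

def hartmann_number(B_tesla: float, length_m: float, sigma_S_per_m: float, mu_Pa_s: float) -> float:
    """Ha = B * L * sqrt(sigma / mu): ratio of electromagnetic to viscous forces."""
    return B_tesla * length_m * math.sqrt(sigma_S_per_m / mu_Pa_s)

# Illustrative PbLi-like properties and field/length scales (assumed, not from the report)
ha = hartmann_number(B_tesla=4.0, length_m=0.1, sigma_S_per_m=7.7e5, mu_Pa_s=1.9e-3)
print(f"Ha ~ {ha:.0f}")   # order 10^4, the regime discussed above
```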

59

Benchmarks used  

NLE Websites -- All DOE Office Websites (Extended Search)

described below, different optimization options for the different compilers on Edison. The compilers are also compared against one another on the benchmarks. NERSC6...

60

The Climate Sensitivity of the Community Climate System Model Version 3 (CCSM3)  

Science Conference Proceedings (OSTI)

The climate sensitivity of the Community Climate System Model (CCSM) is described in terms of the equilibrium change in surface temperature due to a doubling of carbon dioxide in a slab ocean version of the Community Atmosphere Model (CAM) and ...

Jeffrey T. Kiehl; Christine A. Shields; James J. Hack; William D. Collins

2006-06-01T23:59:59.000Z
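
The climate-sensitivity record above centers on the equilibrium warming from a CO2 doubling; a minimal sketch of the back-of-envelope relation ΔT = λ·F, with a standard logarithmic forcing fit, is given below. The sensitivity parameter value is illustrative and is not the CCSM3 result.

```python
import math

def co2_forcing(concentration_ratio: float) -> float:
    """Common logarithmic fit for CO2 radiative forcing, in W m^-2."""
    return 5.35 * math.log(concentration_ratio)

def equilibrium_warming(forcing_w_m2: float, sensitivity_k_per_w_m2: float) -> float:
    """Equilibrium surface warming = climate sensitivity parameter * forcing."""
    return sensitivity_k_per_w_m2 * forcing_w_m2

f2x = co2_forcing(2.0)                                   # ~3.7 W m^-2 for a doubling
print(f"2xCO2 forcing ~ {f2x:.2f} W m^-2")
print(f"warming for lambda = 0.7 K/(W m^-2): {equilibrium_warming(f2x, 0.7):.1f} K")  # illustrative lambda
```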

61

Benchmark Monitoring: Retired Systems  

NLE Websites -- All DOE Office Websites (Extended Search)

Completed Batch Jobs Completed Parallel Jobs Usage Reports Hopper Benchmark Monitoring Edison Benchmark Monitoring Carver Benchmark Monitoring Benchmark Monitoring: Retired Systems...

62

Coupling of Integrated Biosphere Simulator to Regional Climate Model Version 3  

Science Conference Proceedings (OSTI)

A description of the coupling of Integrated Biosphere Simulator (IBIS) to Regional Climate Model version 3 (RegCM3) is presented. IBIS introduces several key advantages to RegCM3, most notably vegetation dynamics, the coexistence of multiple ...

Jonathan M. Winter; Jeremy S. Pal; Elfatih A. B. Eltahir

2009-05-01T23:59:59.000Z

63

Coupling of Integrated Biosphere Simulator to Regional Climate Model Version 3  

E-Print Network (OSTI)

A description of the coupling of Integrated Biosphere Simulator (IBIS) to Regional Climate Model version 3 (RegCM3) is presented. IBIS introduces several key advantages to RegCM3, most notably vegetation dynamics, the ...

Winter, Jonathan (Jonathan Mark)

64

Description of the Earth system model of intermediate complexity LOVECLIM version 1.2  

E-Print Network (OSTI)

The main characteristics of the new version 1.2 of the three-dimensional Earth system model of intermediate complexity LOVECLIM are briefly described. LOVECLIM 1.2 includes representations of the atmosphere, the ocean and ...

Goosse, H.

65

Variational Data Assimilation with an Adiabatic Version of the NMC Spectral Model  

Science Conference Proceedings (OSTI)

Variational four-dimensional (4D) data assimilation is performed using an adiabatic version of the National Meteorological Center (NMC) baroclinic spectral primitive equation model with operationally analyzed fields as well as simulated datasets. ...

I. M. Navon; X. Zou; J. Derber; J. Sela

1992-07-01T23:59:59.000Z

66

ENSO and Pacific Decadal Variability in the Community Climate System Model Version 4  

Science Conference Proceedings (OSTI)

This study presents an overview of the El Niño–Southern Oscillation (ENSO) phenomenon and Pacific decadal variability (PDV) simulated in a multicentury preindustrial control integration of the NCAR Community Climate System Model version 4 (CCSM4) ...

Clara Deser; Adam S. Phillips; Robert A. Tomas; Yuko M. Okumura; Michael A. Alexander; Antonietta Capotondi; James D. Scott; Young-Oh Kwon; Masamichi Ohba

2012-04-01T23:59:59.000Z

67

Evaluating Benchmark . . .  

E-Print Network (OSTI)

To reduce the simulation time to a tractable amount or due to compilation (or other related) problems, computer architects often simulate only a subset of the benchmarks in a benchmark suite. However, if the architect chooses a subset of benchmarks that is not representative, the subsequent simulation results will, at best, be misleading or, at worst, yield incorrect conclusions. To address this problem, computer architects have recently proposed several statistically-based approaches to subset a benchmark suite. While some of these approaches are well-grounded statistically, what has not yet been thoroughly evaluated is: 1) the absolute accuracy, 2) the relative accuracy across a range of processor and memory subsystem enhancements, and 3) the representativeness and coverage of each approach for a range of subset sizes. Specifically, this paper evaluates statistically-based subsetting approaches based on principal components analysis (PCA) and the Plackett and Burman (P&B) design, in addition to prevailing approaches such as integer vs. floating-point, core vs. memory-bound, by language, and at random. Our results show that the two statistically-based approaches, PCA and P&B, have the best absolute and relative accuracy for CPI and energy-delay product (EDP), produce subsets that are the most representative, and choose benchmark and input set pairs that are most well-distributed across the benchmark space. To achieve a 5% absolute CPI and EDP error across a wide range of configurations, PCA and P&B typically need about 17 benchmark and input set pairs, while the other five approaches often choose more than 30 benchmark and input set pairs.

Joshua J. Yi; Resit Sendag; Lieven Eeckhout; Ajay Joshi; David J. Lilja; Lizy K. John

2006-01-01T23:59:59.000Z
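
A hedged sketch of the PCA-style subsetting idea evaluated in the record above: standardize per-benchmark characteristics, project them with PCA, cluster, and retain the benchmark nearest each cluster centroid. The feature matrix is random placeholder data and scikit-learn is assumed to be available; this is not the authors' exact procedure.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
names = [f"bench_{i}" for i in range(20)]
features = rng.normal(size=(20, 8))   # placeholder per-benchmark characteristics (e.g. IPC, miss rates)

# Standardize, reduce with PCA, then cluster in the reduced space
X = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(features))
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

subset = []
for k in range(km.n_clusters):
    members = np.flatnonzero(km.labels_ == k)
    dist = np.linalg.norm(X[members] - km.cluster_centers_[k], axis=1)
    subset.append(names[members[np.argmin(dist)]])   # keep the benchmark nearest each centroid
print("representative subset:", subset)
```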

68

Result Summary for the Area 5 Radioactive Waste Management Site Performance Assessment Model Version 4.113  

Science Conference Proceedings (OSTI)

Preliminary results for Version 4.113 of the Nevada National Security Site Area 5 Radioactive Waste Management Site performance assessment model are summarized. Version 4.113 includes the Fiscal Year 2011 inventory estimate.

Shott, G. J.

2012-04-15T23:59:59.000Z

69

Regression benchmarking with simple middleware benchmarks  

E-Print Network (OSTI)

The paper introduces the concept of regression benchmarking as a variant of regression testing focused at detecting performance regressions. Applying the regression benchmarking in the area of middleware development, the paper explains how regression benchmarking differs from middleware benchmarking in general. On a real-world example of TAO, the paper shows why the existing benchmarks do not give results sufficient for regression benchmarking, and proposes techniques for detecting performance regressions using simple benchmarks. 1.

Lubomír Bulej; Tomáš Kalibera; Petr Tůma

2004-01-01T23:59:59.000Z
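
A generic sketch of the regression-benchmarking idea discussed above: compare per-run timings from a baseline and a candidate build, and flag a regression only when the slowdown exceeds both a relative threshold and a simple noise estimate. The threshold, noise model, and timing values are illustrative, not those used for TAO.

```python
from math import sqrt
from statistics import mean, stdev

def regressed(baseline_ms, candidate_ms, threshold=0.05):
    """Flag a regression if the candidate is more than `threshold` (fractionally)
    slower than the baseline and the gap exceeds a simple noise estimate."""
    mb, mc = mean(baseline_ms), mean(candidate_ms)
    noise = 2.0 * sqrt(stdev(baseline_ms) ** 2 / len(baseline_ms) +
                       stdev(candidate_ms) ** 2 / len(candidate_ms))
    return (mc - mb) > max(threshold * mb, noise)

baseline = [10.1, 10.3, 9.9, 10.2, 10.0]    # made-up per-run latencies in ms
candidate = [10.9, 11.1, 10.8, 11.0, 11.2]
print("regression detected:", regressed(baseline, candidate))
```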

70

The dc modeling program (DCMP): Version 2.0

Science Conference Proceedings (OSTI)

In this project one of the main objectives was the refinement of tools for the study of HVDC systems. The original software was prepared in project RP1964-2 (EL-4365) as power flow and stability program models for HVDC systems. In this project new modeling capabilities were added to both the power flow and stability models. Additionally, the HVDC specific model capabilities were integrated into a new program, termed the Standalone program, for use in the development and testing of HVDC models. This manual provides technical background for programmers and those interested in understanding, augmenting or transporting the dc models.

Chapman, D.G. (Manitoba HVDC Research Centre, Winnipeg, MB (Canada))

1990-08-01T23:59:59.000Z

71

Present-Day Antarctic Climatology Of the NCAR Community Climate Model Version 1  

Science Conference Proceedings (OSTI)

Five-year seasonal cycle output produced by the NCAR Community Climate Model Version 1 (CCM 1) with R15 resolution is used to evaluate the ability of the model to simulate the present-day climate of Antarctica. The model results are compared with ...

Ren-Yow Tzengo; David H. Bromwich; Thomas R. Parish

1993-02-01T23:59:59.000Z

72

The dc modeling program (DCMP): Version 2.0

Science Conference Proceedings (OSTI)

In this project one of the main objectives was the refinement of tools for the study of HVDC systems. The original software was prepared in project RP1964-2 (EL-4365) as power flow and stability program models for HVDC systems. In this project new modeling capabilities were added to both the power flow and stability models. Additionally, the HVDC specific model capabilities were integrated into a new program, termed the Standalone program, for use in the development and testing of HVDC models.

Chapman, D.G. (Manitoba HVDC Research Centre, Winnipeg, MB (Canada))

1990-08-01T23:59:59.000Z

73

The dc modeling program (DCMP): Version 2.0

SciTech Connect

In this project one of the main objectives was the refinement of tools for the study of HVDC systems. The original software was prepared in project RP1964-2 (EL-4365) as power flow and stability program models for HVDC systems. In this project new modeling capabilities were added to both the power flow and stability models. Additionally, the HVDC specific model capabilities were integrated into a new program, termed the Standalone program, for use in the development and testing of HVDC models. This volume provides information on the application of the software in the form of a User's Manual.

Chapman, D.G. (Manitoba HVDC Research Centre, Winnipeg, MB (Canada))

1990-08-01T23:59:59.000Z

74

The NAS Parallel Benchmarks  

E-Print Network (OSTI)

Weeratunga, “The NAS Parallel Benchmarks,” Intl. Journal ofD. Simon, “NAS Par- allel Benchmark Results,” Proceedings ofD. Simon, “NAS Par- allel Benchmark Results,” IEEE Parallel

Bailey, David H.

2010-01-01T23:59:59.000Z

75

NERSC-6 Benchmarks  

NLE Websites -- All DOE Office Websites (Extended Search)

Benchmarks NERSC-6 Benchmarks The NERSC-6 application benchmarks were used in the acquisition process that resulted in the NERSC Cray XE6 ("Hopper") system. A technical report...

76

Edison Benchmark Monitoring  

NLE Websites -- All DOE Office Websites (Extended Search)

Edison Benchmark Monitoring Benchmark Results Select Benchmark CAM GAMESS GTC IMPACT-T MAESTRO MILC PARATEC Submit Last edited: 2013-06-25 22:45:11...

77

Vehicle Technologies Office: Benchmarking  

NLE Websites -- All DOE Office Websites (Extended Search)

Research funded by the Vehicle Technologies Office produces a great deal of valuable data, but it is important to compare those research results with similar work done elsewhere in the world. Through laboratory testing, researchers can compare vehicles and components to validate models, support technical target-setting, and provide data to help guide technology development tasks. Benchmarking activities fall into two primary areas: Vehicle and component testing, in which researchers test and analyze emerging technologies obtained from sources throughout the world. The results are used to continually assess program efforts. Model validation, in which researchers use test data to validate the accuracy of vehicle and component computer models including: overall measures such as fuel economy, state-of-charge energy storage across the driving cycle, and transient component behavior, such as fuel rate and torque.

78

Macro System Model (MSM) User Guide, Version 1.3  

DOE Green Energy (OSTI)

This user guide describes the macro system model (MSM). The MSM has been designed to allow users to analyze the financial, environmental, transitional, geographical, and R&D issues associated with the transition to a hydrogen economy. Basic end users can use the MSM to answer cross-cutting questions that were previously difficult to answer in a consistent and timely manner due to various assumptions and methodologies among different models.

Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.

2011-09-01T23:59:59.000Z

79

Coupling of Integrated Biosphere Simulator to Regional Climate Model version 3  

E-Print Network (OSTI)

Presented in this thesis is a description of the coupling of Integrated Biosphere Simulator (IBIS) to Regional Climate Model version 3 (RegCM3), and an assessment of the coupled model (RegCM3-IBIS). RegCM3 is a 3-dimensional, ...

Winter, Jonathan (Jonathan Mark)

2006-01-01T23:59:59.000Z

80

CANDIE: A New Version of the DieCAST Ocean Circulation Model  

Science Conference Proceedings (OSTI)

The development and verification of a new version of the DieCAST ocean circulation model to be referred to as CANDIE (Canadian Diecast) are considered. Both CANDIE and DieCAST have many features in common with the well-known Modular Ocean Model (...

Jinyu Sheng; Daniel G. Wright; Richard J. Greatbatch; David E. Dietrich

1998-12-01T23:59:59.000Z

81

Red Storm usage model: Version 1.12

Science Conference Proceedings (OSTI)

Red Storm is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Sandia National Laboratories (SNL). The Red Storm Usage Model (RSUM) documents the capabilities and the environment provided for the FY05 Tri-Lab Level II Limited Availability Red Storm User Environment Milestone and the FY05 SNL Level II Limited Availability Red Storm Platform Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and SNL. Additionally, the Red Storm Usage Model maps the provided capabilities to the Tri-Lab ASC Computing Environment (ACE) requirements. The ACE requirements reflect the high performance computing requirements for the ASC community and have been updated in FY05 to reflect the community's needs. For each section of the RSUM, Appendix I maps the ACE requirements to the Limited Availability User Environment capabilities and includes a description of ACE requirements met and those requirements that are not met in that particular section. The Red Storm Usage Model, along with the ACE mappings, has been issued and vetted throughout the Tri-Lab community.

Jefferson, Karen L.; Sturtevant, Judith E.

2005-12-01T23:59:59.000Z

82

PRE-SW Dynamic Mercury Cycling Model (D-MCM)Version 4.0, Beta  

Science Conference Proceedings (OSTI)

The Dynamic Mercury Cycling Model (D-MCM) is a Windows™-based simulation model for personal computers. It predicts mercury cycling and bioaccumulation in aquatic systems. Mercury forms include methylmercury, Hg(II), and elemental mercury. D-MCM is a time-dependent mechanistic model that can be applied deterministically or probabilistically. Version 4.0 is a major update to D-MCM. The model can be used in 1-, 2-, and 3-dimensional applications for lakes, rivers, estuaries, ...

2013-09-17T23:59:59.000Z

83

Models and Results Database (MAR-D), Version 4. 0  

SciTech Connect

The Nuclear Regulatory Commission's Office of Nuclear Regulatory Research (NRC-RES) is presently funding the development of the Models and Results Database (MAR-D) at the Idaho National Engineering Laboratory. MAR-D's primary function is to create a data repository for NUREG-1150 and other permanent data by providing input, conversion, and output capabilities for data used by the IRRAS, SARA, SETS, and FRANTIC personal computer (PC) codes. As probabilistic risk assessments and individual plant examinations are submitted to the NRC for review, MAR-D can be used to convert the models and results from the study for use with IRRAS and SARA. Then, these data can be easily accessed by future studies and will be in a form that will enhance the analysis process. This reference manual provides an overview of the functions available within MAR-D and step-by-step operating instructions.

Branham-Haar, K.A.; Dinneen, R.A.; Russell, K.D.; Skinner, N.L. (EG and G Idaho, Inc., Idaho Falls, ID (United States))

1992-05-01T23:59:59.000Z

84

Advanced Coal Power Plant Model (ACCPM) Version 1.1  

Science Conference Proceedings (OSTI)

With the purchase of a license for the appropriate SimTech IPSEpro modules and library, users can quickly generate performance and capital cost estimates of new, advanced coal power plants. The application allows users to screen integrated gasification combined cycle (IGCC) technologies prior to engaging in more extensive studies of their preferred choice. Such screening activities generally require sophisticated software and qualified staff to run the models, which takes time and significant investment....

2011-03-08T23:59:59.000Z

85

Concrete Model Descriptions and Summary of Benchmark Studies for Blast Effects Simulations  

DOE Green Energy (OSTI)

Concrete is perhaps one of the most widely used construction materials in the world. Engineers use it to build massive concrete dams, concrete waterways, highways, bridges, and even nuclear reactors. The advantages of using concrete are that it can be cast into any desired shape, it is durable, and it is very economical compared to structural steel. The disadvantages are its low tensile strength, low ductility, and low strength-to-weight ratio. Concrete is a composite material that consists of a coarse granular material, or aggregate, embedded in a hard matrix of material, or cement, which fills the gaps between the aggregates and binds them together. Concrete properties, however, vary widely. The properties depend on the choice of materials used and the proportions for a particular application, as well as differences in fabrication techniques. Table 1 provides a listing of typical engineering properties for structural concrete. Properties also depend on the level of concrete confinement, or hydrostatic pressure, to which the material is subjected. In general, concrete is rarely subjected to a single axial stress. The material may experience a combination of stresses all acting simultaneously. The behavior of concrete under these combined stresses is, however, extremely difficult to characterize. In addition to the type of loading, one must also consider the stress history of the material. Failure is determined not only by the ultimate stresses, but also by the rate of loading and the order in which these stresses were applied. The concrete model described herein accounts for this complex behavior of concrete. It was developed by Javier Malvar, Jim Wesevich, and John Crawford of Karagozian and Case, and Don Simon of Logicon RDA in support of the Defense Threat Reduction Agency's programs. The model is an enhanced version of the Concrete/Geological Material Model 16 in the Lagrangian finite element code DYNA3D. The modifications that were made to the original model ensured that the material response followed experimental observations for standard uniaxial, biaxial, and triaxial tests for both tension and compression type loading. A disadvantage of using this material model, however, is the overwhelming amount of input that is required from the user. Therefore, the goal of this report is to provide future users with the tools necessary for successfully using this model.

Noble, C; Kokko, E; Darnell, I; Dunn, T; Hagler, L; Leininger, L

2005-07-21T23:59:59.000Z

86

Analyzing the Levelized Cost of Centralized and Distributed Hydrogen Production Using the H2A Production Model, Version 2  

DOE Green Energy (OSTI)

Analysis of the levelized cost of producing hydrogen via different pathways using the National Renewable Energy Laboratory's H2A Hydrogen Production Model, Version 2.

Ramsden, T.; Steward, D.; Zuboy, J.

2009-09-01T23:59:59.000Z
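
The H2A analysis above is built around levelized cost; a minimal sketch of the levelized-cost-of-hydrogen calculation (discounted lifetime costs divided by discounted lifetime production) follows. All dollar figures, the lifetime, and the discount rate are made up and are not H2A defaults.

```python
def levelized_cost_of_hydrogen(capex, annual_opex, annual_kg_h2, years=20, discount_rate=0.10):
    """Discounted lifetime costs divided by discounted lifetime hydrogen output ($/kg)."""
    discount = [(1 + discount_rate) ** -t for t in range(1, years + 1)]
    costs = capex + sum(annual_opex * d for d in discount)
    kilograms = sum(annual_kg_h2 * d for d in discount)
    return costs / kilograms

# Purely illustrative inputs, not H2A default values
lcoh = levelized_cost_of_hydrogen(capex=50e6, annual_opex=4e6, annual_kg_h2=3e6)
print(f"LCOH ~ ${lcoh:.2f}/kg")
```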

87

Stochastic PV performance/reliability model : preview of alpha version.  

DOE Green Energy (OSTI)

Problem Statement: (1) Uncertainties in PV system performance and reliability impact business decisions - Project cost and financing estimates, Pricing service contracts and guarantees, Developing deployment and O&M strategies; (2) Understanding and reducing these uncertainties will help make the PV industry more competitive; (3) Performance has typically been estimated without much attention to reliability of components; and (4) Tools are needed to assess all inputs to the value proposition (e.g., LCOE, cash flow, reputation, etc.). Goals and objectives are: (1) Develop a stochastic simulation model (in GoldSim) that can represent PV system performance as a function of system design, weather, reliability, and O&M policies; (2) Evaluate performance for an example system to quantify sources of uncertainty and identify dominant parameters via a sensitivity study; and (3) Example system: 1 inverter, 225 kW DC array at latitude tilt (90 strings of 12 modules, 1080 modules total), weather from Tucumcari, NM (TMY2 with annual uncertainty).

Stein, Joshua S.; Miller, Steven P.

2010-03-01T23:59:59.000Z
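
A minimal Monte Carlo sketch in the spirit of the stochastic PV performance/reliability model described above: annual energy is sampled under weather variability and occasional inverter outages. The capacity factor, failure probability, and downtime fraction are assumed values, and the model is far simpler than the GoldSim implementation.

```python
import random

def simulate_annual_energy(n_trials=1000, rated_kw=225.0, capacity_factor=0.20,
                           inverter_fail_prob=0.05, repair_downtime_frac=0.04):
    """Monte Carlo sketch: annual energy (kWh) under weather variability and
    occasional inverter outages; every parameter here is illustrative."""
    results = []
    for _ in range(n_trials):
        weather = random.gauss(1.0, 0.05)          # ~5% annual insolation variability
        downtime = repair_downtime_frac if random.random() < inverter_fail_prob else 0.0
        results.append(rated_kw * 8760 * capacity_factor * weather * (1.0 - downtime))
    return sorted(results)

energy = simulate_annual_energy()
print(f"median ~ {energy[len(energy) // 2] / 1e3:.0f} MWh, "
      f"P90 ~ {energy[int(0.10 * len(energy))] / 1e3:.0f} MWh")
```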

88

MDAbench: A Tool for Customized Benchmark Generation Using MDA  

E-Print Network (OSTI)

Designing a component-based application that meets performance requirements remains a challenging problem and usually requires a prototype to be constructed to benchmark performance. Building a custom benchmark suite is, however, costly and tedious. This demonstration illustrates an approach for generating customized component-based benchmark applications using a Model Driven Architecture (MDA) approach. All the platform-related plumbing and basic performance testing routines are encapsulated in MDA generation "cartridges" along with default implementations of testing logic. We will show how to use a tailored version of the UML 2.0 Testing Profile to model a customized load testing client. The performance configuration (such as transaction mix and spiking simulations) can also be modeled using the UML model. Executing the generated deployable code will collect the performance testing data automatically. The tool implementation is based on a widely used open source MDA framework, AndroMDA. We extended it by providing a cartridge for a performance testing tailored version of the UML 2.0 Testing Profile. Essentially, we use OO-based meta-modeling in designing and implementing a lightweight performance testing domain specific language with supporting infrastructure on top of the existing UML testing standard.

Liming Zhu; Yan Liu; Ian Gorton; Ngoc Bao Bui

2005-01-01T23:59:59.000Z

89

Assessment of the Regional Climate Model Version 3 over the Maritime Continent Using Different Cumulus Parameterization and Land Surface Schemes  

Science Conference Proceedings (OSTI)

This paper describes an assessment of the Regional Climate Model, version 3 (RegCM3), coupled to two land surface schemes: the Biosphere–Atmosphere Transfer System, version 1e (BATS1e), and the Integrated Biosphere Simulator (IBIS). The model’s ...

Rebecca L. Gianotti; Dongfeng Zhang; Elfatih A. B. Eltahir

2012-01-01T23:59:59.000Z

90

Evolving e-government benchmarking to better cover technology development and emerging societal needs  

Science Conference Proceedings (OSTI)

Many international e-government benchmarks seek to measure progress towards various versions of a digital society, and in this endeavor include a component of e-government. But because comparable international e-government data are scarce, most reports ... Keywords: United Nations, benchmarking framework, benchmarking tools, benchmarking trends, e-government, technology trends

Kim Andreasson; Jeremy Millard; Mikael Snaprud

2012-10-01T23:59:59.000Z

91

A one-dimensional material transfer model for HECTR version 1. 5  

DOE Green Energy (OSTI)

HECTR (Hydrogen Event Containment Transient Response) is a lumped-parameter computer code developed for calculating the pressure-temperature response to combustion in a nuclear power plant containment building. The code uses a control-volume approach and subscale models to simulate the mass, momentum, and energy transfer occurring in the containment during a loss-of-coolant accident (LOCA). This document describes one-dimensional subscale models for mass and momentum transfer, and the modifications to the code required to implement them. Two problems were analyzed: the first corresponding to a standard problem studied with previous HECTR versions, the second to experiments. The performance of the revised code relative to previous HECTR versions is discussed, as is the ability of the code to model the experiments. 8 refs., 5 figs., 3 tabs.

Geller, A.S.; Wong, C.C.

1991-08-01T23:59:59.000Z

92

Computational evaluation of two reactor benchmark problems  

E-Print Network (OSTI)

A neutronic evaluation of two reactor benchmark problems was performed. The benchmark problems describe typical PWR uranium and plutonium (mixed oxide) fueled lattices. WIMSd4m, a neutron transport lattice code, was used to evaluate multigroup macroscopic cross sections for various pincell models in each benchmark problem. DEF3D, a multigroup multidimensional diffusion code, was used to evaluate the uranium-fueled lattice benchmark problem of the American Nuclear Society. TWODANT, a multigroup, two-dimensional transport code, was used to evaluate the mixed oxide lattice benchmark problem from the Nuclear Energy Agency. Both benchmark problems yielded results consistent with preliminary results submitted by other participants in the benchmarking exercises. Some suggestions are made to improve future benchmark evaluations.

Cowan, James Anthony

1998-01-01T23:59:59.000Z

93

Load Model Data Processing and Parameter Derivation (LMDPPD) Version 2.1  

Science Conference Proceedings (OSTI)

The tool allows the user to find optimum values of parameters for two load model structures developed as part of the load modeling project using system disturbance data. It is important to represent the dynamic behavior of system load for system planning studies and analysis. Developing load models is a challenging task due to the varying nature of loads and uncertainty in the load information. The Load Model Data Processing and Parameter Derivation (LMDPPD) Version 2.1 software tool is a sim...

2009-09-02T23:59:59.000Z

94

Benchmark studies of the Bending Corrected Rotating Linear Model (BCRLM) reactive scattering code: Implications for accurate quantum calculations  

SciTech Connect

The Bending Corrected Rotating Linear Model (BCRLM), developed by Hayes and Walker, is a simple approximation to the true multidimensional scattering problem for reactions of the type A + BC → AB + C. While the BCRLM method is simpler than methods designed to obtain accurate three dimensional quantum scattering results, this turns out to be a major advantage in terms of our benchmarking studies. The computer code used to obtain BCRLM scattering results is written for the most part in standard FORTRAN and has been ported to several scalar, vector, and parallel architecture computers including the IBM 3090-600J, the Cray XMP and YMP, the Ardent Titan, IBM RISC System/6000, Convex C-1 and the MIPS 2000. Benchmark results will be reported for each of these machines with an emphasis on comparing the scalar, vector, and parallel performance for the standard code with minimum modifications. Detailed analysis of the mapping of the BCRLM approach onto both shared and distributed memory parallel architecture machines indicates the importance of introducing several key changes in the basic strategy and algorithms used to calculate scattering results. This analysis of the BCRLM approach provides some insights into optimal strategies for mapping three dimensional quantum scattering methods, such as the Parker-Pack method, onto shared or distributed memory parallel computers.

Hayes, E.F.; Darakjian, Z. (Rice Univ., Houston, TX (USA). Dept. of Chemistry); Walker, R.B. (Los Alamos National Lab., NM (USA))

1990-01-01T23:59:59.000Z

95

Benchmark Modeling of the Near-Field and Far-Field Wave Effects of Wave Energy Arrays  

SciTech Connect

This project is an industry-led partnership between Columbia Power Technologies and Oregon State University that will perform benchmark laboratory experiments and numerical modeling of the near-field and far-field impacts of wave scattering from an array of wave energy devices. These benchmark experimental observations will help to fill a gaping hole in our present knowledge of the near-field effects of multiple, floating wave energy converters and are a critical requirement for estimating the potential far-field environmental effects of wave energy arrays. The experiments will be performed at the Hinsdale Wave Research Laboratory (Oregon State University) and will utilize an array of newly developed "Buoys" that are realistic, lab-scale floating power converters. The array of Buoys will be subjected to realistic, directional wave forcing (1:33 scale) that will approximate the expected conditions (waves and water depths) to be found off the Central Oregon Coast. Experimental observations will include comprehensive in-situ wave and current measurements as well as a suite of novel optical measurements. These new optical capabilities will include imaging of the 3D wave scattering using a binocular stereo camera system, as well as 3D device motion tracking using a newly acquired LED system. These observing systems will capture the 3D motion history of individual Buoys as well as resolve the 3D scattered wave field, thus resolving the constructive and destructive wave interference patterns produced by the array at high resolution. These data combined with the device motion tracking will provide necessary information for array design in order to balance array performance with the mitigation of far-field impacts. As a benchmark data set, these data will be an important resource for testing of models for wave/buoy interactions, buoy performance, and far-field effects on wave and current patterns due to the presence of arrays. Under the proposed project we will initiate high-resolution (fine scale, very near-field) fluid/structure interaction simulations of buoy motions, as well as array-scale, phase-resolving wave scattering simulations. These modeling efforts will utilize state-of-the-art research quality models, which have not yet been brought to bear on this complex, large-array wave/structure interaction problem.

Rhinefrank, Kenneth E.; Haller, Merrick C.; Ozkan-Haller, H. Tuba

2013-01-26T23:59:59.000Z

96

Study of the Dynamics of the Intertropical Convergence Zone with a Symmetric Version of the GLAS Climate Model  

Science Conference Proceedings (OSTI)

The results of some calculations with a zonally symmetric version of the Goddard Laboratory of Atmospheric Sciences (GLAS) climate model are described. The model was first used to study the nature of symmetric circulation in response to various ...

B. N. Goswami; J. Shukla; E. K. Schneider; Y. C. Sud

1984-01-01T23:59:59.000Z

97

Characteristics of High-Resolution Versions of the Met Office Unified Model for Forecasting Convection over the United Kingdom  

Science Conference Proceedings (OSTI)

With many operational centers moving toward order 1-km-gridlength models for routine weather forecasting, this paper presents a systematic investigation of the properties of high-resolution versions of the Met Office Unified Model for short-range ...

Humphrey W. Lean; Peter A. Clark; Mark Dixon; Nigel M. Roberts; Anna Fitch; Richard Forbes; Carol Halliwell

2008-09-01T23:59:59.000Z

98

I/O Benchmarks  

NLE Websites -- All DOE Office Websites (Extended Search)

Benchmarks IO Benchmarks Transfer rates measured 4 times per day for the past week on all of the eliza file systems are shown below. For more details see IO Benchmarking Details....

99

NERSC-8 / Trinity Benchmarks  

NLE Websites -- All DOE Office Websites (Extended Search)

These benchmark programs are for use as part of the joint NERSC / ACES NERSC-8 / Trinity system procurement. There are two basic kinds of...

100

Including the Human Factor in Dependability Benchmarks  

E-Print Network (OSTI)

We describe the construction of a dependability benchmark that captures the impact of the human system operator on the tested system. Our benchmark follows the usual model of injecting faults and perturbations into the tested system; however, our perturbations are generated by the unscripted actions of actual human operators participating in the benchmark procedure in addition to more traditional fault injection. We introduce the issues that arise as we attempt to incorporate human behavior into a dependability benchmark and describe the possible solutions that we have arrived at through preliminary experimentation. Finally, we describe the implementation of our techniques in a dependability benchmark that we are currently developing

Aaron B. Brown; Leonard C. Chung; David A. Patterson

2002-01-01T23:59:59.000Z

101

Solid Waste Projection Model: Database (Version 1.3). Technical reference manual  

SciTech Connect

The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement.

Blackburn, C.L.

1991-11-01T23:59:59.000Z

102

Community Land Model Version 3.0 (CLM3.0) Developer's Guide  

SciTech Connect

This document describes the guidelines adopted for software development of the Community Land Model (CLM) and serves as a reference to the entire code base of the released version of the model. The version of the code described here is Version 3.0 which was released in the summer of 2004. This document, the Community Land Model Version 3.0 (CLM3.0) User's Guide (Vertenstein et al., 2004), the Technical Description of the Community Land Model (CLM) (Oleson et al., 2004), and the Community Land Model's Dynamic Global Vegetation Model (CLM-DGVM): Technical Description and User's Guide (Levis et al., 2004) provide the developer, user, or researcher with details of implementation, instructions for using the model, a scientific description of the model, and a scientific description of the Dynamic Global Vegetation Model integrated with CLM respectively. The CLM is a single column (snow-soil-vegetation) biogeophysical model of the land surface which can be run serially (on a laptop or personal computer) or in parallel (using distributed or shared memory processors or both) on both vector and scalar computer architectures. Written in Fortran 90, CLM can be run offline (i.e., run in isolation using stored atmospheric forcing data), coupled to an atmospheric model (e.g., the Community Atmosphere Model (CAM)), or coupled to a climate system model (e.g., the Community Climate System Model Version 3 (CCSM3)) through a flux coupler (e.g., Coupler 6 (CPL6)). When coupled, CLM exchanges fluxes of energy, water, and momentum with the atmosphere. The horizontal land surface heterogeneity is represented by a nested subgrid hierarchy composed of gridcells, landunits, columns, and plant functional types (PFTs). This hierarchical representation is reflected in the data structures used by the model code. Biophysical processes are simulated for each subgrid unit (landunit, column, and PFT) independently, and prognostic variables are maintained for each subgrid unit. Vertical heterogeneity is represented by a single vegetation layer, 10 layers for soil, and up to five layers for snow, depending on the snow depth. For computational efficiency, gridcells are grouped into ''clumps'' which are divided in cyclic fashion among distributed memory processors. Additional parallel performance is obtained by distributing clumps of gridcells across shared memory processors on computer platforms that support hybrid Message Passing Interface (MPI)/OpenMP operation. Significant modifications to the source code have been made over the last year to support efficient operation on newer vector architectures, specifically the Earth Simulator in Japan and the Cray X1 at Oak Ridge National Laboratory (Homan et al., 2004). These code modifications resulted in performance improvements even on the scalar architectures widely used for running CLM presently. To better support vectorized processing in the code, subgrid units (columns and PFTs) are grouped into ''filters'' based on their process-specific categorization. For example, filters (vectors of integers) referring to all snow, non-snow, lake, non-lake, and soil covered columns and PFTs within each clump are built and maintained when the model is run. Many loops within the scientific subroutines use these filters to indirectly address the process-appropriate subgrid units.

Hoffman, FM

2004-12-21T23:59:59.000Z
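
The CLM3.0 guide above describes process-specific "filters" (integer index vectors over columns and PFTs) that let loops touch only the relevant subgrid units. A toy Python sketch of that pattern is shown below; the real code is Fortran 90, and the arrays and updates here are purely illustrative.

```python
import numpy as np

n_columns = 12
snow_depth = np.array([0.0, 0.3, 0.0, 0.1, 0.0, 0.0, 0.5, 0.0, 0.2, 0.0, 0.0, 0.4])

# "Filters": integer index vectors selecting the process-appropriate subgrid units
snow_filter = np.flatnonzero(snow_depth > 0.0)
nonsnow_filter = np.flatnonzero(snow_depth == 0.0)

surface_temperature = np.full(n_columns, 275.0)
# Updates touch only the columns selected by the relevant filter
surface_temperature[snow_filter] -= 5.0      # e.g. a colder update for snow-covered columns
surface_temperature[nonsnow_filter] += 1.0
print("snow filter:", snow_filter)
print("temperatures:", surface_temperature)
```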

103

Benchmark scenarios for the NMSSM  

E-Print Network (OSTI)

We discuss constrained and semi-constrained versions of the next-to-minimal supersymmetric extension of the Standard Model (NMSSM) in which a singlet Higgs superfield is added to the two doublet superfields that are present in the minimal extension (MSSM). This leads to a richer Higgs and neutralino spectrum and allows for many interesting phenomena that are not present in the MSSM. In particular, light Higgs particles are still allowed by current constraints and could appear as decay products of the heavier Higgs states, rendering their search rather difficult at the LHC. We propose benchmark scenarios which address the new phenomenological features, consistent with present constraints from colliders and with the dark matter relic density, and with (semi-)universal soft terms at the GUT scale. We present the corresponding spectra for the Higgs particles, their couplings to gauge bosons and fermions and their most important decay branching ratios. A brief survey of the search strategies for these states at the LHC is given.

A. Djouadi; M. Drees; U. Ellwanger; R. Godbole; C. Hugonie; S. F. King; S. Lehti; S. Moretti; A. Nikitenko; I. Rottlaender; M. Schumacher; A. Teixeira

2008-01-28T23:59:59.000Z

104

Benchmarks for GADRAS performance validation.  

SciTech Connect

The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L., Jr.

2009-09-01T23:59:59.000Z
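
The GADRAS benchmark record above compares calculated and measured spectra; as one generic way to quantify such agreement (not the metric used in the report), the sketch below computes a chi-square per channel assuming Poisson counting errors. The channel counts are made up.

```python
import numpy as np

def reduced_chi_square(measured_counts, calculated_counts):
    """Channel-wise chi-square per channel, assuming Poisson errors on the
    measurement (sigma^2 ~ N); channels with zero measured counts are skipped."""
    m = np.asarray(measured_counts, dtype=float)
    c = np.asarray(calculated_counts, dtype=float)
    mask = m > 0
    chi2 = np.sum((m[mask] - c[mask]) ** 2 / m[mask])
    return chi2 / mask.sum()

measured = [120, 95, 300, 260, 80, 40]       # made-up channel counts
calculated = [115, 100, 290, 270, 85, 38]
print(f"chi2 per channel = {reduced_chi_square(measured, calculated):.2f}")
```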

105

Benchmarking data warehouses  

Science Conference Proceedings (OSTI)

Database benchmarks can either help users in comparing the performances of different systems, or help engineers in testing the effect of various design choices. In the field of data warehouses, the Transaction Processing Performance Council's standard ... Keywords: DWEB, OLAP, benchmarking, data mining, data warehouse design, data warehouse engineering benchmarks, data warehouses, database benchmarks, online analytical processing, optimisation techniques, performance evaluation

Jerome Darmont; Fadila Bentayeb; Omar Boussaid

2007-03-01T23:59:59.000Z

106

Multiple-code benchmark simulation study of coupled THMC processes in the excavation disturbed zone associated with geological nuclear waste repositories  

E-Print Network (OSTI)

MULTIPLE-CODE BENCHMARK SIMULATION STUDY OF COUPLED THMC ... international, multiple-code benchmark test (BMT) study is ... international, multiple-model benchmark test (BMT) study of

2006-01-01T23:59:59.000Z

107

Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool  

Science Conference Proceedings (OSTI)

We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does ... Keywords: Bioremediation, Geochemical transport, Groundwater models, Numerical model, Reactive transport

Jagadish Torlapati; T. Prabhakar Clement

2013-01-01T23:59:59.000Z
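
To make the governing balance in the RT1D record above concrete, here is a minimal explicit finite-difference sketch of 1D advection-dispersion with first-order decay. It uses a deliberately simple scheme with illustrative parameters and is not the numerical method implemented in RT1D.

```python
import numpy as np

def advect_disperse_decay(c0, v=0.5, D=0.02, k=0.01, dx=0.1, dt=0.05, steps=200):
    """Explicit upwind/central step for dC/dt = -v dC/dx + D d2C/dx2 - k C,
    with a fixed inlet concentration and a free outflow boundary."""
    c = np.array(c0, dtype=float)
    for _ in range(steps):
        adv = -v * (c - np.roll(c, 1)) / dx                              # upwind advection
        disp = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx ** 2  # central dispersion
        c = c + dt * (adv + disp - k * c)
        c[0], c[-1] = 1.0, c[-2]                                         # boundary conditions
    return c

profile = advect_disperse_decay(np.zeros(100))
print(profile[:10].round(3))
```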

108

Simplified Risk Model Version II (SRM-II) Structure and Application  

SciTech Connect

The Simplified Risk Model Version II (SRM-II) is a quantitative tool for efficiently evaluating the risk from Department of Energy waste management activities. Risks evaluated include human safety and health and environmental impact. Both accidents and normal, incident-free operation are considered. The risk models are simplifications of more detailed risk analyses, such as those found in environmental impact statements, safety analysis reports, and performance assessments. However, wherever possible, conservatisms in such models have been removed to obtain best estimate results. The SRM-II is used to support DOE complex-wide environmental management integration studies. Typically such studies involve risk predictions covering the entire waste management program, including such activities as initial storage, handling, treatment, interim storage, transportation, and final disposal.

S. A. Eide; T. E. Wierman

1999-08-01T23:59:59.000Z

109

Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks,...  

NLE Websites -- All DOE Office Websites (Extended Search)

Cleanrooms: Metrics, Benchmarks, Actions Title Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions Publication Type Report LBNL Report Number LBNL-3392E Year of...

110

Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions  

E-Print Network (OSTI)

data items. Figure 9. Benchmarks for total system pressure drop. The benchmarks in figure ... are based ... component pressure drop benchmarks shown in figure ... below, as

Mathew, Paul

2010-01-01T23:59:59.000Z

111

Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions  

E-Print Network (OSTI)

for Cleanrooms: Metrics, Benchmarks, Actions Paul Mathew ... efficiency metrics and benchmarks that can be used to track ... also use the metrics and benchmarks described in this guide

Mathew, Paul

2010-01-01T23:59:59.000Z

112

Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions  

E-Print Network (OSTI)

for Data Centers: Metrics, Benchmarks, Actions Paul Mathew ... efficiency metrics and benchmarks that can be used to track ... also use the metrics and benchmarks described in this guide

Mathew, Paul

2010-01-01T23:59:59.000Z

113

Self-benchmarking Guide for Data Centers: Metrics, Benchmarks...  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Centers: Metrics, Benchmarks, Actions Title Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions Publication Type Report LBNL Report Number LBNL-3393E Year...

114

Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks...  

NLE Websites -- All DOE Office Websites (Extended Search)

Laboratory Buildings: Metrics, Benchmarks, Actions Title Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions Publication Type Report LBNL Report Number...

115

Benchmark calculation of no-core Monte Carlo shell model in light nuclei  

E-Print Network (OSTI)

The Monte Carlo shell model is applied for the first time to the calculation of the no-core shell model in light nuclei. The results are compared with those of the full configuration interaction. The agreement between them is within a few percent at most.

T. Abe; P. Maris; T. Otsuka; N. Shimizu; Y. Utsuno; J. P. Vary

2011-07-09T23:59:59.000Z

116

Benchmark calculation of no-core Monte Carlo shell model in light nuclei  

E-Print Network (OSTI)

The Monte Carlo shell model is applied for the first time to the calculation of the no-core shell model in light nuclei. The results are compared with those of the full configuration interaction. The agreement between them is within a few percent at most.

Abe, T.; Otsuka, T.; Shimizu, N.; Utsuno, Y.; Vary, J. P.; DOI: 10.1063/1.3584062

2011-01-01T23:59:59.000Z

117

Benchmark calculation of no-core Monte Carlo shell model in light nuclei  

SciTech Connect

The Monte Carlo shell model is applied for the first time to the calculation of the no-core shell model in light nuclei. The results are compared with those of the full configuration interaction. The agreement between them is within a few percent at most.

Abe, T.; Shimizu, N. [Department of Physics, the University of Tokyo, Hongo, Tokyo 113-0033 (Japan); Maris, P.; Vary, J. P. [Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011 (United States); Otsuka, T. [Department of Physics, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); CNS, University of Tokyo, Hongo, Tokyo 113-0033 (Japan); NSCL, Michigan State University, East Lansing, Michigan 48824 (United States); Utsuno, Y. [ASRC, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195 (Japan)

2011-05-06T23:59:59.000Z

118

BENCHMARKING EXERCISES TO VALIDATE THE UPDATED ELLWF GOLDSIM SLIT TRENCH MODEL  

Science Conference Proceedings (OSTI)

The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET), and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions, and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously, thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow-on work using GoldSim to develop 1D equivalent models of the PORFLOW multi-dimensional models was justified. The comparison of GoldSim 1D equivalent models to PORFLOW multi-dimensional models was made at two locations in the model domains: at the unsaturated-saturated zone interface and at the 100 m point of compliance. PORFLOW model results from the 2008 PA were utilized to investigate the comparison. By making iterative adjustments to certain water flux terms in the GoldSim models, it was possible to produce contaminant mass fluxes and water concentrations highly similar to the PORFLOW model results at the two locations where comparisons were made. Based on the ability of the GoldSim 1D trench models to produce mass flux and concentration curves that are sufficiently similar to multi-dimensional PORFLOW models for all of the evaluated radionuclides and their progeny, it is concluded that the use of the GoldSim 1D equivalent Slit and Engineered Trench models for further probabilistic sensitivity and uncertainty analysis of ELLWF trench units is justified. A revision to the original report was undertaken to correct mislabeling on the y-axes of the compliance point concentration graphs, to modify the terminology used to define the "blended" source term case for the saturated zone to make it consistent with terminology used in the 2008 PA, and to make a more definitive statement regarding the justification of the use of the GoldSim 1D equivalent trench models for follow-on probabilistic sensitivity and uncertainty analysis.

Taylor, G.; Hiergesell, R.

2013-11-12T23:59:59.000Z
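
Two of the mechanisms exercised in the GoldSim/PORFLOW code-to-code comparison above are solid/fluid partitioning and radioactive decay. The sketch below shows the simple hand checks those mechanisms admit: a linear-sorption retardation factor and the decayed fraction over the retarded travel time. All property values, the pore velocity, and the half-life are hypothetical.

```python
import math

def retardation_factor(bulk_density_kg_per_L, kd_L_per_kg, porosity):
    """R = 1 + rho_b * Kd / theta for linear sorption partitioning."""
    return 1.0 + bulk_density_kg_per_L * kd_L_per_kg / porosity

def fraction_remaining(travel_time_yr, half_life_yr):
    """Radioactive decay over the (retarded) travel time."""
    return math.exp(-math.log(2.0) * travel_time_yr / half_life_yr)

R = retardation_factor(bulk_density_kg_per_L=1.6, kd_L_per_kg=5.0, porosity=0.35)
t = 100.0 * R / 2.0        # 100 m to a compliance point at a 2 m/yr pore velocity, retarded by R
print(f"R = {R:.1f}, retarded arrival ~ {t:.0f} yr, "
      f"fraction of a 30-yr half-life nuclide remaining ~ {fraction_remaining(t, 30.0):.1e}")
```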

119

The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1): an extended and updated framework for modeling biogenic emissions  

E-Print Network (OSTI)

The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1) is a modeling framework for estimating fluxes of biogenic compounds between terrestrial ecosystems and the atmosphere using simple mechanistic ...

Guenther, A. B.

120

Benchmarking GEANT4 nuclear models for carbon-therapy at 95 MeV/A  

E-Print Network (OSTI)

In carbon-therapy, the interaction of the incoming beam with human tissues may lead to the production of a large amount of nuclear fragments and secondary light particles. An accurate estimation of the biological dose deposited into the tumor and the surrounding healthy tissues thus requires sophisticated simulation tools based on nuclear reaction models. The validity of such models requires intensive comparisons with as many sets of experimental data as possible. Up to now, a rather limited set of double differential carbon fragmentation cross sections have been measured in the energy range used in hadrontherapy (up to 400 MeV/A). However, new data have been recently obtained at intermediate energy (95 MeV/A). The aim of this work is to compare the reaction models embedded in the GEANT4 Monte Carlo toolkit with these new data. The strengths and weaknesses of each tested model, i.e. G4BinaryLightIonReaction, G4QMDReaction and INCL++, coupled to two different de-excitation models, i.e. the generalized evaporation model and the Fermi break-up, are discussed.

J. Dudouet; D. Cussol; D. Durand; M. Labalme

2013-09-06T23:59:59.000Z

121

Windows and Linux Robustness Benchmarks With Respect to Application Erroneous Behavior  

E-Print Network (OSTI)

Windows and Linux Robustness Benchmarks With Respect to Application Erroneous Behavior ... benchmark results obtained for various versions of Windows and Linux operating systems. The benchmark ... Karama Kanoun and Lisa Spainhower (Eds.) (2008) 227-254.

122

Modeling PCM-Enhanced Insulation System and Benchmarking EnergyPlus against Controlled Field Data  

Science Conference Proceedings (OSTI)

Phase-change materials (PCM) used in building envelopes appear to be a promising technology to reduce energy consumption and reduce/shift peak load. However, due to complexity in modeling the dynamic behavior of PCMs, current modeling tools either lack an accurate way of predicting the performance and impact of PCMs in buildings or validation of predicted or measured performance is not available. This paper presents a model of a PCM-enhanced dynamic-insulation system in EnergyPlus (E+) and compares the simulation results against field-measured data. Laboratory tests to evaluate thermal properties and to characterize the PCM and PCM-enhanced cellulose insulation system are also presented in this paper. Results indicate that the predicted daily average heat flux through walls from the E+ simulation was within 9% of field measured data. Future analysis will allow us to predict annual energy savings from the use of PCM in buildings.

Shrestha, Som S [ORNL; Miller, William A [ORNL; Stovall, Therese K [ORNL; Desjarlais, Andre Omer [ORNL; Childs, Kenneth W [ORNL; Porter, Wallace D [ORNL; Bhandari, Mahabir S [ORNL; Coley, Steven J [ORNL

2011-01-01T23:59:59.000Z

123

Hospital Energy Benchmarking Guidance  

NLE Websites -- All DOE Office Websites (Extended Search)

of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of...

124

Factory Flow Benchmarking Report  

E-Print Network (OSTI)

LAI benchmarked representative part fabrications and some assembly operations within its member companies of the defense aircraft industry. This paper reports the results of this benchmarking effort. In addition, this ...

Shields, Thomas J.

125

NERSC Benchmarking and Workload Characterization  

NLE Websites -- All DOE Office Websites (Extended Search)

Petascale Initiative Science Gateway Development Storage and IO Technologies Testbeds Home R & D Benchmarking & Workload Characterization Benchmarking & Workload...

126

Properties of potential modelling three benchmarks: the cosmological constant, inflation and three generations  

E-Print Network (OSTI)

We argue for a model of low-energy correction to the inflationary potential caused by gauge-mediated breaking of supersymmetry at the scale of $\mu_\textsc{x}\sim 10^4$ GeV, which provides the seesaw mechanism of thin domain wall fluctuations in the flat vacuum. The fluctuations are responsible for the vacuum with the cosmological constant at the scale of $\mu_\Lambda\sim 10^{-2}$ eV, suppressed by the Planckian mass $m_\mathtt{Pl}$ via $\mu_\Lambda\sim\mu_\textsc{x}^2/m_\mathtt{Pl}$. The appropriate vacuum state is occupied after the inflation with quartic coupling constant $\lambda\sim\mu_\textsc{x}/m_\mathtt{Pl}\sim 10^{-14}$, inherently related to the bare mass scale of $\widetilde m\sim\sqrt{\mu_\textsc{x}m_\mathtt{Pl}}\sim 10^{12}$ GeV determining the thickness of domain walls $\delta r\sim 1/\widetilde m$. Such parameters of the potential are still marginally consistent with the observed inhomogeneity of matter density in the Universe. The inflationary evolution suggests a vacuum structure compatible with three fermionic generations of matter as well as with the observed hierarchies of masses and mixing in the Standard Model.

V. V. Kiselev; S. A. Timofeev

2010-04-23T23:59:59.000Z
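
The scale relations quoted in the record above are easy to check numerically; the sketch below reproduces $\mu_\Lambda\sim\mu_\textsc{x}^2/m_\mathtt{Pl}$ and the bare mass scale $\sqrt{\mu_\textsc{x}m_\mathtt{Pl}}$ to order of magnitude, with the Planck mass value taken as an assumption.

```python
m_planck_gev = 1.22e19     # Planck mass in GeV (assumed value)
mu_x_gev = 1.0e4           # supersymmetry-breaking scale quoted in the abstract

mu_lambda_ev = (mu_x_gev ** 2 / m_planck_gev) * 1.0e9   # 1 GeV = 1e9 eV
m_tilde_gev = (mu_x_gev * m_planck_gev) ** 0.5

print(f"mu_Lambda ~ {mu_lambda_ev:.1e} eV")        # close to the quoted ~1e-2 eV
print(f"bare mass scale ~ {m_tilde_gev:.1e} GeV")  # order 1e11-1e12 GeV, matching the quoted scale to order of magnitude
```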

127

User manual for GEOCOST: a computer model for geothermal cost analysis. Volume 2. Binary cycle version  

DOE Green Energy (OSTI)

A computer model called GEOCOST has been developed to simulate the production of electricity from geothermal resources and calculate the potential costs of geothermal power. GEOCOST combines resource characteristics, power recovery technology, tax rates, and financial factors into one systematic model and provides the flexibility to individually or collectively evaluate their impacts on the cost of geothermal power. Both the geothermal reservoir and power plant are simulated to model the complete energy production system. In the version of GEOCOST in this report, geothermal fluid is supplied from wells distributed throughout a hydrothermal reservoir through insulated pipelines to a binary power plant. The power plant is simulated using a binary fluid cycle in which the geothermal fluid is passed through a series of heat exchangers. The thermodynamic state points in basic subcritical and supercritical Rankine cycles are calculated for a variety of working fluids. Working fluids which are now in the model include isobutane, n-butane, R-11, R-12, R-22, R-113, R-114, and ammonia. Thermodynamic properties of the working fluids at the state points are calculated using empirical equations of state. The Starling equation of state is used for hydrocarbons and the Martin-Hou equation of state is used for fluorocarbons and ammonia. Physical properties of working fluids at the state points are calculated.

Huber, H.D.; Walter, R.A.; Bloomster, C.H.

1976-03-01T23:59:59.000Z

128

Implementation of the Semi-Lagrangian Method in a High-Resolution Version of the ECMWF Forecast Model  

Science Conference Proceedings (OSTI)

In this article the implementation of the semi-Lagrangian method in a high-resolution version of the ECMWF forecast model is examined. Novel aspects include the application of the semi-Lagrangian scheme to a global model using the ECMWF hybrid ...
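
For readers unfamiliar with the method, the sketch below shows the essence of a semi-Lagrangian advection step in one dimension: trace each arrival grid point back along the wind to its departure point and interpolate the field there. The periodic domain, constant wind, and linear interpolation are simplifying assumptions for illustration only; the ECMWF implementation described in the article is three-dimensional and uses higher-order interpolation.

```python
import numpy as np

def semi_lagrangian_step(field, u, dt, dx):
    """One semi-Lagrangian advection step on a periodic 1D grid.

    Each arrival grid point x_i is traced back to its departure point
    x_i - u*dt, and the field is linearly interpolated there.
    """
    n = field.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)          # departure points, wrapped periodically
    j = np.floor(x_dep / dx).astype(int)     # grid index left of each departure point
    w = x_dep / dx - j                       # linear interpolation weight
    return (1.0 - w) * field[j % n] + w * field[(j + 1) % n]

if __name__ == "__main__":
    n, dx, u, dt = 200, 1.0, 3.7, 1.0        # Courant number > 1 is allowed
    field = np.exp(-0.5 * ((np.arange(n) * dx - 50.0) / 5.0) ** 2)
    for _ in range(20):
        field = semi_lagrangian_step(field, u, dt, dx)
    print(f"peak after advection: {field.max():.3f} near x = {field.argmax() * dx}")
```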

Harold Ritchie; Clive Temperton; Adrian Simmons; Mariano Hortal; Terry Davies; David Dent; Mats Hamrud

1995-02-01T23:59:59.000Z

129

Model Assisted Probability of Detection Using R (MAPOD-R) Version 1.0  

Science Conference Proceedings (OSTI)

MAPOD, Version 1.0 was developed in 2009 using Crystal Ball statistical software. This software requires the users to maintain a license for Crystal Ball. MAPOD-R provides the same basic output as MAPOD, Version 1.0, but uses a statistics software package that is free and publicly available called “R”.  MAPOD, Version 1.0 is still available. Both applications provide utilities and vendors a tool to calculate a site-specific probability of detection. They also allow ...

2013-03-26T23:59:59.000Z

130

Radiation Detection Computational Benchmark Scenarios  

SciTech Connect

Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for compilation. This is a report describing the details of the selected Benchmarks and results from various transport codes.
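
One generic way to compare transport methods of the kind assembled here is the Monte Carlo figure of merit, FOM = 1/(R^2*T), where R is the relative statistical error of a tally and T the computing time. The sketch below illustrates that comparison with invented numbers; it is not the metric set actually used in these benchmark reports.

```python
def figure_of_merit(relative_error, cpu_time_minutes):
    """Standard Monte Carlo figure of merit: FOM = 1 / (R^2 * T)."""
    return 1.0 / (relative_error ** 2 * cpu_time_minutes)

# Hypothetical results for the same detector-response tally computed with an
# analog run and with a variance-reduced (hybrid) run; values are illustrative.
runs = {
    "analog Monte Carlo": {"relative_error": 0.05, "cpu_time_minutes": 600.0},
    "hybrid (importance-weighted)": {"relative_error": 0.01, "cpu_time_minutes": 240.0},
}

for name, r in runs.items():
    fom = figure_of_merit(r["relative_error"], r["cpu_time_minutes"])
    print(f"{name:30s} FOM = {fom:8.1f} per minute")
```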

Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

2013-09-24T23:59:59.000Z

131

Sell-Side Benchmarks  

E-Print Network (OSTI)

Sell-side analysts employ different benchmarks when defining their stock recommendations. For example, a ‘buy ’ for some brokers means the stock is expected to outperform its peers in the same sector (“sector benchmarkers”), while for other brokers it means the stock is expected to outperform the market (“market benchmarkers”), or just some absolute return (“total benchmarkers”). We explore the validity and implications of the adoption of these different benchmarks. Analysis of the relation between analysts ’ recommendations and their long-term growth and earnings forecasts suggests that analysts indeed abide by their benchmarks: Sector benchmarkers rely less on across-industry information, and focus more on ranking firms within their industries. We also find evidence that market- and sector-benchmarkers are successful in meeting or beating their benchmark returns, while total-benchmarkers are not. However, we do not find much evidence that investors react differently to recommendations based on the different benchmarks. The research carries implications for the correct understanding and interpretation of sell-side research and its investment value.

Ohad Kadan; Leonardo Madureira; Rong Wang; Tzachi Zach

2012-01-01T23:59:59.000Z

132

Assessment of the Land Surface and Boundary Layer Models in Two Operational Versions of the NCEP Eta Model Using FIFE Data  

Science Conference Proceedings (OSTI)

Data from the 1987 summer FIFE experiment for four pairs of days are compared with corresponding 48-h forecasts from two different versions of the Eta Model, both initialized from the NCEP–NCAR (National Centers for Environmental Prediction–...

Alan K. Betts; Fei Chen; Kenneth E. Mitchell; Zaviša I. Janjić

1997-11-01T23:59:59.000Z

133

Energy Integration for 2050 - A Strategic Impact Model (2050 SIM), Version 1.0  

SciTech Connect

The United States (U.S.) energy infrastructure is among the most reliable, accessible, and economic in the world. On the other hand, it is also excessively reliant on foreign energy sources, experiences high volatility in energy prices, does not always practice good stewardship of finite indigenous energy resources, and emits significant quantities of greenhouse gas. The U.S. Department of Energy is conducting research and development on advanced nuclear reactor concepts and technologies, including High Temperature Gas Reactor (HTGR) technologies, directed at helping the United States meet its current and future energy challenges. This report discusses the Draft Strategic Impact Model (SIM), an initial version of which was created during the later part of FY-2010. SIM was developed to analyze and depict the benefits of various energy sources in meeting the energy demand and to provide an overall system understanding of the tradeoffs between building and using HTGRs versus other existing technologies for providing energy (heat and electricity) to various energy-use sectors in the United States. This report also provides the assumptions used in the model, the rationale for the methodology, and the references for the source documentation and source data used in developing the SIM.

Not Available

2010-10-01T23:59:59.000Z

134

Energy Integration for 2050 - A Strategic Impact Model (2050 SIM), Version 2.0  

Science Conference Proceedings (OSTI)

The United States (U.S.) energy infrastructure is among the most reliable, accessible, and economic in the world. On the other hand, it is also excessively reliant on foreign energy sources, experiences high volatility in energy prices, does not always practice good stewardship of finite indigenous energy resources, and emits significant quantities of greenhouse gas. The U.S. Department of Energy is conducting research and development on advanced nuclear reactor concepts and technologies, including High Temperature Gas Reactor (HTGR) technologies, directed at helping the United States meet its current and future energy challenges. This report discusses the Draft Strategic Impact Model (SIM), an initial version of which was created during the later part of FY-2010. SIM was developed to analyze and depict the benefits of various energy sources in meeting the energy demand and to provide an overall system understanding of the tradeoffs between building and using HTGRs versus other existing technologies for providing energy (heat and electricity) to various energy-use sectors in the United States. This report also provides the assumptions used in the model, the rationale for the methodology, and the references for the source documentation and source data used in developing the SIM.

John Collins

2011-09-01T23:59:59.000Z

135

Columbia River Statistical Update Model, Version 4.0 (COLSTAT4): Background documentation and user's guide  

Science Conference Proceedings (OSTI)

Daily-averaged temperature and flow information on the Columbia River just downstream of Priest Rapids Dam and upstream of river mile 380 were collected and stored in a data base. The flow information corresponds to discharges that were collected daily from October 1, 1959, through July 28, 1986. The temperature information corresponds to values that were collected daily from January 1, 1965, through May 27, 1986. The computer model, COLSTAT4 (Columbia River Statistical Update - Version 4.0 model), uses the temperature-discharge data base to statistically analyze temperature and flow conditions by computing the frequency of occurrence and duration of selected temperatures and flow rates for the Columbia River. The COLSTAT4 code analyzes the flow and temperature information in a sequential time frame (i.e., a continuous analysis over a given time period); it also analyzes this information in a seasonal time frame (i.e., a periodic analysis over a specific season from year to year). A provision is included to enable the user to edit and/or extend the data base of temperature and flow information. This report describes the COLSTAT4 code and the information contained in its data base.
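
A minimal sketch of the kind of statistic described, the frequency of occurrence and duration of threshold exceedances in a daily series, is shown below. The synthetic temperature series and the 18-degree threshold are invented for illustration; COLSTAT4 itself works from the Priest Rapids temperature-discharge data base.

```python
import numpy as np

def exceedance_stats(daily_values, threshold):
    """Fraction of days exceeding `threshold` and lengths of consecutive exceedance runs."""
    exceed = np.asarray(daily_values) > threshold
    frequency = exceed.mean()
    durations, run = [], 0
    for flag in exceed:
        if flag:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return frequency, durations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic daily river temperatures (deg C) with a seasonal cycle.
    days = np.arange(365)
    temps = 12 + 8 * np.sin(2 * np.pi * (days - 100) / 365) + rng.normal(0, 1, 365)
    freq, runs = exceedance_stats(temps, threshold=18.0)
    print(f"days above 18 C: {freq:.1%}, longest run: {max(runs) if runs else 0} days")
```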

Whelan, G.; Damschen, D.W.; Brockhaus, R.D.

1987-08-01T23:59:59.000Z

136

Outage Management Benchmarking Guideline  

Science Conference Proceedings (OSTI)

Benchmarking of power plant outages will help plants target performance improvements to specific elements of a plant outage program in order to improve overall availability, reliability, and safety while decreasing generation costs. EPRI's "Outage Management Benchmarking Guideline" builds on the Institute's fossil and nuclear plant experience with routine maintenance and extends that to outage maintenance processes. The guideline describes the initial steps in an outage benchmarking effort and 13 key ele...

2003-03-26T23:59:59.000Z

137

Industrial Combustion Emissions (ICE) Model, Version 6.0. Model-Simulation  

SciTech Connect

The Industrial Combustion Emissions (ICE) Model was developed by the Environmental Protection Agency for use by the National Acid Precipitation Assessment Program (NAPAP) in preparing future assessments of industrial-boiler emissions. The ICE Model user's manual includes a summary of user options and software characteristics, a description of the input data files, and a description of the procedures for operating the ICE Model. Proper formatting of files and creation of job-control language are discussed. The ICE Model projects, for each state, the sulfur dioxide, sulfate, and nitrogen oxide emissions from fossil fuel combustion in industrial boilers. Emissions and costs of boiler generation, including emission-control costs, are projected for the years 1985, 1990, 1995, 2000, 2010, 2020, and 2030.
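
For orientation, emission projections of this general type typically reduce to projected fuel use multiplied by an emission factor and adjusted for control efficiency. The sketch below illustrates that arithmetic with invented fuel quantities, factors, and control levels; none of the numbers are ICE Model data.

```python
# Illustrative projection: emissions = fuel use * emission factor * (1 - control efficiency).
# Fuel quantities, emission factors, and control efficiencies are invented placeholders.

EMISSION_FACTORS_KG_PER_GJ = {"SO2": 0.45, "NOx": 0.30}   # per GJ of coal burned (hypothetical)

def project_emissions(fuel_use_gj, control_efficiency):
    """Return emissions in tonnes per pollutant for a given fuel use."""
    return {
        pollutant: fuel_use_gj * factor * (1.0 - control_efficiency[pollutant]) / 1000.0
        for pollutant, factor in EMISSION_FACTORS_KG_PER_GJ.items()
    }

state_boiler_fuel_gj = {"1990": 5.0e7, "2000": 6.2e7, "2010": 7.0e7}
controls = {"SO2": 0.70, "NOx": 0.40}

for year, fuel in state_boiler_fuel_gj.items():
    print(year, {k: round(v) for k, v in project_emissions(fuel, controls).items()})
```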

Elliott, D.J.; Hogan, T.

1987-12-01T23:59:59.000Z

138

MIT Integrated Global System Model (IGSM) Version 2: Model Description and Baseline Evaluation  

E-Print Network (OSTI)

The MIT Integrated Global System Model (IGSM) is designed for analyzing the global environmental changes that may result from anthropogenic causes, quantifying the uncertainties associated with the projected changes, and ...

Sokolov, Andrei P.

139

Magma benchmark code - CECM  

E-Print Network (OSTI)

Below is the Magma code used to run the benchmarks in Section 5 of the paper "In-place Arithmetic for Univariate Polynomials over an Algebraic Number Field" ...

140

PRE-SW Model Assisted Probability of Detection (MAPOD-R) Version 2.0, Beta  

Science Conference Proceedings (OSTI)

MAPOD-R, Version 1.0 was developed in 2009 using Crystal Ball statistical software. This software is expensive and has to be purchased and maintained by each utility. There is statistics software that is free and publicly available called “R”. The contractor will develop scripts using R to create the same output as MAPOD Version 1.0 creates. System requirements: Windows XP/Vista/7, Excel 2003 and 2007.

2012-04-22T23:59:59.000Z

141

Solitary Wave Benchmarks in Magma Dynamics  

E-Print Network (OSTI)

We present a model problem for benchmarking codes that investigate magma migration in the Earth's interior. This system retains the essential features of more sophisticated models, yet has the advantage of possessing solitary wave solutions. The existence of such exact solutions to the nonlinear problem make it an excellent benchmark problem for combinations of solver algorithms. In this work, we explore a novel algorithm for computing high quality approximations of the solitary waves and use them to benchmark a semi-Lagrangian Crank-Nicholson scheme for a finite element discretization of the time dependent problem.
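
A sketch of the comparison such a benchmark enables is given below: score a numerical profile against a reference profile with a discrete L2 error and infer an observed convergence order from two resolutions. The sech-squared pulse used as the "exact" profile is a synthetic stand-in, since the magma solitary wave of the paper is defined implicitly.

```python
import numpy as np

def l2_error(numerical, exact, dx):
    """Discrete L2 norm of the difference between two sampled profiles."""
    return np.sqrt(dx * np.sum((numerical - exact) ** 2))

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Convergence order implied by errors on two grids differing by `refinement`."""
    return np.log(err_coarse / err_fine) / np.log(refinement)

if __name__ == "__main__":
    # Synthetic stand-in for a solitary-wave profile and two "numerical"
    # solutions whose error shrinks with resolution (a phase error ~ dx).
    def profile(x):
        return 1.0 / np.cosh(0.5 * (x - 50.0)) ** 2

    errors = []
    for n in (200, 400):
        x = np.linspace(0.0, 100.0, n)
        dx = x[1] - x[0]
        errors.append(l2_error(profile(x + 0.1 * dx), profile(x), dx))
    print(f"errors: {errors[0]:.3e}, {errors[1]:.3e}; "
          f"observed order ~ {observed_order(errors[0], errors[1]):.2f}")
```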

Simpson, Gideon

2010-01-01T23:59:59.000Z

142

Inclusion of Ice Microphysics in the NCAR Community Atmospheric Model Version 3 (CAM3)  

SciTech Connect

A prognostic equation for ice crystal number concentration together with an ice nucleation scheme are implemented in the National Center for Atmospheric Research (NCAR) Community Atmospheric Model Version 3 (CAM3) with the aim of studying the indirect effect of aerosols on cold clouds. The effective radius of ice crystals, which is used in the radiation and gravitational settling calculations, is now calculated from model-predicted mass and number of ice crystals rather than diagnosed as a function of temperature. We add a water vapor deposition scheme to replace the condensation and evaporation (C-E) scheme in the standard CAM3 for ice clouds. The repartitioning of total water into liquid and ice in mixed-phase clouds as a function of temperature is removed, and ice supersaturation is allowed. The predicted ice water content in the modified CAM3 is in better agreement with the Aura MLS data than that in the standard CAM3. The cirrus cloud fraction near the tropical tropopause, which is underestimated in the standard CAM3, is increased, and the cold temperature bias there is reduced by 1-2 K. However, an increase in the cloud fraction in polar regions makes the underestimation of downwelling shortwave radiation in the standard CAM3 even worse. A sensitivity test reducing the threshold relative humidity with respect to ice (RHi) for heterogeneous ice nucleation from 120% to 105% (representing nearly perfect ice nuclei) increases the global cloud cover by 1.7%, temperature near the tropical tropopause by 4-5 K, and water vapor in the stratosphere by 50-90%.
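
As a simple illustration of diagnosing effective radius from predicted mass and number (rather than temperature), the sketch below computes a volume-mean radius assuming spherical crystals of bulk ice density and a monodisperse size distribution. This is a generic approximation for orientation only, not the exact CAM3 formulation.

```python
import numpy as np

RHO_ICE = 917.0  # bulk ice density, kg m^-3

def effective_radius_m(iwc_kg_m3, number_m3):
    """Volume-mean radius from ice water content and crystal number concentration.

    Assumes spherical crystals of bulk ice density and a monodisperse size
    distribution, so r = (3 * IWC / (4 * pi * rho_ice * N)) ** (1/3).
    """
    return (3.0 * iwc_kg_m3 / (4.0 * np.pi * RHO_ICE * number_m3)) ** (1.0 / 3.0)

if __name__ == "__main__":
    # Illustrative cirrus-like values: 5 mg m^-3 of ice, 100 crystals per litre.
    r = effective_radius_m(iwc_kg_m3=5.0e-6, number_m3=1.0e5)
    print(f"effective radius ~ {r * 1e6:.1f} micrometres")
```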

Liu, Xiaohong; Penner, Joyce E.; Ghan, Steven J.; Wang, M.

2007-09-15T23:59:59.000Z

143

Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions  

E-Print Network (OSTI)

Purpose: This guide describes energy efficiency metrics and ... Energy Use Intensity ... Laboratory Benchmarking Guide ... the energy benchmarking approach described in this guide can ...

Mathew, Paul

2010-01-01T23:59:59.000Z

144

Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions  

E-Print Network (OSTI)

Purpose: This guide describes energy efficiency metrics and ... Reheat Energy Use Factor ... Cleanroom Benchmarking Guide ... the energy benchmarking approach described in this guide can ...

Mathew, Paul

2010-01-01T23:59:59.000Z

145

I/O Benchmarking Details  

NLE Websites -- All DOE Office Websites (Extended Search)

These benchmarks are simply the transfer rate for copying some files from an eliza file system to TMPDIR on a batch node. Each...

146

Action-Oriented Benchmarking:  

NLE Websites -- All DOE Office Websites (Extended Search)

Feb-2008, submitted to Energy Engineering. Action-Oriented Benchmarking: Concepts and Tools. Evan Mills, Paul Mathew & Mary Ann Piette, Lawrence Berkeley National Laboratory; Norman Bourassa & Martha Brook, California Energy Commission. ABSTRACT: Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article, Part 1 of a two-part series, we describe an "action-oriented benchmarking" approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful

147

Benchmarking e-business security: A model and framework. Proceedings of the 3rd Australian Information Security Management Conference  

E-Print Network (OSTI)

The dynamic nature of threats and vulnerabilities within the E-business environment can impede online functionality, compromise organisational or customer information, contravene security implementations and thereby undermine online customer confidence. To negate these problems, E-business security has to become proactive, by reviewing and continuously improving security to strengthen E-business security measures and policies. This can be achieved through benchmarking the security measures and policies utilised within the E-business against recognised information technology (IT) and information security (IS) security standards.

Graeme Pye; Matthew J. Warren

2005-01-01T23:59:59.000Z

148

ESP: A system utilization benchmark  

E-Print Network (OSTI)

ESP: A System Utilization Benchmark. Adrian T. Wong, Leonid ... Effective System Performance (ESP) test, which is designed ... Effective System Performance (ESP) benchmark, which measures ...

Wong, Adrian T.; Oliker, Leonid; Kramer, William T.C.; Kaltz, Teresa L.; Bailey, David H.

2000-01-01T23:59:59.000Z

149

Mechanisms of Low Cloud–Climate Feedback in Idealized Single-Column Simulations with the Community Atmospheric Model, Version 3 (CAM3)  

Science Conference Proceedings (OSTI)

This study investigates the physical mechanism of low cloud feedback in the Community Atmospheric Model, version 3 (CAM3) through idealized single-column model (SCM) experiments over the subtropical eastern oceans. Negative cloud feedback is ...

Minghua Zhang; Christopher Bretherton

2008-09-01T23:59:59.000Z

150

Evaluation of Near-Surface Parameters in the Two Versions of the Atmospheric Model in CESM1 using Flux Station Observations  

Science Conference Proceedings (OSTI)

This paper describes the performance of the Community Atmosphere Model (CAM) versions 4 and 5 in simulating near-surface parameters. CAM is the atmospheric component of the Community Earth System Model (CESM). Most of the parameterizations in the ...

Jenny Lindvall; Gunilla Svensson; Cecile Hannay

2013-01-01T23:59:59.000Z

151

Mixed-Oxide (MOX) Fuel Performance Benchmarks  

Science Conference Proceedings (OSTI)

Within the framework of the OECD/NEA Expert Group on Reactor-based Plutonium Disposition (TFRPD), a fuel modeling code benchmark for MOX fuel was initiated. This paper summarizes the calculation results provided by the contributors for the first two fuel performance benchmark problems. A limited sensitivity study of the effect of the rod power uncertainty on code predictions of fuel centerline temperature and fuel pin pressure also was performed and is included in the paper.

Ott, Larry J. [ORNL]; Tverberg, Terje [OECD Halden Reactor Project]; Sartori, Enrico [ORNL]

2009-01-01T23:59:59.000Z

152

Control of a benchmark structure using GA-optimized fuzzy logic control  

E-Print Network (OSTI)

Mitigation of displacement and acceleration responses of a three-story benchmark structure excited by seismic motions is pursued in this study. Multiple 20-kN magnetorheological (MR) dampers are installed in the three-story benchmark structure and managed by a global fuzzy logic controller to provide smart damping forces to the benchmark structure. Two configurations of MR damper locations are considered to display multiple-input, single-output and multiple-input, multiple-output control capabilities. Characterization tests of each MR damper are performed in a laboratory to enable the formulation of fuzzy inference models. Prediction of MR damper forces by the fuzzy models shows sufficient agreement with experimental results. A controlled-elitist multi-objective genetic algorithm is utilized to optimize a set of fuzzy logic controllers with concurrent consideration of four structural response metrics. The genetic algorithm is able to identify optimal passive cases for MR damper operation, and then further improve their performance by intelligently modulating the command voltage for concurrent reductions of displacement and acceleration responses. An optimal controller is identified and validated through numerical simulation and full-scale experimentation. Numerical and experimental results show that performance of the controller algorithm is superior to optimal passive cases in 43% of investigated studies. Furthermore, the state-space model of the benchmark structure that is used in numerical simulations has been improved by a modified version of the same genetic algorithm used in development of the fuzzy logic controllers. Experimental validation shows that the state-space model optimized by the genetic algorithm provides accurate prediction of the response of the benchmark structure to base excitation.

Shook, David Adam

2006-12-01T23:59:59.000Z

153

Making Buildings Part of the Climate Solution by Overcoming Information Gaps through Benchmarking  

E-Print Network (OSTI)

This paper focuses on the impact of benchmarking the energy performance of U.S. commercial buildings by requiring utilities to submit energy data to a uniform database accessible to building owners and tenants. Understanding how a commercial building uses energy has many benefits; in particular, it helps building owners and tenants focus on poor-performing buildings and subsystems, and enables high-performing buildings to participate in various certification programs that can lead to higher occupancy rates, rents, and property values. Through analysis chiefly utilizing the Georgia Tech version of the National Energy Modeling System (GT-NEMS), updating input discount rates and the impact of benchmarking shows a reduction in energy consumption of 5.6% in 2035 relative to the Reference case projection of the Annual Energy Outlook 2011. It is estimated that the benefits of a national benchmarking policy would outweigh the costs, both to the private sector and society broadly. However, its geographical impact would vary substantially, with the South Atlantic and New England regions benefiting the most. By reducing the discount rates used to evaluate energy-efficiency investments, benchmarking would increase the purchase of energy-efficient equipment, thereby reducing energy bills, CO2 emissions, and conventional air pollution.

Matt Cox; Marilyn A. Brown; Xiaojing Sun

2012-01-01T23:59:59.000Z

154

Benchmarking and Energy Saving Tool | Open Energy Information  

Open Energy Info (EERE)

Tool Summary. Name: Benchmarking and Energy Saving Tool. Agency/Company/Organization: Lawrence Berkeley National Laboratory. Sector: Energy. Focus Area: Energy Efficiency, Central Plant, Industry. Topics: Pathways analysis. Resource Type: Software/modeling tools. User Interface: Spreadsheet. Website: industrial-energy.lbl.gov/node/100. Cost: Free. Language: English. The Benchmarking and Energy Saving Tool (BEST) is an Excel-based spreadsheet energy analysis tool developed by Lawrence Berkeley National Laboratory.

155

Result Summary for the Area 5 Radioactive Waste Management Site Performance Assessment Model Version 4.110  

SciTech Connect

Results for Version 4.110 of the Area 5 Radioactive Waste Management Site (RWMS) performance assessment (PA) model are summarized. Version 4.110 includes the fiscal year (FY) 2010 inventory estimate, including a future inventory estimate. Version 4.110 was implemented in GoldSim 10.11(SP4). The following changes have been implemented since the last baseline model, Version 4.105: (1) Updated the inventory and disposal unit configurations with data through the end of FY 2010. (2) Implemented Federal Guidance Report 13 Supplemental CD dose conversion factors (U.S. Environmental Protection Agency, 1999). Version 4.110 PA results comply with air pathway and all-pathways annual total effective dose (TED) performance objectives (Tables 2 and 3, Figures 1 and 2). Air pathway results decrease moderately for all scenarios. The time of the maximum for the air pathway open rangeland scenario shifts from 1,000 to 100 years (y). All-pathways annual TED increases for all scenarios except the resident scenario. The maximum member of public all-pathways dose occurs at 1,000 y for the resident farmer scenario. The resident farmer dose was predominantly due to technetium-99 (Tc-99) (82 percent) and lead-210 (Pb-210) (13 percent). Pb-210 present at 1,000 y is produced predominantly by radioactive decay of uranium-234 (U-234) present at the time of disposal. All results for the postdrilling and intruder-agriculture scenarios comply with the performance objectives (Tables 4 and 5, Figures 3 and 4). The postdrilling intruder results are similar to Version 4.105 results. The intruder-agriculture results are similar to Version 4.105, except for the Pit 6 Radium Disposal Unit (RaDU). The intruder-agriculture result for the Shallow Land Burial (SLB) disposal units is a significant fraction of the performance objective and exceeds the performance objective at the 95th percentile. The intruder-agriculture dose is due predominantly to Tc-99 (75 percent) and U-238 (9.5 percent). The acute intruder scenario results comply with all performance objectives (Tables 6 and 7, Figures 5 and 6). The acute construction result for the SLB disposal units decreases significantly with this version. The maximum acute intruder dose occurs at 1,000 y for the SLB disposal units under the acute construction scenario. The acute intruder dose is caused by multiple radionuclides including U-238 (31 percent), Th-229 (28 percent), plutonium-239 (8.6 percent), U-233 (7.8 percent), and U-234 (6.7 percent). All results for radon-222 (Rn-222) flux density comply with the performance objective (Table 8, Figure 7). The mean Pit 13 RaDU flux density is close to the 0.74 Bq m⁻² s⁻¹ limit.

NSTec Environmental Management

2011-07-20T23:59:59.000Z

156

Origin of the Springtime Westerly Bias in Equatorial Atlantic Surface Winds in the Community Atmosphere Model Version 3 (CAM3) Simulation  

Science Conference Proceedings (OSTI)

This study makes the case that westerly bias in the surface winds of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 3 (CAM3), over the equatorial Atlantic in boreal spring has its origin in the rainfall (...

Ching-Yee Chang; Sumant Nigam; James A. Carton

2008-09-01T23:59:59.000Z

157

Impact of Data Assimilation on Forecasting Convection over the United Kingdom Using a High-Resolution Version of the Met Office Unified Model  

Science Conference Proceedings (OSTI)

A high-resolution data assimilation system has been implemented and tested within a 4-km grid length version of the Met Office Unified Model (UM). A variational analysis scheme is used to correct larger scales using conventional observation ...

Mark Dixon; Zhihong Li; Humphrey Lean; Nigel Roberts; Sue Ballard

2009-05-01T23:59:59.000Z

158

The Climatology of the Middle Atmosphere in a Vertically Extended Version of the Met Office’s Climate Model. Part II: Variability  

Science Conference Proceedings (OSTI)

Stratospheric variability is examined in a vertically extended version of the Met Office global climate model. Equatorial variability includes the simulation of an internally generated quasi-biennial oscillation (QBO) and semiannual oscillation (...

Scott M. Osprey; Lesley J. Gray; Steven C. Hardiman; Neal Butchart; Andrew C. Bushell; Tim J. Hinton

2010-11-01T23:59:59.000Z

159

Introduction to the HPC Challenge Benchmark Suite  

E-Print Network (OSTI)

... and Karl Solchenbach. Benchmark design for ... effect of computer benchmarks upon applied mathematics ... Petitet. The LINPACK benchmark: Past, present ...

2005-01-01T23:59:59.000Z

160

Machine Learning Benchmarks and Random Forest Regression  

E-Print Network (OSTI)

Machine Learning Benchmarks and Random Forest Regression ... error on a suite of benchmark datasets. As the base ... the Machine Learning Benchmark Problems package; see http://

Segal, Mark R

2004-01-01T23:59:59.000Z

161

Decommissioning Benchmarking Study Final Report | Department...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE's former Office of Environmental Restoration (EM-40) conducted a benchmarking study of its...

162

Investigating the limits of randomized benchmarking protocols  

E-Print Network (OSTI)

In this paper, we analyze the performance of randomized benchmarking protocols on gate sets under a variety of realistic error models that include systematic rotations, amplitude damping, leakage to higher levels, and 1/f noise. We find that, in almost all cases, benchmarking provides better than a factor-of-two estimate of average error rate, suggesting that randomized benchmarking protocols are a valuable tool for verification and validation of quantum operations. In addition, we derive new models for fidelity decay curves under certain types of non-Markovian noise models such as 1/f and leakage errors. We also show that, provided the standard error of the fidelity measurements is small, only a small number of trials are required for high confidence estimation of gate errors.
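
For context, the standard zeroth-order randomized benchmarking analysis fits the average sequence fidelity to F(m) = A p^m + B and converts the decay parameter p into an average error per gate r = (1 - p)(d - 1)/d, with d = 2 for a single qubit. The sketch below fits synthetic decay data to that model; it does not implement the modified decay curves the paper derives for 1/f or leakage noise.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, a, p, b):
    """Zeroth-order randomized benchmarking model: F(m) = A p^m + B."""
    return a * p ** m + b

rng = np.random.default_rng(1)
lengths = np.arange(1, 201, 10)                       # Clifford sequence lengths
true_a, true_p, true_b = 0.5, 0.995, 0.5              # synthetic ground truth
fidelities = rb_decay(lengths, true_a, true_p, true_b) + rng.normal(0, 0.005, lengths.size)

(a, p, b), _ = curve_fit(rb_decay, lengths, fidelities, p0=(0.5, 0.99, 0.5))
d = 2                                                  # Hilbert-space dimension for one qubit
avg_error_per_gate = (1 - p) * (d - 1) / d
print(f"fitted p = {p:.4f}, average error per gate r = {avg_error_per_gate:.2e}")
```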

Jeffrey M. Epstein; Andrew W. Cross; Easwar Magesan; Jay M. Gambetta

2013-08-13T23:59:59.000Z

163

The MIT Emissions Prediction and Policy Analysis (EPPA) Model: Version 4  

E-Print Network (OSTI)

The Emissions Prediction and Policy Analysis (EPPA) model is the part of the MIT Integrated Global Systems Model (IGSM) that represents the human systems. EPPA is a recursive-dynamic multi-regional general equilibrium model ...

Paltsev, Sergey.

164

Hydrogen Macro System Model User Guide, Version 1.2.1  

NLE Websites -- All DOE Office Websites (Extended Search)

Model (HDSAM), and GREET models, thus allowing analysis of the economics, primary energy-source requirements, and emissions of hydrogen production and delivery pathways....

165

Benchmarking using basic DBMS operations  

Science Conference Proceedings (OSTI)

The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, the ...

Alain Crolotte; Ahmad Ghazal

2010-09-01T23:59:59.000Z

166

Building Energy Use Benchmarking Guidance  

NLE Websites -- All DOE Office Websites (Extended Search)

Building Energy Use Benchmarking Guidance April 15, 2010 EISA SECTION 432 - Benchmarking of Federal Facilities (42 U.S.C. 8253 Subsection (f), Use of Energy and Water Efficiency...

167

Version 2.0 Visual Sample Plan (VSP): Models and Code Verification  

E-Print Network (OSTI)

... Validation of stochastic flow and transport models for unsaturated soils: A comprehensive field study, NUREG ...

168

Comparison and validation of HEU and LEU modeling results to HEU experimental benchmark data for the Massachusetts Institute of Technology MITR reactor.  

Science Conference Proceedings (OSTI)

The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context, most research and test reactors both domestic and international have started a program of conversion to the use of Low Enriched Uranium (LEU) fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (UMo) is expected to allow the conversion of U.S. domestic high performance reactors like the MITR-II reactor. Towards this goal, comparisons of MCNP5 Monte Carlo neutronic modeling results for HEU and LEU cores have been performed. Validation of the model has been based upon comparison to HEU experimental benchmark data for the MITR-II. The objective of this work was to demonstrate a model which could represent the experimental HEU data, and therefore could provide a basis to demonstrate LEU core performance. This report presents an overview of MITR-II model geometry and material definitions which have been verified, and updated as required during the course of validation to represent the specifications of the MITR-II reactor. Results of calculations are presented for comparisons to historical HEU start-up data from 1975-1976, and to other experimental benchmark data available for the MITR-II Reactor through 2009. This report also presents results of steady state neutronic analysis of an all-fresh LEU fueled core. Where possible, HEU and LEU calculations were performed for conditions equivalent to HEU experiments, which serves as a starting point for safety analyses for conversion of MITR-II from the use of HEU fuel to the use of UMo LEU fuel.
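
A minimal sketch of the calculated-to-experimental (C/E) style of comparison commonly used in this kind of benchmark validation is shown below. The quantities, values, and uncertainties are invented placeholders, not MITR-II data.

```python
# Calculated-to-experimental (C/E) comparison for a set of benchmark quantities.
# All values below are hypothetical placeholders, not MITR-II measurements.

measurements = {
    # quantity: (calculated, experimental, experimental 1-sigma uncertainty)
    "k_eff (startup core)": (1.0021, 1.0000, 0.0030),
    "thermal flux, core mid-plane": (3.4e13, 3.6e13, 0.2e13),
    "control blade worth (%dk/k)": (10.8, 11.2, 0.4),
}

for name, (calc, expt, sigma) in measurements.items():
    c_over_e = calc / expt
    n_sigma = (calc - expt) / sigma
    print(f"{name:32s} C/E = {c_over_e:6.3f}  ({n_sigma:+.1f} sigma)")
```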

Newton, T. H.; Wilson, E. H.; Bergeron, A.; Horelik, N.; Stevens, J. (Nuclear Engineering Division); (MIT Nuclear Reactor Lab.)

2011-03-02T23:59:59.000Z

169

California commercial building energy benchmarking  

E-Print Network (OSTI)

benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings.

Kinney, Satkartar; Piette, Mary Ann

2003-01-01T23:59:59.000Z

170

Development of models for the sodium version of the two-phase three-dimensional thermal hydraulics code THERMIT. [LMFBR]  

SciTech Connect

Several different models and correlations were developed and incorporated in the sodium version of THERMIT, a thermal-hydraulics code written at MIT for the purpose of analyzing transients under LMFBR conditions. This includes: a mechanism for the inclusion of radial heat conduction in the sodium coolant as well as radial heat loss to the structure surrounding the test section. The fuel rod conduction scheme was modified to allow for more flexibility in modelling the gas plenum regions and fuel restructuring. The formulas for mass and momentum exchange between the liquid and vapor phases were improved. The single phase and two phase friction factors were replaced by correlations more appropriate to LMFBR assembly geometry.

Wilson, G.J.; Kazimi, M.S.

1980-05-01T23:59:59.000Z

171

Shielding Integral Benchmark Archive and Database (SINBAD)  

Science Conference Proceedings (OSTI)

The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

Kirk, Bernadette Lugue [ORNL]; Grove, Robert E. [ORNL]; Kodeli, I. [International Atomic Energy Agency (IAEA)]; Sartori, Enrico [ORNL]; Gulliford, J. [OECD Nuclear Energy Agency]

2011-01-01T23:59:59.000Z

172

The NewFLOW Computational Model and Intermediate Format - Version 1.04  

E-Print Network (OSTI)

This report motivates and defines a general-purpose, architecture independent, parallel computational model, which captures the intuitions which underlie the design of the United Functions and Objects (UFO) programming language. The model has two aspects, which turn out to be a traditional dataflow model and an actor-like model, with a very simple interface between the two. Certain aspects of the model, particularly strictness, maximum parallelism, and lack of suspension are stressed. The implications of introducing stateful objects are carefully spelled out. The model has several purposes, although we primarily describe it as a vehicle for the compilation and optimisation of UFO, and for visualising the execution of programs. Having motivated the model, this report specifies, in detail, both the syntax and semantics of the model, and provides some examples of its use. 1 Motivation The primary purpose of this report is to define the semantics and syntax of NewFLOW, an intermediate rep...

Julian Seward; John Sargeant; Chris Kirkham

1996-01-01T23:59:59.000Z

173

Cloudy Sky Version of Bird's Broadband Hourly Clear Sky Model (Presentation)  

DOE Green Energy (OSTI)

Presentation on Bird's Broadband Hourly Clear Sky Model given by NREL's Daryl Myers at SOLAR 2006. The objective of this report is to produce "all sky" modeled hourly solar radiation. This is based on observed cloud cover data using a SIMPLE model.

Myers, D.

2006-08-01T23:59:59.000Z

174

Hospital Energy Benchmarking Guidance - Version 1.0  

E-Print Network (OSTI)

... major building energy services and systems: Cooling (equipment ... and other energy-intensive services are additional ... Btu) + energy to distribute service within hospital (Btu of ...

Singer, Brett C.

2010-01-01T23:59:59.000Z

175

Hospital Energy Benchmarking Guidance - Version 1.0  

E-Print Network (OSTI)

... use attribution to thermal services (cooling, space heating, ...) ... a. Identify thermal energy flows (cooling, space heating, ...) ... 1a. Identify thermal energy flows (cooling, heating, ...

Singer, Brett C.

2010-01-01T23:59:59.000Z

176

Hospital Energy Benchmarking Guidance - Version 1.0  

E-Print Network (OSTI)

... systems: a. Identify thermal energy flows (cooling, space ...) ... estimated from the thermal energy supplied to the hospital ... and distribute this thermal energy throughout the facility.

Singer, Brett C.

2010-01-01T23:59:59.000Z

177

Hospital Energy Benchmarking Guidance - Version 1.0  

E-Print Network (OSTI)

... 08B, funded by the California Energy Commission, Public ... sponsored by the California Energy Commission (Commission) ... was funded by the California Energy Commission through a ...

Singer, Brett C.

2010-01-01T23:59:59.000Z

178

Hospital Energy Benchmarking Guidance - Version 1.0  

NLE Websites -- All DOE Office Websites (Extended Search)

Center (DRRC) performed a technology evaluation for the Pacific Gas and Electric Company (PG&E) Emerging Technologies Programs. This report summarizes the design, deployment,...

179

Dynamic Mercury Cycling Model Version 3.0 (D-MCM)  

Science Conference Proceedings (OSTI)

The Dynamic Mercury Cycling Model (D-MCM) predicts the cycling and fate of the major forms of mercury in lakes. The Dynamic Mercury Cycling Model (D-MCM) is a Windows-based simulation model for personal computers. It predicts the cycling and fate of the major forms of mercury in lakes, including methylmercury, Hg(II), and elemental mercury. D-MCM is a time-dependent mechanistic model, designed to consider the most important physical, chemical and biological factors affecting fish mercury concentrations in...

2009-12-09T23:59:59.000Z

180

Technical documentation and user's guide for City-County Allocation Model (CCAM). Version 1. 0  

Science Conference Proceedings (OSTI)

The City-County Allocation Model (CCAM) was developed as part of the Monitored Retrievable Storage (MRS) Program. The CCAM model was designed to allocate population changes forecasted by the MASTER model to specific local communities within commuting distance of the MRS facility. The CCAM model was also designed to forecast the potential changes in demand for key community services such as housing, police protection, and utilities for these communities. The CCAM model uses a flexible on-line data base on demand for community services that is based on a combination of local service levels and state and national service standards. The CCAM model can be used to quickly forecast the potential community service consequences of economic development for local communities anywhere in the country. The purpose of this manual is to assist the user in understanding and operating the City-County Allocation Model (CCAM). The manual explains the data sources for the model and code modifications as well as the operational procedures.
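
As a generic illustration of the allocation-and-demand arithmetic described (not the CCAM data base or its service standards), the sketch below distributes a forecast population change across communities using assumed commuting shares and converts the result to service demand with assumed per-capita standards.

```python
# Illustrative allocation of a projected population change to communities and
# conversion to community-service demand. Shares and per-capita standards are
# invented placeholders, not CCAM data.

population_change = 4000  # new residents forecast by a regional model

commuting_shares = {"Town A": 0.50, "Town B": 0.30, "Town C": 0.20}

per_capita_standards = {
    "housing units": 0.38,       # units per resident
    "police officers": 0.002,    # officers per resident
    "water (m3/day)": 0.6,
}

for town, share in commuting_shares.items():
    added = population_change * share
    demand = {svc: round(added * rate, 1) for svc, rate in per_capita_standards.items()}
    print(f"{town}: +{added:.0f} residents -> {demand}")
```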

Clark, L.T. Jr.; Scott, M.J.; Hammer, P.

1986-05-01T23:59:59.000Z

181

Hydrogen Macro System Model User Guide, Version 1.2.1  

DOE Green Energy (OSTI)

The Hydrogen Macro System Model (MSM) is a simulation tool that links existing and emerging hydrogen-related models to perform rapid, cross-cutting analysis. It allows analysis of the economics, primary energy-source requirements, and emissions of hydrogen production and delivery pathways.

Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.; Genung, K.; Hoseley, R.; Smith, A.; Yuzugullu, E.

2009-07-01T23:59:59.000Z

182

Converter Program: PSADD Dictionary for Dynamics Model, Version 2.0  

Science Conference Proceedings (OSTI)

This manual contains a listing of the PSADD dictionary for PSS/E dynamics models. The format of this dictionary is the same as that used for the "main" PSADD. Wherever possible, dictionary names and data types are the same as those used in the "main" PSADD. Please note that not all of the PSS/E dynamics models have been included in this dictionary and are convertible by the CONVERTER program. EPRI advisors and PTI engineers selected the models for inclusion based on the perceived "usefulness" of the mode...

2000-07-13T23:59:59.000Z

183

Handbook for personal computer versions enhanced oil recovery predictive models: Supporting technology for enhanced oil recovery  

SciTech Connect

The personal computer (PC) programs described in this handbook were adapted from the Tertiary Oil Recovery Information System (TORIS) enhanced oil recovery (EOR) predictive models. The models, both those developed for the Department of Energy and those developed for the National Petroleum Council (NPC), were designed by Scientific Software-Intercomp and were used in the 1984 NPC study on the national potential for enhanced oil recovery. The Department of Energy, Bartlesville Project Office, supported the NPC study and has maintained the models since the study was completed. 10 refs.

Allison, E.; Waldrop, R.; Ray, R.M.

1988-02-01T23:59:59.000Z

184

Relative Risk Model for Transmission and Distribution Electric Infrastructure (General RRM) Version 1.0  

Science Conference Proceedings (OSTI)

The General Relative Risk Model (RRM) is a decision support tool that provides a platform for the assessment of relative risks (human, ecological, and financial) associated with releases of dielectric fluids from a wide range of transmission and distribution (T&D) electrical equipment.  The General RRM is designed to model the relative risk of a given equipment portfolio (i.e., a user-defined grouping of T&D equipment) owned and operated by a utility.  The General RRM evaluates the ...

2012-11-28T23:59:59.000Z

185

Guidebook for Using the Tool BEST Cement: Benchmarking and Energy Savings Tool for the Cement Industry  

SciTech Connect

The Benchmarking and Energy Savings Tool (BEST) Cement is a process-based tool based on commercially available efficiency technologies used anywhere in the world applicable to the cement industry. This version has been designed for use in China. No actual cement facility with every single efficiency measure included in the benchmark will likely exist; however, the benchmark sets a reasonable standard by which to compare for plants striving to be the best. The energy consumption of the benchmark facility differs due to differences in processing at a given cement facility. The tool accounts for most of these variables and allows the user to adapt the model to operational variables specific for his/her cement facility. Figure 1 shows the boundaries included in a plant modeled by BEST Cement. In order to model the benchmark, i.e., the most energy efficient cement facility, so that it represents a facility similar to the user's cement facility, the user is first required to input production variables in the input sheet (see Section 6 for more information on how to input variables). These variables allow the tool to estimate a benchmark facility that is similar to the user's cement plant, giving a better picture of the potential for that particular facility, rather than benchmarking against a generic one. The input variables required include the following: (1) the amount of raw materials used in tonnes per year (limestone, gypsum, clay minerals, iron ore, blast furnace slag, fly ash, slag from other industries, natural pozzolans, limestone powder (used post-clinker stage), municipal wastes and others); the amount of raw materials that are preblended (prehomogenized and proportioned) and crushed (in tonnes per year); (2) the amount of additives that are dried and ground (in tonnes per year); (3) the production of clinker (in tonnes per year) from each kiln by kiln type; (4) the amount of raw materials, coal and clinker that is ground by mill type (in tonnes per year); (5) the amount of production of cement by type and grade (in tonnes per year); (6) the electricity generated onsite; and, (7) the energy used by fuel type; and, the amount (in RMB per year) spent on energy. The tool offers the user the opportunity to do a quick assessment or a more detailed assessment--this choice will determine the level of detail of the energy input. The detailed assessment will require energy data for each stage of production while the quick assessment will require only total energy used at the entire facility (see Section 6 for more details on quick versus detailed assessments). The benchmarking tool provides two benchmarks--one for Chinese best practices and one for international best practices. Section 2 describes the differences between these two and how each benchmark was calculated. The tool also asks for a target input by the user for the user to set goals for the facility.
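
As a generic illustration of how a comparison against a benchmark facility can be expressed (not the tool's actual algorithm or data), the sketch below forms an energy intensity index as the ratio of a facility's actual energy use to the energy a best-practice facility would need for the same production.

```python
# Generic energy-intensity-index illustration: actual energy use divided by the
# energy a best-practice facility would use for the same production. The
# intensity and production values are invented placeholders, not BEST Cement data.

benchmark_intensity_gj_per_t = {   # best-practice energy per tonne processed (hypothetical)
    "raw material grinding": 0.06,
    "clinker production": 3.0,
    "cement grinding": 0.11,
}

production_t = {
    "raw material grinding": 1.5e6,
    "clinker production": 1.0e6,
    "cement grinding": 1.2e6,
}

actual_energy_gj = 4.2e6   # facility's measured total energy use (hypothetical)

benchmark_energy_gj = sum(benchmark_intensity_gj_per_t[s] * production_t[s] for s in production_t)
eii = actual_energy_gj / benchmark_energy_gj
print(f"benchmark energy: {benchmark_energy_gj:.2e} GJ, energy intensity index = {eii:.2f}")
```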

Galitsky, Christina; Price, Lynn; Zhou, Nan; Fuqiu , Zhou; Huawen, Xiong; Xuemin, Zeng; Lan, Wang

2008-07-30T23:59:59.000Z

186

Benchmarking of the MIT High Temperature Gas-cooled Reactor TRISO-coated particle fuel performance model  

E-Print Network (OSTI)

MIT has developed a Coated Particle Fuel Performance Model to study the behavior of TRISO nuclear fuels. The code, TIMCOAT, is designed to assess the mechanical and chemical condition of populations of coated particles and ...

Stawicki, Michael A

2006-01-01T23:59:59.000Z

187

Senior Design Projects 2013 Project Title 1: Monte Carlo Simulations Using a Benchmark Full-Core Pressurized Water Reactor Model  

E-Print Network (OSTI)

... defined in MCNP. There are a number of approaches in parallel high performance computing that can ... and 7,168 GPUs. The high performance computing industry is moving toward a hybrid computer model, where ...

Danon, Yaron

188

Wind Turbine Generator Model Validation Software Tool (WTGMV) Version 1.0  

Science Conference Proceedings (OSTI)

This software tool allows the user to validate the model for a wind turbine generator (WTG) using measured disturbance data from either a digital fault recorder (DFR) or a phasor measurement unit (PMU) located at the turbine; factory measured data from type testing of the turbine may also be used. The tool also performs parameter optimization on some of the model parameters, such as a few of the controller gains. The tool is a first step in the ultimate plan to enhance the tool to allow for ...

2012-08-30T23:59:59.000Z

189

Guidance and Recommended Procedures for Maintaining and Using RACKLIFE Version 1.10 Models  

Science Conference Proceedings (OSTI)

RACKLIFE is a spent fuel rack management tool that can be applied to extend the useful service life of racks utilizing Boraflex as the neutron absorber material for nuclear criticality control. This document provides procedures and guidance for maintaining and using RACKLIFE models.

2002-04-23T23:59:59.000Z

190

Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California  

E-Print Network (OSTI)

Using the CEUS Database to Benchmark Commercial Buildings in ... features allow users to “benchmark” the presence or absence ... for Required Building Data ... Benchmark Applicable Metrics & ...

Mathew, Paul

2008-01-01T23:59:59.000Z

191

Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions  

E-Print Network (OSTI)

... Hours (full cooling) ... Electrical Power Chain ... dP1 UPS Peak ... Electrical Power Chain Benchmarking Guide ... 6. Electrical Power Chain Metrics ... ID P1 ...

Mathew, Paul

2010-01-01T23:59:59.000Z

192

Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions  

E-Print Network (OSTI)

Purpose: This guide describes energy efficiency metrics and ... the energy benchmarking approach described in this guide can ... designers and energy managers. This guide also builds on ...

Mathew, Paul

2010-01-01T23:59:59.000Z

193

Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions  

SciTech Connect

This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
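
One widely used whole-facility metric for data centers is power usage effectiveness (PUE), the ratio of total facility energy to IT equipment energy. The sketch below computes it from annual meter readings; the readings are invented, and the guide's full metric set and benchmark values are defined in the guide and its spreadsheet templates rather than here.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual meter readings (kWh).
total_kwh = 8_500_000   # whole data center: IT + cooling + UPS/distribution losses + lighting
it_kwh = 5_000_000      # servers, storage, and network gear

print(f"PUE = {pue(total_kwh, it_kwh):.2f} "
      f"(infrastructure overhead = {total_kwh / it_kwh - 1:.0%} of IT energy)")
```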

Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

2009-07-13T23:59:59.000Z

194

T2LBM Version 1.0: Landfill bioreactor model for TOUGH2  

DOE Green Energy (OSTI)

The need to control gas and leachate production and minimize refuse volume in landfills has motivated the development of landfill simulation models that can be used by operators to predict and design optimal treatment processes. T2LBM is a module for the TOUGH2 simulator that implements a Landfill Bioreactor Model to provide simulation capability for the processes of aerobic or anaerobic biodegradation of municipal solid waste and the associated flow and transport of gas and liquid through the refuse mass. T2LBM incorporates a Monod kinetic rate law for the biodegradation of acetic acid in the aqueous phase by either aerobic or anaerobic microbes as controlled by the local oxygen concentration. Acetic acid is considered a proxy for all biodegradable substrates in the refuse. Aerobic and anaerobic microbes are assumed to be immobile and not limited by nutrients in their growth. Methane and carbon dioxide generation due to biodegradation with corresponding thermal effects are modeled. The numerous parameters needed to specify biodegradation are input by the user in the SELEC block of the TOUGH2 input file. Test problems show that good matches to laboratory experiments of biodegradation can be obtained. A landfill test problem demonstrates the capabilities of T2LBM for a hypothetical two-dimensional landfill scenario with permeability heterogeneity and compaction.
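
A minimal sketch of Monod-type substrate consumption of the kind applied to acetic acid is shown below, as a standalone batch calculation decoupled from TOUGH2's flow and transport. The rate constants and initial concentrations are illustrative placeholders, not T2LBM defaults.

```python
# Forward-Euler integration of Monod kinetics for a single substrate (an acetic
# acid proxy) and an immobile biomass. Parameter values are illustrative only.
MU_MAX = 0.5     # maximum specific growth rate, 1/day
K_S = 50.0       # half-saturation constant, mg/L
YIELD = 0.1      # biomass produced per substrate consumed, mg/mg

def simulate(substrate0, biomass0, days, dt=0.01):
    s, x = substrate0, biomass0
    for _ in range(int(days / dt)):
        growth = MU_MAX * s / (K_S + s) * x        # biomass growth rate, mg/L/day
        s = max(s - dt * growth / YIELD, 0.0)      # substrate consumed
        x = x + dt * growth
    return s, x

if __name__ == "__main__":
    s_final, x_final = simulate(substrate0=1000.0, biomass0=1.0, days=30.0)
    print(f"after 30 days: substrate {s_final:.1f} mg/L, biomass {x_final:.1f} mg/L")
```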

Oldenburg, Curtis M.

2001-05-22T23:59:59.000Z

195

System cost model user's manual, version 1.2  

SciTech Connect

The System Cost Model (SCM) was developed by Lockheed Martin Idaho Technologies in Idaho Falls, Idaho and MK-Environmental Services in San Francisco, California to support the Baseline Environmental Management Report sensitivity analysis for the U.S. Department of Energy (DOE). The SCM serves the needs of the entire DOE complex for treatment, storage, and disposal (TSD) of mixed low-level, low-level, and transuranic waste. The model can be used to evaluate total complex costs based on various configuration options or to evaluate site-specific options. The site-specific cost estimates are based on generic assumptions such as waste loads and densities, treatment processing schemes, existing facilities capacities and functions, storage and disposal requirements, schedules, and cost factors. The SCM allows customization of the data for detailed site-specific estimates. There are approximately forty TSD module designs that have been further customized to account for design differences for nonalpha, alpha, remote-handled, and transuranic wastes. The SCM generates cost profiles based on the model default parameters or customized user-defined input and also generates costs for transporting waste from generators to TSD sites.

Shropshire, D.

1995-06-01T23:59:59.000Z

196

Analysis of Convective Transport and Parameter Sensitivity in a Single Column Version of the Goddard Earth Observation System, Version 5, General Circulation Model  

Science Conference Proceedings (OSTI)

Convection strongly influences the distribution of atmospheric trace gases. General circulation models (GCMs) use convective mass fluxes calculated by parameterizations to transport gases, but the results are difficult to compare with trace gas ...

L. E. Ott; J. Bacmeister; S. Pawson; K. Pickering; G. Stenchikov; M. Suarez; H. Huntrieser; M. Loewenstein; J. Lopez; I. Xueref-Remy

2009-03-01T23:59:59.000Z

197

Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation  

Science Conference Proceedings (OSTI)

Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the FSAR of Atucha-2. (authors)

Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy)]; Mazzantini, O. [Nucleo-electrica Argentina Societad Anonima NA-SA, Buenos Aires (Argentina)]

2012-07-01T23:59:59.000Z

198

NFS version 4 Protocol  

Science Conference Proceedings (OSTI)

NFS (Network File System) version 4 is a distributed file system protocol which owes heritage to NFS protocol versions 2 [RFC1094] and 3 [RFC1813]. Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating ...

S. Shepler; B. Callaghan; D. Robinson; R. Thurlow; C. Beame; M. Eisler; D. Noveck

2000-12-01T23:59:59.000Z

199

Quantum Benchmarks from minimal Resources  

E-Print Network (OSTI)

We investigate several recently published benchmark criteria for storage or transmission of quantum information. A comparison reveals that criteria based on a Gaussian distribution of coherent states are most resilient to noise. We then address the issue of experimental resources and derive an equally strong benchmark, solely based on three coherent states and homodyne detection. This benchmark is further simplified in the presence of naturally occurring random phases, which remove the need for active input state modulation.

Häseler, Hauke

2009-01-01T23:59:59.000Z

200

Fuel Cell Power Model Version 2: Startup Guide, System Designs, and Case Studies. Modeling Electricity, Heat, and Hydrogen Generation from Fuel Cell-Based Distributed Energy Systems  

DOE Green Energy (OSTI)

This guide helps users get started with the U.S. Department of Energy/National Renewable Energy Laboratory Fuel Cell Power (FCPower) Model Version 2, which is a Microsoft Excel workbook that analyzes the technical and economic aspects of high-temperature fuel cell-based distributed energy systems with the aim of providing consistent, transparent, comparable results. This type of energy system would provide onsite-generated heat and electricity to large end users such as hospitals and office complexes. The hydrogen produced could be used for fueling vehicles or stored for later conversion to electricity.

Steward, D.; Penev, M.; Saur, G.; Becker, W.; Zuboy, J.

2013-06-01T23:59:59.000Z

201

User's guide to DIANE Version 2.1: A microcomputer software package for modeling battery performance in electric vehicle applications  

DOE Green Energy (OSTI)

DIANE is an interactive microcomputer software package for the analysis of battery performance in electric vehicle (EV) applications. The principal objective of this software package is to enable the prediction of EV performance on the basis of laboratory test data for batteries. The model provides a second-by-second simulation of battery voltage and current for any specified velocity/time or power/time profile. The capability of the battery is modeled by an algorithm that relates the battery voltage to the withdrawn current, taking into account the effect of battery depth-of-discharge (DOD). Because of the lack of test data and other constraints, the current version of DIANE deals only with vehicles using "fresh" batteries, with or without regenerative braking. Deterioration of battery capability due to aging can presently be simulated with user-input parameters accounting for an increase of effective internal resistance and/or a decrease of cell no-load voltage. DIANE 2.1 is written in FORTRAN for use on IBM-compatible microcomputers. 7 refs.
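
The abstract describes an algorithm that relates battery voltage to withdrawn current as a function of depth-of-discharge. The sketch below is a minimal second-by-second discharge loop in that spirit; the linear open-circuit-voltage and internal-resistance fits, parameter values, and function names are placeholders, not DIANE's actual battery data or method.

```python
# Sketch of a second-by-second battery discharge loop of the kind DIANE
# performs: V = Voc(DOD) - I * Rint(DOD), stepping through a power profile.
# The linear Voc/Rint fits below are placeholders, not DIANE's data.

def voc(dod):          # open-circuit voltage [V] falling with depth-of-discharge
    return 2.1 - 0.3 * dod

def rint(dod):         # effective internal resistance [ohm] rising with DOD
    return 0.005 * (1.0 + 0.5 * dod)

def simulate(power_profile_w, capacity_ah=200.0, dt_s=1.0):
    dod, history = 0.0, []
    for p in power_profile_w:
        # Solve P = V*I = (Voc - I*Rint)*I for the discharge current I.
        v0, r = voc(dod), rint(dod)
        disc = v0 * v0 - 4.0 * r * p
        i = (v0 - disc ** 0.5) / (2.0 * r) if disc > 0 else v0 / (2.0 * r)
        v = v0 - i * r
        dod += i * dt_s / (capacity_ah * 3600.0)
        history.append((v, i, dod))
    return history

print(simulate([50.0] * 5)[-1])
```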

Marr, W.W.; Walsh, W.J. (Argonne National Lab., IL (USA). Energy Systems Div.); Symons, P.C. (Electrochemical Engineering Consultants, Inc., Morgan Hill, CA (USA))

1990-06-01T23:59:59.000Z

202

Sustained System Performance (SSP) Benchmark  

NLE Websites -- All DOE Office Websites (Extended Search)

John M. Shalf, and Erich Strohmaier Background The NERSC Approach to Procurement Benchmarks The NERSC-5 SSP The NERSC-6 SSP The Effective System Performance (ESP) Metric...

203

Decommissioning Benchmarking Study Final Report  

Energy.gov (U.S. Department of Energy (DOE))

DOE's former Office of Environmental Restoration (EM-40) conducted a benchmarking study of its decommissioning program to analyze physical activities in facility decommissioning and to determine...

204

BENCHMARKING EMERGING PIPELINE INSPECTION TECHNOLOGIES  

NLE Websites -- All DOE Office Websites (Extended Search)

FINAL REPORT Benchmarking Emerging Pipeline Inspection Technologies To Department of Energy National Energy Technology Laboratory (NETL) DE-AP26-04NT40361 and Department of...

205

Effective System Performance (ESP) Benchmark  

NLE Websites -- All DOE Office Websites (Extended Search)

System Performance (ESP) Benchmark It is now generally recognized in the high performance computing community that peak performance does not adequately predict the usefulness...

206

Hybrid2: The hybrid system simulation model, Version 1.0, user manual  

DOE Green Energy (OSTI)

In light of the large scale desire for energy in remote communities, especially in the developing world, the need for a detailed long term performance prediction model for hybrid power systems was seen. To meet these ends, engineers from the National Renewable Energy Laboratory (NREL) and the University of Massachusetts (UMass) have spent the last three years developing the Hybrid2 software. The Hybrid2 code provides a means to conduct long term, detailed simulations of the performance of a large array of hybrid power systems. This work acts as an introduction and users manual to the Hybrid2 software. The manual describes the Hybrid2 code, what is included with the software and instructs the user on the structure of the code. The manual also describes some of the major features of the Hybrid2 code as well as how to create projects and run hybrid system simulations. The Hybrid2 code test program is also discussed. Although every attempt has been made to make the Hybrid2 code easy to understand and use, this manual will allow many organizations to consider the long term advantages of using hybrid power systems instead of conventional petroleum based systems for remote power generation.

Baring-Gould, E.I.

1996-06-01T23:59:59.000Z

207

Benchmarking ICRF simulations for ITER  

DOE Green Energy (OSTI)

Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

2010-09-28T23:59:59.000Z

208

Building energy benchmarks and rating tools | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Building energy benchmarks and rating tools

209

Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1  

SciTech Connect

Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)
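
To make the per-category analysis described above concrete, the sketch below aggregates calculated-over-benchmark k-eff biases by library and ICSBEP category. The records in the list are invented placeholders, not results from the paper.

```python
# Sketch of the kind of per-category aggregation described above:
# average k-eff bias per ICSBEP category for each nuclear data library.
# The records below are made up, not results from the paper.
from collections import defaultdict
from statistics import mean

results = [  # (library, ICSBEP category, calculated k-eff, benchmark k-eff)
    ("ENDF/B-VII.1", "HEU-MET-FAST",   1.0003, 1.0000),
    ("ENDF/B-VII.1", "LEU-COMP-THERM", 0.9989, 1.0000),
    ("JENDL-4.0",    "HEU-MET-FAST",   1.0011, 1.0000),
    ("JEFF-3.1",     "LEU-COMP-THERM", 0.9978, 1.0000),
]

bias = defaultdict(list)
for lib, cat, calc, bench in results:
    bias[(lib, cat)].append((calc / bench - 1.0) * 1e5)  # bias in pcm

for (lib, cat), values in sorted(bias.items()):
    print(f"{lib:14s} {cat:16s} mean bias = {mean(values):+7.1f} pcm")
```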

Van Der Marck, S. C. [Nuclear Research and Consultancy Group NRG, P.O. Box 25, 1755 ZG Petten (Netherlands)]

2012-07-01T23:59:59.000Z

210

Cleanroom Energy Efficiency: Metrics and Benchmarks  

E-Print Network (OSTI)

The key metrics and benchmarks to evaluate the efficiency of ... Cleanroom Energy Efficiency: Metrics and Benchmarks, Paul A. Mathew, William ... Cleanroom Energy Efficiency: Metrics and Benchmarks, Paul A. Mathew, Ph.D., ...

Mathew, Paul A.

2012-01-01T23:59:59.000Z

211

Quantum benchmarks for Gaussian states  

E-Print Network (OSTI)

Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations require identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.
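
The abstract quotes a classical fidelity benchmark of 1/2 for coherent states. The Monte Carlo sketch below illustrates that number under the standard assumption that the best classical (measure-and-prepare) strategy for a flat prior is heterodyne detection followed by re-preparation of a coherent state at the measured amplitude; it is an illustration, not the paper's derivation for general Gaussian states.

```python
# Monte Carlo check of the 1/2 classical fidelity benchmark for coherent
# states: heterodyne the input |alpha>, re-prepare |beta> at the measured
# amplitude, and average the fidelity exp(-|alpha - beta|^2).
import random, math

def average_fidelity(n_trials=200_000, prior_width=3.0):
    total = 0.0
    for _ in range(n_trials):
        alpha = complex(random.gauss(0, prior_width), random.gauss(0, prior_width))
        # Heterodyne outcome: alpha plus complex Gaussian noise with
        # variance 1/2 per quadrature in these units (Q-function sampling).
        beta = alpha + complex(random.gauss(0, math.sqrt(0.5)),
                               random.gauss(0, math.sqrt(0.5)))
        total += math.exp(-abs(alpha - beta) ** 2)
    return total / n_trials

print(average_fidelity())   # tends to 0.5
```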

Chiribella, Giulio

2013-01-01T23:59:59.000Z

212

Building Energy Use Benchmarking Guidance  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Building Energy Use Benchmarking Guidance April 15, 2010 EISA SECTION 432 - Benchmarking of Federal Facilities (42 U.S.C. 8253 Subsection (f), Use of Energy and Water Efficiency Measures in Federal Buildings) I. Background A. Authority - Benchmarking Requirements Section 432 of the Energy Independence and Security Act of 2007 (EISA) requires the Secretary of the United States Department of Energy (DOE) to select or develop a building energy use benchmarking system and to issue guidance for use of the system. EISA requires the designated agency energy managers to enter energy use data for each metered building that is (or is a part of) a covered facility into a building energy use benchmarking system, such as the ENERGY STAR Portfolio Manager tool (Portfolio Manager) (see 42 U.S.C. 8253(f)(8)(A), as

213

Benchmarking foreign electronics technologies  

SciTech Connect

This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

1994-12-01T23:59:59.000Z

214

EIA model documentation: World oil refining logistics demand model, "WORLD" reference manual. Version 1.1  

SciTech Connect

This manual is intended primarily for use as a reference by analysts applying the WORLD model to regional studies. It also provides overview information on WORLD features of potential interest to managers and analysts. Broadly, the manual covers WORLD model features in progressively increasing detail. Section 2 provides an overview of the WORLD model, how it has evolved, what its design goals are, what it produces, and where it can be taken with further enhancements. Section 3 reviews model management covering data sources, managing over-optimization, calibration and seasonality, check-points for case construction and common errors. Section 4 describes in detail the WORLD system, including: data and program systems in overview; details of mainframe and PC program control and files; model generation, size management, debugging and error analysis; use with different optimizers; and reporting and results analysis. Section 5 provides a detailed description of every WORLD model data table, covering model controls, case and technology data. Section 6 goes into the details of WORLD matrix structure. It provides an overview, describes how regional definitions are controlled and defines the naming conventions for all model rows, columns, right-hand sides, and bounds. It also includes a discussion of the formulation of product blending and specifications in WORLD. Several Appendices supplement the main sections.

Not Available

1994-04-11T23:59:59.000Z

215

Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPHEP)  

SciTech Connect

The International Reactor Physics Experiments Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency’s (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focus is on the derivation of internationally peer reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with the ICSBEP. This paper highlights the benchmarks that are currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks and for inclusion of ICSBEP benchmarks as IRPhEP benchmarks is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.

J. Blair Briggs; Dr. Enrico Sartori

2005-09-01T23:59:59.000Z

216

Action-Oriented Benchmarking: Concepts and Tools  

E-Print Network (OSTI)

simulation (for design) or energy audits (for retrofit), asconventional benchmarking and energy audits. Whole BuildingBenchmarking Investment-Grade Energy Audit Screen facilities

Mills, Evan; California Energy Commission

2008-01-01T23:59:59.000Z

217

Precise Regression Benchmarking with Random Effects: Improving Mono Benchmark Results  

E-Print Network (OSTI)

Benchmarking as a method of assessing software performance is known to suffer from random fluctuations that distort the observed performance. In this paper, we focus on the fluctuations caused by compilation.
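
In the spirit of the random-effects idea described above, the sketch below separates run-to-run noise from compilation-to-compilation variation in a small timing data set. The numbers and the two-level structure are invented for illustration and are not the paper's data or exact estimator.

```python
# Sketch of a two-level random-effects summary for benchmark timings:
# several independent compilations, several runs per compilation.
# The numbers are invented; the point is the variance decomposition.
from statistics import mean, variance

# times[i][j] = j-th run time (ms) of the i-th compilation of the benchmark
times = [
    [102.1, 101.8, 102.4, 102.0],
    [ 98.7,  99.1,  98.9,  99.0],
    [104.3, 104.0, 104.5, 104.1],
]

comp_means = [mean(runs) for runs in times]
grand_mean = mean(comp_means)
within_var = mean(variance(runs) for runs in times)     # run-to-run noise
between_var = variance(comp_means)                      # compilation effect

print(f"grand mean          = {grand_mean:7.2f} ms")
print(f"within-compilation  = {within_var:7.3f} ms^2")
print(f"between-compilation = {between_var:7.3f} ms^2")
```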

Tomas Kalibera; Petr Tuma

2006-01-01T23:59:59.000Z

218

Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions  

E-Print Network (OSTI)

Building Site Energy Intensity (BTU/sf-yr). A Performance Benchmarkand benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings.

Mathew, Paul

2010-01-01T23:59:59.000Z

219

Phase-Covariant Quantum Benchmarks  

E-Print Network (OSTI)

We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment can not be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

Calsamiglia, J; Muñoz-Tàpia, R; Bagán, E

2008-01-01T23:59:59.000Z

220

Phase-Covariant Quantum Benchmarks  

E-Print Network (OSTI)

We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment can not be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

J. Calsamiglia; M. Aspachs; R. Munoz-Tapia; E. Bagan

2008-07-31T23:59:59.000Z

221

California commercial building energy benchmarking  

SciTech Connect

Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and hence are reluctant to share their data. The California Commercial End Use Survey (CEUS), the primary source of data for Cal-Arch, is a unique source of information on commercial buildings in California. It has not been made public; however, it was made available by CEC to LBNL for the purpose of developing a public benchmarking tool.
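
Distribution-based benchmarking of the kind described above amounts to placing a building's energy use intensity within a peer distribution. The sketch below computes a simple percentile rank; the peer values are invented and are not CEUS or Cal-Arch data.

```python
# Sketch of distribution-based benchmarking: place a building's energy use
# intensity (EUI) within a peer distribution.  Peer EUIs below are invented.

def percentile_rank(value, peers):
    """Fraction (in percent) of peer buildings whose EUI is below `value`."""
    below = sum(1 for p in peers if p < value)
    return 100.0 * below / len(peers)

peer_euis = [45, 52, 60, 63, 70, 74, 81, 88, 95, 110]   # kBtu/sf-yr
my_eui = 78.0
print(f"Building EUI of {my_eui} kBtu/sf-yr is at the "
      f"{percentile_rank(my_eui, peer_euis):.0f}th percentile of its peer group")
```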

Kinney, Satkartar; Piette, Mary Ann

2003-07-01T23:59:59.000Z

222

A Simplified HTTR Diffusion Theory Benchmark  

SciTech Connect

The Georgia Institute of Technology (GA-Tech) recently developed a transport theory benchmark based closely on the geometry and the features of the HTTR reactor that is operational in Japan. Though simplified, the benchmark retains all the principal physical features of the reactor and thus provides a realistic and challenging test for the codes. The purpose of this paper is twofold. The first goal is an extension of the benchmark to diffusion theory applications by generating the additional data not provided in the GA-Tech prior work. The second goal is to use the benchmark on the HEXPEDITE code available to the INL. The HEXPEDITE code is a Green’s function-based neutron diffusion code in 3D hexagonal-z geometry. The results showed that the HEXPEDITE code accurately reproduces the effective multiplication factor of the reference HELIOS solution. A secondary, but no less important, conclusion is that in the testing against actual HTTR data of a full sequence of codes that would include HEXPEDITE, in the apportioning of inevitable discrepancies between experiment and models, the portion of error attributable to HEXPEDITE would be expected to be modest. If large discrepancies are observed, they would have to be explained by errors in the data fed into HEXPEDITE. Results based on a fully realistic model of the HTTR reactor are presented in a companion paper. The suite of codes used in that paper also includes HEXPEDITE. The results shown here should help that effort in the decision making process for refining the modeling steps in the full sequence of codes.

Rodolfo M. Ferrer; Abderrafi M. Ougouag; Farzad Rahnema

2010-10-01T23:59:59.000Z

223

Sustained System Performance (SSP) Benchmark  

NLE Websites -- All DOE Office Websites (Extended Search)

Sustained System Performance (SSP) Benchmark, by William T.C. Kramer, John M. Shalf, and Erich Strohmaier. Contents: Background; The NERSC Approach to Procurement Benchmarks; The NERSC-5 SSP; The NERSC-6 SSP; The Effective System Performance (ESP) Metric; Conclusion; Notes. Formal description of SSP: A formal description of the SSP, including detailed formulae, is now available as a portion of the soon-to-be-published Ph.D. dissertation, Kramer, W.T.C., 2008, "PERCU: A Holistic Method for Evaluating High End Computing Systems," Department of Electrical Engineering and Computer Science, University of California, Berkeley. Background: Most recent plans and reports discuss only one of the four distinct purposes for which benchmarks are used. The obvious purpose is selection of a system from ...
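
SSP-style metrics are often summarized as a composite rate built from per-processor application performance. The sketch below assumes one common formulation (geometric mean of per-processor rates scaled by processor count); it is not claimed to reproduce the exact NERSC-5 or NERSC-6 SSP definitions, and the rates shown are hypothetical.

```python
# Sketch of a composite SSP-style metric: the geometric mean of the
# per-processor rates of a set of application benchmarks, scaled by the
# number of processors.  One common formulation, not the exact NERSC rules.
import math

def ssp(per_processor_rates_gflops, n_processors):
    logs = [math.log(r) for r in per_processor_rates_gflops]
    geo_mean = math.exp(sum(logs) / len(logs))
    return geo_mean * n_processors

rates = [0.45, 0.62, 0.38, 0.51, 0.70]   # hypothetical per-core rates (GFlop/s)
print(f"SSP ~ {ssp(rates, 19_320):,.0f} GFlop/s")
```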

224

ESP: a system utilization benchmark  

Science Conference Proceedings (OSTI)

This article describes a new benchmark, called the Effective System Performance (ESP) test, which is designed to measure system-level performance, including such factors as job scheduling efficiency, handling of large jobs and shutdown-reboot times. ...
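
The ESP test is about system-level throughput rather than peak speed. The sketch below computes a utilization-style efficiency (node-seconds delivered to a job mix divided by node-seconds available during the test); it is illustrative only and does not implement the exact ESP job mix or scoring rules.

```python
# Sketch of a utilization-style efficiency in the spirit of ESP: the ratio
# of node-seconds actually delivered to the job mix to the node-seconds
# available while the test ran.  Illustrative only, not the exact ESP rules.

def utilization_efficiency(jobs, system_nodes, elapsed_s):
    """jobs: list of (nodes_used, runtime_s) for every job in the mix."""
    delivered = sum(n * t for n, t in jobs)
    available = system_nodes * elapsed_s
    return delivered / available

job_mix = [(64, 1200.0), (128, 900.0), (256, 600.0), (512, 300.0)]
print(f"efficiency = {utilization_efficiency(job_mix, 640, 1800.0):.2f}")
```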

Adrian T. Wong; Leonid Oliker; William T. C. Kramer; Teresa L. Kaltz; David H. Bailey

2000-11-01T23:59:59.000Z

225

A PWR Thorium Pin Cell Burnup Benchmark  

SciTech Connect

As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.

Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

2000-05-01T23:59:59.000Z

226

Nuclear Data Performance Testing Using Sensitive, but Less Frequently Used ICSBEP Benchmarks  

Science Conference Proceedings (OSTI)

The International Criticality Safety Benchmark Evaluation Project (ICSBEP) has published the International Handbook of Evaluated Criticality Safety Benchmark Experiments annually since 1995. The Handbook now spans over 51,000 pages with benchmark specifications for 4,283 critical, near critical, or subcritical configurations; 24 criticality alarm placement/shielding configurations with multiple dose points for each; and 200 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. Benchmark data in the ICSBEP Handbook were originally intended for validation of criticality safety methods and data; however, the benchmark specifications are now used extensively for nuclear data testing. There are several, less frequently used benchmarks within the Handbook that are very sensitive to thorium and certain key structural and moderating materials. Calculated results for many of those benchmarks using modern nuclear data libraries suggest there is still room for improvement. These and other highly sensitive, but rarely quoted benchmarks are highlighted and data testing results provided using the Monte Carlo N-Particle Version 5 (MCNP5) code and continuous energy ENDF/B-V, VI.8, and VII.0, JEFF-3.1, and JENDL-3.3 nuclear data libraries.

J. Blair Briggs; John D. Bess

2011-08-01T23:59:59.000Z

227

Depletion Reactivity Benchmark for the International Handbook of Evaluated Reactor Physics Benchmark Experiments  

Science Conference Proceedings (OSTI)

The Electric Power Research Institute (EPRI)-sponsored depletion reactivity benchmarks documented in reports 1022909, Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty, and 1025203, Utilization of the EPRI Depletion Benchmarks for Burnup Credit Validation, have been translated to an evaluated benchmark for incorporation in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhE), published by the Organisation for Economic ...

2013-04-10T23:59:59.000Z

228

Program on Technology Innovation: EPRI Yucca Mountain Total System Performance Assessment Code (IMARC) Version 8: Model Description  

Science Conference Proceedings (OSTI)

EPRI has been conducting independent assessments of the total system performance of the candidate spent nuclear fuel and high level radioactive waste (HLW) repository at Yucca Mountain, Nevada, since 1989. EPRI's total system performance assessment (TSPA) code is formally known as IMARC, or Integrated Multiple Assumptions and Release Code. Descriptions of the current version of IMARC are found in numerous EPRI reports. The purpose of this report is to provide a succinct summary of all components of IMARC...

2005-05-30T23:59:59.000Z

229

Robust randomized benchmarking of quantum processes  

E-Print Network (OSTI)

We describe a simple randomized benchmarking protocol for quantum information processors and obtain a sequence of models for the observable fidelity decay as a function of a perturbative expansion of the errors. We are able to prove that the protocol provides an efficient and reliable estimate of an average error-rate for a set of operations (gates) under a general noise model that allows for both time and gate-dependent errors. We determine the conditions under which this estimate remains valid and illustrate the protocol through numerical examples.
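
For orientation, the sketch below fits the standard zeroth-order randomized-benchmarking decay model, F(m) = A p^m + B, to synthetic sequence fidelities and converts p to an average error rate r = (1-p)(d-1)/d; the higher-order, gate-dependent corrections developed in the paper are not included, and the data are synthetic.

```python
# Sketch of the zeroth-order randomized-benchmarking fit: average sequence
# fidelity F(m) = A*p**m + B, with average error rate r = (1-p)*(d-1)/d
# for a d-dimensional system.  Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, a, p, b):
    return a * p**m + b

rng = np.random.default_rng(0)
lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
true_a, true_p, true_b = 0.5, 0.995, 0.5
fidelities = decay(lengths, true_a, true_p, true_b) + rng.normal(0, 0.003, lengths.size)

(a, p, b), _ = curve_fit(decay, lengths, fidelities, p0=(0.5, 0.99, 0.5))
d = 2                                   # single qubit
r = (1 - p) * (d - 1) / d
print(f"fitted p = {p:.4f}, average error rate r = {r:.2e}")
```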

Easwar Magesan; J. M. Gambetta; Joseph Emerson

2010-09-19T23:59:59.000Z

230

Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements  

Science Conference Proceedings (OSTI)

The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA-(Training, Research, Isotope Production, General Atomics)-conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.

J. D. Bess; T. L. Maddock; M. A. Marshall

2011-09-01T23:59:59.000Z

231

Effective System Performance (ESP) Benchmark  

NLE Websites -- All DOE Office Websites (Extended Search)

Effective System Performance (ESP) Benchmark. It is now generally recognized in the high performance computing community that peak performance does not adequately predict the usefulness of a system for a given set of applications. One of the first benchmarks designed to measure system performance in a real-world operational environment was NERSC's Effective System Performance (ESP) test. NERSC introduced ESP in 1999 with the hope that this test would be of use to system managers and would help to spur the community (both researchers and vendors) to improve system efficiency. The discussion below uses examples from the Cray T3E system that NERSC was operating in 1999. Improved MPP System Efficiency Equals Million-Dollar Savings

232

Benchmarking Energy Use in Schools  

E-Print Network (OSTI)

Local governments across the United States spent approximately $5 billion, an average of $100 million per state, on energy for their public schools in 1992. This represents a tremendous drain on education dollars, part of which (captured through building system and operational efficiency improvements) could be directed toward more important educational needs. States and local governments know there are sizeable opportunities, but are challenged by how and where to start. Identifying the worst energy performers with the most potential, easily and at low cost, is key to motivating local governments into action. Energy benchmarking is an excellent tool for this purpose. The 1992 US Energy Information Administration’s Commercial Buildings Energy Consumption Survey (CBECS) database is investigated as a source for energy benchmarks for local-government-owned schools. Average energy use values derived from CBECS are shown to be poor energy benchmarks. Simple distributions of building energy use values derived from CBECS, however, are shown to be reliable energy benchmarks for local schools. These can be used to gauge the energy performance of your local public school. Using a stepwise, linear-regression analysis, the primary determinants of electric use in local schools were found to be gross floor area, year of construction, use of walk-in coolers, electric cooling, non-electric energy use, roof construction, and HVAC operational responsibility. The determinants vary depending on the school’s location. While benchmarking based on simple distributions is a good method, an improved benchmarking method which can account for these additional drivers of energy use is detailed.
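
The regression-normalized benchmarking idea described above can be illustrated with a least-squares fit of electric use against a couple of drivers, followed by a comparison of actual to predicted use. All values below are invented placeholders, not CBECS records, and the driver set is much smaller than the one found in the study.

```python
# Sketch of a regression-normalized benchmark: fit annual electric use
# against a couple of drivers (floor area, electric cooling), then flag
# schools whose actual use is far above the prediction.  Data invented.
import numpy as np

# columns: gross floor area (kft2), electric cooling (0/1); target: MWh/yr
X = np.array([[ 60, 1], [ 85, 1], [ 40, 0], [120, 1], [ 75, 0], [ 95, 1]], float)
y = np.array([ 520,    710,    260,   1010,    450,    800 ], float)

A = np.hstack([X, np.ones((len(X), 1))])          # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef

for actual, pred in zip(y, predicted):
    flag = "REVIEW" if actual > 1.25 * pred else "ok"
    print(f"actual {actual:6.0f}  predicted {pred:6.0f}  {flag}")
```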

Terry R. Sharp; Oak Ridge National Laboratory

1998-01-01T23:59:59.000Z

233

Advanced benchmarking for complex building types: laboratories as an exemplar.  

E-Print Network (OSTI)

Metrics and Benchmarks for Energy Efficiency in ... metrics. However, benchmarks generated from simulations are ... etc. Whole-building benchmarks are limited in their ...

Mathew, Paul; Clear, Robert; Kircher, Kevin; Webster, Tom; Lee, Kwang Ho; Hoyt, Tyler

2010-01-01T23:59:59.000Z

234

Advanced Benchmarking for Complex Building Types: Laboratories as an Exemplar  

E-Print Network (OSTI)

Metrics and Benchmarks for Energy Efficiency in ... model to generate a benchmark energy intensity normalized ... limited efforts thus far to benchmark laboratory facilities ...

Mathew, Paul A.

2010-01-01T23:59:59.000Z

235

Metrics and Benchmarks for Energy Efficiency in Laboratories  

E-Print Network (OSTI)

Report 4-10-08 Metrics and Benchmarks for Energy Efficiency ... benchmarking database. The benchmarks for standard, good and ... efficiency metrics and benchmarks for laboratories, which ...

Mathew, Paul; Rumsey Engineers

2008-01-01T23:59:59.000Z

236

Outlook for Industrial Energy Benchmarking  

E-Print Network (OSTI)

The U.S. Environmental Protection Agency is exploring options to sponsor an industrial energy efficiency benchmarking study to identify facility specific, cost-effective best practices and technologies. Such a study could help develop a common understanding of opportunities for energy efficiency improvements and provide additional information to improve the competitiveness of U.S. industry. The EPA's initial benchmarking efforts will focus on industrial power facilities. The key industries of interest include the most energy intensive industries, such as chemical, pulp and paper, and iron and steel manufacturing.

Hartley, Z.

2000-04-01T23:59:59.000Z

237

Solar Webinar Text Version  

Energy.gov (U.S. Department of Energy (DOE))

Download the text version of the audio from the DOE Office of Indian Energy webinar on solar renewable energy.

238

DataTrends Benchmarking and Energy Savings  

NLE Websites -- All DOE Office Websites (Extended Search)

Benchmarking and Energy Savings Do buildings that consistently benchmark energy performance save energy? The answer is yes, based on the large number of buildings using the U.S....

239

COSBench: cloud object storage benchmark  

Science Conference Proceedings (OSTI)

With object storage systems being increasingly recognized as a preferred way to expose one's storage infrastructure to the web, the past few years have witnessed an explosion in the acceptance of these systems. Unfortunately, the proliferation of available ... Keywords: benchmark tool, object storage

Qing Zheng; Haopeng Chen; Yaguang Wang; Jian Zhang; Jiangang Duan

2013-04-01T23:59:59.000Z

240

Aluchemie Back to Benchmark - Programmaster.org  

Science Conference Proceedings (OSTI)

Meeting: 2010 TMS Annual Meeting & Exhibition. Symposium: Electrode Technology for Aluminum Production. Presentation Title: Aluchemie Back to Benchmark.

241

Testing (Validating?) Cross Sections with ICSBEP Benchmarks  

SciTech Connect

We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

Kahler, Albert C. III [Los Alamos National Laboratory]

2012-06-28T23:59:59.000Z

242

Model Validation  

Science Conference Proceedings (OSTI)

...thus establishing appropriate and important benchmarks. Benchmarking can go beyond validation and also measure relative computational speed, accuracy, and breadth for available modeling approaches and implementations, providing valuable information for users to discern the best models and for modelers...

243

Advanced benchmarking for complex building types: laboratories as an exemplar.  

E-Print Network (OSTI)

benchmark against which energy use for a given building can ... building systems and resulting energy use. The Labs21 Benchmark ...

Mathew, Paul; Clear, Robert; Kircher, Kevin; Webster, Tom; Lee, Kwang Ho; Hoyt, Tyler

2010-01-01T23:59:59.000Z

244

Advanced Benchmarking for Complex Building Types: Laboratories as an Exemplar  

E-Print Network (OSTI)

benchmark against which energy use for a given building can ... building systems and resulting energy use. The Labs21 Benchmark ...

Mathew, Paul A.

2010-01-01T23:59:59.000Z

245

COMET solutions to whole core CANDU-6 benchmark problems  

SciTech Connect

In this paper, the coarse mesh transport code COMET is used to solve CANDU-6 benchmark problems in two and three dimensional geometry. These problems are representative of a simplified quarter core reactor model. The COMET solutions, the core eigenvalue and the fuel pin fission density distribution, are compared to those from the Monte Carlo code MCNP using two-group cross sections. COMET decomposes the core volume into a set of non-overlapping sub-volumes (coarse meshes) and uses pre-computed heterogeneous response functions that are constructed using Legendre polynomials as boundary conditions to generate a user selected whole core solution (e.g., the core eigenvalue and fuel pin fission density distribution). These response functions are pre-computed by performing fixed source calculations with a modified version of MCNP in only the unique coarse meshes in the core. Reference solutions are calculated by MCNP5 with a two-group energy library generated with the HELIOS lattice code. In the 2-D problem, the angular current on the coarse mesh interfaces in COMET is expanded to second order in both the spatial and angular variables. The COMET eigenvalue error is 0.09%. The corresponding average error in the fission density over all 3515 fuel pins is 0.5%. The maximum error observed is 2.0%. For the 3-D case, with a fourth-order expansion in space and azimuthal angle and a second-order expansion in the cosine of the polar angle, the eigenvalue differs from the reference solution by 0.05%. The average fission density error over the 42180 fuel pins is 0.7% with a maximum error of 3.3%. (authors)
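
The error statistics quoted above (average and maximum relative pin fission density error against a reference Monte Carlo solution) can be reproduced for any pair of pin-power distributions with a few lines; the two arrays below are small placeholders, not the benchmark data.

```python
# Sketch of the pin-power comparison quoted above: average and maximum
# relative error of a pin fission density distribution against a reference
# Monte Carlo solution.  The two small arrays are placeholders.
import numpy as np

reference = np.array([1.02, 0.98, 1.05, 0.95, 1.00, 1.07])   # reference pin fission densities
candidate = np.array([1.03, 0.97, 1.04, 0.96, 1.01, 1.05])   # coarse-mesh solution

rel_err = np.abs(candidate - reference) / reference * 100.0   # percent
print(f"average error = {rel_err.mean():.2f}%  max error = {rel_err.max():.2f}%")
```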

Forget, B.; Rahnema, F. [Nuclear and Radiological Engineering / Medical Physics Programs, George W. Woodruff School, Georgia Inst. of Technology, Atlanta, GA 30332-0405 (United States)]

2006-07-01T23:59:59.000Z

246

Preliminary analysis of feasible benchmark problems for the hybrid PRAM/NUMA REPLICA architecture  

Science Conference Proceedings (OSTI)

We study benchmarking on modern chip multi-processors (CMP), and outline a set of programs to measure the architectural performance properties, focusing on the REPLICA architecture employing a hybrid of PRAM and NUMA computational models. We analyse ... Keywords: benchmarking, multi-core, parallel computing, processor architecture

Jari-Matti Mäkelä; Ville Leppänen; Martti Forsell

2012-06-01T23:59:59.000Z

247

WMAP-Compliant Benchmark Surfaces for MSSM Higgs Bosons  

E-Print Network (OSTI)

We explore `benchmark surfaces' suitable for studying the phenomenology of Higgs bosons in the minimal supersymmetric extension of the Standard Model (MSSM), which are chosen so that the supersymmetric relic density is generally compatible with the range of cold dark matter density preferred by WMAP and other observations. These benchmark surfaces are specified assuming that gaugino masses m_{1/2}, soft trilinear supersymmetry-breaking parameters A_0 and the soft supersymmetry-breaking contributions m_0 to the squark and slepton masses are universal, but not those associated with the Higgs multiplets (the NUHM framework). The benchmark surfaces may be presented as M_A-tan_beta planes with fixed or systematically varying values of the other NUHM parameters, such as m_0, m_{1/2}, A_0 and the Higgs mixing parameter mu. We discuss the prospects for probing experimentally these benchmark surfaces at the Tevatron collider, the LHC, the ILC, in B physics and in direct dark-matter detection experiments. An Appendix documents developments in the FeynHiggs code that enable the user to explore for her/himself the WMAP-compliant benchmark surfaces.

J. Ellis; T. Hahn; S. Heinemeyer; K. A. Olive; G. Weiglein

2007-09-02T23:59:59.000Z

248

BENCHMARKING EMERGING PIPELINE INSPECTION TECHNOLOGIES  

NLE Websites -- All DOE Office Websites (Extended Search)

Final Report on Benchmarking Emerging Pipeline Inspection Technologies. Cofunded by the Department of Energy National Energy Technology Laboratory (NETL), DE-AP26-04NT40361, and the Department of Transportation Research and Special Programs Administration (RSPA), DTRS56-02-T-0002 (Milestone 7). By Stephanie A. Flamberg and Robert C. Gertler, September 2004. Battelle, 505 King Avenue, Columbus, Ohio 43201-2693.

249

Geothermal Heat Pump Benchmarking Report  

SciTech Connect

A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) Top management marketing commitment; (2) An understanding of the fundamentals of marketing and business development; and (3) An aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

1997-01-17T23:59:59.000Z

250

Development and Evaluation of a Global Version of the Miami Isopycnic-Coordinate Ocean Model. Final report  

DOE Green Energy (OSTI)

The objective of this project was to test the ability of the Miami Isopycnic-Coordinate Ocean Model (MICOM) to simulate the global ocean circulation, setting the stage for the model's incorporation into coupled global climate models. An existing basin-scale model will be expanded to a global domain; suitable atmospheric forcing fields, including precipitation and river runoff, will be selected; the modeling of abyssal flow will be improved by incorporating compressibility and particularly thermobaric effects; a sea-ice model will be added; parameterization options will be explored for subgrid-scale deep convection; parallel coarse- and fine-mesh simulations will be carried out to investigate the impact of grid resolution; the sensitivity of the model's solution to the magnitude of the vertical (diapycnal) exchange coefficient will be studied; and long-term trends in meridional heat transport and water-mass properties in model solutions will be documented and interpreted.

Bleck, Rainer; Rooth, Claes G.H.; Okeefe, Sawdey

1997-11-01T23:59:59.000Z

251

Restaurant Energy Use Benchmarking Guideline  

Science Conference Proceedings (OSTI)

A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.

Hedrick, R.; Smith, V.; Field, K.

2011-07-01T23:59:59.000Z

252

INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT  

SciTech Connect

Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted.

J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

2008-09-01T23:59:59.000Z

253

The NCEP Climate Forecast System Version 2  

Science Conference Proceedings (OSTI)

The second version of the NCEP Climate Forecast System (CFSv2) was made operational at NCEP in March 2011. This version has upgrades to nearly all aspects of the data assimilation and forecast model components of the system. A coupled Reanalysis ...

Suranjana Saha; Shrinivas Moorthi; Xingren Wu; Jiande Wang; Sudhir Nadiga; Patrick Tripp; David Behringer; Yu-Tai Hou; Hui-ya Chuang; Mark Iredell; Michael Ek; Jesse Meng; Rongqian Yang; Malaquías Peña Mendez; Huug van den Dool; Qin Zhang; Wanqiu Wang; Mingyue Chen; Emily Becker

254

CLMT2 user's guide: A Coupled Model for Simulation of Hydraulic Processes from Canopy to Aquifer Version 1.0  

E-Print Network (OSTI)

equations for some soil hydraulic properties. Water ... are capable to simulate hydraulic processes from top of ... Model for Simulation of Hydraulic Processes from Canopy to ...

Pan, Lehua

2006-01-01T23:59:59.000Z

255

Manuscript prepared for Geosci. Model Dev. with version 2.3 of the LaTeX class copernicus.cls.  

E-Print Network (OSTI)

-Chem global chemical transport model. The implementation is based on the Kinetic PreProcessor (KPP). Two, continuous adjoint, and discrete adjoint chemical models, with applications to sensitivity analysis using the Kinetic PreProcessor KPP. This work extends the set of chemical solvers available to GEOS

Sandu, Adrian

256

APEX user's guide (Argonne Production, Expansion, and Exchange Model for Electrical Systems), version 3.0  

Science Conference Proceedings (OSTI)

This report describes operating procedures and background documentation for the Argonne Production, Expansion, and Exchange Model for Electrical Systems (APEX). This modeling system was developed to provide the U.S. Department of Energy, Division of Fossil Energy, Office of Coal and Electricity with in-house capabilities for addressing policy options that affect electrical utilities. To meet this objective, Argonne National Laboratory developed a menu-driven programming package that enables the user to develop and conduct simulations of production costs, system reliability, spot market network flows, and optimal system capacity expansion. The APEX system consists of three basic simulation components, supported by various databases and data management software. The components include (1) the investigation of Costs and Reliability in Utility Systems (ICARUS) model, (2) the Spot Market Network (SMN) model, and (3) the Production and Capacity Expansion (PACE) model. The ICARUS model provides generating-unit-level production-cost and reliability simulations with explicit recognition of planned and unplanned outages. The SMN model addresses optimal network flows with recognition of marginal costs, wheeling charges, and transmission constraints. The PACE model determines long-term (e.g., longer than 10 years) capacity expansion schedules on the basis of candidate expansion technologies and load growth estimates. In addition, the Automated Data Assembly Package (ADAP) and case management features simplify user-input requirements. The ADAP, ICARUS, and SMN modules are described in detail. The PACE module is expected to be addressed in a future publication.

VanKuiken, J.C.; Veselka, T.D.; Guziel, K.A.; Blodgett, D.W.; Hamilton, S.; Kavicky, J.A.; Koritarov, V.S.; North, M.J.; Novickas, A.A.; Paprockas, K.R. [and others]

1994-11-01T23:59:59.000Z

257

Industrial Combustion Emissions (ICE) model, Version 6.0. User's manual. Report for November 1984-August 1987  

SciTech Connect

This report is a user's manual for the Industrial Combustion Emissions (ICE) model. It summarizes user options and software characteristics, and describes both the input data files and procedures for operating the model. It discusses proper formatting of files and creation of job-control language. The model projects for each state the emissions of sulfur oxides, sulfates, and nitrogen oxides from fossil-fuel combustion in industrial boilers. Emissions and costs of boiler generation, including emission-control costs, are projected for the years 1985, 1990, 1995, 2000, 2010, 2020, and 2030.

Hogan, T.

1988-02-01T23:59:59.000Z

258

Simulation of the Global Hydrological Cycle in the CCSM Community Atmosphere Model Version 3 (CAM3): Mean Features  

Science Conference Proceedings (OSTI)

The seasonal and annual climatological behavior of selected components of the hydrological cycle are presented from coupled and uncoupled configurations of the atmospheric component of the Community Climate System Model (CCSM) Community ...

James J. Hack; Julie M. Caron; Stephen G. Yeager; Keith W. Oleson; Marika M. Holland; John E. Truesdale; Philip J. Rasch

2006-06-01T23:59:59.000Z

259

A Global Multilevel Atmospheric Model Using a Vector Semi-Lagrangian Finite-Difference Scheme. Part II: Version with Physics  

Science Conference Proceedings (OSTI)

Full physical parameterizations have been incorporated into the global model using a two-time-level, semi-Lagrangian, semi-implicit finite-difference integration scheme that was described in Part I of this work. Virtual temperature effects have ...

S. Moorthi; R. W. Higgins; J. R. Bates

1995-05-01T23:59:59.000Z

260

Geothermal Energy Market Study on the Atlantic Coastal Plain. GRITS (Version 9): Model Description and User's Guide  

DOE Green Energy (OSTI)

The Geothermal Resource Interactive Temporal Simulation (GRITS) model calculates the cost and revenue streams for the lifetime of a project that utilizes low to moderate temperature geothermal resources. With these estimates, the net present value of the project is determined. The GRITS model allows preliminary economic evaluations of direct-use applications of geothermal energy under a wide range of resource, demand, and financial conditions, some of which change over the lifetime of the project.
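
At the core of the evaluation described above is a discounted-cash-flow calculation. The sketch below shows a generic net-present-value computation over annual cost and revenue streams; the cash flows, project life, and discount rate are illustrative and are not GRITS defaults.

```python
# Sketch of the net-present-value calculation at the core of a GRITS-style
# evaluation: discount annual (revenue - cost) streams over project life.
# Cash flows and discount rate below are illustrative.

def npv(cash_flows, discount_rate):
    """cash_flows[t] is the net cash flow in year t (year 0 = up-front cost)."""
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

capital_cost = -2_500_000.0                     # well field + distribution, year 0
annual_net = [310_000.0] * 20                   # 20 years of revenue minus O&M
print(f"NPV = ${npv([capital_cost] + annual_net, 0.07):,.0f}")
```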

Kroll, Peter; Kane, Sally Minch [eds.]

1982-04-01T23:59:59.000Z

261

Advanced Benchmarking for Complex Building Types: Laboratories as an Exemplar  

SciTech Connect

Complex buildings such as laboratories, data centers and cleanrooms present particular challenges for energy benchmarking because it is difficult to normalize special requirements such as health and safety in laboratories and reliability (i.e., system redundancy to maintain uptime) in data centers which significantly impact energy use. For example, air change requirements vary widely based on the type of work being performed in each laboratory space. We present methods and tools for energy benchmarking in laboratories, as an exemplar of a complex building type. First, we address whole building energy metrics and normalization parameters. We present empirical methods based on simple data filtering as well as multivariate regression analysis on the Labs21 database. The regression analysis showed lab type, lab-area ratio and occupancy hours to be significant variables. Yet the dataset did not allow analysis of factors such as plug loads and air change rates, both of which are critical to lab energy use. The simulation-based method uses an EnergyPlus model to generate a benchmark energy intensity normalized for a wider range of parameters. We suggest that both these methods have complementary strengths and limitations. Second, we present "action-oriented" benchmarking, which extends whole-building benchmarking by utilizing system-level features and metrics such as airflow W/cfm to quickly identify a list of potential efficiency actions which can then be used as the basis for a more detailed audit. While action-oriented benchmarking is not an "audit in a box" and is not intended to provide the same degree of accuracy afforded by an energy audit, we demonstrate how it can be used to focus and prioritize audit activity and track performance at the system level. We conclude with key principles that are more broadly applicable to other complex building types.
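
A system-level, action-oriented check of the kind described above can be as simple as comparing one metric against tiered reference values and mapping the result to a suggested follow-up. The sketch below uses a fan W/cfm metric; the threshold values and suggested actions are placeholders, not Labs21 benchmarks.

```python
# Sketch of action-oriented, system-level screening: compare a ventilation
# metric (fan W per cfm of supply air) against illustrative thresholds and
# suggest a follow-up action.  Thresholds are placeholders, not Labs21 data.

def screen_air_system(fan_power_w, supply_airflow_cfm,
                      good=0.6, typical=1.2):
    w_per_cfm = fan_power_w / supply_airflow_cfm
    if w_per_cfm <= good:
        action = "no action; efficient air handling"
    elif w_per_cfm <= typical:
        action = "check filter/coil pressure drops and duct layout"
    else:
        action = "audit fan sizing, static pressure setpoints, VAV operation"
    return w_per_cfm, action

metric, action = screen_air_system(fan_power_w=90_000, supply_airflow_cfm=60_000)
print(f"{metric:.2f} W/cfm -> {action}")
```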

Mathew, Paul A.; Clear, Robert; Kircher, Kevin; Webster, Tom; Lee, Kwang Ho; Hoyt, Tyler

2010-08-01T23:59:59.000Z

262

Benchmark precision and random initial state  

E-Print Network (OSTI)

The applications of software benchmarks place an obvious demand on the precision of the benchmark results. An intuitive and frequently employed approach to obtaining precise enough benchmark results is having the benchmark collect a large number of samples that are simply averaged or otherwise statistically processed. We show that this approach ignores an inherent and unavoidable nondeterminism in the initial state of the system that is evaluated, often leading to an implausible estimate of result precision. We proceed by outlining the sources of nondeterminism in a typical system, illustrating the impact of the nondeterminism on selected classes of benchmarks. Finally, we suggest a method for quantitatively assessing the influence of nondeterminism on a benchmark, as well as an approach that provides a plausible estimate of result precision in the face of the nondeterminism.
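
A common way to make this point concrete, though not necessarily the authors' exact estimator, is to compare a confidence interval computed from all samples pooled together with one computed from per-run means, so that run-to-run variation in the initial state is counted. The timings below are synthetic.

```python
import statistics as stats

def naive_ci_halfwidth(samples, z=1.96):
    """Treat every sample as independent; this ignores run-to-run nondeterminism."""
    return z * stats.stdev(samples) / len(samples) ** 0.5

def run_aware_ci_halfwidth(runs, z=1.96):
    """Compute the interval from per-run means, so variation in the initial
    state of each run (memory layout, JIT decisions, ...) is counted."""
    means = [stats.mean(r) for r in runs]
    return z * stats.stdev(means) / len(means) ** 0.5

if __name__ == "__main__":
    # Hypothetical timings (ms): three runs of the same benchmark; samples
    # within a run are tight, but the runs differ from each other.
    runs = [
        [10.1, 10.2, 10.1, 10.2, 10.1],
        [11.4, 11.5, 11.4, 11.6, 11.5],
        [ 9.6,  9.7,  9.6,  9.6,  9.7],
    ]
    flat = [t for r in runs for t in r]
    print(f"naive half-width:     {naive_ci_halfwidth(flat):.3f} ms")
    print(f"run-aware half-width: {run_aware_ci_halfwidth(runs):.3f} ms")
```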

Tomas Kalibera; Lubomir Bulej; Petr Tuma

2005-01-01T23:59:59.000Z

263

Factors Causing Unexpected Variations in Ada Benchmarks  

E-Print Network (OSTI)

Benchmarks are often used to describe the performance of computer systems. This report considers factors that may cause Ada benchmarks to produce inaccurate results. Included are examples from the ongoing benchmarking efforts of the Ada Embedded Systems Testbed (AEST) Project using bare target computers with several Ada compilers. One of the goals of the Ada Embedded Systems Testbed (AEST) Project is to assess the readiness of the Ada programming language and Ada tools for developing embedded systems. The benchmarking and instrumentation subgroup within the AEST Project is running various suites of Ada benchmarks to obtain data on the real-time performance of Ada on a number of different target systems. The purpose of this report is to categorize the factors which cause anomalous results to be produced by the benchmarks. Some of these factors have been observed, while others are more speculative in nature. All these factors should be understood if accurate, comparable,...

Neal Altman

1987-01-01T23:59:59.000Z

264

benchmarks  

NLE Websites -- All DOE Office Websites (Extended Search)

Profile for Selected PCBs, ATSDRTP-8821, U.S. Public Health Service, Washington, D.C. Boese, B. L., H. Lee II, D. T. Specht, R. C. Randall, and M. H. Winsor 1990....

265

MCNP/KENO criticality benchmarks  

SciTech Connect

In the past, criticality safety analyses related to the handling and storage of fissile materials were obtained from critical experiments, nuclear safety guides, and handbooks. As a result of rising costs and time delays associated with critical experiments, most experimental facilities have been closed, triggering an increased reliance on computational methods. With this reliance comes the need and requirement for redundant validation by independent criticality codes. Currently, the KENO Monte Carlo transport code is the most widely used tool for criticality safety calculations. For other transport codes, such as MCNP, to be accepted by the criticality safety community as a redundant validation tool they must be able to reproduce experimental results at least as well as KENO. The Monte Carlo neutron, photon, and electron transport code MCNP, has an extensive list of attractive features, including continuous energy cross sections, generalized 3-D geometry, time dependent transport, criticality k{sub eff} calculations, and comprehensive source and tally capabilities. It is widely used for nuclear criticality analysis, nuclear reactor shielding, oil well logging, and medical dosimetry calculations. This report specifically addresses criticality and benchmarks the KENO 25 problem test set. These sample problems constitute the KENO standard benchmark set and represent a relatively wide variety of criticality problems. The KENO Monte Carlo code was chosen because of its extensive benchmarking against analytical and experimental criticality results. Whereas the uncertainty in experimental parameters generally prohibits code validation to better than about 1% in k{sub eff}, the value of k{sub eff} for criticality is considered unacceptable if it deviates more than a few percent from measurements.
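
A minimal sketch of the kind of code-versus-benchmark comparison described above is given below: calculated k_eff values with their Monte Carlo standard deviations are checked against benchmark values within a stated tolerance. The case names and numbers are illustrative, not results from the KENO 25-problem test set.

```python
# Compare calculated k_eff values against benchmark values within a tolerance.
# All values below are placeholders for illustration only.

TOLERANCE = 0.01  # roughly the ~1% experimental uncertainty cited above

cases = [
    # (case id, calculated k_eff, Monte Carlo sigma, benchmark k_eff)
    ("sample-01", 0.9986, 0.0012, 1.0000),
    ("sample-02", 1.0043, 0.0011, 1.0000),
    ("sample-03", 0.9851, 0.0013, 1.0000),
]

for name, calc, sigma, ref in cases:
    bias = calc - ref
    flag = "OK" if abs(bias) <= TOLERANCE else "REVIEW"
    print(f"{name}: k_eff = {calc:.4f} +/- {sigma:.4f}, bias = {bias:+.4f}  [{flag}]")
```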

McKinney, G.W. [Los Alamos National Lab., NM (United States); Wagner, J.C. [Pennsylvania State Univ., University Park, PA (United States); Sisolak, J.E. [Wisconsin Univ., Madison, WI (United States)

1993-04-01T23:59:59.000Z

266

Radcalc for windows benchmark study: A comparison of software results with Rocky Flats hydrogen gas generation data  

DOE Green Energy (OSTI)

Radcalc for Windows Version 2.01 is a user-friendly software program developed by Waste Management Federal Services, Inc., Northwest Operations for the U.S. Department of Energy (McFadden et al. 1998). It is used for transportation and packaging applications in the shipment of radioactive waste materials. Among its applications are the classification of waste per the U.S. Department of Transportation regulations, the calculation of decay heat and daughter products, and the calculation of the radiolytic production of hydrogen gas. The Radcalc program has been extensively tested and validated (Green et al. 1995, McFadden et al. 1998) by comparison of each Radcalc algorithm to hand calculations. An opportunity to benchmark Radcalc hydrogen gas generation calculations to experimental data arose when the Rocky Flats Environmental Technology Site (RFETS) Residue Stabilization Program collected hydrogen gas generation data to determine compliance with requirements for shipment of waste in the TRUPACT-II (Schierloh 1998). The residue/waste drums tested at RFETS contain contaminated, solid, inorganic materials in polyethylene bags. The contamination is predominantly due to plutonium and americium isotopes. The information provided by Schierloh (1998) of RFETS includes decay heat, hydrogen gas generation rates, calculated G{sub eff} values, and waste material type, making the experimental data ideal for benchmarking Radcalc. The following sections discuss the RFETS data and the Radcalc cases modeled with the data. Results are tabulated and also provided graphically.
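
The radiolytic hydrogen generation relationship implied by the abstract, a G value applied to the absorbed decay energy, can be sketched as follows. This is not the Radcalc algorithm itself, and the decay heat, G_eff, and absorbed fraction used here are placeholders.

```python
EV_PER_JOULE = 6.241509e18   # eV per joule
AVOGADRO = 6.02214076e23     # molecules per mole

def hydrogen_generation_rate(decay_heat_w, g_eff, absorbed_fraction=1.0):
    """Moles of H2 per second from radiolysis.

    g_eff is the effective G value in molecules of H2 per 100 eV of energy
    absorbed by the waste matrix; absorbed_fraction scales the decay heat
    actually deposited in hydrogenous material.
    """
    ev_per_second = decay_heat_w * absorbed_fraction * EV_PER_JOULE
    molecules_per_second = g_eff * ev_per_second / 100.0
    return molecules_per_second / AVOGADRO

if __name__ == "__main__":
    # Placeholder values for one hypothetical drum.
    rate = hydrogen_generation_rate(decay_heat_w=0.5, g_eff=1.1)
    liters_per_day = rate * 86400 * 22.414   # ideal-gas molar volume at STP
    print(f"{rate:.3e} mol/s  (~{liters_per_day:.2f} L/day at STP)")
```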

MCFADDEN, J.G.

1999-07-19T23:59:59.000Z

267

Optimization Online - Benchmark of Some Nonsmooth Optimization ...  

E-Print Network (OSTI)

Mar 1, 2006 ... Benchmark of Some Nonsmooth Optimization Solvers for Computing Nonconvex Proximal Points. Warren Hare (whare ***at*** cecm.sfu.ca)

268

Benchmarking optimization software with performance profiles  

E-Print Network (OSTI)

Mar 15, 2001 ... Abstract: We propose performance profiles -- probability distribution functions for a performance metric -- as a tool for benchmarking and ...
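
For reference, a minimal sketch of the performance-profile construction proposed in this paper: for each problem, every solver's cost is divided by the best cost obtained on that problem, and the profile reports the fraction of problems solved within a factor tau of the best. The runtimes below are invented.

```python
import numpy as np

def performance_profile(times, taus):
    """times[p][s]: cost of solver s on problem p (np.inf for a failure).
    Returns rho[s][k], the fraction of problems solved by solver s within a
    factor taus[k] of the best solver on each problem."""
    times = np.asarray(times, dtype=float)
    best = times.min(axis=1, keepdims=True)
    ratios = times / best
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(times.shape[1])])

if __name__ == "__main__":
    # Hypothetical runtimes (s) of 3 solvers on 4 problems; inf = failed.
    times = [[1.0, 2.0, 1.5],
             [5.0, 4.0, np.inf],
             [0.5, 0.6, 0.55],
             [9.0, 3.0, 3.1]]
    taus = [1.0, 2.0, 4.0]
    for s, row in enumerate(performance_profile(times, taus)):
        print(f"solver {s}: " + ", ".join(f"rho({t:g})={v:.2f}" for t, v in zip(taus, row)))
```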

269

2000 TMS Annual Meeting Exhibitor: BENCHMARK STRUCTURAL  

Science Conference Proceedings (OSTI)

Benchmark Structural Ceramics Corp., has substituted the use of sintered silicon nitride and sialon parts utilized in molten aluminum handling and service with ...

270

Measurement Technology for Benchmark Spray Combustion ...  

Science Conference Proceedings (OSTI)

Benchmark Spray Combustion Database. ... A1, uncertainty budget for the fuel flow rate. A2, uncertainty budget for the combustion air flow rate. ...

2013-07-15T23:59:59.000Z

271

Method and system for benchmarking computers  

DOE Patents (OSTI)

A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
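
The fixed-time idea described in this abstract, run a scalable workload for a fixed interval and report how far it got, can be sketched as follows. The workload here (midpoint-rule integration at doubling resolution) is only an illustrative stand-in, not the patented task set.

```python
import time

def fixed_time_benchmark(budget_seconds=1.0):
    """Run a scalable workload at ever finer resolution until the time budget
    expires and report the finest resolution completed. The last pass may
    slightly overrun the budget; a production harness would account for that."""
    deadline = time.perf_counter() + budget_seconds
    n = 1
    completed = 0
    total = 0.0
    while time.perf_counter() < deadline:
        h = 1.0 / n
        # Midpoint-rule integral of x**2 on [0, 1] with n subintervals.
        total = sum(((i + 0.5) * h) ** 2 for i in range(n)) * h
        completed = n        # resolution of the last fully finished pass
        n *= 2
    return completed, total

if __name__ == "__main__":
    resolution, value = fixed_time_benchmark(0.5)
    print(f"finished {resolution} subintervals in the budget (integral ~ {value:.6f})")
```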

Gustafson, John L. (Ames, IA)

1993-09-14T23:59:59.000Z

272

Measure, track, and benchmark | ENERGY STAR  

NLE Websites -- All DOE Office Websites (Extended Search)

tracking and benchmarking of energy across all operations are your most powerful energy waste reduction tools. Reducing energy waste requires that all forms of energy be...

273

Performance Evaluation and Benchmarking of Intelligent ...  

Science Conference Proceedings (OSTI)

Performance Evaluation and Benchmarking of Intelligent Systems Book. 2009, XIX, 338 p., Hardcover ISBN: 978-1-4419-0491-1 ... About this book: ...

2010-12-20T23:59:59.000Z

274

Quantum benchmarking with realistic states of light  

E-Print Network (OSTI)

The goal of quantum benchmarking is to certify that imperfect quantum communication devices (e.g., quantum channels, quantum memories, quantum key distribution systems) can still be used for meaningful quantum communication. However, the test states used in quantum benchmarking experiments may be imperfect as well. Many quantum benchmarks are only valid for states which match some ideal form, such as pure states or Gaussian states. We outline how to perform quantum benchmarking using arbitrary states of light. These results are used to certify a continuous variable quantum memory by showing that it has the ability to preserve entanglement.

Killoran, Nathan; Buchler, Ben C; Lam, Ping Koy; Lütkenhaus, Norbert

2012-01-01T23:59:59.000Z

275

Assessment of Applying the PMaC Prediction Framework to NERSC-5 SSP Benchmarks  

Science Conference Proceedings (OSTI)

NERSC procurement depends on application benchmarks, in particular the NERSC SSP. Machine vendors are asked to run SSP benchmarks at various scales to enable NERSC to assess system performance. However, it is often the case that the vendor cannot run the benchmarks at large concurrency as it is impractical to have that much hardware available. Additionally, there may be difficulties in porting the benchmarks to the hardware. The Performance Modeling and Characterization Lab (PMaC) at San Diego Supercomputing Center (SDSC) have developed a framework to predict the performance of codes on large parallel machines. The goal of this work was to apply the PMaC prediction framework to the NERSC-5 SSP benchmark applications and ultimately consider the accuracy of the predictions. Other tasks included identifying assumptions and simplifications in the process, determining the ease of use, and measuring the resources required to obtain predictions.

Keen, Noel

2006-09-30T23:59:59.000Z

276

Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1  

Science Conference Proceedings (OSTI)

Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

2012-08-09T23:59:59.000Z

277

Modular Accident Analysis Program, Version 5, Molten Corium–Concrete Interaction and Debris Coolability Model Enhancement Description  

Science Conference Proceedings (OSTI)

This report describes proposed enhancements to the Modular Accident Analysis Program (MAAP) molten corium–concrete interaction (MCCI) model. MAAP is a computer program that simulates the operation of light-water and heavy-water moderated nuclear power plants for both current and advanced light-water reactor designs. Engineers at Fukushima observed that water pumped into the reactor vessel rose to a certain height, but it did not rise further as more water was pumped into the reactor ...

2013-02-28T23:59:59.000Z

278

Randomized Benchmarking of Quantum Gates  

E-Print Network (OSTI)

A key requirement for scalable quantum computing is that elementary quantum gates can be implemented with sufficiently low error. One method for determining the error behavior of a gate implementation is to perform process tomography. However, standard process tomography is limited by errors in state preparation, measurement and one-qubit gates. It suffers from inefficient scaling with number of qubits and does not detect adverse error-compounding when gates are composed in long sequences. An additional problem is due to the fact that desirable error probabilities for scalable quantum computing are of the order of 0.0001 or lower. Experimentally proving such low errors is challenging. We describe a randomized benchmarking method that yields estimates of the computationally relevant errors without relying on accurate state preparation and measurement. Since it involves long sequences of randomly chosen gates, it also verifies that error behavior is stable when used in long computations. We implemented randomized benchmarking on trapped atomic ion qubits, establishing a one-qubit error probability per randomized pi/2 pulse of 0.00482(17) in a particular experiment. We expect this error probability to be readily improved with straightforward technical modifications.
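
A minimal sketch of the decay-curve fit typically used to extract an error per gate from randomized-benchmarking data is shown below; the survival probabilities are synthetic, and the single-qubit conversion r = (1 - p)/2 is the standard depolarizing-model relation rather than anything specific to this experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, B, p):
    """Average sequence fidelity model F(m) = A * p**m + B."""
    return A * p ** m + B

# Synthetic randomized-benchmarking data: sequence lengths and the measured
# average survival probability for each length.
lengths = np.array([2, 4, 8, 16, 32, 64, 128])
fidelity = np.array([0.997, 0.994, 0.988, 0.977, 0.954, 0.913, 0.840])

(A, B, p), _ = curve_fit(decay, lengths, fidelity, p0=(0.5, 0.5, 0.99))

# For a single qubit (d = 2), the average error per randomized gate is
# r = (1 - p) * (d - 1) / d = (1 - p) / 2.
print(f"p = {p:.5f}, error per gate r = {(1 - p) / 2:.5f}")
```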

E. Knill; D. Leibfried; R. Reichle; J. Britton; R. B. Blakestad; J. D. Jost; C. Langer; R. Ozeri; S. Seidelin; D. J. Wineland

2007-07-06T23:59:59.000Z

279

A Benchmark Study on Casting Residual Stress  

Science Conference Proceedings (OSTI)

Stringent regulatory requirements, such as Tier IV norms, have pushed cast iron for automotive applications to its limit. Castings need to be designed to closer tolerances by incorporating previously neglected effects, such as residual stresses arising from thermal gradients and from phase and microstructural changes during solidification. Residual stresses were formerly neglected in casting design by applying large factors of safety. Experimental measurement of residual stress in a casting through neutron or X-ray diffraction, sectioning or hole drilling, or magnetic, electric or photoelastic measurements is a difficult and time-consuming exercise. A detailed multi-physics model, incorporating thermo-mechanical and phase transformation phenomena, provides an attractive alternative for assessing the residual stresses generated during casting. However, before relying on the simulation methodology, it is important to rigorously validate the prediction capability by comparing it to experimental measurements. In the present work, a benchmark study was undertaken for casting residual stress measurements through neutron diffraction, which was subsequently used to validate the accuracy of simulation predictions. The stress lattice specimen geometry was designed such that subsequent castings would generate adequate residual stresses during solidification and cooling, without any cracks. The residual stresses in the cast specimen were measured using neutron diffraction. Considering the difficulty of accessing a neutron diffraction facility, these measurements can be considered a benchmark for casting simulation validation. Simulations were performed using the identical specimen geometry and casting conditions to predict residual stresses. The simulation predictions were found to agree well with the experimentally measured residual stresses. The experimentally validated model can subsequently be used to predict residual stresses in different cast components. This enables residual stresses to be incorporated at the design phase, along with external loads, for accurate predictions of the fatigue and fracture performance of cast components.

Johnson, Eric M. [John Deere -- Moline Tech Center]; Watkins, Thomas R. [ORNL]; Schmidlin, Joshua E. [ORNL]; Dutler, S. A. [MAGMA Foundry Technologies, Inc.]

2012-01-01T23:59:59.000Z

280

Multipole Analysis of a Benchmark Data Set for Pion Photoproduction  

E-Print Network (OSTI)

We have fitted low- and medium-energy benchmark datasets employing methods used in the MAID/SAID and dynamical model analyses. Independent fits from the Mainz, RPI, Yerevan, and Kharkov groups have also been performed over the low-energy region. Results for the multipole amplitudes are compared in order to gauge the model-dependence of such fits, given identical data and a single method for error handling.

R. A. Arndt; I. Aznauryan; R. M. Davidson; D. Drechsel; O. Hanstein; S. S. Kamalov; A. S. Omelaenko; I. Strakovsky; L. Tiator; R. L. Workman; S. N. Yang

2001-06-25T23:59:59.000Z

281

The Role of Circulation Features on Black Carbon Transport into the Arctic in the Community Atmosphere Model Version 5 (CAM5)  

SciTech Connect

Current climate models generally under-predict the surface concentration of black carbon (BC) in the Arctic due to the uncertainties associated with emissions, transport, and removal. This bias is also present in the Community Atmosphere Model Version 5.1 (CAM5). In this study, we investigate the uncertainty of Arctic BC due to transport processes simulated by CAM5 by configuring the model to run in an “offline mode” in which the large-scale circulations are prescribed. We compare the simulated BC transport when the offline model is driven by the meteorology predicted by the standard free-running CAM5 with simulations where the meteorology is constrained to agree with reanalysis products. Some circulation biases are apparent: the free-running CAM5 produces about 50% less transient eddy transport of BC than the reanalysis-driven simulations, which may be attributed to a model resolution too coarse to represent eddies. Our analysis shows that the free-running CAM5 reasonably captures the essence of the Arctic Oscillation (AO), but some discernible differences in the spatial pattern of the AO between the free-running CAM5 and the reanalysis-driven simulations result in significantly different AO modulation of BC transport over Northeast Asia and Eastern Europe. Nevertheless, we find that the overall climatological circulation patterns simulated by the free-running CAM5 generally resemble those from the reanalysis products, and BC transport is very similar in both simulation sets. Therefore, the simulated circulation features regulating long-range BC transport are unlikely to be the most important cause of the large under-prediction of surface BC concentration in the Arctic.

Ma, Po-Lun; Rasch, Philip J.; Wang, Hailong; Zhang, Kai; Easter, Richard C.; Tilmes, S.; Fast, Jerome D.; Liu, Xiaohong; Yoon, Jin-Ho; Lamarque, Jean-Francois

2013-05-28T23:59:59.000Z

282

An Empirical Benchmark for Decadal Forecasts of Global Surface Temperature Anomalies  

Science Conference Proceedings (OSTI)

The suitability of a linear inverse model (LIM) as a benchmark for decadal surface temperature forecast skill is demonstrated. Constructed from the observed simultaneous and 1-yr lag covariability statistics of annually averaged sea surface ...
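
A minimal sketch of how a LIM propagator can be built from the simultaneous and lag-1 covariance statistics mentioned in the abstract, and then applied repeatedly for a multi-year forecast, is given below with synthetic anomaly data; it is not the paper's calibrated model.

```python
import numpy as np

def lim_propagator(X, lag=1):
    """Least-squares linear inverse model propagator G from a data matrix X
    with shape (n_times, n_variables): G = C(lag) @ pinv(C(0))."""
    X = X - X.mean(axis=0)                 # work with anomalies
    X0, X1 = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0)               # simultaneous covariance
    Clag = X1.T @ X0 / len(X0)             # lag covariance
    return Clag @ np.linalg.pinv(C0)

def lim_forecast(G, state, steps):
    """Forecast `steps` lags ahead by repeated application of G."""
    return np.linalg.matrix_power(G, steps) @ state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic annually averaged anomalies for 3 regions over 60 years.
    X = rng.standard_normal((60, 3)).cumsum(axis=0) * 0.1
    G = lim_propagator(X, lag=1)
    last_anomaly = X[-1] - X.mean(axis=0)
    print("5-year forecast from the last state:", lim_forecast(G, last_anomaly, 5))
```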

Matthew Newman

2013-07-01T23:59:59.000Z

283

Review of California and National Methods for Energy Performance Benchmarking of Commercial Buildings  

SciTech Connect

This benchmarking review has been developed to support benchmarking planning and tool development under discussion by the California Energy Commission (CEC), Lawrence Berkeley National Laboratory (LBNL) and others in response to the Governor's Executive Order S-20-04 (2004). The Executive Order sets a goal of benchmarking and improving the energy efficiency of California's existing commercial building stock. The Executive Order requires the CEC to propose ''a simple building efficiency benchmarking system for all commercial buildings in the state''. This report summarizes and compares two currently available commercial building energy-benchmarking tools. One tool is the U.S. Environmental Protection Agency's Energy Star National Energy Performance Rating System, which is a national regression-based benchmarking model (referred to in this report as Energy Star). The second is Lawrence Berkeley National Laboratory's Cal-Arch, which is a California-based distributional model (referred to as Cal-Arch). Prior to the time Cal-Arch was developed in 2002, there were several other benchmarking tools available to California consumers but none that were based solely on California data. The Energy Star and Cal-Arch benchmarking tools both provide California with unique and useful methods to benchmark the energy performance of California's buildings. Rather than determine which model is ''better'', the purpose of this report is to understand and compare the underlying data, information systems, assumptions, and outcomes of each model.
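
To illustrate the distributional style of benchmarking (the Cal-Arch approach) as opposed to a regression-based rating, the sketch below simply reports where a building's energy use intensity falls within a peer-group distribution. The peer EUIs are invented, and the percentile rule is a simplification of any actual tool.

```python
def percentile_rank(value, peer_values):
    """Percentage of peer buildings with energy use intensity at or below `value`.
    A distributional benchmark reports where a building falls in this ranking."""
    peers = sorted(peer_values)
    below = sum(1 for v in peers if v <= value)
    return 100.0 * below / len(peers)

if __name__ == "__main__":
    # Hypothetical peer-group EUIs (kBtu/sf-yr) for one building type and climate.
    peers = [45, 52, 58, 60, 63, 67, 70, 74, 80, 92, 105, 130]
    my_eui = 68
    print(f"EUI {my_eui} is at the {percentile_rank(my_eui, peers):.0f}th percentile "
          "of its peer group (lower is better).")
```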

Matson, Nance E.; Piette, Mary Ann

2005-09-05T23:59:59.000Z

284

Metrics and Benchmarks for Energy Efficiency in Laboratories  

E-Print Network (OSTI)

... 2004 provide an additional benchmark. Table 3: Performance Metrics & Benchmarks. A performance Metric (BTU/sf-yr). A performance Benchmark is a particular value of ...

Mathew, Paul

2007-01-01T23:59:59.000Z

285

Memory-intensive benchmarks: IRAM vs. cache-based machines  

E-Print Network (OSTI)

... the Stressmarks of the DIS Benchmark Project, v 1.0, Titan ... B. R. Gaeke, "GUPS Benchmark Manual," Univ. of California ... be indispensable to re-run our benchmarks on the real VIRAM ...

2001-01-01T23:59:59.000Z

286

Metrics and Benchmarks for Energy Efficiency in Laboratories  

E-Print Network (OSTI)

... energy efficiency metrics and benchmarks for laboratories, at the whole building level (Building Site Energy, BTU/sf-yr). A performance Benchmark is ... (Good Practice Benchmarks table: ID, Building, Name, Unit, Building Site Energy) ...

Mathew, Paul

2007-01-01T23:59:59.000Z

287

Development of a Computer-based Benchmarking and Analytical Tool...  

NLE Websites -- All DOE Office Websites (Extended Search)

Development of a Computer-based Benchmarking and Analytical Tool: Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

288

Low-Energy Supersymmetry Breaking from String Flux Compactifications: Benchmark Scenarios  

E-Print Network (OSTI)

Soft supersymmetry breaking terms were recently derived for type IIB string flux compactifications with all moduli stabilised. Depending on the choice of the discrete input parameters of the compactification such as fluxes and ranks of hidden gauge groups, the string scale was found to have any value between the TeV and GUT scales. We study the phenomenological implications of these compactifications at low energy. Three realistic scenarios can be identified depending on whether the Standard Model lies on D3 or D7 branes and on the value of the string scale. For the MSSM on D7 branes and the string scale between 10^12 GeV and 10^17 GeV we find that the LSP is a neutralino, while for lower scales it is the stop. At the GUT scale the results of the fluxed MSSM are reproduced, but now with all moduli stabilised. For the MSSM on D3 branes we identify two realistic scenarios. The first one corresponds to an intermediate string scale version of split supersymmetry. The second is a stringy mSUGRA scenario. This requires tuning of the flux parameters to obtain the GUT scale. Phenomenological constraints from dark matter, (g-2)_mu and BR(b->s gamma) are considered for the three scenarios. We provide benchmark points with the MSSM spectrum, making the models suitable for a detailed phenomenological analysis.

Benjamin C. Allanach; Fernando Quevedo; Kerim Suruliz

2005-12-06T23:59:59.000Z

289

ASHRAE Cleanroom Benchmarking Paper - REVISED  

NLE Websites -- All DOE Office Websites (Extended Search)

Cleanroom Energy Efficiency: Metrics and Benchmarking. Paul Mathew, William Tschudi, Dale Sartor (Lawrence Berkeley National Laboratory); James Beasley (International SEMATECH Manufacturing Initiative). October 2010. Published in ASHRAE Journal, v. 53, issue 10.

290

An Independent Benchmarking of SDP and SOCP Solvers  

E-Print Network (OSTI)

The codes were run on a standard platform and on all the benchmark ... tabulated and commented benchmarking results this provides an overview of the state of ...

291

Fairer Benchmarking of Optimization Algorithms via Derivative Free ...  

E-Print Network (OSTI)

Oct 13, 2010 ... Some benchmarking is done as a proof-of-concept, ... examine an issue that arises during benchmarking and discuss a technique that can help ...

292

On-chip Benchmarking and Calibration without External References  

E-Print Network (OSTI)

... the target component. A benchmark value is calculated based ... (Section 3.4, Calibration using Benchmark) ...

Lee, Cheol-Woong

2011-01-01T23:59:59.000Z

293

Energy-Efficiency Technologies and Benchmarking the Energy Intensity...  

NLE Websites -- All DOE Office Websites (Extended Search)

Energy-Efficiency Technologies and Benchmarking the Energy Intensity for the Textile Industry

294

Building Energy Benchmarking between the United States and China...  

NLE Websites -- All DOE Office Websites (Extended Search)

Building Energy Benchmarking between the United States and China: Methods and Challenges

295

MAPSS Version 1.0 Available  

NLE Websites -- All DOE Office Websites (Extended Search)

MAPSS (Mapped Atmosphere-Plant-Soil System Model) Version 1.0 Available. The ORNL NASA DAAC is pleased to announce the release of a new vegetation distribution model product, MAPSS: Mapped Atmosphere-Plant-Soil System Model, Version 1.0. The MAPSS model was developed by the Pacific Northwest Research Station of the USDA Forest Service and has been used extensively by the IPCC (Intergovernmental Panel on Climate Change) in regional and global assessments of climate change impacts on vegetation. The MAPSS model simulates the potential natural vegetation that could exist on any upland site in the world under present, past, or future climate change. It operates on the fundamental principle that ecosystems will tend to maximize the leaf area that can be supported at a site by available soil ...

296

Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor  

SciTech Connect

A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.

John Darrell Bess

2009-05-01T23:59:59.000Z

297

Benchmark calculations for electron collisions with zinc atoms  

SciTech Connect

We present results from R-matrix (close-coupling) calculations for elastic scattering and electron impact excitation of Zn. The overall agreement between the predictions from two independent models, using either a semiempirical core potential or a recently developed B-spline approach with nonorthogonal orbitals, is very satisfactory. The latter method, however, yields particularly good agreement with the few existing experimental benchmark data for resonances at low incident energies.

Zatsarinny, Oleg; Bartschat, Klaus [Department of Physics and Astronomy, Drake University, Des Moines, Iowa 50311 (United States)

2005-02-01T23:59:59.000Z

298

Benchmark Evaluation of Plutonium Nitrate Solution Arrays  

Science Conference Proceedings (OSTI)

In October and November of 1981 thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas{reg_sign} reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity H+ of 5.1. The plutonium was of low 240Pu (2.9 wt.%) content. These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to critical array spacing of 3-4 and 4-4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 {+-} 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter perturbation methods. It was found that uncertainty in the impurities in the polyethylene bottles, reflector position, bottle outer diameter, and critical array spacing had the largest effect. The total uncertainty ranged from 0.00651 to 0.00920 Δkeff. Evaluation methods and results will be presented and discussed in greater detail in the full paper.
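
The parameter-perturbation uncertainty analysis described above can be summarized, in its simplest form, as combining independent per-parameter reactivity effects in quadrature. The sketch below uses placeholder delta-k_eff values, not the evaluation's actual numbers.

```python
import math

# Illustrative per-parameter reactivity effects (delta k_eff) obtained by
# perturbing each benchmark parameter by its 1-sigma uncertainty.
# The values are placeholders, not the evaluation's results.
delta_keff = {
    "polyethylene bottle impurities": 0.0045,
    "reflector position":             0.0035,
    "bottle outer diameter":          0.0030,
    "critical array spacing":         0.0025,
    "solution Pu concentration":      0.0015,
}

# Assuming the individual effects are independent, combine them in quadrature.
total = math.sqrt(sum(d * d for d in delta_keff.values()))
for name, d in sorted(delta_keff.items(), key=lambda kv: -kv[1]):
    print(f"{name:32s} {d:.4f}")
print(f"{'combined (quadrature)':32s} {total:.4f}")
```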

M. A. Marshall; J. D. Bess

2011-09-01T23:59:59.000Z

299

Addendum to the User's Guide for RIVRISK Version 5.0: A Model to Assess Potential Human Health and Ecological Risks from Power Plant and Industrial Facility Releases to Rivers  

Science Conference Proceedings (OSTI)

This is an addendum to the User's Guide for EPRI's RIVRISK analytic framework, Version 5.0. RIVRISK can be used to assess human health and ecological risks associated with industrial and power plant chemical and thermal releases to rivers. Some minor inconsistencies between the original User's Guide (EPRI Report 1000733) and the model examples were discovered during model applications. This addendum provides modified pages of the User's Guide that correct those inconsistencies. Those planning to use RIVR...

2001-05-04T23:59:59.000Z

300

Validation Test Report For The CRWMS Analysis and Logistics Visually Interactive Model Calvin Version 3.0, 10074-Vtr-3.0-00  

SciTech Connect

This report describes the tests performed to validate the CRWMS ''Analysis and Logistics Visually Interactive'' Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: Spent nuclear fuel (SNF) and reactivity calculations; Options for altering reactor life; Adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; Fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); Transportation cask shipping to and storage at an Interim Storage Facility (ISF); Reactor pool allocation options; and Disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results compare with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

S. Gillespie

2000-07-27T23:59:59.000Z

301

Benchmark the Fuel Cost of Steam Generation  

SciTech Connect

This revised ITP tip sheet on benchmarking the fuel cost of steam provides how-to advice for improving industrial steam systems using low-cost, proven practices and technologies.
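
A common way to benchmark the unloaded fuel cost of steam, and roughly what a tip sheet of this kind walks through, is to price the enthalpy added to the feedwater divided by boiler efficiency. The sketch below uses placeholder steam conditions and fuel price; check steam-table values for your own system.

```python
def unloaded_steam_cost(fuel_price_per_mmbtu, h_steam_btu_lb, h_feedwater_btu_lb,
                        boiler_efficiency):
    """Approximate fuel cost, in dollars per 1000 lb of steam generated."""
    heat_added_btu = 1000.0 * (h_steam_btu_lb - h_feedwater_btu_lb)
    fuel_btu = heat_added_btu / boiler_efficiency
    return fuel_price_per_mmbtu * fuel_btu / 1.0e6

if __name__ == "__main__":
    # Placeholder inputs: natural gas at $8/MMBtu, 150 psig saturated steam
    # (~1,196 Btu/lb), 225 F feedwater (~193 Btu/lb), 80% boiler efficiency.
    cost = unloaded_steam_cost(8.0, 1196.0, 193.0, 0.80)
    print(f"~${cost:.2f} per 1000 lb of steam")
```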

2006-01-01T23:59:59.000Z

302

Benchmarking autonomic capabilities: Promises and pitfalls  

E-Print Network (OSTI)

Benchmarks provide a way to quantify progress in a field. Excellent examples of this are the dramatic improvements in processor speeds and middleware performance over the last decade, driven in part by SPEC ®

Aaron B. Brown; Joseph Hellerstein; Matt Hogstrom; Tony Lau; Sam Lightstone; Peter Shum; Mary Peterson Yost

2004-01-01T23:59:59.000Z

303

DataTrends Energy Use Benchmarking  

NLE Websites -- All DOE Office Websites (Extended Search)

The U.S. Environmental Protection Agency's (EPA) ENERGY STAR Portfolio Manager is changing the way organizations track and manage energy. As of December 2011, organizations have used Portfolio Manager to track and manage the energy use of over 260,000 buildings across all 50 states, representing over 28 billion square feet (nearly 40% of the commercial market). Because of this widespread market adoption, EPA has prepared the DataTrends series to examine benchmarking and trends in energy and water consumption in Portfolio Manager. To learn more, visit www.energystar.gov/DataTrends. Many different types of organizations use Portfolio Manager to benchmark the energy use of their buildings. Office, K-12

304

Action-Oriented Benchmarking: Concepts and Tools  

NLE Websites -- All DOE Office Websites (Extended Search)

opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking...

305

Physics Benchmarks for the ILC Detectors  

E-Print Network (OSTI)

This note presents a list of physics processes for benchmarking the performance of proposed ILC detectors. This list gives broad coverage of the required physics capabilities of the ILC experiments and suggests target accuracies to be achieved. A reduced list of reactions, which captures within a very economical set the main challenges posed by the ILC physics program, is suggested for the early stage of benchmarking of the detector concepts.

M. Battaglia; T. Barklow; M. Peskin; Y. Okada; S. Yamashita; P. Zerwas

2006-03-06T23:59:59.000Z

306

Testing ice microphysics parameterizations in the NCAR Community Atmospheric Model Version 3 using Tropical Warm Pool–International Cloud Experiment data  

SciTech Connect

Cloud properties have been simulated with a new double-moment microphysics scheme under the framework of the single-column version of NCAR CAM3. For comparison, the same simulation was made with the standard single-moment microphysics scheme of CAM3. Results from both simulations compared favorably with observations during the Tropical Warm Pool–International Cloud Experiment of the U.S. Department of Energy Atmospheric Radiation Measurement Program in terms of the temporal variation and vertical distribution of cloud fraction and cloud condensate. Major differences between the two simulations are in the magnitude and distribution of ice water content within the mixed-phase cloud during the monsoon period, though the total frozen water (snow plus ice) content is similar. The ice mass content in the mixed-phase cloud from the new scheme is larger than that from the standard scheme and extends 2 km further downward, both of which are closer to observations. The dependence of the frozen water mass fraction in total condensate on temperature from the new scheme is also closer to available observations. Outgoing longwave radiation (OLR) at the top of the atmosphere (TOA) from the simulation with the new scheme is in general larger than that with the standard scheme, while the surface downward longwave radiation is similar. Sensitivity tests suggest that different treatments of the ice effective radius contribute significantly to the difference in TOA OLR, in addition to cloud water path. The deep convection process affects both TOA OLR and surface downward longwave radiation. The too frequently triggered deep convection process in the model is not the only mechanism for the excess middle- and high-level clouds. Further evaluation, especially of ice cloud properties based on in-situ data, is needed.

Wang, Weiguo; Liu, Xiaohong; Xie, Shaocheng; Boyle, James; McFarlane, Sally A.

2009-07-23T23:59:59.000Z

307

LUBM: A benchmark for OWL knowledge base systems  

Science Conference Proceedings (OSTI)

We describe our method for benchmarking Semantic Web knowledge base systems with respect to use in large OWL applications. We present the Lehigh University Benchmark (LUBM) as an example of how to design such benchmarks. The LUBM features an ontology ... Keywords: Evaluation, Knowledge base system, Lehigh University Benchmark, Semantic Web

Yuanbo Guo; Zhengxiang Pan; Jeff Heflin

2005-10-01T23:59:59.000Z

308

CASMO-3/SIMULATE-3 benchmarking against Vermont Yankee  

Science Conference Proceedings (OSTI)

The cross-section generation code CASMO-3 and the advanced nodal code SIMULATE-3 are used to model Vermont Yankee (VY) cycles 9 through 13. Vermont Yankee is a small, high-power-density BWR-3 boiling water reactor. Cycles 9 through 13 were chosen for benchmarking because they have high-enrichment cores and use gamma-sensing traversing in-core probes (TIPs). To judge the merit of the new CASMO-3/SIMULATE-3 model, the results are compared to the old CASMO-2/SIMULATE-2 model. The figures of merit are consistent hot and cold eigenvalues near 1.0 and accurate reproduction of the plant TIP readings.

Hubbard, B.Y.; Morin, D.J.; Pappas, J.; Potter, R.C.; Woehlke, R.A. (Yankee Atomic Electric Co., Bolton, MA (USA))

1989-11-01T23:59:59.000Z

309

Network File System (NFS) version 4 Protocol  

Science Conference Proceedings (OSTI)

The Network File System (NFS) version 4 is a distributed filesystem protocol which owes heritage to NFS protocol version 2, RFC 1094, and version 3, RFC 1813. Unlike earlier versions, the NFS version 4 protocol supports traditional file access while ...

S. Shepler; B. Callaghan; D. Robinson; R. Thurlow; C. Beame; M. Eisler; D. Noveck

2003-04-01T23:59:59.000Z

310

Metadata Efficiency in Versioning File Systems  

Science Conference Proceedings (OSTI)

Versioning file systems retain earlier versions of modified files, allowing recovery from user mistakes or system corruption. Unfortunately, conventional versioning systems do not efficiently record large numbers of versions. In particular, versioned ...

Craig A. N. Soules; Garth R. Goodson; John D. Strunk; Gregory R. Ganger

2003-03-01T23:59:59.000Z

311

Metadata efficiency in versioning file systems  

Science Conference Proceedings (OSTI)

Versioning file systems retain earlier versions of modified files, allowing recovery from user mistakes or system corruption. Unfortunately, conventional versioning systems do not efficiently record large numbers of versions. In particular, versioned ...

Craig A. N. Soules; Garth R. Goodson; John D. Strunk; Gregory R. Ganger

2003-03-01T23:59:59.000Z

312

Initial data testing of ENDF/B-VI for thermal reactor benchmark analysis  

SciTech Connect

This paper summarizes some early data testing of ENDF/B-VI by members of the Cross Section Evaluation Working Group (CSEWG) Thermal Reactor Data Testing Subcommittee. Projections of ENDF/B-VI performance in thermal benchmark calculations are beginning to become available, and in some cases the calculations were performed with only a portion of the cross sections taken from version VI, the remainder taken from earlier data files. A factor delaying the thermal reactor data testing is that the final {sup 235}U evaluation has not yet been officially released--only an earlier evaluation with a constant low-energy eta value (as in version V) is currently available. The official version VI {sup 235}U evaluation (scheduled for release as Mod-1) gives a drooping eta variation at low energy; i.e., eta decreases with decreasing energy. This behavior was suggested by European studies to improve the calculation of temperature coefficients in LWRs.

Williams, M.L. [Louisiana State Univ., Baton Rouge, LA (United States). Nuclear Science Center; Kahler, A.C. [Bettis Atomic Power Lab., West Mifflin, PA (United States); MacFarlane, R.E. [Los Alamos National Lab., NM (United States); Milgram, M. [Atomic Energy of Canada Ltd., Chalk River, ON (Canada). Chalk River Nuclear Labs.; Wright, R.Q. [Oak Ridge National Lab., TN (United States)

1991-12-31T23:59:59.000Z

313

Information Security Policies Made Easy Version 11, Version 11 edition  

Science Conference Proceedings (OSTI)

Information Security Policies Made Easy, Version 11 is the new and updated version of the gold standard information security policy resource used by over 7000 organizations worldwide. Based on the 25 year consulting and security experience of Charles ...

Charles Cresson Wood; Dave Lineman

2009-09-01T23:59:59.000Z

314

CFD Model for Prediction of Liquid Steel Temperature in Ladle ...  

Science Conference Proceedings (OSTI)

2D and 3D Numerical Modeling of Solidification Benchmark of Sn-3% Pb Wt. Alloy under ... 3D CAFE Simulation of a Macrosegregation Benchmark Experiment.

315

Storage-Intensive Supercomputing Benchmark Study  

SciTech Connect

Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance to GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
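
As a toy illustration of the I/O-share measurements reported above (not the iotrace tool itself), the sketch below times the read() calls of a file scan separately from the total runtime and reports the fraction of time spent in I/O.

```python
import os
import tempfile
import time

def io_fraction(path, chunk_size=1 << 20):
    """Read a file in chunks, separately timing the read() calls, and return
    (io_seconds, total_seconds). A toy stand-in for per-process I/O profiling."""
    start = time.perf_counter()
    io_time = 0.0
    checksum = 0
    with open(path, "rb") as f:
        while True:
            t0 = time.perf_counter()
            chunk = f.read(chunk_size)
            io_time += time.perf_counter() - t0
            if not chunk:
                break
            checksum ^= len(chunk)        # trivial "compute" on each chunk
    return io_time, time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(8 << 20))    # 8 MB of throwaway data
        name = tmp.name
    io_s, total_s = io_fraction(name)
    os.unlink(name)
    print(f"I/O time {io_s:.3f}s of {total_s:.3f}s total ({100 * io_s / total_s:.0f}%)")
```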

Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

2007-10-30T23:59:59.000Z

316

User's Guide for RIVRISK Version 5.0: A Model to Assess Potential Human Health and Ecological Risks from Power Plant and Industrial Facility Releases to Rivers  

Science Conference Proceedings (OSTI)

This is a user's guide to EPRI's RIVRISK framework, Version 5.0, which can be used to assess human health and ecological risks associated with industrial and power plant chemical and thermal releases to rivers. The report also documents RIVRISK's theoretical foundation and graphical user interface. Industrial and government staff concerned with chemical and thermal releases will find this report useful.

2000-11-29T23:59:59.000Z

317

TriBITS lifecycle model. Version 1.0, a lean/agile software lifecycle model for research-based computational science and engineering and applied mathematical software.  

SciTech Connect

Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

Willenbring, James M.; Bartlett, Roscoe Ainsworth (Oak Ridge National Laboratory, Oak Ridge, TN); Heroux, Michael Allen

2012-01-01T23:59:59.000Z

318

MC21 analysis of the nuclear energy agency Monte Carlo performance benchmark problem  

SciTech Connect

Due to the steadily decreasing cost and wider availability of large scale computing platforms, there is growing interest in the prospects for the use of Monte Carlo for reactor design calculations that are currently performed using few-group diffusion theory or other low-order methods. To facilitate the monitoring of the progress being made toward the goal of practical full-core reactor design calculations using Monte Carlo, a performance benchmark has been developed and made available through the Nuclear Energy Agency. A first analysis of this benchmark using the MC21 Monte Carlo code was reported on in 2010, and several practical difficulties were highlighted. In this paper, a newer version of MC21 that addresses some of these difficulties has been applied to the benchmark. In particular, the confidence-interval-determination method has been improved to eliminate source correlation bias, and a fission-source-weighting method has been implemented to provide a more uniform distribution of statistical uncertainties. In addition, the Forward-Weighted, Consistent-Adjoint-Driven Importance Sampling methodology has been applied to the benchmark problem. Results of several analyses using these methods are presented, as well as results from a very large calculation with statistical uncertainties that approach what is needed for design applications. (authors)

Kelly, D. J.; Sutton, T. M. [Knolls Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P. O. Box 1072, Schenectady, NY 12301-1072 (United States)]; Wilson, S. C. [Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P. O. Box 79, West Mifflin, PA 15122-0079 (United States)]

2012-07-01T23:59:59.000Z

319

PVWatts Version 1 Technical Reference  

NLE Websites -- All DOE Office Websites (Extended Search)

... Meteorological Year (TMY) version 2 or version 3 ... This report is available at no cost from the National Renewable Energy Laboratory (NREL) at www.nrel.gov/publications. ...

320

Embrittlement data base, version 1  

Science Conference Proceedings (OSTI)

The aging and degradation of light-water-reactor (LWR) pressure vessels is of particular concern because of their relevance to plant integrity and the magnitude of the expected irradiation embrittlement. The radiation embrittlement of reactor pressure vessel (RPV) materials depends on many different factors such as flux, fluence, fluence spectrum, irradiation temperature, and preirradiation material history and chemical compositions. These factors must be considered to reliably predict pressure vessel embrittlement and to ensure the safe operation of the reactor. Based on embrittlement predictions, decisions must be made concerning operating parameters and issues such as low-leakage-fuel management, possible life extension, and the need for annealing the pressure vessel. Large amounts of data from surveillance capsules and test reactor experiments, comprising many different materials and different irradiation conditions, are needed to develop generally applicable damage prediction models that can be used for industry standards and regulatory guides. Version 1 of the Embrittlement Data Base (EDB) is such a comprehensive collection of data resulting from merging version 2 of the Power Reactor Embrittlement Data Base (PR-EDB). Fracture toughness data were also integrated into Version 1 of the EDB. For power reactor data, the current EDB lists 1,029 Charpy transition-temperature shift data points, which include 321 from plates, 125 from forgings, 115 from correlation monitor materials, 246 from welds, and 222 from heat-affected-zone (HAZ) materials that were irradiated in 271 capsules from 101 commercial power reactors. For test reactor data, information is available for 1,308 different irradiated sets (352 from plates, 186 from forgings, 303 from correlation monitor materials, 396 from welds, and 71 from HAZs) and 268 different irradiated-plus-annealed data sets.

Wang, J.A.

1997-08-01T23:59:59.000Z

321

Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California  

SciTech Connect

The 2006 Commercial End Use Survey (CEUS) database developed by the California Energy Commission is a far richer source of energy end-use data for non-residential buildings than has previously been available and opens the possibility of creating new and more powerful energy benchmarking processes and tools. In this article--Part 2 of a two-part series--we describe the methodology and selected results from an action-oriented benchmarking approach using the new CEUS database. This approach goes beyond whole-building energy benchmarking to more advanced end-use and component-level benchmarking that enables users to identify and prioritize specific energy efficiency opportunities - an improvement on benchmarking tools typically in use today.

Mathew, Paul; Mills, Evan; Bourassa, Norman; Brook, Martha

2008-02-01T23:59:59.000Z

322

FRP model (Version 1.0) for estimating styrene emissions from fiber-reinforced plastics fabrication processes (on diskette). Model-simulation  

Science Conference Proceedings (OSTI)

This software estimates styrene emissions from the manufacture of fiber-reinforced plastics/composite (FRP/C) products. In using the model, the user first chooses the appropriate process: gel coating, resin sprayup, hand layup, etc. Choosing a process will cause the baseline input values for that process to be displayed. Then the new values that apply to the user's plant are entered. After all the parameters appropriate to the fabrication process have been added, the values for Overall modification factor and Calculated emission (percent AS) will be displayed. Results can be printed or saved.
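
The calculation structure described above (a process baseline scaled by an overall modification factor) can be illustrated with a minimal sketch; the factor names and numbers below are hypothetical placeholders, not values from the FRP model.

```python
# Minimal sketch of a baseline-times-modification-factor emission estimate,
# in the spirit of the FRP model described above. All numbers and factor
# names here are hypothetical placeholders, not values from the model.

def estimate_emission(baseline_percent_as, modification_factors):
    """Scale a baseline emission (as percent of available styrene)
    by the product of user-supplied modification factors."""
    overall_factor = 1.0
    for factor in modification_factors.values():
        overall_factor *= factor
    return overall_factor, baseline_percent_as * overall_factor

if __name__ == "__main__":
    # Hypothetical baseline for a spray-up process and two adjustments.
    baseline = 10.0  # percent of available styrene emitted (placeholder)
    factors = {"vapor_suppressed_resin": 0.7, "controlled_spray_gun": 0.9}
    overall, emission = estimate_emission(baseline, factors)
    print(f"Overall modification factor: {overall:.2f}")
    print(f"Calculated emission (percent AS): {emission:.2f}")
```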

NONE

1998-05-01T23:59:59.000Z

323

Strong quantitative benchmarking of quantum optical devices  

E-Print Network (OSTI)

Quantum communication devices, such as quantum repeaters, quantum memories, or quantum channels, are unavoidably exposed to imperfections. However, the presence of imperfections can be tolerated, as long as we can verify such devices retain their quantum advantages. Benchmarks based on witnessing entanglement have proven useful for verifying the true quantum nature of these devices. The next challenge is to characterize how strongly a device is within the quantum domain. We present a method, based on entanglement measures and rigorous state truncation, which allows us to characterize the degree of quantumness of optical devices. This method serves as a quantitative extension to a large class of previously-known quantum benchmarks, requiring no additional information beyond what is already used for the non-quantitative benchmarks.

Nathan Killoran; Norbert Lütkenhaus

2011-02-16T23:59:59.000Z

324

Towards Scalable Benchmarks for Mass Storage Systems  

E-Print Network (OSTI)

While mass storage systems have been used for several decades to store large quantities of scientific data, there has been little work on devising standard ways of measuring them. Each system is hand-tuned using parameters that seem to work best, but it is difficult to gauge the potential effect of similar changes on other systems. The proliferation of storage management software and policies has made it difficult for users to make the best choices for their own systems. The introduction of benchmarks will make it possible to gather standard performance measurements across disparate systems, allowing users to make intelligent choices of hardware, software, and algorithms for their mass storage system. This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short-term and long-term behav...

Ethan L. Miller

1996-01-01T23:59:59.000Z

325

Strong quantitative benchmarking of quantum optical devices  

Science Conference Proceedings (OSTI)

Quantum communication devices, such as quantum repeaters, quantum memories, or quantum channels, are unavoidably exposed to imperfections. However, the presence of imperfections can be tolerated, as long as we can verify that such devices retain their quantum advantages. Benchmarks based on witnessing entanglement have proven useful for verifying the true quantum nature of these devices. The next challenge is to characterize how strongly a device is within the quantum domain. We present a method, based on entanglement measures and rigorous state truncation, which allows us to characterize the degree of quantumness of optical devices. This method serves as a quantitative extension to a large class of previously known quantum benchmarks, requiring no additional information beyond what is already used for the nonquantitative benchmarks.

Killoran, N.; Luetkenhaus, N. [Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, N2L 3G1 (Canada)

2011-05-15T23:59:59.000Z

326

Argonne TTRDC - APRF - Research Activities - Benchmarking PHEVs  

NLE Websites -- All DOE Office Websites (Extended Search)

APRF Research Activities: Benchmarking of Plug-In Hybrid Electric Vehicles (PHEVs). Photo: Engineer Mike Duoba evaluates a vehicle in Argonne's APRF. Now that plug-in hybrid electric vehicles (PHEVs) are emerging, it is important to test, characterize and benchmark the wide variety of PHEV designs and control strategies. In the APRF, engineers benchmark PHEVs by combining testing and data analysis to characterize the vehicles' efficiency, performance, and emissions. The vehicles are evaluated over many cycles to find control strategies under a variety of operating conditions. Argonne researchers test PHEVs over cold-start and hot-start urban dynamometer driving schedule (UDDS) and highway cycles in both charge-depletion and charge-sustaining operation. Full-charge tests, as...

327

version 3 | OpenEI Community  

Open Energy Info (EERE)

After several months of development and testing, the next...

328

version 2 | OpenEI Community  

Open Energy Info (EERE)

After several months of development and testing, the next...

329

version 1 | OpenEI Community  

Open Energy Info (EERE)

After several months of development and testing, the next...

330

A benchmark of the SCS-40 computer: A mini supercomputer compatible with the Cray X-MP/24  

Science Conference Proceedings (OSTI)

An accurate benchmark of the SCS-40 mini supercomputer manufactured by Scientific Computer Systems Corporation has been carried out. A new, revised set of standard ANSI77 Fortran benchmark codes were run on the SCS-40 in a dedicated environment, using Version 1.13 of the CFT compiler. The results are compared with those obtained on one processor of a CRAY X-MP/24 computer using the Cray Research Inc. version of the same compiler. The results suggest that for a typical Los Alamos National Laboratory computational workload, the SCS-40 is equivalent to one-quarter to one-third of a single processor of the CRAY X-MP/24. 15 refs., 5 tabs.

Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

1987-01-01T23:59:59.000Z

331

Performance Benchmarks for I/S in Corporations (1988-1995)  

E-Print Network (OSTI)

Annual Report: Performance Benchmarks for Information Systems (I/S) in Corporations. University of California, Irvine.

Kraemer, Kenneth L.; Dunkle, Debbie

1997-01-01T23:59:59.000Z

332

Performance Benchmarks for I/S in Corporations (1988-1994)  

E-Print Network (OSTI)

Annual Report: Performance Benchmarks for Information Systems (I/S) in Corporations. University of California, Irvine.

Kraemer, Kenneth L.; Gurbaxani, Viijay; Vitalari, Nicholas; Dunkle, Debbie

1995-01-01T23:59:59.000Z

333

Assessment of Applying the PMaC Prediction Framework to NERSC-5 SSP Benchmarks  

E-Print Network (OSTI)

Assessment of Applying the PMaC Prediction Framework to NERSC-5 SSP Benchmarks, Summer 2006. Author: Noel ... depends on application benchmarks, in particular the NERSC ... vendors are asked to run SSP benchmarks at various scales to ...

Keen, Noel

2008-01-01T23:59:59.000Z

334

Science Driven Supercomputing Architectures: Analyzing Architectural Bottlenecks with Applications and Benchmark Probes  

E-Print Network (OSTI)

... with Applications and Benchmark Probes. The Berkeley ... development of adequate benchmarks for identification of ... application kernels; and 3) Benchmarks to measure key system ...

2005-01-01T23:59:59.000Z

335

Performance Benchmarks for I/S in Corporations (1990-1999)  

E-Print Network (OSTI)

Annual Report: Performance Benchmarks for Information Systems (I/S) in Corporations. University of California, Irvine.

Kraemer, Kenneth L.; Gurbaxani, Viijay; Dunkle, Debbie

2000-01-01T23:59:59.000Z

336

Post LHC8 SUSY benchmark points for ILC physics  

E-Print Network (OSTI)

We re-evaluate prospects for supersymmetry at the proposed International Linear e^+e^- Collider (ILC) in light of the first two years of serious data taking at LHC: LHC7 with ~5 fb^{-1} of pp collisions at sqrt{s}=7 TeV and LHC8 with ~20 fb^{-1} at \\sqrt{s}=8 TeV. Strong new limits from LHC8 SUSY searches, along with the discovery of a Higgs boson with m_h~125 GeV, suggest a paradigm shift from previously popular models to ones with new and compelling signatures. After a review of the current status of supersymmetry, we present a variety of new ILC benchmark models, including: natural SUSY, radiatively-driven natural SUSY (RNS), NUHM2 with low m_A, a focus point case from mSUGRA/CMSSM, non-universal gaugino mass (NUGM) model, stau-coannihilation, Kallosh-Linde/spread SUSY model, mixed gauge-gravity mediation, normal scalar mass hierarchy (NMH), and one example with the recently discovered Higgs boson being the heavy CP-even state H. While all these models at present elude the latest LHC8 limits, they do offer intriguing case study possibilities for ILC operating at \\sqrt{s}~0.25-1 TeV. The benchmark points also present a view of the widely diverse SUSY phenomena which might still be expected in the post LHC8 era at both LHC and ILC.

Howard Baer; Jenny List

2013-07-02T23:59:59.000Z

337

Benchmarks for new strong interactions at the LHC  

E-Print Network (OSTI)

New strong interactions at the LHC may exhibit a richer structure than expected from simply rescaling QCD to the electroweak scale. In fact, a departure from rescaled QCD is required for compatibility with electroweak constraints. To navigate the space of possible scenarios, we use a simple framework, based on a 5D model with modifications of AdS geometry in the infrared. In the parameter space, we select two points with particularly interesting phenomenology. For these benchmark points, we explore the discovery of triplets of vector and axial resonances at the LHC.

J. Hirn; A. Martin; V. Sanz

2007-12-21T23:59:59.000Z

338

BBSLA – Kazakhstan (032610) Kazakh (Global Version ...  

Science Conference Proceedings (OSTI)

BBSLA – Kazakhstan (032610) Kazakh (Global Version 031010) 1 BLACKBERRY SOLUTION LICENSE AGREEMENT ...

339

BBSLA - Kazakhstan (032610) Russian (Global Version ...  

Science Conference Proceedings (OSTI)

BBSLA - Kazakhstan (032610) Russian (Global Version 031010) 1 BLACKBERRY SOLUTION LICENSE AGREEMENT ...

340

Wind Webinar Text Version | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Download the text version of the audio from the DOE Office of Indian Energy webinar on wind renewable energy.

341

Benchmarking Non-Hardware Balance of System (Soft) Costs for...  

NLE Websites -- All DOE Office Websites (Extended Search)

Benchmarking Non-Hardware Balance of System (Soft) Costs for U.S. Photovoltaic Systems Using a Data-Driven Analysis from PV Installer Survey Results.

342

Benchmarking Music Information Retrieval Systems Department of Electronic Engineering  

E-Print Network (OSTI)

Benchmarking Music Information Retrieval Systems. Josh Reiss, Department of Electronic Engineering ... and effective benchmarking system for music information retrieval (MIR) systems. This will serve the multiple ... surrounding retrieval of audio in test collections. 1. INTRODUCTION: The Music Information Retrieval (MIR ...

Reiss, Josh

343

Energy Star Building Upgrade Manual Benchmarking Chapter 2  

NLE Websites -- All DOE Office Websites (Extended Search)

Benchmarks and Apply the Results; 2.4 Summary; Bibliography; Glossary. ENERGY STAR Building Manual, Chapter 2: Benchmarking. 2.1 Overview: Businesses are reducing their...

344

A scalability benchmark suite for Erlang/OTP  

Science Conference Proceedings (OSTI)

Programming language implementers rely heavily on benchmarking for measuring and understanding performance of algorithms, architectural designs, and trade-offs between alternative implementations of compilers, runtime systems, and virtual machine components. ... Keywords: benchmarking, erlang, multicore, scalability

Stavros Aronis; Nikolaos Papaspyrou; Katerina Roukounaki; Konstantinos Sagonas; Yiannis Tsiouris; Ioannis E. Venetis

2012-09-01T23:59:59.000Z

345

Cleanroom Energy Efficiency: Metrics and Benchmarks  

Science Conference Proceedings (OSTI)

Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
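
Two of the system-level metrics named above, air change rate and fan power per unit airflow (W/cfm), reduce to simple ratios of measured quantities. The sketch below illustrates the arithmetic with invented input values; it is not taken from the LBNL benchmark dataset.

```python
# Minimal sketch of two cleanroom system-level metrics named in the article:
# air changes per hour (ACH) and fan power per unit airflow (W/cfm).
# Input values are invented for illustration only.

def air_changes_per_hour(supply_airflow_cfm, room_volume_ft3):
    """ACH = airflow (ft3/min) * 60 min/h / room volume (ft3)."""
    return supply_airflow_cfm * 60.0 / room_volume_ft3

def fan_watts_per_cfm(fan_power_kw, supply_airflow_cfm):
    """W/cfm = fan electric power (W) / airflow (cfm)."""
    return fan_power_kw * 1000.0 / supply_airflow_cfm

if __name__ == "__main__":
    airflow_cfm = 50_000.0   # measured supply airflow (placeholder)
    volume_ft3 = 30_000.0    # cleanroom volume (placeholder)
    fan_kw = 45.0            # total fan power (placeholder)
    print(f"Air change rate: {air_changes_per_hour(airflow_cfm, volume_ft3):.0f} ACH")
    print(f"Fan efficiency:  {fan_watts_per_cfm(fan_kw, airflow_cfm):.2f} W/cfm")
```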

International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

2010-07-07T23:59:59.000Z

346

Comparative Benchmarks of full QCD Algorithms  

E-Print Network (OSTI)

We report performance benchmarks for several algorithms that we have used to simulate the Schrödinger functional with two flavors of dynamical quarks. They include hybrid and polynomial hybrid Monte Carlo with preconditioning. An appendix describes a method to deal with autocorrelations for nonlinear functions of primary observables as they are met here due to reweighting.
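
As a generic illustration of error estimation for a nonlinear function of primary observables (the issue the appendix addresses), the sketch below applies a binned jackknife to the ratio of two correlated synthetic data series. It is a standard technique shown under its usual form, not the specific method of the paper.

```python
# Sketch of a binned jackknife error estimate for a nonlinear function of
# primary observables (here the ratio <x>/<y>), one common way to handle
# autocorrelated Monte Carlo data. Generic illustration with synthetic data.
import numpy as np

def jackknife_ratio(x, y, n_bins=20):
    """Error of f = mean(x)/mean(y) from binned jackknife resampling."""
    n = (len(x) // n_bins) * n_bins
    xb = x[:n].reshape(n_bins, -1).mean(axis=1)
    yb = y[:n].reshape(n_bins, -1).mean(axis=1)
    # Leave-one-bin-out estimates of the ratio.
    f_jk = np.array([np.delete(xb, i).mean() / np.delete(yb, i).mean()
                     for i in range(n_bins)])
    f_full = xb.mean() / yb.mean()
    err = np.sqrt((n_bins - 1) / n_bins * np.sum((f_jk - f_jk.mean()) ** 2))
    return f_full, err

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Correlated synthetic "measurements" of two primary observables.
    noise = rng.normal(size=10_000)
    x = 1.0 + 0.1 * noise + 0.05 * rng.normal(size=10_000)
    y = 2.0 + 0.1 * noise + 0.05 * rng.normal(size=10_000)
    print(jackknife_ratio(x, y))
```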

Roberto Frezzotti; Martin Hasenbusch; Jochen Heitger; Karl Jansen; Ulli Wolff

2000-09-20T23:59:59.000Z

347

NASA BENCHMARKS COMMUNICATIONS Assessment Plan NNSA/Nevada Site...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

NASA Benchmarks Communications Assessment Plan, NNSA/Nevada Site Office Facility Representative Division.

348

A Numerical Benchmark on the Prediction of Macrosegregation in ...  

Science Conference Proceedings (OSTI)

About this Abstract. Meeting, 2011 TMS Annual Meeting & Exhibition. Symposium , Frontiers in Solidification Science. Presentation Title, A Numerical Benchmark ...

349

The Problem with the Linpack Benchmark Matrix Generator  

E-Print Network (OSTI)

We characterize the matrix sizes for which the Linpack Benchmark matrix generator constructs a matrix with identical columns.
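
A minimal sketch of the issue follows: assuming the commonly quoted Linpack benchmark generator recurrence (seed = 3125*seed mod 65536, entry = (seed - 32768)/16384, period 16384, matrix filled column by column), one can test directly which orders n produce duplicate columns. The exact recurrence and seed are assumptions here, not taken from the paper.

```python
# Sketch of the duplicate-column issue discussed in the paper, using the
# commonly quoted Linpack benchmark matrix generator recurrence
#     seed <- 3125 * seed mod 65536,  a(i,j) = (seed - 32768) / 16384.
# Treat the exact recurrence as an assumption; the point is that a generator
# with period 16384 repeats columns for certain matrix orders n.

def linpack_like_matrix(n, seed=1325):
    cols = []
    for _ in range(n):            # generate column by column
        col = []
        for _ in range(n):
            seed = (3125 * seed) % 65536
            col.append((seed - 32768.0) / 16384.0)
        cols.append(tuple(col))
    return cols

def has_identical_columns(n):
    cols = linpack_like_matrix(n)
    return len(set(cols)) < n

if __name__ == "__main__":
    for n in (100, 512, 1000, 1024):
        print(n, "identical columns:", has_identical_columns(n))
```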

Dongarra, Jack

2008-01-01T23:59:59.000Z

350

Development of a California commercial building benchmarking database  

E-Print Network (OSTI)

benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings.

Kinney, Satkartar; Piette, Mary Ann

2002-01-01T23:59:59.000Z

351

NMSSM Higgs Benchmarks Near 125 GeV  

E-Print Network (OSTI)

The recent LHC indications of a SM-like Higgs boson near 125 GeV are consistent not only with the Standard Model (SM) but also with Supersymmetry (SUSY). However naturalness arguments disfavour the Minimal Supersymmetric Standard Model (MSSM). We consider the Next-to-Minimal Supersymmetric Standard Model (NMSSM) with a SM-like Higgs boson near 125 GeV involving relatively light stops and gluinos below 1 TeV in order to satisfy naturalness requirements. We are careful to ensure that the chosen values of couplings do not become non perturbative below the grand unification (GUT) scale, although we also examine how these limits may be extended by the addition of extra matter to the NMSSM at the two-loop level. We then propose four sets of benchmark points corresponding to the SM-like Higgs boson being the lightest or the second lightest Higgs state in the NMSSM or the NMSSM-with-extra-matter. With the aid of these benchmark points we discuss how the NMSSM Higgs boson near 125 GeV may be distinguished from the SM Higgs boson in future LHC searches.

S. F. King; M. Muhlleitner; R. Nevzorov

2012-01-12T23:59:59.000Z

352

NIST Periodic Table: Version History  

Science Conference Proceedings (OSTI)

Version 4, September 2003: The ionization energy value was updated for Po; fourth printing of NIST ...

2013-05-28T23:59:59.000Z

353

Metrics and Benchmarks for Energy Efficiency in Laboratories  

E-Print Network (OSTI)

... benchmarks that are based on the actual measured energy use of comparable buildings ... energy efficiency metrics and benchmarks for laboratories, which have been developed and applied to several laboratory buildings ... building targets be evaluated against empirical benchmarks that are based on the measured energy ...

Mathew, Paul; Rumsey Engineers

2008-01-01T23:59:59.000Z

354

Conventional Benchmarks as a Sample of the Performance Spectrum  

Science Conference Proceedings (OSTI)

Most benchmarks are smaller than actual application programs. One reason is to improve benchmark universality by demanding resources every computer is likely to have. However, users dynamically increase the size of application programs to match the power ... Keywords: HINT, benchmarks, hierarchical memory, performance analysis

John L. Gustafson; Rajat Todi

1999-05-01T23:59:59.000Z

355

Benchmarking Electricity Liberalisation in Europe  

E-Print Network (OSTI)

sources does the country’s electricity industry use? A country with a high proportion of hydro-electricity may not be exposed to fluctuations in the prices of fossil fuels, but is vulnerable to years with low precipitation. Historically, oil prices have... the summer of 2000. The disadvantages of this measure include the significant effort required to calculate it. Although simple models of the industry can be built and maintained at low cost, and regularly updated with fuel prices and demand levels...

Green, Richard J; Lorenzoni, Arturo; Perez, Yannick; Pollitt, Michael G.

356

RTJBench: A RealTime Java Benchmarking Framework  

E-Print Network (OSTI)

Abstract. The paper gives an overview of RTJBench, a framework designed to assist in the task of benchmarking programs written in the Real-Time Specification for Java, but with potentially more general applicability. RTJBench extends the JUnit framework for unit testing of Java applications with tools for real-time environment configuration, simple data processing and configurable graphical presentation services. We present design principles of RTJBench and give an example of a benchmarking suite we have been using for daily regression benchmarking of the Open Virtual Machine. Keywords: Benchmarking, regression benchmarking, Real-Time Specification for Java

Marek Prochazka; Andrey Madan; Jan Vitek; Wenchang Liu

2004-01-01T23:59:59.000Z

357

Use ENERGY STAR benchmarking tools | ENERGY STAR Buildings & Plants  

NLE Websites -- All DOE Office Websites (Extended Search)

Facility owners and managers can use ENERGY STAR benchmarking tools, including Portfolio Manager, to benchmark energy use in existing buildings, commercial new construction, and industrial facilities.

358

Measure, track, and benchmark | ENERGY STAR Buildings & Plants  

NLE Websites -- All DOE Office Websites (Extended Search)

Measure, track, and benchmark: ENERGY STAR offers tools for benchmarking energy management practices, tools for tracking and benchmarking facility energy performance, and ENERGY STAR Energy Performance Indicators for plants.

359

Simulation benchmarks for low-pressure plasmas: capacitive discharges  

E-Print Network (OSTI)

Benchmarking is generally accepted as an important element in demonstrating the correctness of computer simulations. In the modern sense, a benchmark is a computer simulation result that has evidence of correctness, is accompanied by estimates of relevant errors, and which can thus be used as a basis for judging the accuracy and efficiency of other codes. In this paper, we present four benchmark cases related to capacitively coupled discharges. These benchmarks prescribe all relevant physical and numerical parameters. We have simulated the benchmark conditions using five independently developed particle-in-cell codes. We show that the results of these simulations are statistically indistinguishable, within bounds of uncertainty that we define. We therefore claim that the results of these simulations represent strong benchmarks, that can be used as a basis for evaluating the accuracy of other codes. These other codes could include other approaches than particle-in-cell simulations, where benchmarking could exa...

Turner, M M; Donko, Z; Eremin, D; Kelly, S J; Lafleur, T; Mussenbrock, T

2012-01-01T23:59:59.000Z

360

Benchmarking Of Improved DPAC Transient Deflagration Analysis Code  

SciTech Connect

The transient deflagration code DPAC (Deflagration Pressure Analysis Code) has been upgraded for use in modeling hydrogen deflagration transients. The upgraded code is benchmarked using data from vented hydrogen deflagration tests conducted at the HYDRO-SC Test Facility at the University of Pisa. DPAC originally was written to calculate peak deflagration pressures for deflagrations in radioactive waste storage tanks and process facilities at the Savannah River Site. Upgrades include the addition of a laminar flame speed correlation for hydrogen deflagrations and a mechanistic model for turbulent flame propagation, incorporation of inertial effects during venting, and inclusion of the effect of water vapor condensation on vessel walls. In addition, DPAC has been coupled with CEA, a NASA combustion chemistry code. The deflagration tests are modeled as end-to-end deflagrations. The improved DPAC code successfully predicts both the peak pressures during the deflagration tests and the times at which the pressure peaks.

2013-03-21T23:59:59.000Z

361

Characterizing Quantum Gates via Randomized Benchmarking  

E-Print Network (OSTI)

We describe and expand upon the scalable randomized benchmarking protocol proposed in Phys. Rev. Lett. 106, 180504 (2011) which provides a method for benchmarking quantum gates and estimating the gate-dependence of the noise. The protocol allows the noise to have weak time and gate-dependence, and we provide a sufficient condition for the applicability of the protocol in terms of the average variation of the noise. We discuss how state preparation and measurement errors are taken into account and provide a complete proof of the scalability of the protocol. We establish a connection in special cases between the error rate provided by this protocol and the error strength measured using the diamond norm distance.
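
A common way to extract an error rate in this kind of protocol is to fit the average sequence fidelity to a decay A·p^m + B and convert the decay parameter p into an average error rate. The sketch below performs this fit on synthetic data for the single-qubit, zeroth-order model; it illustrates only the analysis step, not the full protocol, and the numbers are invented.

```python
# Minimal sketch of the zeroth-order randomized-benchmarking analysis:
# fit average sequence fidelity F(m) = A * p**m + B and convert the decay
# parameter p to an average error rate r = (1 - p) * (d - 1) / d (d = 2 for
# one qubit). The "data" here are synthetic, not from any experiment.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, B, p):
    return A * p**m + B

rng = np.random.default_rng(0)
lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
true_A, true_B, true_p = 0.49, 0.5, 0.995
fidelity = decay(lengths, true_A, true_B, true_p) + rng.normal(0, 0.005, lengths.size)

(A, B, p), _ = curve_fit(decay, lengths, fidelity, p0=[0.5, 0.5, 0.99])
d = 2  # single-qubit Hilbert-space dimension
r = (1 - p) * (d - 1) / d
print(f"fitted p = {p:.4f}, average error rate r = {r:.4g}")
```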

Easwar Magesan; Jay M. Gambetta; Joseph Emerson

2011-09-30T23:59:59.000Z

362

Parton Distribution Benchmarking with LHC Data  

E-Print Network (OSTI)

We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross sections and differential distributions for electroweak boson and jet production in the cases in which the experimental covariance matrix is available. We quantify the agreement between data and theory by computing the chi2 for each data set with all the various PDFs. PDF comparisons are performed consistently for common values of the strong coupling. We also present a benchmark comparison of jet production at the LHC, comparing the results from various available codes and scale settings. Finally, we discuss the implications of the updated NNLO PDF sets for the combined PDF+alphaS uncertainty in the gluon fusion Higgs production cross section.

Richard D. Ball; Stefano Carrazza; Luigi Del Debbio; Stefano Forte; Jun Gao; Nathan Hartland; Joey Huston; Pavel Nadolsky; Juan Rojo; Daniel Stump; Robert S. Thorne; C. -P. Yuan

2012-11-21T23:59:59.000Z

363

Measuring Performance and Benchmarking Project Management  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Measuring Performance and Benchmarking Project Management at the Department of Energy. Committee for Oversight and Assessment of U.S. Department of Energy Project Management, Board on Infrastructure and the Constructed Environment, Division on Engineering and Physical Sciences. The National Academies Press, 500 Fifth Street, N.W., Washington, DC 20001; www.nap.edu. NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the...

364

Benchmark West Texas Intermediate crude assayed  

Science Conference Proceedings (OSTI)

The paper gives an assay of West Texas Intermediate, one of the world's market crudes. The price of this crude, known as WTI, is followed by market analysts, investors, traders, and industry managers around the world. WTI price is used as a benchmark for pricing all other US crude oils. The 41° API WTI posted price is the price paid for the crude at the wellhead in West Texas and is the true benchmark on which other US crudes are priced. The spot price is the negotiated price for short-term trades of the crude. And the New York Mercantile Exchange, or Nymex, price is a futures price for barrels delivered at Cushing.

Rhodes, A.K.

1994-08-15T23:59:59.000Z

365

BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS  

Science Conference Proceedings (OSTI)

The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one curie release of {sup 239}Pu were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.
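
The statistical step described above, reporting the 95th percentile consequence over a set of meteorological conditions, is straightforward once per-hour doses are available from whichever dispersion code is used. The sketch below illustrates only that step, with synthetic dose values rather than HotSpot or MACCS2 output.

```python
# Minimal sketch of the statistical step described above: given a centerline
# dose computed for each hour of meteorological data (by HotSpot, MACCS2, or
# any other dispersion model), report the 95th percentile consequence.
# The dose values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
# Pretend these are 8760 hourly centerline TEDE results (rem) for the MOI.
hourly_dose_rem = rng.lognormal(mean=-4.0, sigma=1.0, size=8760)

dose_95 = np.percentile(hourly_dose_rem, 95)
print(f"95th percentile TEDE: {dose_95:.3e} rem")
```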

Brotherton, Kevin

2009-04-30T23:59:59.000Z

366

MESURE Tool to benchmark Java Card platforms  

E-Print Network (OSTI)

The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behavior of these platforms is becoming crucial. To meet this need, we present in this paper a novel benchmarking framework to test and evaluate the performance of Java Card platforms. The MESURE tool is the first framework whose accuracy and effectiveness are independent of the particular Java Card platform tested and the CAD used.

Bouzefrane, Samia; Paradinas, Pierre

2009-01-01T23:59:59.000Z

367

Benchmark analysis for the design of piping systems in advanced reactors  

SciTech Connect

To satisfy the need for the verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for an advanced boiling water reactor standard design, three piping benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads with solutions developed using the methods being proposed for analysis for the advanced reactor standard design. It will be required that the combined license holders demonstrate that their solutions to these problems are in agreement with the benchmark problem set. A summary description of each problem and some sample results are included.

Bezler, P.; DeGrassi, G.; Braverman, J. (Brookhaven National Lab., Upton, NY (United States)); Shounien Hou (Nuclear Regulatory Commission, Washington, DC (United States))

1993-01-01T23:59:59.000Z

368

Benchmark analysis for the design of piping systems in advanced reactors  

Science Conference Proceedings (OSTI)

To satisfy the need for the verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for an advanced boiling water reactor standard design, three piping benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads with solutions developed using the methods being proposed for analysis for the advanced reactor standard design. It will be required that the combined license holders demonstrate that their solutions to these problems are in agreement with the benchmark problem set. A summary description of each problem and some sample results are included.

Bezler, P.; DeGrassi, G.; Braverman, J. [Brookhaven National Lab., Upton, NY (United States); Shounien Hou [Nuclear Regulatory Commission, Washington, DC (United States)

1993-03-01T23:59:59.000Z

369

Higgs-Boson Benchmarks in Agreement with CDM, EWPO and BPO  

E-Print Network (OSTI)

We explore `benchmark planes' in the Minimal Supersymmetric Standard Model (MSSM) that are in agreement with the measured cold dark matter (CDM) density, electroweak precision observables (EWPO) and B physics observables (BPO). The M_A-tan_beta planes are specified assuming that gaugino masses m_{1/2}, soft trilinear supersymmetry-breaking parameters A_0 and the soft supersymmetry-breaking contributions m_0 to the squark and slepton masses are universal, but not those associated with the Higgs multiplets (the NUHM framework). We discuss the prospects for probing experimentally these benchmark surfaces at the Tevatron collider, the LHC and the ILC.

S. Heinemeyer

2007-10-16T23:59:59.000Z

370

Experimental power density distribution benchmark in the TRIGA Mark II reactor  

Science Conference Proceedings (OSTI)

In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Josef Stefan Inst. (JSI), a bilateral project was started as part of the agreement between the French Commissariat a l'energie atomique et aux energies alternatives (CEA) and the Ministry of higher education, science and technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rate is sufficiently low that the experiments could be considered as benchmark experiments. (authors)

Snoj, L.; Stancar, Z.; Radulovic, V.; Podvratnik, M.; Zerovnik, G.; Trkov, A. [Josef Stefan Inst., Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Barbot, L.; Domergue, C.; Destouches, C. [CEA DEN, DER, Instrumentation Sensors and Dosimetry laboratory Cadarache, F-13108 Saint-Paul-Lez-Durance (France)

2012-07-01T23:59:59.000Z

371

Nek5000 Ready to Use after Simulations of Important Pipe Flow Benchmark |  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Nek5000 Ready to Use after Simulations of Important Pipe Flow Benchmark. January 29, 2013 - 1:42pm. Figure: Velocity magnitude in MATiS-H spacer grid with swirl-type vanes. As part of the on-going Nek5000 validation efforts, a series of large eddy simulations (LES) have been performed for thermal stratification in a pipe. Results were in good agreement with the experiment and the simulation data has provided insight into the physics of the flow. An additional series of simulations of the OECD-NEA MATiS-H benchmark has also been completed using intermediate-fidelity modeling approaches, such as k-epsilon, k-omega shear stress transport, and ID detached eddy simulation, as well as one...

372

Introduction to the HPC Challenge Benchmark Suite  

Science Conference Proceedings (OSTI)

The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b{sub eff}) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable with the size of data sets being a function of the largest HPL matrix for the tested system.
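
Of the kernels listed above, STREAM is the simplest to illustrate. The sketch below is a NumPy rendition of the STREAM triad used to estimate sustainable memory bandwidth; it is only indicative and is not the official HPC Challenge code.

```python
# Tiny NumPy rendition of the STREAM "triad" kernel (a = b + s*c), one of the
# memory-bandwidth probes in the HPC Challenge suite. This is an illustration,
# not the official benchmark; results depend on Python/NumPy overheads.
import time
import numpy as np

n = 20_000_000
b = np.random.rand(n)
c = np.random.rand(n)
a = np.empty_like(b)
scalar = 3.0

t0 = time.perf_counter()
np.multiply(c, scalar, out=a)   # a = s * c
np.add(a, b, out=a)             # a = b + s * c
elapsed = time.perf_counter() - t0

# Triad nominally moves three 8-byte arrays: read b, read c, write a.
gbytes = 3 * n * 8 / 1e9
print(f"Triad: {gbytes / elapsed:.1f} GB/s (approximate)")
```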

Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

2005-04-25T23:59:59.000Z

373

Towards a Benchmark and Automatic Calibration for IR-Based Concept Location  

Science Conference Proceedings (OSTI)

There has been a great deal of research into the use of Information Retrieval (IR)-based techniques to support concept location in source code. Much of this research has been focused on determining how to use various IR techniques to support concept ... Keywords: concept location, data model, software change reenactment, information retrieval, parameter calibration, benchmark

Scott David Ohlemacher; Andrian Marcus

2011-06-01T23:59:59.000Z

374

Home Energy Score Pilot Analysis Webinar (Text Version) | Department of  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Home Energy Score Pilot Analysis Webinar (Text Version). Below is a text version of the webinar titled "Home Energy Score: Analysis and Improvements to Date," originally presented on July 24, 2012. In addition to this text version of the audio, you can access the presentation slides and a recording of the webinar. Slide 2: To date based on the pilot findings, tell you a little bit about the analysis that we did over the last year or so, partly from the pilots and then also through another piece of analysis that NREL did for us using model data and also utility bill data. And then, finally I will tell you a little bit about our next steps and what we are planning as we move forward with implementation, both in terms of implementation, but with an...

375

LBNL Window & Daylighting Software -- WINDOW 6 Research Version  

NLE Websites -- All DOE Office Websites (Extended Search)

WINDOW 7.2. Last Updated: 12/29/2013. If you find bugs, or have comments about this version, please do not hesitate to send an email to WINDOWHelp@lbl.gov to report your findings. Getting feedback from users is how we improve the program. WINDOW 7.2 (7.2.29) (12/29/2013) Release Notes -- please read these before running this version. This version contains these new modeling features: Honeycomb shades; Dynamic Glazing (Thermochromic and Electrochromic). This version is compatible with THERM 7.1. Please send us emails as you find issues in the program -- that is the only way that we can make it more robust. We hope to iterate versions fairly quickly in the next month or so to get the bugs ironed out. Radiance for WINDOW 7: Get a copy of Radiance for WINDOW 7.2 (must be used with WINDOW 7.0.59 or later).

376

The Effect of a Well-Resolved Stratosphere on Surface Climate: Differences between CMIP5 Simulations with High and Low Top Versions of the Met Office Climate Model  

Science Conference Proceedings (OSTI)

The importance of using a general circulation model that includes a well-resolved stratosphere for climate simulations, and particularly the influence this has on surface climate, is investigated. High top model simulations are run with the Met ...

S. C. Hardiman; N. Butchart; T. J. Hinton; S. M. Osprey; L. J. Gray

2012-10-01T23:59:59.000Z

377

Gasoline prices decrease (long version)  

U.S. Energy Information Administration (EIA) Indexed Site

The U.S. average retail price for regular gasoline fell to $3.70 a gallon on Monday. That's down 1.4 cents from a week ago, based on the...

378

Soy Protein Products - Electronic Version  

Science Conference Proceedings (OSTI)

Soybeans as Functional Foods and Ingredients is written to serve as a reference for food product developers, food technologists, nutritionists, plant breeders, academic and government professionals... Soy Protein Products - Electronic Version (eChapters).

379

Biomass Webinar Text Version | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Download the text version of the audio from the DOE Office of Indian Energy webinar on biomass. DOE Office of Indian Energy Foundational...

380

Biomass Webinar Text Version | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Download the text version of the audio from the DOE Office of Indian Energy webinar on biomass. DOE Office of Indian Energy...

381

Optimal versioning and pricing of information products with considering or not common valuation of customers  

Science Conference Proceedings (OSTI)

Since information products are often offered to market in multiple versions to make vertical differentiation, the optimal versioning strategy has become a hot topic in the research community. This paper focuses on the numerical investigation of the properties ... Keywords: Bilevel programming model, Information product, Numerical computation, Optimal pricing, Versioning strategy

Minqiang Li; Haiyang Feng; Fuzan Chen

2012-08-01T23:59:59.000Z

382

Multimedia: Energy 101: Daylighting (Text Version)  

Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


383

ABM11 parton distributions and benchmarks  

E-Print Network (OSTI)

We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant $\\alpha_s$ at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for $n_f=3,4,5$ and uses the $\\bar{MS}$ scheme for $\\alpha_s$ and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to the NNLO accuracy.

S. Alekhin; J. Bluemlein; S. -O. Moch

2012-08-07T23:59:59.000Z

384

Manufacturers' View on Benchmarking and Disclosure  

U.S. Energy Information Administration (EIA) Indexed Site

Manufacturing Solutions for Energy Efficiency in Buildings. Patrick Hughes, Policy Director, High Performance Buildings, National Electrical Manufacturers Association (NEMA), the Association of Electrical Equipment and Medical Imaging Manufacturers. Which policies encourage energy efficiency in buildings? Energy Savings Performance Contracts, Tax Incentives, Shaheen-Portman, and Benchmarking and Disclosure (Bullitt Center, Seattle, Washington). Energy Savings Performance Contracts: ESPCs pay for efficiency upgrades with...

385

Tools for tracking and benchmarking facility energy performance | ENERGY  

NLE Websites -- All DOE Office Websites (Extended Search)

ENERGY STAR tools for tracking and benchmarking facility energy performance, part of the industrial energy management resources under Measure, track, and benchmark.

386

Energy Tips: Benchmark the Fuel Cost of Steam Generation | ENERGY...  

NLE Websites -- All DOE Office Websites (Extended Search)


387

New Benchmark Instances for the Steiner Problem in Graphs  

E-Print Network (OSTI)

Sep 26, 2001 ... New Benchmark Instances for the Steiner Problem in Graphs. Isabel Rosseti (rosseti ***at*** inf.puc-rio.br), Marcus Poggi de Aragão (poggi ...

388

Benchmark Results for TraPPE Carbon Dioxide  

Science Conference Proceedings (OSTI)

Benchmark results for TraPPE Carbon Dioxide. The purpose of these pages is to provide some explicit results from Monte ...

2013-09-20T23:59:59.000Z

389

Use Energy Information Services to Benchmark with ENERGY STAR  

NLE Websites -- All DOE Office Websites (Extended Search)

service products that meet different customer needs including: > Utility Bill Management: ENERGY STAR automated benchmarking, utility tracking, bill processing and payment,...

390

NERSC-6 Workload Analysis and Benchmark Selection Process  

E-Print Network (OSTI)

Computational Characteristics for NERSC-6 Benchmarks ... "Science-Driven Computing: NERSC's Plan for 2006–2010" ... Erich Strohmaier, "The NERSC Sustained System Performance (...

Antypas, Katie

2008-01-01T23:59:59.000Z

391

ENERGY STAR Building Upgrade Manual Chapter 2: Benchmarking ...  

NLE Websites -- All DOE Office Websites (Extended Search)

efficiency upgrades presented in an easy-to-understand framework designed especially for ENERGY STAR partners. This 12-page chapter defines benchmarking, what successful...

392

Tools for benchmarking energy management practices | ENERGY STAR  

NLE Websites -- All DOE Office Websites (Extended Search)

ENERGY STAR tools for benchmarking energy management practices, for facility owners and managers.

393

GATE Air-Sea Interaction. I: Numerical Model Calculation of Local Sea-Surface Temperatures on Diurnal Time Scales Using the GATE Version III Gridded Global Data Set  

Science Conference Proceedings (OSTI)

The numerical model of air-sea interaction previously described in Jacobs (1978), Pandolfo and Jacobs (1972) and Pandolfo (1969) is inserted at one horizontal grid point in the GATE III Gridded Global Data Set to calculate a model-generated, ...

P. S. Brown Jr.; J. P. Pandolfo; S. J. Thoren

1982-05-01T23:59:59.000Z

394

Benchmarking ICRF Full-wave Solvers for ITER  

DOE Green Energy (OSTI)

Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by six full-wave solver groups to simulate the ICRF electromagnetic fields and heating, and by three of these groups to simulate the current-drive. Approximate agreement is achieved for the predicted heating power for the DT and He4 cases. Factor of two disagreements are found for the cases with second harmonic He3 heating in bulk H cases. Approximate agreement is achieved simulating the ICRF current drive.

R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R. J. Dumont, A. Fukuyama, R. Harvey, E. F. Jaeger, K. Indireshkumar, E. Lerche, D. McCune, C. K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

2011-01-06T23:59:59.000Z

395

VVER-440 dosimetry and neutron spectrum benchmark  

SciTech Connect

Light Water Reactor (LWR) benchmark experiments performed in the United States under the Surveillance Dosimetry Improvement Program (SDIP), in general, reported measured reaction rates and not neutron flux spectrum. The VVER-440 benchmark experiments, using a combination of spherical hydrogen-filled proportional counters and a stilbene scintillator detector, were measurements that provided a direct verification of the transport neutron flux spectrum. The original SAILOR cross-section library from ENDF/B-IV were used, except that the iron, hydrogen, and oxygen values from ENDF/B-VI were inserted. A linear-least-squares analysis showed that the average difference between calculations and measurements below 10 MeV was (a) less than 6% at the surveillance position; (b) less than 5% at the pressure vessel (PV) inner surface; (c) less than 6% at 1/3 thickness into the PV (1/3 T); (d) less than 17% at 2/3 thickness into the PV (2/3 T); and (e) less than 24% at the PV outer surface.

Sajot, E. [Louisiana State Univ., Baton Rouge, LA (United States). Nuclear Science Center; Kam, F.B.K. [Oak Ridge National Lab., TN (United States)

1993-11-01T23:59:59.000Z

396

Updated Post-WMAP Benchmarks for Supersymmetry  

E-Print Network (OSTI)

We update a previously-proposed set of supersymmetric benchmark scenarios, taking into account the precise constraints on the cold dark matter density obtained by combining WMAP and other cosmological data, as well as the LEP and b -> s gamma constraints. We assume that R parity is conserved and work within the constrained MSSM (CMSSM) with universal soft supersymmetry-breaking scalar and gaugino masses m_0 and m_1/2. In most cases, the relic density calculated for the previous benchmarks may be brought within the WMAP range by reducing slightly m_0, but in two cases more substantial changes in m_0 and m_1/2 are made. Since the WMAP constraint reduces the effective dimensionality of the CMSSM parameter space, one may study phenomenology along `WMAP lines' in the (m_1/2, m_0) plane that have acceptable amounts of dark matter. We discuss the production, decays and detectability of sparticles along these lines, at the LHC and at linear e+ e- colliders in the sub- and multi-TeV ranges, stressing the complementarity of hadron and lepton colliders, and with particular emphasis on the neutralino sector. Finally, we preview the accuracy with which one might be able to predict the density of supersymmetric cold dark matter using collider measurements.

M. Battaglia; A. De Roeck; J. Ellis; F. Gianotti; K. A. Olive; L. Pape

2003-06-23T23:59:59.000Z

397

GATE Air-Sea Interactions II: Numerical-Model Calculation of Regional Sea-Surface Temperature Fields Using the GATE Version III Gridded Global Data Set  

Science Conference Proceedings (OSTI)

The numerical model of air-sea interaction previously described in Brown et al. (1982), Pandolfo and Jacobs (1972) and Pandolfo (1969) is applied over a limited horizontal portion of the GATE III Gridded Global Data set (including continental ...

P. S. Brown Jr.; J. P. Pandolfo; G. D. Robinson

1982-10-01T23:59:59.000Z

398

The CP-violating type-II 2HDM and Charged Higgs boson benchmarks  

E-Print Network (OSTI)

We review and update the interpretation of the 125 GeV scalar as the lightest Higgs boson of the Two-Higgs-Doublet Model, allowing for CP violation in the potential. The detection of a charged Higgs boson would exclude the Standard Model. Proposed benchmarks for charged-Higgs searches in the channel pp\\to H^+W^-X\\to W^+W^-H_1X are reviewed and updated.

Lorenzo Basso; Anna Lipniacka; Farvah Mahmoudi; Stefano Moretti; Per Osland; Giovanni Marco Pruna; Mahdi Purmohammadi

2013-05-14T23:59:59.000Z

399

Blind Benchmark Calculations for Melt Spreading in the ECOSTAR Project  

SciTech Connect

The Project ECOSTAR (5. EC Framework Programme) on Ex-Vessel Core Melt Stabilisation Research is oriented towards the analysis and mitigation of severe accident sequences that could occur in the ex-vessel phase of a postulated core melt accident. Spreading of the corium melt on the available basement surface is an important process, which defines the initial conditions for concrete attack and for the efficiency of cooling in case of water contact, respectively. The transfer and spreading of the melt on the basement is one of the major issues in ECOSTAR. This is addressed here by a spreading code benchmark involving a large-scale spreading experiment that is used for the validation of the existing spreading codes. The corium melt is simulated by a mixture of Al{sub 2}O{sub 3}, SiO{sub 2}, CaO and FeO with a sufficiently wide freezing interval. In the 3-dim benchmark test ECOKATS-1 170 litres of oxide melt are poured onto a 3 m by 4 m concrete surface with a low flow rate of about 2 l/s. From the results of an additional 2-dim channel experiment some basic rheological data (e.g. initial viscosity) are obtained in order to minimise the uncertainty in material properties of the melt. The participating spreading codes CORFLOW (Framatome ANP/FZK), LAVA (GRS), and THEMA (CEA) differ from each other by their focus of modelling and the assumptions made to simplify the relevant transport equations. In a first step both experiments (3-dim/2-dim) are calculated blindly by the participating codes. This serves for an overall assessment of the codes capabilities to predict the spreading of a melt with rather unknown material properties. In a second step the 3-dim experiment ECOKATS-1 is recalculated by the codes with the more precise knowledge of the rheological behaviour of the oxide melt in the 2-dim experiment. This, in addition, serves for the validation of the codes' capabilities to predict the spreading of a melt with well-known material properties. Based on the benchmark results and taking the specific validation process for each of the three codes applied into account, it is recommended that the spreading issue for reactor safety research be considered closed. (authors)

Spengler, C.; Allelein, H.J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit, Schwertnergasse 1, 50667 Cologne (Germany); Foit, J.J.; Alsmeyer, H. [Forschungszentrum Karlsruhe, P.O. Box 36 40, 76021 Karlsruhe (Germany); Spindler, B.; Veteau, J.M. [CEA, 17, rue des Martyrs, 38054 Grenoble (France); Artnik, J.; Fischer, M. [Framatome ANP, P.O. Box 32 20, 91050 Erlangen (Germany)

2004-07-01T23:59:59.000Z

400

BBSLA – Malaysia (032410) English (Global Version 031010) ...  

Science Conference Proceedings (OSTI)

BBSLA – Malaysia (032410) English (Global Version 031010) 1 BLACKBERRY SOLUTION LICENSE AGREEMENT PLEASE READ THIS ...

401

BBSLA – Argentina (031710) Spanish (Global Version ...  

Science Conference Proceedings (OSTI)

BBSLA – Argentina (031710) Spanish (Global Version 031010) BLACKBERRY SOLUTION LICENSE AGREEMENT PLEASE READ THIS ...

402

BBSLA – Pakistan (032310) English (Global Version 031010) ...  

Science Conference Proceedings (OSTI)

BBSLA – Pakistan (032310) English (Global Version 031010) 1 BLACKBERRY SOLUTION LICENSE AGREEMENT PLEASE READ THIS ...

403

BBSLA – Thailand (032410) English (Global Version 031010) ...  

Science Conference Proceedings (OSTI)

BBSLA – Thailand (032410) English (Global Version 031010) 1 BLACKBERRY SOLUTION LICENSE AGREEMENT PLEASE READ THIS ...

404

BBSLA - South Africa (031910) English (Global Version ...  

Science Conference Proceedings (OSTI)

BBSLA - South Africa (031910) English (Global Version 031010) 1 BLACKBERRY SOLUTION LICENSE AGREEMENT PLEASE READ THIS ...

405

Science Driven Supercomputing Architectures: Analyzing Architectural Bottlenecks with Applications and Benchmark Probes  

SciTech Connect

There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and to understand the performance requirements of scientific applications, and to communicate them efficiently to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: (1) a suite of representative applications; (2) a set of application kernels; and (3) benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures more suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be of interest to non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost-models in the broader marketplace, ideally facilitating the development of future computer architectures more suited for scientific computations. The three levels also come with vastly different investments: the benchmarking efforts require significant rewriting to effectively use a given architecture, which is much more difficult on full applications than on smaller benchmarks.

Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan,H.; Strohmaier, E.

2005-09-26T23:59:59.000Z

406

The Snowmass Points and Slopes: Benchmarks for SUSY Searches  

E-Print Network (OSTI)

The ``Snowmass Points and Slopes'' (SPS) are a set of benchmark points and parameter lines in the MSSM parameter space corresponding to different scenarios in the search for Supersymmetry at present and future experiments. This set of benchmarks was agreed upon at the 2001 ``Snowmass Workshop on the Future of Particle Physics'' as a consensus based on different existing proposals.

B. C. Allanach; M. Battaglia; G. A. Blair; M. Carena; A. De Roeck; A. Dedes; A. Djouadi; D. Gerdes; N. Ghodbane; J. Gunion; H. E. Haber; T. Han; S. Heinemeyer; J. L. Hewett; I. Hinchliffe; J. Kalinowski; H. E. Logan; S. P. Martin; H. -U. Martyn; K. T. Matchev; S. Moretti; F. Moortgat; G. Moortgat-Pick; S. Mrenna; U. Nauenberg; Y. Okada; K. A. Olive; W. Porod; M. Schmitt; S. Su; C. E. M. Wagner; G. Weiglein; J. Wells; G. W. Wilson; P. Zerwas

2002-02-25T23:59:59.000Z

407

From Aardvark to Zorro: A Benchmark for Mammal Image Classification  

Science Conference Proceedings (OSTI)

Current object recognition systems aim at recognizing numerous object classes under limited supervision conditions. This paper provides a benchmark for evaluating progress on this fundamental task. Several methods have recently been proposed to utilize the ... Keywords: Animals, Annotation, Benchmark, Database, Dataset, Machine learning, Mammals, Multiclass, Natural images, Object recognition, SVM

Michael Fink; Shimon Ullman

2008-05-01T23:59:59.000Z

408

HPC Global File System Performance Analysis Using A Scientific-Application Derived Benchmark  

E-Print Network (OSTI)

Scientific-Application Derived Benchmark. In Proc. SC07: High Monterey, CA, April 11-14 2005. [9] Flash io benchmark. www-unix.mcs.anl.gov/pio-benchmark/. [10] W. Gropp, E. Lusk,

Borrill, Julian

2009-01-01T23:59:59.000Z

409

Do Benchmarks Matter? Do Measures Matter? A Study of Monthly Mutual Fund Returns  

E-Print Network (OSTI)

USING 279 MUTUAL FUNDS / Benchmark: Time Series t-statistic / FOR 3 MEASURES WITH 4 BENCHMARKS USING 109 PASSIVE TEST / 120 MONTHLY OBSERVATIONS / BENCHMARK: EW INDEX 10 FACTORS P8

Grinblatt, Mark; Titman, Sheridan

1991-01-01T23:59:59.000Z

410

Review of California and National Methods for Energy Performance Benchmarking of Commercial Buildings  

E-Print Network (OSTI)

benchmark the energy performance of California's buildings. benchmark with quantitative statistics guiding the building evaluation. Energy

Matson, Nance E.; Piette, Mary Ann

2005-01-01T23:59:59.000Z

411

Constraining the Influence of Natural Variability to Improve Estimates of Global Aerosol Indirect Effects in a Nudged Version of the Community Atmosphere Model 5  

SciTech Connect

Natural modes of variability on many timescales influence aerosol particle distributions and cloud properties such that isolating statistically significant differences in cloud radiative forcing due to anthropogenic aerosol perturbations (indirect effects) typically requires integrating over long simulations. For state-of-the-art global climate models (GCM), especially those in which embedded cloud-resolving models replace conventional statistical parameterizations (i.e. multi-scale modeling framework, MMF), the required long integrations can be prohibitively expensive. Here an alternative approach is explored, which implements Newtonian relaxation (nudging) to constrain simulations with both pre-industrial and present-day aerosol emissions toward identical meteorological conditions, thus reducing differences in natural variability and dampening feedback responses in order to isolate radiative forcing. Ten-year GCM simulations with nudging provide a more stable estimate of the global-annual mean aerosol indirect radiative forcing than do conventional free-running simulations. The estimates have mean values and 95% confidence intervals of -1.54 ± 0.02 W/m2 and -1.63 ± 0.17 W/m2 for nudged and free-running simulations, respectively. Nudging also substantially increases the fraction of the world’s area in which a statistically significant aerosol indirect effect can be detected (68% and 25% of the Earth's surface for nudged and free-running simulations, respectively). One-year MMF simulations with and without nudging provide global-annual mean aerosol indirect radiative forcing estimates of -0.80 W/m2 and -0.56 W/m2, respectively. The one-year nudged results compare well with previous estimates from three-year free-running simulations (-0.77 W/m2), which showed the aerosol-cloud relationship to be in better agreement with observations and high-resolution models than in the results obtained with conventional parameterizations.
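
The "Newtonian relaxation (nudging)" referred to above can be summarized by a single tendency term; the form below is a generic illustration, and the symbols X, F_model, X_ref, and tau are not taken from the paper.

```latex
% Generic nudging tendency: the prognostic field X is relaxed toward a
% prescribed reference state X_ref on a relaxation timescale tau, on top
% of the model's own physics and dynamics F_model(X).
\[
  \frac{\partial X}{\partial t} \;=\; F_{\mathrm{model}}(X)
  \;+\; \frac{X_{\mathrm{ref}} - X}{\tau} .
\]
```

Because the pre-industrial and present-day runs are relaxed toward the same reference meteorology, differences in natural variability are suppressed and the aerosol indirect forcing can be isolated from a much shorter integration.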

Kooperman, G. J.; Pritchard, M. S.; Ghan, Steven J.; Wang, Minghuai; Somerville, Richard C.; Russell, Lynn

2012-12-11T23:59:59.000Z

412

Benchmark energy use | ENERGY STAR Buildings & Plants  

NLE Websites -- All DOE Office Websites (Extended Search)


413

Energy Efficient Cities: Assessment Tool and Benchmarking Practices | Open  

Open Energy Info (EERE)

Energy Efficient Cities: Assessment Tool and Benchmarking Practices. Agency/Company/Organization: World Bank. Sector: Energy. Focus Area: Energy Efficiency, Buildings, Industry. Topics: Resource assessment, Technology characterizations. Resource Type: Publications, Guide/manual. Website: www.esmap.org/esmap/sites/esmap.org/files/P115793_Energy%20Efficient%2 Overview: "Energy Efficient Cities: Assessment Tools and Benchmarking Practices has been developed from a careful review of selected papers presented during two ESMAP-sponsored sessions at the fifth World Bank Urban Research

414

Benchmarking Buildings to Prioritize Sites for Emissions Analysis |  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Benchmarking Buildings to Prioritize Sites for Emissions Analysis. October 7, 2013 - 10:54am. Step 2: When actual energy use by building type is known, benchmarking the performance of those buildings to industry averages can help establish those with greatest opportunities for GHG reduction. Energy intensity can be used as a basis for benchmarking by building type and can be calculated using actual energy use, representative buildings, or available average estimates from agency energy records. Energy intensity should be compared to industry averages, such as the Commercial Buildings Energy Consumption Survey (CBECS) or an agency specific metered sample by location. When a program has access to metered data or representative building data,

415

Quantum benchmarks for pure single-mode Gaussian states  

E-Print Network (OSTI)

Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian single-mode states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.
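
For concreteness, the benchmark criterion described above can be written as an average-fidelity threshold; the formulation below is a standard one, and the 1/2 value is the figure quoted in the abstract for coherent states (and for squeezed states with random phase and squeezing).

```latex
% A teleportation or storage experiment demonstrates a genuine quantum
% advantage when its average fidelity over the input ensemble exceeds the
% best classical (measure-and-prepare) value:
\[
  \bar{F} \;=\; \int d\psi\, p(\psi)\,
     \langle \psi |\, \rho_{\mathrm{out}}(\psi)\, | \psi \rangle
  \;>\; \bar{F}_{\mathrm{classical}} \;=\; \tfrac{1}{2}.
\]
```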

Giulio Chiribella; Gerardo Adesso

2013-08-09T23:59:59.000Z

416

Learn about benchmarking | ENERGY STAR Buildings & Plants  

NLE Websites -- All DOE Office Websites (Extended Search)


417

Developing a Web-based Benchmarking Tool for Laboratories  

NLE Websites -- All DOE Office Websites (Extended Search)

Developing a Web-based Benchmarking Tool for Laboratories. Speaker(s): Mayank Singh. Date: November 22, 2002 - 12:00pm. Location: Bldg. 90. Seminar Host/Point of Contact: Dale Sartor. (The EETD Applications Team includes: Satish Kumar, Paul Mathew, Dale Sartor, and Mayank Singh.) Developers of benchmarking tools are confronted with some common issues and some unique challenges. This presentation will describe the challenges faced by us while developing a web-based benchmarking tool for laboratories. Attributes such as i) the analytical and data visualization capability, and ii) the flexibility and usability of the tool are common to any benchmarking effort. The various classification schemes and categories of laboratories, each with its own energy signature, posed a design challenge both for the database and the data input forms,

418

Review of National and California Benchmarking Methods  

NLE Websites -- All DOE Office Websites (Extended Search)

Review of California and National Methods for Energy-Performance Benchmarking of Commercial Buildings. Nance E. Matson and Mary Ann Piette, Ernest Orlando Lawrence Berkeley National Laboratory, September 5th, 2005. LBNL No. 57364.

419

A Power Benchmarking Framework for Network Devices  

E-Print Network (OSTI)

Abstract. Energy efficiency is becoming increasingly important in the operation of networking infrastructure, especially in enterprise and data center networks. Researchers have proposed several strategies for energy management of networking devices. However, we need a comprehensive characterization of power consumption by a variety of switches and routers to accurately quantify the savings from the various power savings schemes. In this paper, we first describe the hurdles in network power instrumentation and present a power measurement study of a variety of networking gear such as hubs, edge switches, core switches, routers and wireless access points in both stand-alone mode and a production data center. We build and describe a benchmarking suite that will allow users to measure and compare the power consumed for a large set of common configurations at any switch or router of their choice. We also propose a network energy proportionality index, which is an easily measurable metric, to compare power consumption behaviors of multiple devices.
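
The "network energy proportionality index" mentioned above is described only as an easily measurable metric, so the sketch below assumes one common formulation based on idle and full-load power draw; the function name and the device numbers are illustrative rather than the paper's exact definition.

```python
# Minimal sketch: compare how proportionally two (hypothetical) network
# devices scale power with load. 100 = fully proportional, 0 = flat power.
from dataclasses import dataclass

@dataclass
class PowerProfile:
    name: str
    idle_watts: float       # power drawn with no traffic offered
    full_load_watts: float  # power drawn at maximum forwarding rate

def energy_proportionality_index(p: PowerProfile) -> float:
    """Assumed index: share of full-load power that actually varies with load."""
    return 100.0 * (p.full_load_watts - p.idle_watts) / p.full_load_watts

devices = [
    PowerProfile("edge-switch-A", idle_watts=120.0, full_load_watts=135.0),
    PowerProfile("core-router-B", idle_watts=550.0, full_load_watts=600.0),
]
for d in devices:  # illustrative numbers, not measurements from the paper
    print(f"{d.name}: EPI = {energy_proportionality_index(d):.1f}")
```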

Priya Mahadevan; Puneet Sharma; Sujata Banerjee

2009-01-01T23:59:59.000Z

420

Quantum deduction rules (preliminary version)  

E-Print Network (OSTI)

Quantum deduction rules (preliminary version). Pavel Pudlák. March 27, 2007. Abstract: We define propositional quantum Frege proof systems and compare them with classical Frege proof systems. 1 Introduction. In this paper we shall address the question whether quantum circuits could help us prove theorems faster than

Pudlák, Pavel

421

Performance Evaluation and Benchmarking of Intelligent Systems  

Science Conference Proceedings (OSTI)

To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems.

Madhavan, Raj [ORNL]; Messina, Elena [National Institute of Standards and Technology (NIST)]; Tunstel, Edward [JHU Applied Physics Laboratory]

2009-09-01T23:59:59.000Z

422

EQ6, a computer program for reaction path modeling of aqueous geochemical systems: Theoretical manual, user's guide, and related documentation (Version 7.0); Part 4  

Science Conference Proceedings (OSTI)

EQ6 is a FORTRAN computer program in the EQ3/6 software package (Wolery, 1979). It calculates reaction paths (chemical evolution) in reacting water-rock and water-rock-waste systems. Speciation in aqueous solution is an integral part of these calculations. EQ6 computes models of titration processes (including fluid mixing), irreversible reaction in closed systems, irreversible reaction in some simple kinds of open systems, and heating or cooling processes, and can also solve "single-point" thermodynamic equilibrium problems. A reaction path calculation normally involves a sequence of thermodynamic equilibrium calculations. Chemical evolution is driven by a set of irreversible reactions (i.e., reactions out of equilibrium) and/or changes in temperature and/or pressure. These irreversible reactions usually represent the dissolution or precipitation of minerals or other solids. The code computes the appearance and disappearance of phases in solubility equilibrium with the water. It finds the identities of these phases automatically. The user may specify which potential phases are allowed to form and which are not. There is an option to fix the fugacities of specified gas species, simulating contact with a large external reservoir. Rate laws for irreversible reactions may be either relative rates or actual rates. If any actual rates are used, the calculation has a time frame. Several forms for actual rate laws are programmed into the code. EQ6 is presently able to model both mineral dissolution and growth kinetics.
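
As a conceptual illustration of what a "reaction path" step involves (this is not EQ6's actual algorithm, which solves full aqueous speciation with activity corrections at every step), the toy loop below adds a reactant in small increments of reaction progress and precipitates a hypothetical 1:1 solid once its saturation index reaches zero; all species, constants, and step sizes are made up.

```python
# Toy reaction-path (titration-style) loop. Everything here is illustrative:
# a single dissolving reactant releases component A into a solution that
# already contains component B, and a hypothetical 1:1 solid "AB" is allowed
# to precipitate once the solution becomes saturated with respect to it.
import math

LOG_K_AB = -8.5        # assumed solubility product of solid AB (log10 K)
conc_a = 1.0e-6        # mol/kg of dissolved A
conc_b = 1.0e-3        # mol/kg of dissolved B
moles_ab = 0.0         # cumulative moles of AB precipitated

D_XI = 1.0e-6          # increment of reaction progress (mol of reactant dissolved)
for _ in range(200):
    conc_a += D_XI     # dissolving the reactant releases A

    # Saturation index SI = log10(Q/K), ideal solution (unit activity coefficients)
    while math.log10(conc_a * conc_b) > LOG_K_AB:
        # Precipitate AB in small amounts until the solution returns to equilibrium
        conc_a -= 1.0e-8
        conc_b -= 1.0e-8
        moles_ab += 1.0e-8

print(f"dissolved A: {conc_a:.3e} mol/kg, precipitated AB: {moles_ab:.3e} mol")
```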

Wolery, T.J.; Daveler, S.A.

1992-10-09T23:59:59.000Z

423

Preliminary Thermal Modeling of HI-Storm 100S-218 Version B Storage Modules at Hope Creek Nuclear Power Station ISFSI  

Science Conference Proceedings (OSTI)

As part of the Used Fuel Disposition Campaign of the U. S. Department of Energy, Office of Nuclear Energy (DOE-NE) Fuel Cycle Research and Development, a consortium of national laboratories and industry is performing visual inspections and temperature measurements of selected storage modules at various locations around the United States. This report documents thermal analyses in support of the inspections at the Hope Creek Nuclear Generating Station ISFSI. This site utilizes the HI-STORM 100 vertical storage system developed by Holtec International. This is a vertical storage module design, and the thermal models are being developed using COBRA-SFS (Michener, et al., 1987), a code developed by PNNL for thermal-hydraulic analyses of multi-assembly spent fuel storage and transportation systems. This report describes the COBRA-SFS model in detail, and presents pre-inspection predictions of component temperatures and temperature distributions. The final report will include evaluation of inspection results, and if required, additional post-test calculations, with appropriate discussion of results.

Cuta, Judith M.; Adkins, Harold E.

2013-08-30T23:59:59.000Z

424

Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) verification and validation plan. version 1.  

SciTech Connect

The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. To meet this objective, NEAMS Waste IPSC M&S capabilities will be applied to challenging spatial domains, temporal domains, multiphysics couplings, and multiscale couplings. A strategic verification and validation (V&V) goal is to establish evidence-based metrics for the level of confidence in M&S codes and capabilities. Because it is economically impractical to apply the maximum V&V rigor to each and every M&S capability, M&S capabilities will be ranked for their impact on the performance assessments of various components of the repository systems. Those M&S capabilities with greater impact will require a greater level of confidence and a correspondingly greater investment in V&V. This report includes five major components: (1) a background summary of the NEAMS Waste IPSC to emphasize M&S challenges; (2) the conceptual foundation for verification, validation, and confidence assessment of NEAMS Waste IPSC M&S capabilities; (3) specifications for the planned verification, validation, and confidence-assessment practices; (4) specifications for the planned evidence information management system; and (5) a path forward for the incremental implementation of this V&V plan.

Bartlett, Roscoe Ainsworth; Arguello, Jose Guadalupe, Jr.; Urbina, Angel; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Knupp, Patrick Michael; Wang, Yifeng; Schultz, Peter Andrew; Howard, Robert (Oak Ridge National Laboratory, Oak Ridge, TN); McCornack, Marjorie Turner

2011-01-01T23:59:59.000Z

425

Retirement Saving with Contribution Payments and Labor Income as a Benchmark for Investments  

E-Print Network (OSTI)

In this paper we study the retirement saving problem from the point of view of a plan sponsor, who makes contribution payments for the future retirement of an employee. The plan sponsor considers the employee's labor income as investment-benchmark in order to ensure the continuation of consumption habits after retirement. We demonstrate that the demand for risky assets increases at low wealth levels due to the contribution payments. We quantify the demand for hedging against changes in wage growth and find that it is relatively small. We show that downside-risk measures increase risk-taking at both low and high levels of wealth. Keywords: retirement saving, optimal asset allocation, discrete-time finance, dynamic programming. JEL Classification Codes: G11, G23. We are grateful to Bart Oldenkamp and Ton Vorst for comments on an earlier version of this paper. Corresponding author: Erasmus University Rotterdam, Econometric Institute, P.O. Box 1738, 3000 DR Rotterdam, The Netherland...

Arjan Berkelaar; Roy Kouwenberg

1999-01-01T23:59:59.000Z

426

ARM - Carlos Sousa Interview (English Version)  

NLE Websites -- All DOE Office Websites (Extended Search)

Carlos Sousa Interview (English Version). From Graciosa to India. Thanks to the Atmospheric Station. (From Açoriano Oriental, March 5, 2012.) Carlos Sousa works on an instrument at the ARM India site. To be at the right place at the right time can change a person's life.

427

Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors  

SciTech Connect

An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

John D. Bess; Keith C. Bledsoe; Bradley T. Rearden

2011-02-01T23:59:59.000Z

428

Benchmarking the Remote-Handled Waste Facility at the West Valley Demonstration Project  

Science Conference Proceedings (OSTI)

Facility decontamination activities at the West Valley Demonstration Project (WVDP), the site of a former commercial nuclear spent fuel reprocessing facility near Buffalo, New York, have resulted in the removal of radioactive waste. Due to high dose and/or high contamination levels of this waste, it needs to be handled remotely for processing and repackaging into transport/disposal-ready containers. An initial conceptual design for a Remote-Handled Waste Facility (RHWF), completed in June 1998, was estimated to cost $55 million and take 11 years to process the waste. Benchmarking the RHWF with other facilities around the world, completed in November 1998, identified unique facility design features and innovative waste processing methods. Incorporation of the benchmarking effort has led to a smaller yet fully functional, $31 million facility. To distinguish it from the June 1998 version, the revised design is called the Rescoped Remote-Handled Waste Facility (RRHWF) in this topical report. The conceptual design for the RRHWF was completed in June 1999. A design-build contract was approved by the Department of Energy in September 1999.

O. P. Mendiratta; D. K. Ploetz

2000-02-29T23:59:59.000Z

429

Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS) Code Verification and Validation Data Standards and Requirements: Fluid Dynamics Version 1.0  

SciTech Connect

V&V and UQ are the primary means to assess the accuracy and reliability of M&S and, hence, to establish confidence in M&S. Though other industries are establishing standards and requirements for the performance of V&V and UQ, at present, the nuclear industry has not established such standards or requirements. However, the nuclear industry is beginning to recognize that such standards are needed and that the resources needed to support V&V and UQ will be very significant. In fact, no single organization has sufficient resources or expertise required to organize, conduct and maintain a comprehensive V&V and UQ program. What is needed is a systematic and standardized approach to establish and provide V&V and UQ resources at a national or even international level, with a consortium of partners from government, academia and industry. Specifically, what is needed is a structured and cost-effective knowledge base that collects, evaluates and stores verification and validation data, and shows how it can be used to perform V&V and UQ, leveraging collaboration and sharing of resources to support existing engineering and licensing procedures as well as science-based V&V and UQ processes. The Nuclear Energy Knowledge base for Advanced Modeling and Simulation (NE-KAMS) is being developed at the Idaho National Laboratory in conjunction with Bettis Laboratory, Sandia National Laboratories, Argonne National Laboratory, Utah State University and others with the objective of establishing a comprehensive and web-accessible knowledge base to provide V&V and UQ resources for M&S for nuclear reactor design, analysis and licensing. The knowledge base will serve as an important resource for technical exchange and collaboration that will enable credible and reliable computational models and simulations for application to nuclear power. NE-KAMS will serve as a valuable resource for the nuclear industry, academia, the national laboratories, the U.S. Nuclear Regulatory Commission (NRC) and the public and will help ensure the safe, economical and reliable operation of existing and future nuclear reactors.

Greg Weirs; Hyung Lee

2011-09-01T23:59:59.000Z

430

COBRA: A hybrid method for software cost estimation, benchmarking and risk assessment  

E-Print Network (OSTI)

Current cost estimation techniques have a number of drawbacks. For example, developing algorithmic models requires extensive past project data. Also, off-the-shelf models have been found to be difficult to calibrate but inaccurate without calibration. Informal approaches based on experienced estimators depend on estimators' availability and are not easily repeatable, as well as not being much more accurate than algorithmic techniques. In this paper we present a method for cost estimation that combines aspects of algorithmic and experiential approaches (referred to as COBRA, COst estimation, Benchmarking, and Risk Assessment). We find through a case study that cost estimates using COBRA show an average ARE of 0.09, and show that the results are easily usable for benchmarking and risk assessment purposes.
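
The "average ARE of 0.09" quoted above is presumably the mean absolute relative error over the case-study projects; the snippet below shows that conventional definition on made-up numbers (it is not the paper's data).

```python
# Mean absolute relative error (ARE) over paired actual/estimated project costs.
# The definition is the conventional one; the three cost figures are invented.
def mean_are(actuals, estimates):
    assert len(actuals) == len(estimates) and actuals
    return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

print(mean_are([120.0, 80.0, 200.0], [110.0, 88.0, 190.0]))  # ~0.078
```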

Lionel C. Briand; Khaled El Emam; Frank Bomarius

1998-01-01T23:59:59.000Z

431

COBRA: A Hybrid Method for Software Cost Estimation, Benchmarking, and Risk Assessment  

E-Print Network (OSTI)

Current cost estimation techniques have a number of drawbacks. For example, developing algorithmic models requires extensive past project data. Also, off-the-shelf models have been found to be difficult to calibrate but inaccurate without calibration. Informal approaches based on experienced estimators depend on estimators' availability and are not easily repeatable, as well as not being much more accurate than algorithmic techniques. In this paper we present a method for cost estimation that combines aspects of algorithmic and experiential approaches (referred to as COBRA, COst estimation, Benchmarking, and Risk Assessment). We find through a case study that cost estimates using COBRA show an average ARE of 0.09, and show that the results are easily usable for benchmarking and risk assessment purposes. 1 Introduction Project and program managers require accurate and reliable cost estimates to allocate and control project resources, and to make realistic bids on external contracts. ...

Lionel C. Briand; Khaled El Emam; Frank Bomarius

1997-01-01T23:59:59.000Z

432

IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases  

SciTech Connect

Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.

Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

2012-11-01T23:59:59.000Z

433

TOUGH2 User's Guide Version 2  

DOE Green Energy (OSTI)

TOUGH2 is a numerical simulator for nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. The chief applications for which TOUGH2 is designed are in geothermal reservoir engineering, nuclear waste disposal, environmental assessment and remediation, and unsaturated and saturated zone hydrology. TOUGH2 was first released to the public in 1991; the 1991 code was updated in 1994 when a set of preconditioned conjugate gradient solvers was added to allow a more efficient solution of large problems. The current Version 2.0 features several new fluid property modules and offers enhanced process modeling capabilities, such as coupled reservoir-wellbore flow, precipitation and dissolution effects, and multiphase diffusion. Numerous improvements in previously released modules have been made and new user features have been added, such as enhanced linear equation solvers, and writing of graphics files. The T2VOC module for three-phase flows of water, air and a volatile organic chemical (VOC), and the T2DM module for hydrodynamic dispersion in 2-D flow systems have been integrated into the overall structure of the code and are included in the Version 2.0 package. Data inputs are upwardly compatible with the previous version. Coding changes were generally kept to a minimum, and were only made as needed to achieve the additional functionalities desired. TOUGH2 is written in standard FORTRAN77 and can be run on any platform, such as workstations, PCs, Macintosh, mainframe and supercomputers, for which appropriate FORTRAN compilers are available. This report is a self-contained guide to application of TOUGH2 to subsurface flow problems. It gives a technical description of the TOUGH2 code, including a discussion of the physical processes modeled, and the mathematical and numerical methods used. Illustrative sample problems are presented along with detailed instructions for preparing input data.

Pruess, K.; Oldenburg, C.M.; Moridis, G.J.

1999-11-01T23:59:59.000Z

434

A nine year study of file system and storage benchmarking  

E-Print Network (OSTI)

Benchmarking is critical when evaluating performance, but is especially difficult for file and storage systems. Complex interactions between I/O devices, caches, kernel daemons, and other OS components result in behavior that is rather difficult to analyze. Moreover, systems have different features and optimizations, so no single benchmark is always suitable. The large variety of workloads that these systems experience in the real world also adds to this difficulty. In this article we survey 415 file system and storage benchmarks from 106 recent papers. We found that most popular benchmarks are flawed and many research papers do not provide a clear indication of true performance. We provide guidelines that we hope will improve future performance evaluations. To show how some widely used benchmarks can conceal or overemphasize overheads, we conducted a set of experiments. As a specific example, slowing down read operations on ext2 by a factor of 32 resulted in only a 2–5 % wall-clock slowdown in a popular compile benchmark. Finally, we discuss future work to improve file system and storage benchmarking.
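
The ext2 example above can be made quantitative with a one-line model; the 0.06-0.16% figure below is derived from the numbers in the abstract, not reported by the authors.

```latex
% If reads occupy a fraction f of wall-clock time and are slowed by a factor
% k, the observed wall-clock slowdown is approximately s = f (k - 1), so
\[
  f \;\approx\; \frac{s}{k-1} \;=\; \frac{0.02 \text{ to } 0.05}{32-1}
    \;\approx\; 0.06\% \text{ to } 0.16\%,
\]
% i.e. the compile workload spends well under 1% of its time in reads, which
% is why a 32x read slowdown is nearly invisible in that benchmark.
```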

Avishay Traeger; Erez Zadok; Nikolai Joukov; Charles P. Wright

2008-01-01T23:59:59.000Z

435

Methodology for developing Version 2.0 of the MECcheck™ materials for the 1992, 1993, and 1995 Model Energy Codes  

Science Conference Proceedings (OSTI)

To help builders comply with the Council of American Building Officials (CABO) Model Energy Code (MEC), and to help code officials enforce the MEC requirements, the US Department of Energy (DOE) directed Pacific Northwest National Laboratory (PNNL) to develop the MECcheck™ compliance materials. The materials include a compliance and enforcement manual for all the MEC requirements, prescriptive packages, software, and a trade-off worksheet (included in the compliance manual) to help comply with the thermal envelope requirements. The materials can be used for single-family and low-rise multifamily dwellings. The materials allow building energy efficiency measures (such as insulation levels) to be "traded off" against each other, allowing a wide variety of building designs to comply with the MEC. The materials were developed to provide compliance methods that are easy to use and understand. MECcheck compliance materials have been developed for three different editions of the MEC: the 1992, 1993, and 1995 editions. Although some requirements contained in the 1992, 1993, and 1995 MEC changed, the methodology used to develop the MECcheck materials for these three editions is essentially identical. This document explains the methodology used to produce the three MECcheck compliance approaches for meeting the MEC's thermal envelope requirements--the prescriptive package approach, the software approach, and the trade-off approach. The MECcheck materials are largely oriented to assisting the builder in meeting the most complicated part of the MEC--the building envelope Uo-, U-, and R-value requirements in Section 502 of the MEC. This document details the calculations and assumptions underlying the treatment of the MEC requirements in MECcheck, with a major emphasis on the building envelope requirements.
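
The envelope "trade-off" idea can be written compactly as an area-weighted UA comparison; the form below is an illustrative version of the general approach, not the exact MECcheck algorithm or its code coefficients.

```latex
% Area-weighted overall U-value of the envelope, and the trade-off test that
% lets a better-insulated component offset a weaker one:
\[
  U_o \;=\; \frac{\sum_i U_i A_i}{\sum_i A_i},
  \qquad
  \sum_i U_i^{\mathrm{proposed}} A_i \;\le\; \sum_i U_i^{\mathrm{max,\,code}} A_i .
\]
```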

Connell, L.M.; Lucas, R.G.; Taylor, Z.T.

1996-06-01T23:59:59.000Z

436

Establishing Benchmarks for DOE Commercial Building R&D and Program Evaluation: Preprint  

SciTech Connect

The U.S. Department of Energy (DOE) Building Technologies Program and the DOE research laboratories conduct a great deal of research on building technologies. However, differences in models and simulation tools used by various research groups make it difficult to compare results among studies. The authors have developed a set of 22 hypothetical benchmark buildings and weighting factors for nine locations across the country, for a total of 198 buildings.

Deru, M.; Griffith, B.; Torcellini, P.

2006-06-01T23:59:59.000Z

437

DISFRAC Version 2.0 Users Guide  

SciTech Connect

DISFRAC is the implementation of a theoretical, multi-scale model for the prediction of fracture toughness in the ductile-to-brittle transition temperature (DBTT) region of ferritic steels. Empirically-derived models of the DBTT region cannot legitimately be extrapolated beyond the range of existing fracture toughness data. DISFRAC requires only tensile properties and microstructural information as input, and thus allows for a wider range of application than empirical, toughness data dependent models. DISFRAC is also a framework for investigating the roles of various microstructural and macroscopic effects on fracture behavior, including carbide particle sizes, grain sizes, strain rates, and material condition. DISFRAC's novel approach is to assess the interaction effects of macroscopic conditions (geometry, loading conditions) with variable microstructural features on cleavage crack initiation and propagation. The model addresses all stages of the fracture process, from microcrack initiation within a carbide particle, to propagation of that crack through grains and across grain boundaries, finally to catastrophic failure of the material. The DISFRAC procedure repeatedly performs a deterministic analysis of microcrack initiation and propagation within a macroscopic crack plastic zone to calculate a critical fracture toughness value for each microstructural geometry set. The current version of DISFRAC, version 2.0, is a research code for developing and testing models related to cleavage fracture and transition toughness. The various models and computations have evolved significantly over the course of development and are expected to continue to evolve as testing and data collection continue. This document serves as a guide to the usage and theoretical foundations of DISFRAC v2.0. Feedback is welcomed and encouraged.

Cochran, Kristine B [ORNL]; Erickson, Marjorie A [ORNL]; Williams, Paul T [ORNL]; Klasky, Hilda B [ORNL]; Bass, Bennett Richard [ORNL]

2013-01-01T23:59:59.000Z

438

PTLOAD version 6.2  

Science Conference Proceedings (OSTI)

The PTLOAD Version 6.2 software allows users to compute operating temperatures and ratings for power transformers. Power transformers are one of the most expensive components of any transmission system. Energy companies need to maximize utilization of these assets, while at the same time protecting them from damage and ensuring system reliability. To assist utilities in planning transformer loading, PTLOAD implements calculation methods from the Institute of Electrical and Electronics Engineers (...

2007-10-15T23:59:59.000Z

439

REVIEW OF RESULTS FOR THE OECD/NEA PHASE VII BENCHMARK: STUDY OF SPENT FUEL COMPOSITIONS FOR LONG TERM DISPOSAL  

Science Conference Proceedings (OSTI)

This paper summarizes the problem specification and compares participants' results for the OECD/NEA/WPNCS Expert Group on Burn-up Credit Criticality Safety Phase VII Benchmark Study of Spent Fuel Compositions for Long-Term Disposal. The Phase VII benchmark was developed to study the ability of relevant computer codes and associated nuclear data to predict spent fuel isotopic compositions and corresponding keff values in a cask configuration over the time duration relevant to spent nuclear fuel (SNF) disposal. The benchmark was divided into two sets of calculations: (1) decay calculations out to 1,000,000 years for provided pressurized-water-reactor (PWR) UO2 discharged fuel compositions and (2) burnup credit criticality calculations for a representative cask model at selected time steps. Contributions from 15 organizations and companies in 10 countries were submitted to the Phase VII benchmark exercise. This paper provides a description of the Phase VII benchmark and detailed comparisons of the participants' isotopic compositions and keff values that were calculated with a diversity of computer codes and nuclear data sets. Differences observed in the calculated time-dependent nuclide densities are attributed to different decay data or code-specific numerical approximations. The variability of the keff results is consistent with the evaluated uncertainty associated with cross-section data.
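
The decay-calculation half of the benchmark boils down to evolving nuclide densities over very long times; the sketch below shows only the simplest case (independent exponential decay, no ingrowth from parents), with round-number half-lives that are illustrative rather than evaluation data.

```python
# Simple exponential decay of a few nuclides out to 1,000,000 years.
# Real burnup-credit calculations track full decay chains and many nuclides;
# the half-lives below are illustrative round numbers, not evaluation data.
import math

HALF_LIFE_YEARS = {"Am-241": 4.3e2, "Pu-239": 2.4e4, "Tc-99": 2.1e5}

def decay(n0: float, half_life: float, t: float) -> float:
    """Number density after t years of decay with the given half-life."""
    return n0 * math.exp(-math.log(2.0) * t / half_life)

for t in (0.0, 1.0e3, 1.0e4, 1.0e6):
    row = ", ".join(f"{nuc}: {decay(1.0, hl, t):.3e}" for nuc, hl in HALF_LIFE_YEARS.items())
    print(f"t = {t:>9.0f} y  {row}")
```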

Radulescu, Georgeta [ORNL]; Wagner, John C [ORNL]

2011-01-01T23:59:59.000Z

440

WS-BPEL Extensions for Versioning  

Science Conference Proceedings (OSTI)

This article proposes specific extensions for WS-BPEL (Business Process Execution Language) to support versioning of processes and partner links. It introduces new activities and extends existing activities, including partner links, invoke, receive, ... Keywords: BPEL, Business processes, SOA, Versioning

Matjaz B. Juric; Ana Sasa; Ivan Rozman

2009-08-01T23:59:59.000Z

441

Genome Majority Vote (GMV), Version 0.x  

NLE Websites -- All DOE Office Websites (Extended Search)

Genome Majority Vote (GMV), Version 0.x. The pipeline runs PRODIGAL gene predictions on all genomes, runs pan-reciprocal BLAST, and...

442

COMPUTER PROGRAM CCC USER'S MANUAL VERSION II.  

E-Print Network (OSTI)

COMPUTER PROGRAM CCC USER'S MANUAL, Version II. D. C. Mangold, M. J.

Mangold, D.C.

2013-01-01T23:59:59.000Z

443

European Lean Gasoline Direct Injection Vehicle Benchmark  

DOE Green Energy (OSTI)

Lean Gasoline Direct Injection (LGDI) combustion is a promising technical path for achieving significant improvements in fuel efficiency while meeting future emissions requirements. Though Stoichiometric Gasoline Direct Injection (SGDI) technology is commercially available in a few vehicles on the American market, LGDI vehicles are not, but can be found in Europe. Oak Ridge National Laboratory (ORNL) obtained a European BMW 1-series fitted with a 2.0l LGDI engine. The vehicle was instrumented and commissioned on a chassis dynamometer. The engine and after-treatment performance and emissions were characterized over US drive cycles (Federal Test Procedure (FTP), the Highway Fuel Economy Test (HFET), and US06 Supplemental Federal Test Procedure (US06)) and steady state mappings. The vehicle's micro hybrid features (engine stop-start and intelligent alternator) were benchmarked as well during the course of that study. The data was analyzed to quantify the benefits and drawbacks of the lean gasoline direct injection and micro hybrid technologies from a fuel economy and emissions perspective with respect to the US market. Additionally, that data will be formatted to develop, substantiate, and exercise vehicle simulations with conventional and advanced powertrains.

Chambon, Paul H [ORNL]; Huff, Shean P [ORNL]; Edwards, Kevin Dean [ORNL]; Norman, Kevin M [ORNL]; Prikhodko, Vitaly Y [ORNL]; Thomas, John F [ORNL]

2011-01-01T23:59:59.000Z

444

Analysis Benchmark of the Single Heater Test  

SciTech Connect

The Single Heater Test (SHT) is the first of three in-situ thermal tests included in the site characterization program for the potential nuclear waste monitored geologic repository at Yucca Mountain. The heating phase of the SHT started in August 1996 and was concluded in May 1997 after 9 months of heating. Cooling continued until January 1998, at which time post-test characterization of the test block commenced. Numerous thermal, hydrological, mechanical, and chemical sensors monitored the coupled processes in the unsaturated fractured rock mass around the heater (CRWMS M&O 1999). The objective of this calculation is to benchmark a numerical simulation of the rock mass thermal behavior against the extensive data set that is available from the thermal test. The scope is limited to three-dimensional (3-D) numerical simulations of the computational domain of the Single Heater Test and surrounding rock mass. This calculation supports the waste package thermal design methodology, and is developed by Waste Package Department (WPD) under Office of Civilian Radioactive Waste Management (OCRWM) procedure AP-3.12Q, Revision 0, ICN 3, BSCN 1, Calculations.

H.M. Wade; H. Marr; M.J. Anderson

2006-07-27T23:59:59.000Z

445

Assessing Energy Resources Webinar Text Version  

Energy.gov (U.S. Department of Energy (DOE))

Download the text version of the audio from the DOE Office of Indian Energy webinar on assessing energy resources.

446

Strategic Energy Planning Webinar Text Version  

Energy.gov (U.S. Department of Energy (DOE))

Download the text version of the audio from the DOE Office of Indian Energy webinar on strategic energy planning.

447

Benchmarking Optimization Software with COPS 3.0  

E-Print Network (OSTI)

May 11, 2004 ... Benchmarking Optimization Software with COPS 3.0. Elizabeth D. Dolan (dolan * **at*** cs.unc.edu) Jorge J. More' (more ***at*** mcs.anl.gov)

448

An Independent Benchmarking of SDP and SOCP Solvers  

E-Print Network (OSTI)

Jul 23, 2001 ... The codes were run on a standard platform and on all the benchmark problems provided by the organizers of the challenge. A total of ten codes ...

449

The Extreme Benchmark Suite: Measuring High-Performance Embedded Systems  

E-Print Network (OSTI)

The Extreme Benchmark Suite (XBS) is designed to support performance measurement of highly parallel “extreme ” processors, many of which are designed to replace custom hardware implementations. XBS is designed to avoid many of the problems that occur when using existing benchmark suites with nonstandard and experimental architectures. In particular, XBS is intended to provide a fair comparison of a wide range of architectures, from general-purpose processors to hard-wired ASIC implementations. XBS has a clean modular structure to reduce porting effort, and is designed to be usable with slow cycle-accurate simulators. This work presents the motivation for the creation of XBS and describes in detail the XBS framework. Several benchmarks implemented with this framework are discussed, and these benchmarks are used to compare a standard platform, an experimental architecture, and custom

Steven Gerding; Krste Asanović

2005-01-01T23:59:59.000Z

450

Towards Systematic Benchmarking in Answer Set Programming: The Dagstuhl Initiative  

E-Print Network (OSTI)

for different designs of a benchmarking and testing environment for ASP, we used the systems competition at the Dagstuhl Seminar. The following answer set programming systems participated in that initial competition: aspps (University of Kentucky), assat (UST Hong Kong), cmodels (University of Texas), dlv (Technical University of Vienna), and smodels (Technical University of Helsinki). (Footnote: Affiliated with the School of Computing Science at Simon Fraser University, Burnaby, Canada.) The difficulty that emerged right away was that these systems do not have a common input language nor do they agree on all functionalities. This led to the introduction of three different (major) categories of benchmarks: Ground: ground instances of coded benchmarks. As of now, these ground instances are produced by lparse or by the dlv grounder. These benchmarks can be used to test the performance of ASP solvers accepting as input ground (propositional) programs in output formats of lparse or the dlv

Paul Borchert; Christian Anger; Torsten Schaub; Miroslaw Truszczynski

2004-01-01T23:59:59.000Z

451

Spread narrows between Brent and WTI crude oil benchmark prices ...  

U.S. Energy Information Administration (EIA)

Spot prices for benchmarks West Texas Intermediate (WTI) and North Sea Brent crude oil neared parity of around $109 per barrel July 19, and the Brent-WTI spread was ...

452

The Extreme Benchmark Suite : measuring high-performance embedded systems  

E-Print Network (OSTI)

The Extreme Benchmark Suite (XBS) is designed to support performance measurement of highly parallel "extreme" processors, many of which are designed to replace custom hardware implementations. XBS is designed to avoid many ...

Gerding, Steven (Steven Bradley)

2005-01-01T23:59:59.000Z

453

Benchmarking of OEM Hybrid Electric Vehicles at NREL: Milestone Report  

DOE Green Energy (OSTI)

A milestone report that describes NREL's progress and activities related to the DOE FY2001 Annual Operating Plan milestone entitled "Benchmark 2 new production or pre-production hybrids with ADVISOR."

Kelly, K. J.; Rajagopalan, A.

2001-10-26T23:59:59.000Z

454

A benchmark study on the thermal conductivity of nanofluids  

E-Print Network (OSTI)

This article reports on the International Nanofluid Property Benchmark Exercise, or INPBE, in which the thermal conductivity of identical samples of colloidally stable dispersions of nanoparticles or “nanofluids,” was ...

Buongiorno, Jacopo

455

Benchmarking the Mean Streets of NYC and Beyond  

NLE Websites -- All DOE Office Websites (Extended Search)

Benchmarking the Mean Streets of NYC and Beyond. Speaker(s): Conor Laver. Date: September 30, 2013 - 12:00pm - 1:00pm. Location: 90-3122. Seminar Host/Point of Contact: Louis-Benoit...

456

Building America Research Benchmark Definition: Updated December 2009  

SciTech Connect

The Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without chasing a 'moving target.'

Hendron, R.; Engebrecht, C.

2010-01-01T23:59:59.000Z

457

Investigator Manual Version 3.0  

E-Print Network (OSTI)

Investigator Manual, Version 3.0, January 2013. Human Subjects Protection Program, The University of Arizona.

Arizona, University of

458

SIMULATE-E benchmarking of Pilgrim Nuclear Power Station  

Science Conference Proceedings (OSTI)

The CASMO-SIMULATE-E methodology is benchmarked to qualify its ability to determine power distributions and critical eigenvalues, keff. Once the biases and uncertainties in this methodology are quantified, CASMO/SIMULATE-E will be utilized to generate reload fuel patterns and control rod sequences, and to provide operational support for Pilgrim Nuclear Power Station (PNPS). Only the results of the hot SIMULATE-E benchmarking are presented here.

DeWitt, G.L.; Hu, L.C.; Antonopoulos, P.T.

1986-01-01T23:59:59.000Z

459

A Version Model for Aspect Dependency Management  

Science Conference Proceedings (OSTI)

With Aspect-Oriented Programming (AOP), a new type of system unit is introduced: the aspect. One observed characteristic of AOP is that it results in a large number of additional (coarse-grained to fine-grained) system units (aspects) ready to be composed ...

Elke Pulvermueller; Andreas Speck; James Coplien

2001-09-01T23:59:59.000Z

460

Strategies for energy benchmarking in cleanrooms and laboratory-type facilities  

E-Print Network (OSTI)

... benchmark by the actual energy consumption (Figure 4). The effectiveness metrics from multiple buildings ...

Sartor, Dale; Piette, Mary Ann; Tschudi, William; Fok, Stephen

2000-01-01T23:59:59.000Z



461

The Implementation of an Explicit Charging and Discharge Lightning Scheme within the WRF-ARW Model: Benchmark Simulations of a Continental Squall Line, a Tropical Cyclone, and a Winter Storm  

Science Conference Proceedings (OSTI)

This work describes the recent implementation of explicit lightning physics within the Weather Research and Forecasting (WRF) Model. Charging of hydrometeors consists of five distinct noninductive parameterizations, polarization of cloud water, ...

Alexandre O. Fierro; Edward R. Mansell; Donald R. MacGorman; Conrad L. Ziegler

2013-07-01T23:59:59.000Z

462

Finding benchmark brown dwarfs to probe the IMF as a function of time  

E-Print Network (OSTI)

Using a simulated disk brown dwarf (BD) population, we find that new large area infrared surveys are expected to identify enough BDs covering wide enough mass-age ranges to potentially measure the mass function down to ~0.03 Msun, and the BD formation history out to 10 Gyr, at a level capable of establishing if BD formation follows star formation. We suggest these capabilities are best realised by spectroscopic calibration of BD properties (Teff, g and [M/H]) which, when combined with a measured luminosity and an evolutionary model, can give BD mass and age relatively independent of BD atmosphere models. Such calibration requires an empirical understanding of how BD spectra are affected by variations in these properties, and thus the identification and study of "benchmark BDs" whose age and composition can be established independently. We identify the best sources of benchmark BDs as young open cluster members, moving group members, and wide (>1000 AU) BD companions to both subgiant stars and high mass white dwarfs (WDs). We have used 2MASS to measure a wide L dwarf companion fraction of 2.7(+0.7/-0.5)%, which equates to a BD companion fraction of 34(+9/-6)% for an alpha~1 companion mass function. Using this value we simulate populations of wide BD binaries, and estimate that 80(+21/-14) subgiant-BD binaries, and 50(+13/-10) benchmark WD-BD binaries could be identified using current and new facilities. The WD-BD binaries should all be identifiable using the Large Area Survey component of UKIDSS combined with Sloan. Discovery of the subgiant-BD binaries will require a NIR imaging campaign around a large (~900) sample of Hipparcos subgiants. If identified, spectral studies of these benchmark brown dwarfs could reveal the spectral sensitivities across the Teff, g and [M/H] space probed by new surveys.

D. J. Pinfield; H. R. A. Jones; P. W. Lucas; T. R. Kendall; S. L. Folkes; A. C. Day-Jones; R. J. Chappelle; I. A. Steele

2006-03-13T23:59:59.000Z

463

Computational Benchmark Calculations Relevant to the Neutronic Design of the Spallation Neutron Source (SNS)  

Science Conference Proceedings (OSTI)

The Spallation Neutron Source (SNS) will provide an intense source of low-energy neutrons for experimental use. The low-energy neutrons are produced by the interaction of a high-energy (1.0 GeV) proton beam on a mercury (Hg) target and slowed down in liquid hydrogen or light water moderators. Computer codes and computational techniques are being benchmarked against relevant experimental data to validate and verify the tools being used to predict the performance of the SNS. The LAHET Code System (LCS), which includes LAHET, HTAPE, and HMCNP (a modified version of MCNP version 3b), has been applied to the analysis of experiments that were conducted in the Alternating Gradient Synchrotron (AGS) facility at Brookhaven National Laboratory (BNL). In the AGS experiments, foils of various materials were placed around a mercury-filled stainless steel cylinder, which was bombarded with protons at 1.6 GeV. Neutrons created in the mercury target activated the foils. Activities of the relevant isotopes were accurately measured and compared with calculated predictions. Measurements at BNL were provided in part by collaborating scientists from JAERI as part of the AGS Spallation Target Experiment (ASTE) collaboration. To date, calculations have shown good agreement with measurements.
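Benchmark comparisons of this kind are commonly summarized as calculated-to-experimental (C/E) ratios for each foil activity. The sketch below is purely illustrative: the isotopes are plausible activation products, but the activity values are invented placeholders, not AGS/ASTE data.

    # Illustrative calculated-to-experimental (C/E) comparison for foil activities.
    # All activity values below are placeholders, not measured AGS/ASTE results.
    calculated = {"Au-198": 1.05e4, "In-115m": 3.2e3, "Ni-57": 8.7e2}   # Bq, hypothetical
    measured   = {"Au-198": 1.00e4, "In-115m": 3.5e3, "Ni-57": 9.0e2}   # Bq, hypothetical

    for isotope, c in calculated.items():
        e = measured[isotope]
        print(f"{isotope:8s} C/E = {c / e:.3f}")   # values near 1.0 indicate good agreement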

Gallmeier, F.X.; Glasgow, D.C.; Jerde, E.A.; Johnson, J.O.; Yugo, J.J.

1999-11-14T23:59:59.000Z

464

Plant Energy Benchmarking: A Ten Year Retrospective of the ENERGY STAR Energy Performance Indicators (ES-EPI)  

E-Print Network (OSTI)

Over the past several years, there has been growing interest among policy makers and others in the role that benchmarking industrial energy efficiency can play in climate, air, and other potential regulatory activities. For over ten years, the US EPA has supported the development of sector-specific industrial energy efficiency benchmarks, known as ENERGY STAR Energy Performance Indicators (ES-EPI). To date there are ES-EPI either completed or under development for fourteen broad industries. Within these industries, ES-EPI account for over two dozen sub-sectors and many more detailed product types. Newer versions, or updates, of the ES-EPI for three of the industries have been developed in recent years. Through the process of updating these ES-EPI, the program has been able to observe changes in the energy performance of each sector as well as the range in performance found in the sector. This paper provides an overview of the approach that has been used in this research to develop these ES-EPI, summarizing the industry-specific and general findings regarding the range of performance within and across industries. Observations about industrial plant benchmarking and lessons learned will be explored. In general, there are no sectors that are easily represented by a simple energy-per-widget benchmark; less energy-intensive sectors tend to exhibit a wider range of performance than energy-intensive ones; and changes over time in the level and range of energy performance, i.e., industry curve shift, for ES-EPI that have been updated do not reveal any single pattern.
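As a toy illustration of plant-level benchmarking, one can rank a plant's energy intensity against a sector distribution and report a percentile-style score. This is only a sketch with invented numbers; the actual ES-EPI use statistical models rather than a simple energy-per-widget ranking, as the abstract itself notes.

    # Toy plant benchmark: percentile rank of energy intensity within a sector.
    # All numbers are invented; the real ES-EPI are regression-based, not a simple ranking.
    sector_intensities = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 7.1]   # MMBtu per unit of product
    plant_intensity = 4.6

    # Share of sector plants performing worse (higher intensity) than this plant.
    worse = sum(1 for x in sector_intensities if x > plant_intensity)
    percentile = 100.0 * worse / len(sector_intensities)
    print(f"Plant performs better than {percentile:.0f}% of sector plants")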

Boyd, G.; Tunnessen, W.

2013-01-01T23:59:59.000Z

465

Development and benchmarking of higher energy neutron transport data libraries  

Science Conference Proceedings (OSTI)

Neutron cross-section evaluations covering the energy range from 10^-11 to 100 MeV have been prepared for several materials. The principal method used to generate this data base has employed statistical-preequilibrium nuclear models, sophisticated phase shift analyses, and R-matrix techniques. The library takes advantage of formats developed for Version 6 of the Evaluated Nuclear Data File, ENDF. Methods to efficiently utilize the ENDF/B-VI representation of this library in the MCNP Monte Carlo code have been developed. MCNP results using the new library have been compared with calculated results using codes or data based upon intranuclear cascade models. 7 refs., 8 figs.

Arthur, E.D.; Young, P.G.; Perry, R.T.; Madland, D.G.; MacFarlane, R.E.; Little, R.C.; Bozoian, M.; LaBauve, R.J.

1988-01-01T23:59:59.000Z

466

Strategies for energy benchmarking in cleanrooms and laboratory-type facilities  

SciTech Connect

Buildings with cleanrooms and laboratories are growing in terms of total floor area and energy intensity. This building type is common in institutions such as universities and in many industries such as microelectronics and biotechnology. These buildings, with high ventilation rates and special environmental considerations, consume from 4 to 100 times more energy per square foot than conventional commercial buildings. Owners and operators of such facilities know they are expensive to operate, but have little way of knowing if their facilities are efficient or inefficient. A simple comparison of energy consumption per square foot is of little value. A growing interest in benchmarking is also fueled by: a new U.S. Executive Order removing the exemption of federal laboratories from energy efficiency goals, setting a 25% savings target, and calling for baseline guidance to measure progress; a new U.S. EPA and U.S. DOE initiative, Laboratories for the 21st Century, establishing voluntary performance goals and criteria for recognition; and a new PG&E market transformation program to improve energy efficiency in high-tech facilities, including a cleanroom energy use benchmarking project. This paper identifies the unique issues associated with benchmarking energy use in high-tech facilities. Specific options discussed include statistical comparisons, point-based rating systems, model-based techniques, and hierarchical end-use and performance-metrics evaluations.
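One of the model-based options mentioned above can be sketched as an "effectiveness" ratio: a modeled benchmark consumption divided by the facility's actual metered consumption. The sketch below uses invented numbers and a deliberately crude benchmark model (base intensity plus a ventilation term); it is not the paper's actual benchmarking method.

    # Sketch of a model-based effectiveness metric for a laboratory-type facility.
    # benchmark_kwh is a hypothetical simple model; actual_kwh is metered use (invented).

    def benchmark_kwh(floor_area_ft2, ach, kwh_per_ft2_base=30.0, kwh_per_ft2_per_ach=2.5):
        """Very simple benchmark model: base intensity plus a ventilation-rate term."""
        return floor_area_ft2 * (kwh_per_ft2_base + kwh_per_ft2_per_ach * ach)

    actual_kwh = 2.4e6                       # metered annual consumption (invented)
    modeled = benchmark_kwh(floor_area_ft2=40_000, ach=10)

    effectiveness = modeled / actual_kwh     # >1 suggests better-than-benchmark performance
    print(f"Benchmark: {modeled:,.0f} kWh, actual: {actual_kwh:,.0f} kWh, "
          f"effectiveness = {effectiveness:.2f}")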

Sartor, Dale; Piette, Mary Ann; Tschudi, William; Fok, Stephen

2000-06-01T23:59:59.000Z

467

Interim report on verification and benchmark testing of the NUFT computer code  

Science Conference Proceedings (OSTI)

This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be Cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.
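Verification against an analytical solution is typically quantified with simple error norms. The sketch below compares a hypothetical numerical profile with an analytical one using relative L2 and maximum absolute errors; both arrays are placeholders, not NUFT output.

    # Sketch: quantify agreement between a numerical solution and an analytical one
    # using relative L2 and maximum absolute errors. Arrays are placeholders, not NUFT results.
    import math

    analytical = [1.000, 0.905, 0.819, 0.741, 0.670]   # e.g. samples of exp(-x)
    numerical  = [1.000, 0.903, 0.822, 0.739, 0.668]   # hypothetical code output

    diff_sq = sum((n - a) ** 2 for n, a in zip(numerical, analytical))
    ref_sq  = sum(a ** 2 for a in analytical)
    rel_l2  = math.sqrt(diff_sq / ref_sq)
    max_err = max(abs(n - a) for n, a in zip(numerical, analytical))

    print(f"relative L2 error = {rel_l2:.2e}, max abs error = {max_err:.2e}")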

Lee, K.H.; Nitao, J.J. [Lawrence Livermore National Lab., CA (United States); Kulshrestha, A. [Weiss Associates, Emeryville, CA (United States)

1993-10-01T23:59:59.000Z

468

Continuous Reliability Enhancement for Wind (CREW) database: wind plant reliability benchmark.  

SciTech Connect

To benchmark the current U.S. wind turbine fleet reliability performance and identify the major contributors to component-level failures and other downtime events, the Department of Energy funded the development of the Continuous Reliability Enhancement for Wind (CREW) database by Sandia National Laboratories. This report is the third annual Wind Plant Reliability Benchmark, publicly reporting CREW findings for the wind industry. The CREW database uses both high-resolution Supervisory Control and Data Acquisition (SCADA) data from operating plants and Strategic Power Systems' ORAPWind® (Operational Reliability Analysis Program for Wind) data, which consist of downtime and reserve event records and daily summaries of various time categories for each turbine. Together, these data are used as inputs into CREW's reliability modeling. The results presented here include: the primary CREW Benchmark statistics (operational availability, utilization, capacity factor, mean time between events, and mean downtime); time accounting from an availability perspective; time accounting in terms of the combination of wind speed and generation levels; power curve analysis; and top system and component contributors to unavailability.
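The primary benchmark statistics listed above can be illustrated with a toy calculation from downtime event records. The figures and the simplified definitions below are assumptions for illustration only, not CREW's exact algorithms or published values.

    # Toy illustration of fleet-style reliability statistics from downtime events.
    # Numbers and simplified definitions are assumptions, not CREW results.
    period_hours = 8760.0                              # one year for one turbine
    downtime_events_h = [12.0, 4.5, 30.0, 2.0, 9.5]    # hypothetical downtime durations (h)
    energy_mwh = 4300.0                                # hypothetical annual production
    rated_mw = 1.5                                     # hypothetical turbine rating

    total_downtime = sum(downtime_events_h)
    available_hours = period_hours - total_downtime

    operational_availability = available_hours / period_hours
    mean_downtime = total_downtime / len(downtime_events_h)
    mean_time_between_events = available_hours / len(downtime_events_h)
    capacity_factor = energy_mwh / (rated_mw * period_hours)

    print(f"availability = {operational_availability:.3f}")
    print(f"mean downtime = {mean_downtime:.1f} h, MTBE = {mean_time_between_events:.0f} h")
    print(f"capacity factor = {capacity_factor:.2f}")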

Hines, Valerie Ann-Peters; Ogilvie, Alistair B.; Bond, Cody R.

2013-09-01T23:59:59.000Z

469

Mesoscale Benchmark Demonstration Problem 1: Mesoscale Simulations of Intra-granular Fission Gas Bubbles in UO2 under Post-irradiation Thermal Annealing  

SciTech Connect

A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling methods used in this study.
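One mechanism common to all of the benchmarked methods is the diffusion of gas atoms and vacancies. The sketch below is a generic explicit finite-difference step for a 1-D concentration profile, included only to illustrate that mechanism; it does not reproduce the phase-field, Potts, or kinetic Monte Carlo codes, and all parameters are arbitrary.

    # Generic 1-D explicit finite-difference diffusion step, illustrating the kind of
    # Xe/vacancy transport the mesoscale models resolve. Parameters are arbitrary.
    D = 1.0e-2                 # diffusivity (arbitrary units)
    dx = 1.0                   # grid spacing
    dt = 0.4 * dx * dx / D     # satisfies the explicit stability limit dt <= dx^2 / (2*D)

    c = [0.0] * 50
    c[25] = 1.0                # initial concentration spike

    for _ in range(200):
        new_c = c[:]
        for i in range(1, len(c) - 1):
            new_c[i] = c[i] + D * dt / (dx * dx) * (c[i + 1] - 2.0 * c[i] + c[i - 1])
        c = new_c

    print(f"peak concentration after diffusion: {max(c):.3f}")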

Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson, David

2012-04-11T23:59:59.000Z

470

NREL: PVWatts Site Specific Data Calculator (Version 1)  

NLE Websites -- All DOE Office Websites (Extended Search)

The PVWatts™ Site Specific Data calculator allows users to select a photovoltaic (PV) system location from a defined list of options. For locations within the United States and its territories, users select a location from a map of 239 options. For international locations, users select a location from a drop-down menu of options. The PVWatts Site Specific Data calculator uses hourly typical meteorological year (TMY) weather data and a PV performance model to estimate annual energy production and cost savings for a crystalline silicon PV system. For locations in the United States and its territories, the PVWatts Version 1 calculator uses NREL TMY data. For other locations, it uses TMY data from the Solar and Wind Energy Resource Assessment
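A greatly simplified version of this kind of energy estimate, using an annual-average calculation rather than PVWatts' hourly TMY simulation, is sketched below. The system size, derate factor, solar resource value, and electricity rate are placeholder assumptions, not PVWatts defaults or outputs.

    # Greatly simplified PV energy/savings estimate (annual average), NOT the hourly
    # TMY-based PVWatts model. All inputs are placeholder assumptions.
    dc_rating_kw = 4.0          # nameplate DC rating (assumed)
    derate = 0.77               # overall DC-to-AC derate factor (assumed)
    solar_resource = 5.0        # kWh/m2/day, roughly peak-sun hours (assumed)
    electricity_rate = 0.12     # $/kWh (assumed)

    annual_kwh = dc_rating_kw * derate * solar_resource * 365.0
    annual_savings = annual_kwh * electricity_rate

    print(f"Estimated annual production: {annual_kwh:,.0f} kWh")
    print(f"Estimated annual savings:    ${annual_savings:,.0f}")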

471

NREL: PVWatts - PVWatts Grid Data Calculator (Version 2)  

NLE Websites -- All DOE Office Websites (Extended Search)

The PVWatts™ Grid Data calculator allows users to select a photovoltaic (PV) system location in the United States from an interactive map. The Grid Data calculator uses hourly typical meteorological year weather data and a PV performance model to estimate annual energy production and cost savings for a crystalline silicon PV system. It allows users to create estimated performance data for any location in the United States or its territories by selecting a site on a 40-km gridded map. The 40-km Grid Data calculator considers data from a climatologically similar typical meteorological year data station and site-specific solar resource and maximum temperature information to provide PV performance estimation. In this version, performance is first calculated for the nearest TMY2

472

Benchmarking of RESRAD-OFFSITE: transition from RESRAD (onsite) to RESRAD-OFFSITE and comparison of the RESRAD-OFFSITE predictions with peer codes.

SciTech Connect

The main purpose of this report is to document the benchmarking results and verification of the RESRAD-OFFSITE code as part of the quality assurance requirements of the RESRAD development program. This documentation will enable the U.S. Department of Energy (DOE) and its contractors, and the U.S. Nuclear Regulatory Commission (NRC) and its licensees and other stakeholders to use the quality-assured version of the code to perform dose analysis in a risk-informed and technically defensible manner to demonstrate compliance with the NRC's License Termination Rule, Title 10, Part 20, Subpart E, of the Code of Federal Regulations (10 CFR Part 20, Subpart E); DOE's 10 CFR Part 834, Order 5400.5, ''Radiation Protection of the Public and the Environment''; and other Federal and State regulatory requirements as appropriate. The other purpose of this report is to document the differences and similarities between the RESRAD (onsite) and RESRAD-OFFSITE codes so that users (dose analysts and risk assessors) can make a smooth transition from use of the RESRAD (onsite) code to use of the RESRAD-OFFSITE code for performing both onsite and offsite dose analyses. The evolution of the RESRAD-OFFSITE code from the RESRAD (onsite) code is described in Chapter 1 to help the dose analyst and risk assessor make a smooth conceptual transition from the use of one code to that of the other. Chapter 2 provides a comparison of the predictions of RESRAD (onsite) and RESRAD-OFFSITE for an onsite exposure scenario. Chapter 3 documents the results of benchmarking RESRAD-OFFSITE's atmospheric transport and dispersion submodel against the U.S. Environmental Protection Agency's (EPA's) CAP88-PC (Clean Air Act Assessment Package-1988) and ISCLT3 (Industrial Source Complex-Long Term) models. Chapter 4 documents the comparison results of the predictions of the RESRAD-OFFSITE code and its submodels with the predictions of peer models. This report was prepared by Argonne National Laboratory's (Argonne's) Environmental Science Division. This work is jointly sponsored by the NRC's Office of Nuclear Regulatory Research and DOE's Office of Environment, Safety and Health and Office of Environmental Management. The approaches and/or methods described in this report are provided for information only. Use of product or trade names is for identification purposes only and does not constitute endorsement either by DOE, the NRC, or Argonne.

Yu, C.; Gnanapragasam, E.; Cheng, J.-J.; Biwer, B.

2006-05-22T23:59:59.000Z

473

Characterization of Computational Grid Resources Using Low-level Benchmarks  

E-Print Network (OSTI)

An important factor that needs to be taken into account by end-users and systems (schedulers, resource brokers, policy brokers) when mapping applications to the Grid is the performance capacity of hardware resources attached to the Grid and made available through its Virtual Organizations (VOs). In this paper, we examine the problem of characterizing the performance capacity of Grid resources using benchmarking. We examine the conditions under which such characterization experiments can be implemented in a Grid setting and present the challenges that arise in this context. We specify a small number of performance metrics and propose a suite of micro-benchmarks to estimate these metrics for clusters that belong to large Virtual Organizations. We describe GridBench, a tool developed to administer benchmarking experiments, publish their results, and produce graphical representations of their metrics. We describe benchmarking experiments conducted with, and published through, GridBench, and show how they can help end-users assess the performance capacity of resources that belong to a target Virtual Organization. Finally, we examine the advantages of this approach over solutions implemented currently in existing Grid infrastructures. We conclude that it is essential to provide benchmarking services in the Grid infrastructure, in order to enable the attachment of performance-related metadata to resources belonging to Virtual Organizations and the retrieval of such metadata by end-users and other Grid systems.
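A low-level benchmark of the kind described can be as simple as timing a floating-point kernel on each resource and reporting a rate. The sketch below measures an approximate MFLOP/s figure for a naive dot product; it is only an illustration of a micro-benchmark, not part of GridBench itself.

    # Minimal low-level benchmark: approximate MFLOP/s of a naive dot product.
    # Illustrative only; GridBench's actual micro-benchmarks are not reproduced here.
    import time

    N = 2_000_000
    a = [1.0001] * N
    b = [0.9999] * N

    start = time.perf_counter()
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y          # one multiply and one add per element
    elapsed = time.perf_counter() - start

    mflops = 2.0 * N / elapsed / 1.0e6
    print(f"dot product = {acc:.2f}, ~{mflops:.1f} MFLOP/s (pure-Python loop)")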

George Tsouloupas; Marios Dikaiakos

2004-01-01T23:59:59.000Z

474

The Role of Benchmarking in Promoting Strong Energy Management Systems  

E-Print Network (OSTI)

The significance of formalized energy management practices and programs in driving and sustaining energy efficiency improvements within the industrial sector has become more widely recognized over the past several years. The release of the ISO 50001 energy management standard will also further elevate the role of energy management systems. For more than 10 years, the US EPA's ENERGY STAR Commercial and Industrial program has focused on promoting and supporting the development of strong corporate management programs. A key aspect of facilitating the establishment of energy management programs has been the development of benchmarking tools that help companies evaluate their energy performance and practices. This paper will examine some of the lessons learned in developing both quantitative and qualitative energy management benchmarking tools and the importance of establishing good energy performance indicators. The paper will examine the pros and cons of different types of quantitative energy performance benchmarks. The value of qualitative benchmarking tools to gauge management practices will also be discussed. Lastly, recommendations for how to further the development of energy benchmarks will be presented.

Tunnessen, W.

2010-01-01T23:59:59.000Z

475

DOE Commercial Reference Buildings Summary of Changes Between Versions  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE Commercial Reference Buildings: Summary of Changes between Versions, November 2012. Changes from version 1.2_4.0 to 1.3_5.0 (Applicable Model(s) -- Change):
-- All: Transitioned from EnergyPlus 4.0 to EnergyPlus 5.0
-- All: For fan power calculations, fan nameplate horsepower corrected from 90% brake horsepower to 110% brake horsepower
-- All: Removed multipliers on roof surface infiltration because EnergyPlus now counts the roof surface in infiltration per exterior surface area calculations
-- All: Added parking lot exterior lighting
-- All: Updated headers to reflect new name for technical report reference
-- All models with DX cooling: Changed COP calculation to remove fan power at ARI conditions, not max. allowable fan power (see Ref. Bldgs. Technical Report for more info.)
-- All models with DX cooling: Changed cooling performance curves to reflect

476

NIST Photoionization of CO2 Version History  

Science Conference Proceedings (OSTI)

Photoionization of CO2. Version History. Example ... 1.0). [Online] Available: http://physics.nist.gov/CO2 [year, month day]. ...

2010-10-05T23:59:59.000Z

477

Energy Basics: Wind Power Animation (Text Version)  

Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Text version of the wind power animation from the EERE Energy Basics site (Renewable Energy: Wind).

478

Hydroelectric Webinar Presentation Slides and Text Version  

Energy.gov (U.S. Department of Energy (DOE))

Download presentation slides and a text version of the audio from the DOE Office of Indian Energy webinar on hydroelectric renewable energy.