National Library of Energy BETA

Sample records for minimum computer requirements

  1. Program Evaluation: Minimum EERE Requirements

    Broader source: Energy.gov [DOE]

    The minimum requirements for EERE's in-progress peer reviews are described below. Given the diversity of EERE programs and activities, a great deal of flexibility is provided within these...

  2. HEAT Loan Minimum Standards and Requirements | Department of Energy

    Energy Savers [EERE]

    Presents additional resources on loan standards and requirements from Elise Avers' presentation on HEAT Loan Minimum Standards and Requirements. PDF: Minimum Standards and Requirements. More Documents & Publications: Building America Best Practices Series Vol. 14: Energy Renovations - HVAC: A Guide for Contractors to Share with Homeowners; STEP Financial Incentives Summary; Energy Saver 101: Home

  3. DOE CYBER SECURITY EBK: MINIMUM CORE COMPETENCY TRAINING REQUIREMENTS

    Office of Environmental Management (EM)

    PDF: DOE CYBER SECURITY EBK: MINIMUM CORE COMPETENCY TRAINING REQUIREMENTS. More Documents & Publications: DOE CYBER SECURITY EBK: CORE COMPETENCY TRAINING REQUIREMENTS: CA Authorizing Official Designated Representative (AODR)

  4. HEAT Loan Minimum Standards and Requirements

    Energy Savers [EERE]

    you must meet the following minimum standards listed below. * New natural gas or propane boilers must be at least 90% AFUE to be eligible. * New oil boilers must be at least...

  5. Minimum Efficiency Requirements Tables for Heating and Cooling Product Categories

    Energy Savers [EERE]

    The Federal Energy Management Program (FEMP) created tables that mirror American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1-2013 tables, which include minimum efficiency requirements for FEMP-designated and ENERGY STAR-qualified heating and cooling product

  6. "Table A52. Nonswitchable Minimum Requirements and Maximum...

    U.S. Energy Information Administration (EIA) Indexed Site

    Consumption" " Potential by Census Region, 1991" " (Estimates in Physical Units)" ... (a) Minimum consumption represents actual 1991 consumption decreased by the" "quantity of ...

  7. Can Cloud Computing Address the Scientific Computing Requirements for DOE Researchers? Well, Yes, No and Maybe

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    January 30, 2012. Jon Bashor, Jbashor@lbl.gov, +1 510-486-5849. Magellan at NERSC. After a two-year study of the feasibility of cloud computing systems for meeting the ever-increasing computational needs of scientists,

  8. Present and Future Computing Requirements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Cosmology SciDAC-3 Project Ann Almgren (LBNL) Nick Gnedin (FNAL) Dave Higdon (LANL) Rob Ross (ANL) Martin White (UC Berkeley LBNL) Large Scale Production Computing and Storage...

  9. ASHRAE Minimum Efficiency Requirements Tables for Heating and Cooling Product Categories

    Broader source: Energy.gov [DOE]

    The Federal Energy Management Program (FEMP) created tables that mirror American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1-2013 tables, which include minimum efficiency requirements for FEMP-designated and ENERGY STAR-qualified heating and cooling product categories. Download the tables below to incorporate FEMP and ENERGY STAR purchasing requirements into federal product acquisition documents.

  10. Present and Future Computing Requirements for PETSc

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Jed Brown (jedbrown@mcs.anl.gov), Mathematics and Computer Science Division, Argonne National Laboratory; Department of Computer Science, University of Colorado Boulder. NERSC ASCR Requirements for 2017, 2014-01-15. Extending PETSc's Hierarchically Nested Solvers: ANL - Lois C. McInnes, Barry Smith, Jed Brown, Satish Balay; UChicago - Matt Knepley; IIT - Hong Zhang; LBL - Mark Adams. Linear solvers, nonlinear solvers, time integrators, optimization methods (merged TAO)

  11. Incorporate Minimum Efficiency Requirements for Heating and Cooling Products into Federal Acquisition Documents

    Broader source: Energy.gov [DOE]

    The Federal Energy Management Program (FEMP) organized information about FEMP-designated and ENERGY STAR-qualified heating, ventilating, and air conditioning (HVAC) and water heating products into tables that mirror American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1-2013 minimum efficiency requirement tables. Federal buyers can use these tables as a reference and to incorporate the proper purchasing requirements set by FEMP and ENERGY STAR into federal acquisition documents.

  12. Intro to computer programming, no computer required! | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Author: Laura Wolf. January 6, 2016. Pairing the volunteers with interested schools was the easy part. School administrators and teachers alike were delighted to have Argonne National Laboratory volunteers visit and help guide their Hour of Code activities last December. In all, Argonne's Educational Programs department helped place 44 volunteers in Chicago

  13. Large Scale Computing and Storage Requirements for Advanced Scientific...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014 ...

  14. Determining Allocation Requirements | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Determining Allocation Requirements: Estimating CPU-Hours for ALCF Blue Gene/Q Systems. When estimating CPU-hours for the ALCF Blue Gene/Q systems, it is important to take into consideration the unique aspects of the Blue Gene
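
    The arithmetic behind such an estimate is straightforward: nodes times cores per node times wall-clock hours times number of runs. Below is a minimal sketch; the 16 cores per node matches Blue Gene/Q hardware, but every job size, wall time, and run count is an invented example rather than ALCF guidance.

    ```python
    # Hypothetical allocation estimate in core-hours. The 16 cores per node
    # matches Blue Gene/Q hardware; all job sizes, wall times, and run
    # counts below are invented examples, not ALCF figures.

    CORES_PER_NODE = 16  # Blue Gene/Q compute cores per node

    def core_hours(nodes: int, wall_hours: float, runs: int = 1) -> float:
        """Core-hours consumed by `runs` jobs of `nodes` nodes, `wall_hours` each."""
        return nodes * CORES_PER_NODE * wall_hours * runs

    # Example campaign: 20 production runs plus 50 smaller development runs.
    total = core_hours(2048, 12, runs=20) + core_hours(128, 2, runs=50)
    print(f"Estimated request: {total:,.0f} core-hours")
    ```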

  15. Minimum 186 Basin levels required for operation of ECS and CWS pumps

    SciTech Connect (OSTI)

    Reeves, K.K.; Barbour, K.L.

    1992-10-01

    Operation of K Reactor with a cooling tower requires that 186 Basin loss of inventory transients be considered during Design Basis Accident analyses requiring ECS injection, such as the LOCA and LOPA. Since the cooling tower systems are not considered safety systems, credit is not taken for their continued operation during a LOPA or LOCA even though they would likely continue to operate as designed. Without the continued circulation of cooling water to the 186 Basin by the cooling tower pumps, the 186 Basin will lose inventory until additional make-up can be obtained from the river water supply system. Increasing the make-up to the 186 Basin from the river water system may require the opening of manually operated valves, the starting of additional river water pumps, and adjustments of the flow to L Area. In the time required for these actions a loss of basin inventory could occur. The ECS and CWS pumps are supplied by the 186 Basin. A reduction in the basin level will result in decreased pump suction head. This reduction in suction head will result in decreased output from the pumps and, if severe enough, could lead to pump cavitation for some configurations. The subject of this report is the minimum 186 Basin level required to prevent ECS and CWS pump cavitation. The reduction in ECS flow due to a reduced 186 Basin level without cavitation is part of a separate study.
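
    The cavitation criterion this report examines can be sketched as a standard net positive suction head (NPSH) check: the available suction head falls with the basin level, and cavitation becomes possible once it drops below the head the pump requires. The formula below is textbook pump hydraulics; every number is an illustrative assumption, not a value from the 186 Basin analysis.

    ```python
    # Illustrative NPSH cavitation check using standard pump hydraulics.
    # None of these numbers come from the 186 Basin analysis; they are
    # assumptions chosen only to show the structure of the criterion.

    G = 9.81     # gravitational acceleration, m/s^2
    RHO = 998.0  # water density, kg/m^3

    def npsh_available(p_atm_pa: float, p_vapor_pa: float,
                       basin_level_m: float, h_friction_m: float) -> float:
        """Available suction head (m): pressure head + static head - losses."""
        return (p_atm_pa - p_vapor_pa) / (RHO * G) + basin_level_m - h_friction_m

    NPSH_REQUIRED = 9.0  # assumed pump requirement, m

    for level in (4.0, 3.0, 2.0, 1.0):  # candidate basin levels above pump suction, m
        avail = npsh_available(101_300.0, 2_300.0, level, 3.0)
        status = "OK" if avail >= NPSH_REQUIRED else "cavitation risk"
        print(f"basin level {level:.1f} m -> NPSHa {avail:.1f} m ({status})")
    ```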

  16. Architectural requirements for the Red Storm computing system...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Architectural requirements for the Red Storm computing system.

  17. Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An ASCR / NERSC Review, January 5-6, 2011. Final Report: Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research, Report of the Joint ASCR / NERSC Workshop conducted January 5-6, 2011. Goals: This workshop is being

  18. Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The NERSC Program Requirements Review "Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences" is organized by the Department of Energy's Office of Fusion Energy Sciences (FES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to

  19. Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

  20. Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An HEP / ASCR / NERSC Workshop, November 12-13, 2009. Report: Large Scale Computing and Storage Requirements for High Energy Physics, Report of the Joint HEP / ASCR / NERSC Workshop conducted Nov. 12-13, 2009. Goals: This workshop was organized by the Department of

  1. Architectural requirements for the Red Storm computing system (Technical Report)

    Office of Scientific and Technical Information (OSTI)

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This

  2. Large Scale Computing and Storage Requirements for Basic Energy Sciences: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Final Report: Large Scale Computing and Storage Requirements for Basic Energy Sciences, Report of the Joint BES / ASCR / NERSC Workshop conducted February 9-10, 2010. Workshop Agenda: The agenda for this workshop is presented here, including presentation times and speaker information. Workshop Presentations: Large Scale Computing and Storage Requirements for Basic

  3. Large Scale Production Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This is an invitation-only review organized by the Department of Energy's Office of Advanced Scientific Computing Research (ASCR) and NERSC. The general goal is to determine production high-performance computing, storage, and services that will be needed for ASCR to achieve its science goals through 2017. A specific focus

  4. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2013. Hilton Washington DC/Rockville Hotel and Executive Meeting Center, 1750 Rockville Pike, Rockville, MD 20852-1699. Final Report: Large Scale Computing and Storage Requirements...

  5. Large Scale Production Computing and Storage Requirements for Basic Energy Sciences: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The goal is to determine production high-performance computing, storage, and services that will be needed for BES to

  6. Large Scale Production Computing and Storage Requirements for Nuclear Physics: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This invitation-only review is organized by the Department of Energy's Offices of Nuclear Physics (NP) and Advanced Scientific Computing Research (ASCR) and by NERSC. The goal is to determine production high-performance computing, storage, and services that will be needed for NP to achieve its science goals through 2017. The review brings together DOE Program Managers,

  7. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An FES / ASCR / NERSC Workshop, August 3-4, 2010. Final Report: Large

  8. Large Scale Production Computing and Storage Requirements for Biological and Environmental Research: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    September 11-12, 2012. Hilton Rockville Hotel and Executive Meeting Center, 1750 Rockville Pike, Rockville, MD 20852-1699, TEL: 1-301-468-1100. Sponsored by: U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research (ASCR), Office of Biological and Environmental Research (BER), National Energy

  9. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings, and all of the action items are aligned with NERSC strategic plans.

  10. ComPASS Present and Future Computing Requirements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Panagiotis Spentzouris (Fermilab) for the ComPASS collaboration. NERSC BER Requirements for 2017, September 11-12, 2012, Rockville, MD. Accelerators for High Energy Physics: At the Energy Frontier, high-energy particle beam collisions seek to uncover new phenomena: the origin of mass, the nature of dark matter, extra dimensions of space. At the Intensity Frontier, high-flux beams enable exploration of neutrino interactions, to answer

  11. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  12. Present and Future Computational Requirements General Plasma Physics Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Kai Germaschewski, Homa Karimabadi, Amitava Bhattacharjee, Fatima Ebrahimi, Will Fox, Liwei Lin. CICART Space Science Center / Dept. of Physics, University of New Hampshire. March 18, 2013.

  13. User's manual for RATEPAC: a digital-computer program for revenue requirements and rate-impact analysis

    SciTech Connect (OSTI)

    Fuller, L.C.

    1981-09-01

    The RATEPAC computer program is designed to model the financial aspects of an electric power plant or other investment requiring capital outlays and having annual operating expenses. The program produces incremental pro forma financial statements showing how an investment will affect the overall financial statements of a business entity. The code accepts parameters required to determine capital investment and expense as a function of time and sums these to determine minimum revenue requirements (cost of service). The code also calculates present worth of revenue requirements and required return on rate base. This user's manual includes a general description of the code as well as the instructions for input data preparation. A complete example case is appended.
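
    The present-worth quantity the abstract mentions reduces to a discounted sum of the annual revenue requirements. A minimal sketch follows, with invented cash flows and discount rate; this is not RATEPAC's actual input format.

    ```python
    # Minimal sketch of the present-worth calculation described above:
    # annual revenue requirements (cost of service) discounted back to
    # time zero. The cash flows and rate are invented examples, not
    # RATEPAC inputs.

    def present_worth(revenue_requirements, discount_rate):
        """Present worth of annual revenue requirements (year 1 first)."""
        return sum(r / (1.0 + discount_rate) ** t
                   for t, r in enumerate(revenue_requirements, start=1))

    annual_rr = [42.0, 40.5, 39.1, 37.8, 36.6]  # $M/yr over a 5-year horizon
    print(f"PW of revenue requirements: ${present_worth(annual_rr, 0.10):.1f}M")
    ```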

  14. Present and Future Computing Requirements Sergey Syritsyn RIKEN...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... a Lattice Large-Scale Computing (NP), Apr 29-30, 2014. Sergey N. Syritsyn. Lattice Objectives 2014-2017: Hadron Structure ... EIC * Expanded HPC resources * Planned "Burst buffer" ...

  15. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    SciTech Connect (OSTI)

    Spann, J.; VanArsdall, P.; Bliss, E.

    1996-09-05

    This System Design Requirement document establishes the performance, design, development, and test requirements for the Computer System, WBS 1.5.1, which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in ICCS (WBS 1.5), which is the document directly above it in the requirements hierarchy.

  16. Large Scale Computing and Storage Requirements for Biological...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    the requirements input from the workshop attendees. Workshop attendees should review the case study update document and other background materials on the Reference Materials page....

  17. Large Scale Computing Requirements for Basic Energy Sciences...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    significant solution acceleration (order of magnitude). OFFSHORE BRAZIL CSEM DATA: 3D Image Processing Requirements. 3D Data and Imaging Volumes - nearly 1 million data points,...

  18. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    to provide the world-class facilities and services needed to support DOE Office of Science research. The review will produce a report that outlines HPC requirements for ASCR...

  19. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    I. INTRODUCTION This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  20. Requirements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Certificate in Renewable Energy and the Environment Requirements. Earn this certificate by completing a total of 11 points from the following categories: Courses (minimum 4 points). For more information about courses, contact Erin Plut (eplut@wustl.edu). 100-200 Level Course: earn 1 point; receive a grade of B- or better in a 2.00 or 3.00 unit course. 300-400 Level Course: earn 2 points; receive a grade of C or better in a 2.00 or more unit course. Independent Study: earn 3 points; students may arrange

  1. Barbara Helland Advanced Scientific Computing Research NERSC-HEP Requirements Review

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    7-28, 2012. Barbara Helland, Advanced Scientific Computing Research, NERSC-HEP Requirements Review. Science case studies drive discussions. Program Requirements Reviews: * Program offices evaluated every two-three years * Participants include program managers, PI/Scientists, ESnet/NERSC staff and management * User-driven discussion of science opportunities and needs * What: instruments and facilities, data scale, computational requirements * How: science process, data analysis,

  2. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    SciTech Connect (OSTI)

    DOE Office of Science, Biological and Environmental Research Program Office

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever-increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  3. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a brief summary of those relevant to issues raised by researchers at the workshop.

  4. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    SciTech Connect (OSTI)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  5. ASC Computational Environment (ACE) requirements version 8.0 final report.

    SciTech Connect (OSTI)

    Larzelere, Alex R. (Exagrid Engineering, Alexandria, VA); Sturtevant, Judith E.

    2006-11-01

    A decision was made early in the Tri-Lab Usage Model process that the collection of the user requirements be separated from the document describing capabilities of the user environment. The purpose in developing the requirements as a separate document was to allow the requirements to take on a higher-level view of user requirements for ASC platforms in general. In other words, a separate ASC user requirement document could capture requirements in a way that was not focused on "how" the requirements would be fulfilled. The intent of doing this was to create a set of user requirements that were not linked to any particular computational platform. The idea was that user requirements would endure from one ASC platform user environment to another. The hope was that capturing the requirements in this way would assist in creating stable user environments even though the particular platforms would be evolving and changing. In order to clearly make the separation, the Tri-Lab S&CS program decided to create a new title for the requirements. The user requirements became known as the ASC Computational Environment (ACE) Requirements.

  6. Harvey Wasserman: Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Harvey Wasserman. Large Scale Computing and Storage Requirements for High Energy Physics Research: Target 2017. Meeting Goals & Process. December 3, 2012. Logistics: Schedule * Agenda on workshop web page - http://www.nersc.gov/science/requirements/HEP * Mid-morning / afternoon break, lunch * Self-organization for dinner * Multiple science areas, one workshop - Science-focused but crosscutting discussion - Explore areas of common need (within HEP) *

  7. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2015-01-20

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final in the second round of reviews that covered the six Office of Science program offices. This report is the result of that review.

  8. Improved initial guess for minimum energy path calculations

    SciTech Connect (OSTI)

    Smidstrup, Søren; Pedersen, Andreas; Stokbro, Kurt

    2014-06-07

    A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained, for example, by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to a minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. The method is illustrated with three examples: (1) rotation of a methyl group in an ethane molecule, (2) an exchange of atoms in an island on a crystal surface, and (3) an exchange of two Si-atoms in amorphous silicon. In all three cases, the computational effort in finding the minimum energy path with DFT was reduced by a factor ranging from 50% to an order of magnitude by using an IDPP path as the initial path. The time required for parallel computations was reduced even more because of load imbalance when linear interpolation of Cartesian coordinates was used.
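
    A sketch of the IDPP idea helps make the abstract concrete: pairwise distances are interpolated linearly between the endpoint geometries, and each intermediate image is scored by a weighted mismatch to those target distances. The 1/d^4 weighting, which emphasizes short (bonded) pairs, follows the paper; the remaining details here are simplifying assumptions of this sketch.

    ```python
    # Sketch of the IDPP objective described above. Target pairwise
    # distances are linearly interpolated between the two endpoints, and
    # an intermediate image is scored by its weighted mismatch to them.
    # The 1/d^4 weight follows the paper; other details are assumptions.

    import numpy as np

    def pair_distances(coords: np.ndarray) -> np.ndarray:
        """All atom-atom distances for an (N, 3) array of positions."""
        diff = coords[:, None, :] - coords[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))

    def idpp_objective(image: np.ndarray, d_initial: np.ndarray,
                       d_final: np.ndarray, frac: float) -> float:
        """IDPP score of one image at fractional position frac in [0, 1]."""
        d_target = (1.0 - frac) * d_initial + frac * d_final
        d = pair_distances(image)
        iu = np.triu_indices_from(d, k=1)  # each pair once, skip i == j
        w = d_target[iu] ** -4             # short pairs dominate the score
        return float((w * (d_target[iu] - d[iu]) ** 2).sum())
    ```

    The nudged elastic band iterations are then driven by this score rather than by a DFT energy, so each step costs only arithmetic.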

  9. Requirements for Control Room Computer-Based Procedures for use in Hybrid Control Rooms

    SciTech Connect (OSTI)

    Le Blanc, Katya Lee; Oxstrand, Johanna Helene; Joe, Jeffrey Clark

    2015-05-01

    Many plants in the U.S. are currently undergoing control room modernization. The main drivers for modernization are the aging and obsolescence of existing equipment, which typically results in a like-for-like replacement of analog equipment with digital systems. However, the modernization efforts present an opportunity to employ advanced technology that would not only extend the life, but also enhance the efficiency and cost competitiveness of nuclear power. Computer-based procedures (CBPs) are one example of near-term advanced technology that may provide enhanced efficiencies above and beyond like-for-like replacements of analog systems. Researchers in the LWRS program are investigating the benefits of advanced technologies such as CBPs, with the goal of assisting utilities in decision making during modernization projects. This report will describe the existing research on CBPs, discuss the unique issues related to using CBPs in hybrid control rooms (i.e., partially modernized analog control rooms), and define the requirements of CBPs for hybrid control rooms.

  10. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  11. QCD Thermodynamics at High Temperature Peter Petreczky Large Scale Computing and Storage Requirements for Nuclear Physics (NP),

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Bethesda MD, April 29-30, 2014. NY Center for Computational Science. Defining questions of nuclear physics research in the US: Nuclear Science Advisory Committee (NSAC) "The Frontiers of Nuclear Science", 2007 Long Range Plan: "What are the phases of strongly interacting matter and what roles do they play in the cosmos?" "What does QCD predict for

  12. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  13. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    SciTech Connect (OSTI)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real time data from plant status databases. Without the ability for logical operations the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the underlying data structure for such CBPS. The objective of the research effort is to develop guidance on how to design both the user interface and the underlying schema. This paper will describe the result and insights gained from the research activities conducted to date.
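
    As a concrete illustration of that idea, the fragment below encodes one step whose attributes tell a CBPS what behavior to generate. The element and attribute names are invented for this sketch; they are not the Idaho National Laboratory schema.

    ```python
    # Hypothetical illustration of the data-driven step idea: an XML step
    # whose attributes tell the CBPS what behavior to generate. The element
    # and attribute names are invented, not the INL schema.

    import xml.etree.ElementTree as ET

    STEP_XML = """
    <step id="4.2" type="decision">
      <instruction>Verify pump discharge pressure is within limits.</instruction>
      <input name="discharge_pressure" unit="psig" low="95" high="110"/>
      <onFail goTo="4.7"/>
    </step>
    """

    step = ET.fromstring(STEP_XML)
    inp = step.find("input")
    print(step.get("id"), step.get("type"))  # -> 4.2 decision
    print(step.findtext("instruction").strip())
    print(f"accept {inp.get('low')}-{inp.get('high')} {inp.get('unit')}")
    ```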

  14. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, Configuration, and Problems on Bassi. Richard Gerber, Lawrence Berkeley National Laboratory, NERSC User Services. ScicomP 13, Garching bei München, Germany, July 17, 2007. Overview: * About Bassi * Memory on Bassi * Large Page Memory (It's Great!) * System Configuration * Large Page

  15. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computations - Sandia Energy.

  16. Requirements for Computer-Based Procedures for Nuclear Power Plant Field Operators: Results from a Qualitative Study

    SciTech Connect (OSTI)

    Katya Le Blanc; Johanna Oxstrand

    2012-05-01

    Although computer-based procedures (CBPs) have been investigated as a way to enhance operator performance on procedural tasks in the nuclear industry for almost thirty years, they are not currently widely deployed at United States utilities. One of the barriers to the wide-scale deployment of CBPs is the lack of operational experience with CBPs that could serve as a sound basis for justifying the use of CBPs for nuclear utilities. Utilities are hesitant to adopt CBPs because of concern over potential costs of implementation, and concern over regulatory approval. Regulators require a sound technical basis for the use of any procedure at the utilities; without operating experience to support the use of CBPs, it is difficult to establish such a technical basis. In an effort to begin the process of developing a technical basis for CBPs, researchers at Idaho National Laboratory are partnering with industry to explore CBPs with the objective of defining requirements for CBPs and developing an industry-wide vision and path forward for the use of CBPs. This paper describes the results from a qualitative study aimed at defining requirements for CBPs to be used by field operators and maintenance technicians.

  17. Theoretical minimum energies to produce steel for selected conditions

    SciTech Connect (OSTI)

    Fruehan, R. J.; Fortini, O.; Paxton, H. W.; Brindle, R.

    2000-03-01

    An ITP study has determined the theoretical minimum energy requirements for producing steel from ore, scrap, and direct reduced iron. Dr. Richard Fruehan's report, Theoretical Minimum Energies to Produce Steel for Selected Conditions, provides insight into the potential energy savings (and associated reductions in carbon dioxide emissions) for ironmaking, steelmaking, and rolling processes (PDF, 459 KB).

  18. Computing Videos

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Videos

  19. GMTI radar minimum detectable velocity.

    SciTech Connect (OSTI)

    Richards, John Alfred

    2011-04-01

    Minimum detectable velocity (MDV) is a fundamental consideration for the design, implementation, and exploitation of ground moving-target indication (GMTI) radar imaging modes. All single-phase-center air-to-ground radars are characterized by an MDV, or a minimum radial velocity below which motion of a discrete nonstationary target is indistinguishable from the relative motion between the platform and the ground. Targets with radial velocities less than MDV are typically overwhelmed by endoclutter ground returns, and are thus not generally detectable. Targets with radial velocities greater than MDV typically produce distinct returns falling outside of the endoclutter ground returns, and are thus generally discernible using straightforward detection algorithms. This document provides a straightforward derivation of MDV for an air-to-ground single-phase-center GMTI radar operating in an arbitrary geometry.
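
    For orientation only (the report contains the actual derivation): mainbeam ground clutter occupies a Doppler band set by platform speed and azimuth beamwidth, which leads to the commonly quoted broadside first-order approximation MDV ≈ v·θ_az/2. The sketch below evaluates that estimate with invented numbers.

    ```python
    # First-order MDV estimate, not the report's derivation: mainbeam
    # clutter occupies a Doppler band set by platform speed and azimuth
    # beamwidth, giving the commonly quoted broadside approximation
    # MDV ~ v * theta_az / 2. The numbers are illustrative assumptions.

    import math

    def mdv_broadside(platform_speed_mps: float, az_beamwidth_deg: float) -> float:
        """Approximate minimum detectable radial velocity, m/s."""
        theta = math.radians(az_beamwidth_deg)
        return platform_speed_mps * theta / 2.0

    # e.g. a 150 m/s platform with a 2-degree azimuth beamwidth:
    print(f"MDV ~ {mdv_broadside(150.0, 2.0):.2f} m/s")  # ~2.6 m/s
    ```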

  20. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

    SciTech Connect (OSTI)

    Katya Le Blanc; Johanna Oxstrand

    2012-04-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a largely unexplored application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

    1. Incorporate Minimum Efficiency Requirements for Heating and Cooling...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      and ENERGY STAR-qualified heating, ventilating, and air conditioning (HVAC) and water heating products into tables that mirror American Society of Heating, Refrigerating and ...

    2. Minimum Efficiency Requirements Tables for Heating and Cooling...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Publications Sample Contract Language for Energy-Efficient Products in ... for Energy-Efficient Products Contract Language for Energy-Consuming Product Purchases

    3. HEAT Loan Minimum Standards and Requirements | Department of...

      Office of Environmental Management (EM)

      & Publications Building America Best Practices Series Vol. 14: Energy Renovations - HVAC: A Guide for Contractors to Share with Homeowners STEP Financial Incentives Summary...

    4. Minimum Velocity Required to Transport Solid Particles from the...

      Office of Scientific and Technical Information (OSTI)

      December 27, 1993. Robert Perry and Cecil Chilton, Eds., Chemical Engineers' Handbook, 5th Ed., New York: McGraw-Hill, 1973, p. 3-71. C. O. Bennett and J. E. Myers,...

    5. Computer System, Cluster, and Networking Summer Institute

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New Mexico Consortium and Los Alamos National Laboratory. HOW TO APPLY: Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016. Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule), 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    6. Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling

      SciTech Connect (OSTI)

      Hamrick, Todd

      2011-05-25

      Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the measure of the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters; Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters that have been traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, the interdependent relationship that Weight on Bit has with Torque and Penetration per Revolution was used to determine optimum values for those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each, and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means to determine optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
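
      For orientation, the conventional relation the abstract calls "current methods" is usually written in Teale's well-documented form, MSE = WOB/A + 120·π·RPM·T/(A·ROP) in oilfield units; the study's contribution is rewriting it in terms of Weight on Bit alone so it can be minimized analytically. A sketch of the conventional form, with invented inputs:

      ```python
      # Teale's widely used MSE equation in oilfield units (one common form
      # of the "current methods" the abstract refers to). The inputs are
      # invented examples, not data from this study.

      import math

      def mse_psi(wob_lbf: float, torque_ftlbf: float, rpm: float,
                  rop_ft_hr: float, bit_area_in2: float) -> float:
          """MSE (psi) = WOB/A + 120*pi*RPM*T / (A*ROP)."""
          thrust = wob_lbf / bit_area_in2
          rotary = 120.0 * math.pi * rpm * torque_ftlbf / (bit_area_in2 * rop_ft_hr)
          return thrust + rotary

      area = math.pi / 4.0 * 8.5 ** 2  # 8.5-inch bit face area, in^2
      print(f"MSE = {mse_psi(30_000, 8_000, 120, 60, area):,.0f} psi")
      ```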

    7. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

      SciTech Connect (OSTI)

      Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

      2014-04-15

      Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen-pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
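
      The qualitative behavior reported here (more scans for tighter precision, higher confidence, or lower anticipated dose) is visible even in the textbook normal-approximation formula n ≥ (z·CV/ε)². The sketch below is that simplification, not the paper's constrained Lagrange-multiplier scheme, and the coefficient-of-variation values are invented.

      ```python
      # Textbook normal-approximation sample-size formula, shown only to
      # make the qualitative behavior concrete: tighter precision, higher
      # confidence, or a relatively noisier (low-dose) measurement all
      # raise n. This is NOT the paper's constrained Lagrange-multiplier
      # scheme, and the CV values are invented.

      import math
      from statistics import NormalDist

      def sample_size(cv: float, rel_precision: float, confidence: float) -> int:
          """Scans needed so the estimated mean lies within rel_precision
          of the true mean with the given confidence; cv = sigma/mu."""
          z = NormalDist().inv_cdf(0.5 + confidence / 2.0)  # two-sided z
          return math.ceil((z * cv / rel_precision) ** 2)

      print(sample_size(cv=0.15, rel_precision=0.05, confidence=0.95))  # 35
      print(sample_size(cv=0.10, rel_precision=0.05, confidence=0.95))  # 16
      ```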

    8. Minimum Day Time Load Calculation and Screening

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Minimum Daytime Load Calculation and Screening. Kristen Ardani, Dora Nakafuji, Anthony Hong, and Babak Enayati. [Speaker: Kristen Ardani] Cover Slide: Thank you everyone for joining us today for our DG interconnection collaborative informational webinar. Today we are going to talk about minimum daytime load calculation and screening procedures and their role in the distributed PV interconnection process. We're going to hear from Babak Enayati of the Massachusetts

    9. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B 174. Use

    10. Theoretical Minimum Energies to Produce Steel for Selected Conditions

      SciTech Connect (OSTI)

      Fruehan, R.J.; Fortini, O.; Paxton, H.W.; Brindle, R.

      2000-05-01

      The energy used to produce liquid steel in today's integrated and electric arc furnace (EAF) facilities is significantly higher than the theoretical minimum energy requirements. This study presents the absolute minimum energy required to produce steel from ore and mixtures of scrap and scrap alternatives. Additional cases in which the assumptions are changed to more closely approximate actual operating conditions are also analyzed. The results, summarized in Table E-1, should give insight into the theoretical and practical potentials for reducing steelmaking energy requirements. The energy values have also been converted to carbon dioxide (CO2) emissions in order to indicate the potential for reduction in emissions of this greenhouse gas (Table E-2). The study showed that increasing scrap melting has the largest impact on energy consumption. However, scrap should be viewed as having "invested" energy since at one time it was produced by reducing ore. Increasing scrap melting in the BOF may or may not decrease energy if the "invested" energy in scrap is considered.

    11. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    12. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent, except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1 GB, you should specify that in your job submission and the batch system will run your job on an

    13. Minimum Day Time Load Calculation and Screening

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Minimum Day Time Load Calculation and Screening" Dora Nakafuji and Anthony Hong, Hawaiian Electric Co. Babak Enayati, DG Technical Standards Review Group April 30, 2014 2 Speakers Babak Enayati Chair of Massachusetts DG Technical Standards Review Group Dora Nakafuji Director of Renewable Energy Planning Hawaiian Electric Company (HECO) Kristen Ardani, Solar Analyst (today's moderator), NREL Anthony Hong Director of Distribution Planning Hawaiian Electric Company (HECO) Standardization of

    14. Minimum Day Time Load Calculation and Screening

      Office of Environmental Management (EM)

      Distributed Generation Interconnection Collaborative (DGIC) "Minimum Day Time Load Calculation and Screening" Dora Nakafuji and Anthony Hong, Hawaiian Electric Co. Babak Enayati, DG Technical Standards Review Group April 30, 2014 2 Speakers Babak Enayati Chair of Massachusetts DG Technical Standards Review Group Dora Nakafuji Director of Renewable Energy Planning Hawaiian Electric Company (HECO) Kristen Ardani, Solar Analyst (today's moderator), NREL Anthony Hong Director of

    15. Minimum wear tube support hole design

      DOE Patents [OSTI]

      Glatthorn, Raymond H. (St. Petersburg, FL)

      1986-01-01

      A minimum-wear through-bore (16) is defined within a heat exchanger tube support plate (14) so as to have an hourglass configuration as determined by means of a constant radiused surface curvature (18) as defined by means of an external radius (R3), wherein the surface (18) extends between the upper surface (20) and lower surface (22) of the tube support plate (14). When a heat exchange tube (12) is disposed within the tube support plate (14) so as to pass through the through-bore (16), the heat exchange tube (12) is always in contact with a smoothly curved or radiused portion of the through-bore surface (16) whereby unacceptably excessive wear upon the heat exchange tube (12), as normally developed by means of sharp edges, lands, ridges, or the like conventionally part of the tube support plates, is eliminated or substantially reduced.

    16. ITP Steel: Theoretical Minimum Energies to Produce Steel for Selected

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Conditions, March 2000 | Department of Energy Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000 ITP Steel: Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000 PDF icon theoretical_minimum_energies.pdf More Documents & Publications Ironmaking Process Alternatives Screening Study ITP Steel: Steel Industry Marginal Opportunity Study September 2005 ITP Steel: Steel Industry Energy Bandwidth Study

    17. Energy and IAQ Implications of Alternative Minimum Ventilation Rates in California Retail and School Buildings

      SciTech Connect (OSTI)

      Dutton, Spencer M.; Fisk, William J.

      2015-01-01

      For a stand-alone retail building, a primary school, and a secondary school in each of the 16 California climate zones, the EnergyPlus building energy simulation model was used to estimate how minimum mechanical ventilation rates (VRs) affect energy use and indoor air concentrations of an indoor-generated contaminant. The modeling indicates large changes in heating energy use, but only moderate changes in total building energy use, as minimum VRs in the retail building are changed. For example, predicted state-wide heating energy consumption in the retail building decreases by more than 50% and total building energy consumption decreases by approximately 10% as the minimum VR decreases from the Title 24 requirement to no mechanical ventilation. The primary and secondary schools have notably higher internal heat gains than the retail building models, resulting in significantly reduced demand for heating. The school heating energy use was correspondingly less sensitive to changes in the minimum VR. The modeling indicates that minimum VRs influence HVAC energy and total energy use in schools by only a few percent. For both the retail building and the school buildings, minimum VRs substantially affected the predicted annual-average indoor concentrations of an indoor-generated contaminant, with larger effects in schools. The shape of the curves relating contaminant concentrations with VRs illustrates the importance of avoiding particularly low VRs.
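      The qualitative relationship between minimum VRs and indoor-generated contaminant levels follows from a single-zone steady-state mass balance. The sketch below is that textbook balance, not the EnergyPlus model used in the study, and the emission rate and ventilation rates are placeholder values.

      ```python
      # Single-zone steady-state mass balance for an indoor-generated contaminant:
      #   V dC/dt = G + Q (C_out - C)   =>   C_ss = C_out + G / Q
      # Placeholder inputs; the study itself used EnergyPlus, not this sketch.

      def steady_state_concentration(emission_g_per_h, vr_m3_per_h, outdoor_ug_m3=0.0):
          """Indoor concentration (ug/m3) at a given outdoor-air ventilation rate."""
          return outdoor_ug_m3 + emission_g_per_h * 1e6 / vr_m3_per_h

      for vr in (500.0, 1000.0, 2000.0, 4000.0):   # m3/h of outdoor air
          c = steady_state_concentration(emission_g_per_h=0.05, vr_m3_per_h=vr)
          print(f"VR = {vr:6.0f} m3/h -> C_ss = {c:6.1f} ug/m3")
      ```

      The inverse relationship (C proportional to 1/Q) is what makes concentrations rise steeply at particularly low VRs.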

    18. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In This Section: Computation & Simulation. Extensive combinatorial results and ongoing basic

    19. Optimizing minimum free-energy crossing points in solution: Linear...

      Office of Scientific and Technical Information (OSTI)

      Optimizing minimum free-energy crossing points in solution: Linear-response free energy/spin-flip density functional theory approach

    20. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

      SciTech Connect (OSTI)

      Kashyap, Vinay L.; Siemiginowska, Aneta [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Van Dyk, David A.; Xu Jin [Department of Statistics, University of California, Irvine, CA 92697-1250 (United States); Connors, Alanna [Eureka Scientific, 2452 Delmer Street, Suite 100, Oakland, CA 94602-3017 (United States); Freeman, Peter E. [Department of Statistics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Zezas, Andreas, E-mail: vkashyap@cfa.harvard.edu, E-mail: asiemiginowska@cfa.harvard.edu, E-mail: dvd@ics.uci.edu, E-mail: jinx@ics.uci.edu, E-mail: aconnors@eurekabayes.com, E-mail: pfreeman@cmu.edu, E-mail: azezas@cfa.harvard.edu [Physics Department, University of Crete, P.O. Box 2208, GR-710 03, Heraklion, Crete (Greece)

      2010-08-10

      A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
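      The recipe the authors describe can be illustrated in the simple case of Poisson counts over a known background: fix a detection threshold from the Type I error, then find the weakest source whose detection probability (statistical power) reaches the required level. The sketch below is this textbook case, not the paper's general algorithm.

      ```python
      # Upper limit for a Poisson source over known background b:
      #  1. choose threshold n* so that P(N >= n* | b) <= alpha      (Type I error)
      #  2. upper limit = smallest s with P(N >= n* | s + b) >= 1 - beta  (power)
      from scipy.stats import poisson

      def detection_threshold(b, alpha=0.05):
          n = 0
          while poisson.sf(n - 1, b) > alpha:   # sf(n-1, mu) = P(N >= n)
              n += 1
          return n

      def upper_limit(b, alpha=0.05, beta=0.5, ds=0.01):
          n_star = detection_threshold(b, alpha)
          s = 0.0
          while poisson.sf(n_star - 1, s + b) < 1.0 - beta:
              s += ds
          return n_star, s

      n_star, s_min = upper_limit(b=3.0)
      print(f"threshold n* = {n_star}, upper limit s = {s_min:.2f} counts")
      ```

      Raising the required power (lowering the Type II error) pushes the upper limit higher, which is why the limit characterizes the detection procedure rather than any particular source.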

    1. Can Cloud Computing Address the Scientific Computing Requirements...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      achieve energy efficiency levels comparable to commercial cloud centers. Cloud is a business model and can be applied at DOE supercomputing centers. The progress of the...

    2. Present and Future Computing Requirements Radiative Transfer...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      balancing more work on regions with high particle counts, high scattering probability (opacity) strategies: population control, adaptive refinement, replicate heavily loaded...

    3. BER Science Network Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      BER Science Network Requirements Report of the Biological and Environmental Research Network Requirements Workshop Conducted July 26 and 27, 2007 BER Science Network Requirements Workshop Biological and Environmental Research Program Office, DOE Office of Science Energy Sciences Network Bethesda, MD - July 26 and 27, 2007 ESnet is funded by the US Dept. of Energy, Office of Science, Advanced Scientific Computing Research (ASCR) program. Dan Hitchcock is the ESnet Program Manager. ESnet is

    4. BES Science Network Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Network Requirements Report of the Basic Energy Sciences Network Requirements Workshop Conducted June 4-5, 2007 BES Science Network Requirements Workshop Basic Energy Sciences Program Office, DOE Office of Science Energy Sciences Network Washington, DC - June 4 and 5, 2007 ESnet is funded by the US Dept. of Energy, Office of Science, Advanced Scientific Computing Research (ASCR) program. Dan Hitchcock is the ESnet Program Manager. ESnet is operated by Lawrence Berkeley National Laboratory, which

    5. Requirement-Reviews.pptx

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      1, 2013 Requirements Reviews * 1½-day reviews with each Program Office * Computing and storage requirements for next 5 years * Participants - DOE ADs & Program Managers - Leading scientists using NERSC & key potential users - NERSC staff 2 High Energy Physics Fusion Research Reports From 6 Requirements Reviews Have Been Published 3 http://www.nersc.gov/science/requirements-reviews/final-reports/ * Computing and storage requirements for 2013/2014 * Executive Summary of

    6. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Aware Computing Energy Aware Computing Dynamic Frequency Scaling One means to lower the energy required to compute is to reduce the power usage on a node. One way to accomplish this is by lowering the frequency at which the CPU operates. However, reducing the clock speed increases the time to solution, creating a potential tradeoff. NERSC continues to examine how such methods impact its operations and its

    7. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

      SciTech Connect (OSTI)

      Paiz, Mary Rose

      2015-04-01

      The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
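      A minimal sketch of the two estimates described above, assuming daily minima and daily averages are available as 1-D arrays: a GEV fit (via the block-minima convention of negating the data) for the lower threshold, and a seasonal ARIMA forecast interval for the upper threshold. The model orders, quantiles, and synthetic data are placeholders, not the report's calibrated choices.

      ```python
      # Lower threshold from a GEV fit to daily minimum response times; upper
      # threshold from a seasonal ARIMA forecast of daily average times.
      import numpy as np
      from scipy import stats
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(0)
      daily_min = 50 + rng.gamma(2.0, 5.0, size=365)   # ms, synthetic stand-in
      daily_avg = (120 + 10 * np.sin(np.arange(365) * 2 * np.pi / 7)
                   + rng.normal(0, 5, size=365))

      # GEV for minima: fit the maxima of the negated series, then negate back.
      shape, loc, scale = stats.genextreme.fit(-daily_min)
      lower = -stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)

      # Seasonal ARIMA (weekly period); upper threshold taken from the upper
      # edge of the one-step-ahead 95% forecast interval.
      res = ARIMA(daily_avg, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit()
      upper = res.get_forecast(1).conf_int(alpha=0.05)[0, 1]

      print(f"lower threshold ~ {lower:.1f} ms, upper threshold ~ {upper:.1f} ms")
      ```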

    8. Apparatus and method for closed-loop control of reactor power in minimum time

      DOE Patents [OSTI]

      Bernard, Jr., John A.

      1988-11-01

      Closed-loop control law for altering the power level of nuclear reactors in a safe manner, without overshoot, and in minimum time. Apparatus is provided for moving a fast-acting control element such as a control rod or a control drum for altering the nuclear reactor power level. A computer computes at short time intervals either the function

      $$\dot\rho = (\beta-\rho)\,\omega - \lambda_e'\rho - \sum_i \beta_i(\lambda_i-\lambda_e') + l^*\dot\omega + l^*\left[\omega^2 + \lambda_e'\omega\right]$$

      or the function

      $$\dot\rho = (\beta-\rho)\,\omega - \lambda_e\rho - (\dot\lambda_e/\lambda_e)(\beta-\rho) + l^*\dot\omega + l^*\left[\omega^2 + \lambda_e\omega - (\dot\lambda_e/\lambda_e)\,\omega\right]$$

      These functions each specify the rate of change of reactivity that is necessary to achieve a specified rate of change of reactor power. The direction and speed of motion of the control element is altered so as to provide the rate of reactivity change calculated using either or both of these functions, thereby resulting in the attainment of a new power level without overshoot and in minimum time. These functions are computed at intervals of approximately 0.01-1.0 seconds depending on the specific application.
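      In the notation of the equations as reconstructed above (ρ is reactivity, β the total delayed-neutron fraction, βᵢ and λᵢ the group fractions and decay constants, λₑ′ an effective decay constant, ω the inverse reactor period, and l* the prompt neutron lifetime), the first law is a direct arithmetic evaluation. A minimal sketch, with illustrative six-group values rather than any particular reactor's data:

      ```python
      # Evaluate the first control law: the demanded rate of reactivity change
      # needed to realize a specified rate of change of reactor power.
      # All parameter values below are illustrative placeholders.

      def rho_dot(rho, omega, omega_dot, beta, betas, lambdas, lam_e_prime, l_star):
          """Rate of reactivity change (delta-k/k per second)."""
          sum_term = sum(b_i * (l_i - lam_e_prime) for b_i, l_i in zip(betas, lambdas))
          return ((beta - rho) * omega - lam_e_prime * rho - sum_term
                  + l_star * omega_dot + l_star * (omega**2 + lam_e_prime * omega))

      betas   = [0.00021, 0.00142, 0.00127, 0.00257, 0.00075, 0.00027]  # group fractions
      lambdas = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]              # 1/s
      print(rho_dot(rho=0.001, omega=0.01, omega_dot=0.0,
                    beta=sum(betas), betas=betas, lambdas=lambdas,
                    lam_e_prime=0.1, l_star=1e-4))
      ```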

    9. Modified Theoretical Minimum Emittance Lattice for an Electron Storage Ring

      Office of Scientific and Technical Information (OSTI)

      with Extreme-Low Emittance (Journal Article) | SciTech Connect. Title: Modified Theoretical Minimum Emittance Lattice for an Electron Storage Ring with Extreme-Low Emittance. Authors: Jiao, Yi; Cai, Yunhai; Chao, Alexander Wu (SLAC). Publication Date: 2013-06-04. OSTI Identifier: 1082826. Report Number(s): SLAC-REPRINT-2013-081. DOE Contract Number:

    10. Table 10.1 Nonswitchable Minimum and Maximum Consumption, 2002

      U.S. Energy Information Administration (EIA) Indexed Site

      Nonswitchable Minimum and Maximum Consumption, 2002; Level: National and Regional Data; Row: Energy Sources; Column: Consumption Potential; Unit: Physical Units. Columns reported for each energy source: Actual Consumption, Minimum Consumption(a), Maximum Consumption(b), RSE Row Factors. Total United States; RSE Column

    11. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing Computing Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL.

    12. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes Compute nodes Click here to see a more detailed hierarchical map of the topology of a compute node.

    13. Reporting Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Reporting Requirements Reporting Requirements Contacts Director Albert Migliori Deputy Franz Freibert 505 667-6879 Email Professional Staff Assistant Susan Ramsay 505 665 0858...

    14. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop....

    15. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http://isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,

    16. QBox | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computers. Obtaining Qbox: http://eslab.ucdavis.edu/software/qbox Building Qbox for Blue Gene/Q: Qbox requires the standard math libraries plus the Xerces-C http://

    17. DOE Challenge Home, California Program Requirements

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      DOE Challenge Home California Program Requirements These Program Requirements shall only be used in the State of California. To qualify as a DOE Challenge Home, a home shall meet the minimum requirements specified below, be verified and field-tested in accordance with HERS Standards by an approved verifier, and meet all applicable codes. Builders may meet the requirements of either the Performance Path or the Prescriptive path to qualify a home. 1 Single family detached and attached dwelling

    18. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      AWARD Winners: Jess Gehin; Jackie Isaacs; Douglas Kothe; Debbie McCoy; Bonnie Nestor; John Turner; Gilbert Weigand Organization(s): Nuclear Technology Program; Computing and...

    19. Theoretical solution of the minimum charge problem for gaseous detonations

      SciTech Connect (OSTI)

      Ostensen, R.W.

      1990-12-01

      A theoretical model was developed for the minimum charge to trigger a gaseous detonation in spherical geometry as a generalization of the Zeldovich model. Careful comparisons were made between the theoretical predictions and experimental data on the minimum charge to trigger detonations in propane-air mixtures. The predictions are an order of magnitude too high, and there is no apparent resolution to the discrepancy. A dynamic model, which takes into account the experimentally observed oscillations in the detonation zone, may be necessary for reliable predictions. 27 refs., 9 figs.

    20. Animation Requirements

      Broader source: Energy.gov [DOE]

      Animations include dynamic elements such as interactive images and games. For developing animations, follow these design and coding requirements.

    1. NERSC HPC Program Requirements Review Reports

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Published Reports NERSC HPC Program Requirements Review Reports These publications comprise the final reports from the HPC requirements reviews presented to the Department of Energy. Downloads ASCR2017Final.pdf | Adobe Acrobat PDF file Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research - Target 2017 NerscBES2017ReqRevFinal.pdf | Adobe Acrobat PDF file Large Scale Computing and Storage Requirements for Basic Energy Sciences - Target 2017

    2. The"minimum information about an environmental sequence" (MIENS) specification

      SciTech Connect (OSTI)

      Yilmaz, P.; Kottmann, R.; Field, D.; Knight, R.; Cole, J.R.; Amaral-Zettler, L.; Gilbert, J.A.; Karsch-Mizrachi, I.; Johnston, A.; Cochrane, G.; Vaughan, R.; Hunter, C.; Park, J.; Morrison, N.; Rocca-Serra, P.; Sterk, P.; Arumugam, M.; Baumgartner, L.; Birren, B.W.; Blaser, M.J.; Bonazzi, V.; Bork, P.; Buttigieg, P. L.; Chain, P.; Costello, E.K.; Huot-Creasy, H.; Dawyndt, P.; DeSantis, T.; Fierer, N.; Fuhrman, J.; Gallery, R.E.; Gibbs, R.A.; Giglio, M.G.; Gil, I. San; Gonzalez, A.; Gordon, J.I.; Guralnick, R.; Hankeln, W.; Highlander, S.; Hugenholtz, P.; Jansson, J.; Kennedy, J.; Knights, D.; Koren, O.; Kuczynski, J.; Kyrpides, N.; Larsen, R.; Lauber, C.L.; Legg, T.; Ley, R.E.; Lozupone, C.A.; Ludwig, W.; Lyons, D.; Maguire, E.; Methe, B.A.; Meyer, F.; Nakieny, S.; Nelson, K.E.; Nemergut, D.; Neufeld, J.D.; Pace, N.R.; Palanisamy, G.; Peplies, J.; Peterson, J.; Petrosino, J.; Proctor, L.; Raes, J.; Ratnasingham, S.; Ravel, J.; Relman, D.A.; Assunta-Sansone, S.; Schriml, L.; Sodergren, E.; Spor, A.; Stombaugh, J.; Tiedje, J.M.; Ward, D.V.; Weinstock, G.M.; Wendel, D.; White, O.; Wikle, A.; Wortman, J.R.; Glockner, F.O.; Bushman, F.D.; Charlson, E.; Gevers, D.; Kelley, S.T.; Neubold, L.K.; Oliver, A.E.; Pruesse, E.; Quast, C.; Schloss, P.D.; Sinha, R.; Whitely, A.

      2010-10-15

      We present the Genomic Standards Consortium's (GSC) 'Minimum Information about an ENvironmental Sequence' (MIENS) standard for describing marker genes. Adoption of MIENS will enhance our ability to analyze natural genetic diversity across the Tree of Life as it is currently being documented by massive DNA sequencing efforts from myriad ecosystems in our ever-changing biosphere.

    3. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    4. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cluster-Image TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    5. Reporting Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Reporting Requirements Reporting Requirements Contacts Director Albert Migliori Deputy Franz Freibert 505 667-6879 Email Professional Staff Assistant Susan Ramsay 505 665 0858 Email The Fellow will be required to participate in the Actinide Science lecture series by both attending lectures and presenting a scientific lecture on actinide science in this series. Submission of a viewgraph and brief write-up of the project. Provide metrics information as requested. Submission of an overview article

    6. Eligibility Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Eligibility Requirements Eligibility Requirements A comprehensive benefits package with plan options for health care and retirement to take care of our employees today and tomorrow. Contact Benefits Office (505) 667-1806 Email Eligibility and required supporting documentation The Laboratory offers an extensive benefits package to full and part time employees. Casual employees (excluding High School Coop, Lab Associates and Craft Employees) are eligible to enroll in the HDHP medical plan. NOTE:

    7. Competition Requirements

      Office of Environmental Management (EM)

      Chapter 6.1 (July 2011) Competition Requirements [Reference: FAR 6 and DEAR 906] Overview This section discusses competition requirements and provides a model Justification for Other than Full and Open Competition (JOFOC). Background The Competition in Contracting Act (CICA) of 1984 requires that all acquisitions be made using full and open competition. Seven exceptions to using full and open competition are specifically identified in Federal

    8. Video Requirements

      Broader source: Energy.gov [DOE]

      All EERE videos, including webinar recordings, must meet Section 508's requirements for accessibility. All videos should be hosted on the DOE YouTube channel.

    9. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB DDR3 800 MHz memory per node Peak Gflop rate 9.2 Gflops/core 36.8 Gflops/node 352 Tflops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively 2 MB L3 cache shared among the 4 cores Compute Node Software By default the compute nodes run a restricted low-overhead

    10. DOE Challenge Home, Washington Program Requirements

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      DOE Challenge Home Washington Program Requirements 9-1-2013 To qualify as a DOE Challenge Home, a home shall meet the minimum requirements specified below, be verified and field-tested in accordance with HERS Standards by an approved verifier, and meet all applicable codes. Builders may meet the requirements of either the Performance Path or the Prescriptive path to qualify a home. 1 Single family detached and attached dwelling units, and dwelling units in multifamily buildings with 3 stories

    11. New York City- Energy Conservation Requirements for Existing Buildings

      Broader source: Energy.gov [DOE]

      Council Bill No. 564-A (Local Law 85 of 2009): Requires that renovations of existing buildings meet minimum energy conservation standards. The result of this law is essentially a city energy code ...

    12. Is ""predictability"" in computational sciences a myth?

      SciTech Connect (OSTI)

      Hemez, Francois M [Los Alamos National Laboratory

      2011-01-31

      Within the last two decades, Modeling and Simulation (M&S) has become the tool of choice to investigate the behavior of complex phenomena. Successes encountered in 'hard' sciences are prompting interest to apply a similar approach to Computational Social Sciences in support, for example, of national security applications faced by the Intelligence Community (IC). This manuscript attempts to contribute to the debate on the relevance of M&S to IC problems by offering an overview of what it takes to reach 'predictability' in computational sciences. Even though models developed in 'soft' and 'hard' sciences are different, useful analogies can be drawn. The starting point is to view numerical simulations as 'filters' capable to represent information only within specific length, time or energy bandwidths. This simplified view leads to the discussion of resolving versus modeling which motivates the need for sub-scale modeling. The role that modeling assumptions play in 'hiding' our lack-of-knowledge about sub-scale phenomena is explained which leads to discussing uncertainty in simulations. It is argued that the uncertainty caused by resolution and modeling assumptions should be dealt with differently than uncertainty due to randomness or variability. The corollary is that a predictive capability cannot be defined solely as accuracy, or ability of predictions to match the available physical observations. We propose that 'predictability' is the demonstration that predictions from a class of 'equivalent' models are as consistent as possible. Equivalency stems from defining models that share a minimum requirement of accuracy, while being equally robust to the sources of lack-of-knowledge in the problem. Examples in computational physics and engineering are given to illustrate the discussion.

    13. Competition Requirements

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      (JOFOC) using the authority of FAR 6.302-1. Background The Competition in Contracting Act (CICA) of 1984 requires that all acquisitions be made using full and open competition. ...

    14. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CiteSeer: Department of Energy-provided open access science research citations in chemistry, physics, materials, engineering, and computer science. IEEE Xplore: Full text

    15. Deployment Requirements

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Contamination Workshop Deployment Requirements William Buttner National Renewable Energy Laboratory Hydrogen Safety Codes and Standards Group DOE Hydrogen Contamination Workshop Troy, Michigan June 13, 2014 THIS PRESENTATION DOES NOT CONTAIN ANY PROPRIETARY, CONFIDENTIAL OR OTHERWISE RESTRICTED INFORMATION 2 Outline of talk * SAE 2719 Requirements and the HCD Detector * Application Scenarios - Discrete vs. "real-time" - Centralized vs. On-site * Sensor Performance Parameters -

    16. Proposal for grid computing for nuclear applications

      SciTech Connect (OSTI)

      Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; and others

      2014-02-12

      The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
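      The farm-out-and-reduce pattern the authors describe can be demonstrated on a single machine; grid middleware extends the same idea across nodes. A minimal sketch using Python's multiprocessing in place of an actual grid scheduler:

      ```python
      # Embarrassingly parallel Monte Carlo (pi estimation) split across workers;
      # a grid extends this same farm-out/reduce pattern across cluster nodes.
      import random
      from multiprocessing import Pool

      def count_hits(args):
          """Count random points that land inside the unit quarter-circle."""
          seed, n = args
          rng = random.Random(seed)
          return sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))

      if __name__ == "__main__":
          workers, n_per_worker = 8, 250_000
          jobs = [(seed, n_per_worker) for seed in range(workers)]
          with Pool(workers) as pool:
              hits = sum(pool.map(count_hits, jobs))
          print("pi ~", 4.0 * hits / (workers * n_per_worker))
      ```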

    17. Minimum information about a marker gene sequence (MIMARKS) and minimum information about any (x) sequence (MIxS) specifications.

      SciTech Connect (OSTI)

      Yilmaz, P.; Kottmann, R.; Field, D.; Knight, R.; Cole, J. R.; Amaral-Zettler, L.; Gilbert, J. A.

      2011-05-01

      Here we present a standard developed by the Genomic Standards Consortium (GSC) for reporting marker gene sequences - the minimum information about a marker gene sequence (MIMARKS). We also introduce a system for describing the environment from which a biological sample originates. The 'environmental packages' apply to any genome sequence of known origin and can be used in combination with MIMARKS and other GSC checklists. Finally, to establish a unified standard for describing sequence data and to provide a single point of entry for the scientific community to access and learn about GSC checklists, we present the minimum information about any (x) sequence (MIxS). Adoption of MIxS will enhance our ability to analyze natural genetic diversity documented by massive DNA sequencing efforts from myriad ecosystems in our ever-changing biosphere.
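      In practice, a MIMARKS-compliant record is a set of key-value fields attached to a sequence submission. The sketch below shows the general shape of such a record; the field names follow the GSC checklists as best recalled here and should be verified against the current MIxS schema before use.

      ```python
      # Illustrative MIMARKS/MIxS-style metadata record (field names per the GSC
      # checklists as best recalled; verify against the current MIxS schema).
      marker_gene_record = {
          "investigation_type": "mimarks-survey",   # MIMARKS survey of a marker gene
          "project_name": "example soil 16S survey",
          "lat_lon": "50.586 8.678",                # decimal degrees
          "geo_loc_name": "Germany: Giessen",
          "collection_date": "2011-05-01",
          "env_package": "soil",                    # environmental package in use
          "target_gene": "16S rRNA",
          "seq_meth": "pyrosequencing",
      }

      missing = [key for key, value in marker_gene_record.items() if not value]
      print("missing required fields:", missing or "none")
      ```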

    18. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center for Computational Sciences

    19. DOE Requires Manufacturers to Halt Sales of Heat Pumps and Air Conditioners

      Energy Savers [EERE]

      Violating Minimum Appliance Standards | Department of Energy Requires Manufacturers to Halt Sales of Heat Pumps and Air Conditioners Violating Minimum Appliance Standards DOE Requires Manufacturers to Halt Sales of Heat Pumps and Air Conditioners Violating Minimum Appliance Standards June 3, 2010 - 12:00am Addthis Washington, DC - Today, the Department of Energy announced that three manufacturers -- Aspen Manufacturing, Inc., Summit Manufacturing, and Advanced Distributor Products -- must

    20. HUD (Housing and Urban Development) Intermediate Minimum Property Standards Supplement 4930. 2 (1989 edition). Solar heating and domestic hot water systems

      SciTech Connect (OSTI)

      Not Available

      1989-12-01

      The Minimum Property Standards for Housing 4910.1 were developed to provide a sound technical basis for housing under numerous programs of the Department of Housing and Urban Development (HUD). These Intermediate Minimum Property Standards for Solar Heating and Domestic Hot Water Systems are intended to provide a companion technical basis for the planning and design of solar heating and domestic hot water systems. These standards have been prepared as a supplement to the Minimum Property Standards (MPS) and deal only with aspects of planning and design that are different from conventional housing by reason of the solar systems under consideration. The document contains requirements and standards applicable to one- and two-family dwellings, multifamily housing, and nursing homes and intermediate care facilities. References made in the text to the MPS refer to the same section in the Minimum Property Standards for Housing 4910.1.

    1. Minimum length, extra dimensions, modified gravity and black hole remnants

      SciTech Connect (OSTI)

      Maziashvili, Michael

      2013-03-01

      We construct a Hilbert space representation of the minimum-length deformed uncertainty relation in the presence of extra dimensions. Following this construction, we study corrections to the gravitational potential (back reaction on gravity) with the use of a correspondingly modified propagator in the presence of two (spatial) extra dimensions. Interestingly enough, for r → 0 the gravitational force approaches zero, and the horizon for the modified Schwarzschild-Tangherlini space-time disappears when the mass approaches the quantum-gravity energy scale. This result points to the existence of zero-temperature black hole remnants in the ADD brane-world model.
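      The abstract does not reproduce the deformed relation it builds on; a commonly used minimum-length deformation, given here as an assumed illustration rather than the paper's exact form, is:

      ```latex
      % A standard minimal-length deformed commutator and the resulting
      % generalized uncertainty principle (assumed form, not from the paper):
      [\hat{x}, \hat{p}] = i\hbar\left(1 + \beta \hat{p}^{\,2}\right)
      \;\Longrightarrow\;
      \Delta x\,\Delta p \ge \frac{\hbar}{2}\left(1 + \beta (\Delta p)^{2}\right)
      \;\Longrightarrow\;
      \Delta x_{\min} = \hbar\sqrt{\beta}.
      ```

      The deformation parameter β sets the minimal resolvable length; the paper's construction generalizes such a representation to include extra spatial dimensions.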

    2. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB

    3. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    4. Competition Requirements

      Office of Environmental Management (EM)

      Chapter 5.2 (April 2008) Synopsizing Proposed Non-Competitive Contract Actions Citing the Authority of FAR 6.302-1 [Reference: FAR 5 and DEAR 905] Overview This section discusses publicizing sole source actions as part of the approval of a Justification for Other than Full and Open Competition (JOFOC) using the authority of FAR 6.302-1. Background The Competition in Contracting Act (CICA) of 1984 requires that all acquisitions be made using

    5. Competition Requirements

      Office of Environmental Management (EM)

      Chapter 6.5 (January 2011) Competition Advocate Responsibilities [Reference: FAR 6.5, FAR 7 and DEAR 906.501] Overview This section discusses the competition advocate requirements and provides a Federal Procurement Data System-New Generation (FPDS-NG) coding assistance sheet and screen shots for the FPDS-NG Competition Report. Background FAR Part 6.5, "Competition Advocates," implements section 20 of the Office of Federal Procurement

    6. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    7. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home › About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and Computational Sciences Directorate Michael Bartell Chief Information Officer Information Technologies Services Division Jim Hack Director, Climate Science Institute National Center for Computational Sciences Shaun Gleason Division Director Computational Sciences and Engineering Barney Maccabe Division Director Computer Science

    8. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes MC-proc.png Compute Node Configuration 6,384 nodes 2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node (see die image to the right and schematic below) 24 cores per node (153,216 total cores) 32 GB DDR3 1333-MHz memory per node (6,000 nodes) 64 GB DDR3 1333-MHz memory per node (384 nodes) Peak Gflop/s rate: 8.4 Gflops/core 201.6 Gflops/node 1.28 Peta-flops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512KB respectively One 6-MB

    9. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    10. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    11. 02-HellandNERSC-Requirements.pptx

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      5, 2014 Barbara Helland Advanced Scientific Computing Research NERSC-ASCR Requirements Review 1 ASCR 2 NERSC-ASCR Requirements Review 1/15/2014 3 World Class Facilities * High Performance Production Computing for the Office of Science * Characterized by a large number of projects (over 400) and users (over 4800) * Leadership Computing for Open Science * Characterized by a small number of projects (about 50) and users (about 800) with computationally intensive projects * Linking it

    12. Institutional computing (IC) information session

      SciTech Connect (OSTI)

      Koch, Kenneth R; Lally, Bryan R

      2011-01-19

      The LANL Institutional Computing Program (IC) will host an information session about the current state of unclassified Institutional Computing at Los Alamos, exciting plans for the future, and the current call for proposals for science and engineering projects requiring computing. Program representatives will give short presentations and field questions about the call for proposals and future planned machines, and discuss technical support available to existing and future projects. Los Alamos has started making a serious institutional investment in open computing available to our science projects, and that investment is expected to increase even more.

    13. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model,...

    14. Computed Tomography Status

      DOE R&D Accomplishments [OSTI]

      Hansche, B. D.

      1983-01-01

      Computed tomography (CT) is a relatively new radiographic technique which has become widely used in the medical field, where it is better known as computerized axial tomographic (CAT) scanning. This technique is also being adopted by the industrial radiographic community, although the greater range of densities, variation in sample sizes, plus the possible requirement for finer resolution, make it difficult to duplicate the excellent results that the medical scanners have achieved.
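      The core idea, recovering a 2-D density map from 1-D projections taken at many angles, can be sketched briefly. The toy below performs unfiltered backprojection with numpy and scipy; it blurs sharp features (practical scanners use filtered variants) but shows the geometry:

      ```python
      # Toy CT: forward-project a phantom at many angles, then reconstruct by
      # unfiltered backprojection (real scanners filter the projections first).
      import numpy as np
      from scipy.ndimage import rotate

      phantom = np.zeros((64, 64))
      phantom[20:44, 28:36] = 1.0                       # simple rectangular object

      angles = np.linspace(0.0, 180.0, 60, endpoint=False)
      sinogram = [rotate(phantom, a, reshape=False).sum(axis=0) for a in angles]

      recon = np.zeros_like(phantom)
      for a, proj in zip(angles, sinogram):
          smear = np.tile(proj, (phantom.shape[0], 1))  # smear projection back
          recon += rotate(smear, -a, reshape=False)
      recon /= len(angles)
      print("peak of reconstruction at:", np.unravel_index(recon.argmax(), recon.shape))
      ```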

    15. Sandia Energy - Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science, under Energy Research » Advanced Scientific Computing Research (ASCR).

    16. DOE Requires Manufacturers to Halt Sales of Heat Pumps and Air Conditioners

      Office of Environmental Management (EM)

      Violating Minimum Appliance Standards | Department of Energy Manufacturers to Halt Sales of Heat Pumps and Air Conditioners Violating Minimum Appliance Standards DOE Requires Manufacturers to Halt Sales of Heat Pumps and Air Conditioners Violating Minimum Appliance Standards June 3, 2010 - 2:17pm Addthis Today, the Department of Energy announced that three manufacturers -- Aspen Manufacturing, Inc., Summit Manufacturing, and Advanced Distributor Products -- must stop distributing 61 heat

    17. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    18. Data Crosscutting Requirements Review

      SciTech Connect (OSTI)

      Kleese van Dam, Kerstin; Shoshani, Arie; Plata, Charity

      2013-04-01

      In April 2013, a diverse group of researchers from the U.S. Department of Energy (DOE) scientific community assembled to assess data requirements associated with DOE-sponsored scientific facilities and large-scale experiments. Participants in the review included facilities staff, program managers, and scientific experts from the offices of Basic Energy Sciences, Biological and Environmental Research, High Energy Physics, and Advanced Scientific Computing Research. As part of the meeting, review participants discussed key issues associated with three distinct aspects of the data challenge: 1) processing, 2) management, and 3) analysis. These discussions identified commonalities and differences among the needs of varied scientific communities. They also helped to articulate gaps between current approaches and future needs, as well as the research advances that will be required to close these gaps. Moreover, the review provided a rare opportunity for experts from across the Office of Science to learn about their collective expertise, challenges, and opportunities. The "Data Crosscutting Requirements Review" generated specific findings and recommendations for addressing large-scale data crosscutting requirements.

    19. Analysis of Minimum Efficiency Performance Standards for Residential General Service Lighting in Chile

      SciTech Connect (OSTI)

      Letschert, Virginie E.; McNeil, Michael A.; Leiva Ibanez, Francisco Humberto; Ruiz, Ana Maria; Pavon, Mariana; Hall, Stephen

      2011-06-01

      Minimum Efficiency Performance Standards (MEPS) have been chosen as part of Chile's national energy efficiency action plan. As a first MEPS, the Ministry of Energy has decided to focus on a regulation for lighting that would ban the sale of inefficient bulbs, effectively phasing out the use of incandescent lamps. Following major economies such as the US (EISA, 2007), the EU (Ecodesign, 2009) and Australia (AS/NZS, 2008), which planned a phase out based on minimum efficacy requirements, the Ministry of Energy has undertaken the impact analysis of a MEPS on the residential lighting sector. Fundacion Chile (FC) and Lawrence Berkeley National Laboratory (LBNL) collaborated with the Ministry of Energy and the National Energy Efficiency Program (Programa Pais de Eficiencia Energetica, or PPEE) in order to produce a techno-economic analysis of this future policy measure. LBNL has developed for CLASP (CLASP, 2007) a spreadsheet tool called the Policy Analysis Modeling System (PAMS) that allows for evaluation of costs and benefits at the consumer level but also a wide range of impacts at the national level, such as energy savings, net present value of savings, greenhouse gas (CO2) emission reductions and avoided capacity generation due to a specific policy. Because historically Chile has followed European schemes in energy efficiency programs (test procedures, labelling program definitions), we take the Ecodesign commission regulation No 244/2009 as a starting point when defining our phase out program, which means a tiered phase out based on minimum efficacy per lumen category. The following data were collected in order to perform the techno-economic analysis: (1) Retail prices, efficiency and wattage category in the current market, (2) Usage data (hours of lamp use per day), and (3) Stock data, penetration of efficient lamps in the market. Using these data, PAMS calculates the costs and benefits of efficiency standards from two distinct but related perspectives: (1) The Life-Cycle Cost (LCC) calculation examines costs and benefits from the perspective of the individual household; and (2) The National Perspective projects the total national costs and benefits including both financial benefits, and energy savings and environmental benefits. The national perspective calculations are called the National Energy Savings (NES) and the Net Present Value (NPV) calculations. PAMS also calculates total emission mitigation and avoided generation capacity. This paper describes the data and methodology used in PAMS and presents the results of the proposed phase out of incandescent bulbs in Chile.
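      The consumer-level LCC comparison in PAMS reduces to a present-value calculation. The sketch below mirrors that arithmetic with placeholder prices, wattages, tariff, and discount rate; the actual Chilean inputs are in the paper, not here, and lamp replacement over the analysis period is ignored for brevity.

      ```python
      # Life-cycle cost comparison of an incandescent vs. a CFL replacement,
      # mirroring the PAMS consumer-level calculation. Inputs are placeholders.

      def lifecycle_cost(price, watts, hours_per_day, tariff_per_kwh,
                         years, discount_rate):
          """Purchase price plus discounted electricity cost over the period."""
          annual_kwh = watts * hours_per_day * 365 / 1000.0
          pv_energy = sum(annual_kwh * tariff_per_kwh / (1 + discount_rate) ** t
                          for t in range(1, years + 1))
          return price + pv_energy

      lcc_inc = lifecycle_cost(price=0.5, watts=60, hours_per_day=3,
                               tariff_per_kwh=0.15, years=5, discount_rate=0.05)
      lcc_cfl = lifecycle_cost(price=3.0, watts=14, hours_per_day=3,
                               tariff_per_kwh=0.15, years=5, discount_rate=0.05)
      print(f"LCC incandescent: ${lcc_inc:.2f}, LCC CFL: ${lcc_cfl:.2f}")
      ```

      The National Energy Savings calculation aggregates the same per-lamp energy difference over the national stock and the policy horizon.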

    20. Computing and Computational Sciences Directorate - Computer Science and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics Division Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in computational sciences and application of advanced computing systems, computational, mathematical and analysis techniques to the solution of scientific problems of national importance. We seek to work

    1. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing at JLab: Accelerator Controls, CAD, CDEV, CODA, Computer Center, High Performance Computing, Scientific Computing.

    2. Level III Mentoring Requirement

      Broader source: Energy.gov [DOE]

      Level III applicants must be mentored (minimum of six months) by a Level III or IV FPD or demonstrate equivalency (see below Competency 3.12.2 in the PMCDP's CEG). A formal mentoring agreement must...

    3. Accounts Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Accounts Policy All holders of user accounts must abide by all appropriate Argonne Leadership Computing Facility and Argonne National Laboratory computing usage policies. These are described at the time of the account request and include requirements such as using a sufficiently strong password, appropriate use of the system, and so on. Any user not following these requirements will have their account disabled. Furthermore, ALCF resources are intended to be used as a computing resource for

    4. Covered Product Category: Computers | Department of Energy

      Office of Environmental Management (EM)

      Computers Covered Product Category: Computers The Federal Energy Management Program (FEMP) provides acquisition guidance for computers, a product category covered by the ENERGY STAR program. Federal laws and requirements mandate that agencies buy ENERGY STAR-qualified products in all product categories covered by this program and any acquisition actions that are not specifically exempted by law. MEETING EFFICIENCY REQUIREMENTS FOR FEDERAL PURCHASES The U.S. Environmental Protection Agency (EPA)

    5. IDAPA 37.03.03 - Rules and Minimum Standards for the Construction...

      Open Energy Info (EERE)

      3 - Rules and Minimum Standards for the Construction and Use of Injection Wells. OpenEI Reference Library: Legal Document-

    6. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

    7. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Resources Mira Cetus and Vesta Visualization Cluster Data and Networking Software JLSE Computing Resources Theory and Computing Sciences Building Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20 megavolt amperes electrical feeds from a 90 megawatt substation. The building also

    8. Cyber Security Process Requirements Manual

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2008-08-12

      The Manual establishes the minimum implementation standards for cyber security management processes throughout the Department. No cancellation.

    9. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High-Performance Computing INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced

    10. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Architecture Lab The goal of the Computer Architecture Laboratory (CAL) is to engage in

    11. Multiprocessor computing for images

      SciTech Connect (OSTI)

      Cantoni, V.; Levialdi, S.

      1988-08-01

      A review of image processing systems developed until now is given, highlighting the weak points of such systems and the trends that have dictated their evolution through the years producing different generations of machines. Each generation may be characterized by the hardware architecture, the programmability features and the relative application areas. The need for multiprocessing hierarchical systems is discussed focusing on pyramidal architectures. Their computational paradigms, their virtual and physical implementation, their programming and software requirements, and capabilities by means of suitable languages, are discussed.

    12. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    13. TRIDAC host computer functional specification

      SciTech Connect (OSTI)

      Hilbert, S.M.; Hunter, S.L.

      1983-08-23

      The purpose of this document is to outline the baseline functional requirements for the Triton Data Acquisition and Control (TRIDAC) Host Computer Subsystem. The requirements presented in this document are based upon systems that currently support both the SIS and the Uranium Separator Technology Groups in the AVLIS Program at the Lawrence Livermore National Laboratory and upon the specific demands associated with the extended safe operation of the SIS Triton Facility.

    14. DOE SC Exascale Requirements Reviews: High Energy Physics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Physics DOE SC Exascale Requirements Reviews: High Energy Physics The DOE Office of Science Exascale Requirements Review for High Energy Physics will bring together key computational domain scientists, DOE planners and administrators, and experts in computer science and applied mathematics to determine the requirements for an exascale ecosystem that includes computation, data analysis, software, workflows, HPC services, and whatever else is needed to support forefront scientific research in High

    15. BES Requirements Review 2014

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    16. FES Requirements Review 2014

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    17. Large Scale Production Computing and Storage Requirements for...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      offerings and will help ASCR Facilities Division justify support for Office of Science research. Final Report PDF Date and Location This review was held October 8-9, 2013 Hilton...

    18. Large Scale Production Computing and Storage Requirements for...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      11-12, 2012 Hilton Rockville Hotel and Executive Meeting Center 1750 Rockville Pike Rockville, MD, 20852-1699 TEL: 1-301-468-1100 Sponsored by: U.S. Department of Energy...

    19. Large Scale Production Computing and Storage Requirements for...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2012 Hilton Washington DCRockville Hotel &Executive Meeting Center 1750 Rockville Pike, Rockville, MD,20852-1699 Final Report PDF Hotel Information Info on how to reserve a...

    20. Large Scale Computing and Storage Requirements for Basic Energy...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SciencesAn BES ASCR NERSC WorkshopFebruary 9-10, 2010... Read More Workshop Logistics Workshop location, directions, and registration information are included here......

    1. Large Scale Computing and Storage Requirements for High Energy Physics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      for High Energy Physics Accelerator Physics P. Spentzouris, Fermilab Motivation Accelerators enable many important applications, both in basic research and applied sciences Different machine attributes are emphasized for different applications * Different particle beams and operation principles * Different energies and intensities Accelerator science and technology objectives for all applications * Achieve higher energy and intensity, faster and cheaper machine design, more reliable operation a

    2. Large Scale Computing and Storage Requirements for Nuclear Physics...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      must respond to their e-mail invitation. The Group Registration Deadline for the hotel is May 4, 2011. An official letter of invitation is available (PDF). Workshop Agenda...

    3. Requirements for Wind Development

      Broader source: Energy.gov [DOE]

      In 2015 Oklahoma amended the Oklahoma Wind Energy Development Act. The amendments added new financial security requirements, setback requirements, and notification requirements for wind energy...

    4. BER Requirements Review 2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    5. Network Requirements Reviews

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    6. Sandia National Laboratories: Research: Research Foundations: Computing and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Our approach: vertically integrated, scalable supercomputing. Goal: increase capability while reducing the space and power requirements of future computing systems by changing the nature of computing devices,

    7. Determining collective barrier operation skew in a parallel computer

      DOE Patents [OSTI]

      Faraj, Daniel A.

      2015-11-24

      Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
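      The claimed measurement procedure reads as a compact algorithm. The sketch below is our illustration of it, not the patented implementation: mpi4py stands in for the machine's collective layer (an assumption; the patent is not specific to MPI), and barrier_skew is a hypothetical name. Each rank takes one turn as the delayed node, times its own barrier completion, and the skew is the spread of those times.

      from mpi4py import MPI
      import time

      def barrier_skew(comm, delay_s=0.01):
          """For each rank in turn: that rank enters the barrier after a delay,
          all others enter immediately, and the delayed rank times how long the
          barrier takes to complete. Skew = max - min of those times."""
          rank = comm.Get_rank()
          my_time = None
          for delayed in range(comm.Get_size()):
              comm.Barrier()                       # align ranks before each trial
              if rank == delayed:
                  time.sleep(delay_s)              # enter the collective barrier late
                  t0 = time.perf_counter()
                  comm.Barrier()                   # the measured barrier
                  my_time = time.perf_counter() - t0
              else:
                  comm.Barrier()
          times = comm.gather(my_time, root=0)     # one completion time per delayed node
          return max(times) - min(times) if rank == 0 else None

      if __name__ == "__main__":
          skew = barrier_skew(MPI.COMM_WORLD)
          if skew is not None:
              print(f"barrier operation skew: {skew:.6e} s")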

    8. Determining collective barrier operation skew in a parallel computer

      DOE Patents [OSTI]

      Faraj, Daniel A.

      2015-12-24

      Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.

    9. Impact analysis on a massively parallel computer

      SciTech Connect (OSTI)

      Zacharia, T.; Aramayo, G.A.

      1994-06-01

      Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

    10. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In the early 2000s, members of Fermilab's Computing Division looked ahead to experiments like those at the Large Hadron Collider, which would collect more data than any computing ...

    11. Mira Computational Readiness Assessment | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Assess your project's computational readiness for Mira: a review of the following computational readiness points in relation to scaling, porting, I/O, memory

    12. Shape-memory transformations of NiTi: Minimum-energy pathways...

      Office of Scientific and Technical Information (OSTI)

      Shape-memory transformations of NiTi: Minimum-energy pathways between austenite, martensites, and kinetically limited intermediate states ...

    13. Title 43 CFR 3206.12 What are the Minimum and Maximum Lease Sizes...

      Open Energy Info (EERE)

      .12 What are the Minimum and Maximum Lease Sizes? OpenEI Reference Library. Legal Document - Federal Regulation: Title 43...

    14. From Fjords to Open Seas: Ecological Genomics of Expanding Oxygen Minimum Zones (2010 JGI User Meeting)

      ScienceCinema (OSTI)

      Hallam, Steven

      2011-04-26

      Steven Hallam of the University of British Columbia presents "From Fjords to Open Seas: Ecological Genomics of Expanding Oxygen Minimum Zones" on March 24, 2010, at the 5th Annual DOE JGI User Meeting.

    15. Shape-memory transformations of NiTi: Minimum-energy pathways between

      Office of Scientific and Technical Information (OSTI)

      austenite, martensites, and kinetically limited intermediate states. NiTi is the most used shape-memory alloy; nonetheless, a lack of understanding remains regarding

    16. Shape-memory transformations of NiTi: Minimum-energy pathways between

      Office of Scientific and Technical Information (OSTI)

      austenite, martensites, and kinetically limited intermediate states. NiTi is the most used shape-memory alloy; nonetheless, a

    17. Shape-memory transformations of NiTi: Minimum-energy pathways between

      Office of Scientific and Technical Information (OSTI)

      austenite, martensites, and kinetically limited intermediate states.

    18. Shape-memory transformations of NiTi: Minimum-energy pathways between

      Office of Scientific and Technical Information (OSTI)

      austenite, martensites, and kinetically limited intermediate states. NiTi is the most used shape-memory alloy; nonetheless, a lack of understanding remains regarding the associated

    19. Revenue-requirement approach to analysis of financing alternatives

      SciTech Connect (OSTI)

      Ewers, B.J.; Wheaton, K.E.

      1984-07-19

      The minimum revenue requirement discipline (MRRD) is accepted throughout the utility industry as a tool to be used for economic decisions and rate making. At least one utility company has also used MRRD in the analysis of financing alternatives. This article was written to show the versatility of the revenue requirement discipline. It demonstrates that this methodology is appropriate not only for evaluating traditional capital budgeting decisions, but also for identifying the most economic financing alternatives. 5 references, 4 figures, 4 tables.
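      As a sketch of how a revenue-requirement comparison works, the illustration below computes the present value of annual revenue requirements for each financing mix and prefers the minimum. This is a simplified toy model under assumed straight-line depreciation and no taxes, not the article's actual method; all names and parameters are invented for the example.

      def pv_revenue_requirement(capital, debt_fraction, debt_rate, equity_rate,
                                 life, discount_rate):
          """Present value of annual revenue requirements for a plant with
          straight-line depreciation, financed by a debt/equity mix."""
          pv = 0.0
          book = capital
          dep = capital / life
          for year in range(1, life + 1):
              # return on the beginning-of-year rate base, split by financing
              ret = book * (debt_fraction * debt_rate +
                            (1 - debt_fraction) * equity_rate)
              rev_req = dep + ret          # recover capital plus pay investors
              pv += rev_req / (1 + discount_rate) ** year
              book -= dep                  # rate base declines with depreciation
          return pv

      # compare two hypothetical financing alternatives for a $100M asset
      for debt in (0.4, 0.6):
          pv = pv_revenue_requirement(100e6, debt, 0.06, 0.12, 30, 0.08)
          print(f"debt fraction {debt}: PV of revenue requirements ${pv/1e6:.1f}M")

      Under these assumed rates the higher-debt alternative yields the lower present value, which is exactly the kind of ranking MRRD is used to produce.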

    20. Computation Modeling and Assessment of Nanocoatings for Ultra Supercritical Boilers

      SciTech Connect (OSTI)

      J. Shingledecker; D. Gandy; N. Cheruvu; R. Wei; K. Chan

      2011-06-21

      Forced outages and boiler unavailability at coal-fired fossil plants are most often caused by fire-side corrosion of boiler waterwalls and tubing. Reliable coatings are required for ultra-supercritical (USC) applications to mitigate corrosion, since these boilers will operate at much higher temperatures and pressures than supercritical boilers (565 C at 24 MPa). Computational modeling efforts have been undertaken to design and assess potential Fe-Cr-Ni-Al systems to produce stable nanocrystalline coatings that form a protective, continuous scale of either Al{sub 2}O{sub 3} or Cr{sub 2}O{sub 3}. The computational modeling results identified a new series of Fe-25Cr-40Ni, with or without 10 wt.% Al, nanocrystalline coatings that maintain long-term stability by forming a diffusion-barrier layer at the coating/substrate interface. The computational modeling predictions of microstructure, formation of a continuous Al{sub 2}O{sub 3} scale, inward Al diffusion, grain growth, and sintering behavior were validated with experimental results. Advanced coatings, such as MCrAl (where M is Fe, Ni, or Co) nanocrystalline coatings, have been processed using different magnetron sputtering deposition techniques. Several coating trials were performed; among the processing methods evaluated, the DC pulsed magnetron sputtering technique produced the best-quality coating, with a minimum number of shallow defects, and the results of multiple deposition trials showed that the process is repeatable. The cyclic oxidation test results revealed that the nanocrystalline coatings offer better oxidation resistance, in terms of weight loss, localized oxidation, and formation of mixed oxides in the Al{sub 2}O{sub 3} scale, than widely used MCrAlY coatings. However, the ultra-fine grain structure in these coatings, consistent with the computational model predictions, resulted in accelerated Al diffusion from the coating into the substrate. An effective diffusion-barrier interlayer coating was developed to prevent inward Al diffusion. The fire-side corrosion test results showed that nanocrystalline coatings with a minimum number of defects have great potential to provide corrosion protection. The coating tested in the most aggressive environment showed no evidence of coating spallation and/or corrosion attack after 1050 hours of exposure. In contrast, evidence of coating spallation in isolated areas, and corrosion attack of the base metal in the spalled areas, was observed after 500 hours. These contrasting results after 500 and 1050 hours of exposure suggest that the premature coating spallation in isolated areas may be related to variation in coating defects between the samples. It is suspected that cauliflower-type defects in the coating were responsible for the spallation in isolated areas. Thus, a defect-free, good-quality coating is key to the long-term durability of nanocrystalline coatings in corrosive environments, and additional process optimization work is required to produce defect-free coatings prior to development of a coating application method for production parts.

    1. Sandia Energy - Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    2. Cyber Security Process Requirements Manual

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2008-08-12

      The Manual establishes the minimum implementation standards for cyber security management processes throughout the Department. No cancellation. Admin Chg 1 dated 9-1-09.

    3. BER Requirements Review 2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    4. ASCR Requirements Review 2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    5. Science Requirements Process

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    6. ASCR Science Network Requirements

      SciTech Connect (OSTI)

      Dart, Eli; Tierney, Brian

      2009-08-24

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In April 2009 ESnet and the Office of Advanced Scientific Computing Research (ASCR), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by ASCR. The ASCR facilities anticipate significant increases in wide-area bandwidth utilization, driven largely by the increased capabilities of computational resources and the wide scope of collaboration that is a hallmark of modern science. Many scientists move data sets between facilities for analysis, and in some cases (for example the Earth System Grid and the Open Science Grid), data distribution is an essential component of the use of ASCR facilities by scientists. Due to the projected growth in wide-area data transfer needs, the ASCR supercomputer centers all expect to deploy and use 100 gigabit per second networking technology for wide-area connectivity as soon as that deployment is financially feasible. In addition to the network connectivity that ESnet provides, the ESnet Collaboration Services (ECS) are critical to several science communities. ESnet identity and trust services, such as the DOEGrids certificate authority, are widely used both by the supercomputer centers and by collaborations such as the Open Science Grid (OSG) and the Earth System Grid (ESG). Ease of use is a key determinant of the scientific utility of network-based services. Therefore, a key enabling aspect for scientists' beneficial use of high performance networks is a consistent, widely deployed, well-maintained toolset optimized for wide-area, high-speed data transfer (e.g., GridFTP) that allows scientists to easily utilize the services and capabilities the network provides. Network test and measurement is an important part of ensuring that these tools and network services are functioning correctly. One example of a tool in this area is the recently developed perfSONAR, which has already shown its usefulness in fault diagnosis during the recent deployment of high-performance data movers at NERSC and ORNL. On the other hand, it is clear that there is significant work to be done in the area of authentication and access control: there are currently compatibility problems and differing requirements between the authentication systems in use at different facilities, and the policies and mechanisms in use at different facilities are sometimes in conflict. Finally, long-term software maintenance was of concern for many attendees. Scientists rely heavily on a large deployed base of software that does not have secure programmatic funding. Software packages for which this is true include data transfer tools such as GridFTP as well as identity management and other software infrastructure that forms a critical part of the Open Science Grid and the Earth System Grid.

    7. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
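      At its core, the patented scheme is re-routing over a redundant topology: prune the defective link from the primary network and, if the primary is cut, fall back to the independent secondary network. The sketch below is a toy graph model of that idea (our illustration; the patent's networks are hardware interconnects, and route and fault_tolerant_route are hypothetical names):

      from collections import deque

      def route(network, src, dst):
          """Breadth-first search for a path from src to dst in one network,
          given as an adjacency-list dict."""
          prev, frontier = {src: None}, deque([src])
          while frontier:
              node = frontier.popleft()
              if node == dst:
                  path = []
                  while node is not None:
                      path.append(node)
                      node = prev[node]
                  return path[::-1]
              for nbr in network.get(node, ()):
                  if nbr not in prev:
                      prev[nbr] = node
                      frontier.append(nbr)
          return None

      def fault_tolerant_route(net_a, net_b, src, dst, defective_links):
          """Route in the primary network with defective links removed; if the
          primary is disconnected, route through the secondary network."""
          pruned = {u: [v for v in vs if (u, v) not in defective_links]
                    for u, vs in net_a.items()}
          return route(pruned, src, dst) or route(net_b, src, dst)

      net_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # primary: a line of 4 nodes
      net_b = {0: [3], 3: [0, 2], 2: [3, 1], 1: [2]}   # independent second network
      print(fault_tolerant_route(net_a, net_b, 0, 3, {(1, 2), (2, 1)}))  # [0, 3]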

    8. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    9. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    10. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    11. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Security NERSC Computer Security NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or...

    12. Requirements Review Reports

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    13. Regulators, Requirements, Statutes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Clean Air Act (CAA): requirements for air quality and air emissions from facility operations. Clean Water Act (CWA): requirements for water quality and water discharges from facility...

    14. Cosmic Reionization On Computers | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      its Cosmic Reionization On Computers (CROC) project, using the Adaptive Refinement Tree (ART) code as its main simulation tool. An important objective of this research is to make...

    15. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    16. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks – are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each; the talk abstracts and speaker bios are listed below. The talks will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with presentation material from the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff.
      1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as Grid) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past nine years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space.
      2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations, with spectacular results. Grid computing and its evolution into application virtualization is discussed, and how this is key to the next-generation data center. Speaker bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.
      3. Opportunities for gLite in finance and related industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational finance.
      4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed-memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge, such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team engineer and operate high performance cluster applications for computationally intensive problems in financial risk management.
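      Egloff's observation that basic Monte Carlo simulation is embarrassingly parallel is easy to demonstrate: trials share no state, so they split across workers with no communication. The sketch below is illustrative only; the obligor-default model and all parameters are invented for the example, not taken from the talk.

      import random
      from multiprocessing import Pool

      def portfolio_loss(args):
          """One Monte Carlo trial: total loss over a portfolio of obligors,
          each defaulting independently with probability p."""
          seed, n_obligors, p, exposure = args
          rng = random.Random(seed)
          return sum(exposure for _ in range(n_obligors) if rng.random() < p)

      def simulate(n_trials, n_obligors=1000, p=0.02, exposure=1.0, workers=4):
          """Embarrassingly parallel: independent trials fan out across worker
          processes, and only the per-trial losses come back."""
          jobs = [(seed, n_obligors, p, exposure) for seed in range(n_trials)]
          with Pool(workers) as pool:
              losses = pool.map(portfolio_loss, jobs)
          losses.sort()
          return losses[int(0.99 * n_trials)]   # a crude 99% loss quantile

      if __name__ == "__main__":
          print("99% credit loss quantile:", simulate(10_000))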

    19. Molecular Science Computing: 2010 Greenbook

      SciTech Connect (OSTI)

      De Jong, Wibe A.; Cowley, David E.; Dunning, Thom H.; Vorpagel, Erich R.

      2010-04-02

      This 2010 Greenbook outlines the science drivers for performing integrated computational environmental molecular research at EMSL and defines the next-generation HPC capabilities that must be developed at the MSC to address this critical research. The EMSL MSC Science Panel used EMSL’s vision and science focus and white papers from current and potential future EMSL scientific user communities to define the scientific direction and resulting HPC resource requirements presented in this 2010 Greenbook.

    20. Allocations | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Allocation Management Determining Allocation Requirements Querying Allocations Using cbank Mira/Cetus/Vesta Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Allocations ALCF resources are primarily used for DOE INCITE and ALCC awarded projects. Additional information on the INCITE program can be found on the DOE INCITE website and the ALCC program can be found on the Office of

    1. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    2. Richard Gerber! Harvey Wasserman! Requirements Reviews Organizers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Requirements Reviews: 1½-day reviews with each Program Office * Computing and storage requirements for next 5 years * Participants - DOE ADs & Program Managers - Leading NERSC users & key potential users - NERSC staff. High Energy Physics, Fusion Research, Adv. Comp. Science Research: Jan. 2014; Basic Energy Sciences: Oct. 2014. Reports from 8 requirements reviews have been published: http://www.nersc.gov/science/hpc-requirements-reviews/reports/

    3. Computers-BSA.ppt

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Computers! Boy Scout Troop 405! What is a computer? Is this a computer? Charles Babbage: Father of the Computer! 1830s: designed mechanical calculators to reduce human error. *Input device *Memory to store instructions and results *A processor *Output device! Vacuum Tube! Edison (1883) & Lee de Forest (1906) discovered that "vacuum tubes" could serve as electrical switches and amplifiers. A switch can be ON (1) or OFF (0). Electronic computers use Boolean logic (George Boole, 1850)
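The slide's central idea, that a switch holds 0 or 1 and switches combine through Boolean logic, fits in a few lines of Python (our own illustration, not part of the presentation):

```
# Each "switch" holds 0 or 1; AND, OR, and NOT combine them (Boolean logic).
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={a & b}  OR={a | b}  NOT a={1 - a}")
```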

    4. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Fluid Dynamics Overview of CFD: Video Clip with Audio Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical

    5. Theory & Computation > Research > The Energy Materials Center...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Theory & Computation In This Section: Computation & Simulation...

    6. Point sensitive NMR imaging system using a magnetic field configuration with a spatial minimum

      DOE Patents [OSTI]

      Eberhard, Philippe H. (El Cerrito, CA)

      1985-01-01

A point-sensitive NMR imaging system (10) in which a main solenoid coil (11) produces a relatively strong and substantially uniform magnetic field, and a pair of perturbing coils (PZ1 and PZ2) powered by current in the same direction superimposes a pair of relatively weak perturbing fields on the main field to produce a resultant point of minimum field strength at a desired location in a direction along the Z-axis. Two other pairs of perturbing coils (PX1, PX2; PY1, PY2) superimpose relatively weak field gradients on the main field in directions along the X- and Y-axes to locate the minimum field point at a desired location in a plane normal to the Z-axis. An RF generator (22) irradiates a tissue specimen in the field with radio frequency energy so that desired nuclei in a small volume at the point of minimum field strength will resonate.
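As a toy illustration of the geometry (a one-dimensional on-axis model with made-up coil parameters, not the patent's design), one can superimpose the on-axis Biot-Savart fields of a weak coil pair on a uniform main field and locate the resulting field-strength minimum numerically:

```
# Toy on-axis model: a uniform main field plus a weak coaxial coil pair whose
# field opposes it (an illustrative choice). With the pair spaced closer than
# the Helmholtz condition, the perturbation peaks once at the midpoint, so the
# total field strength has a single minimum there.
import numpy as np

MU0 = 4e-7 * np.pi
I, R = 100.0, 0.5   # coil current (A) and radius (m) -- made up
B_MAIN = 1.0        # uniform main solenoid field (T)

def loop_field(z, z0):
    """On-axis field of a circular current loop centered at z0 (Biot-Savart)."""
    return MU0 * I * R**2 / (2.0 * (R**2 + (z - z0) ** 2) ** 1.5)

z = np.linspace(-1.0, 1.0, 2001)
b_total = B_MAIN - (loop_field(z, -0.2) + loop_field(z, +0.2))
print(f"field minimum at z = {z[np.argmin(b_total)]:+.3f} m")  # ~ 0.000 m
```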

    7. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw (Los Alamos, NM); Gokhale, Maya B. (Los Alamos, NM); McCabe, Kevin Peter (Los Alamos, NM)

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    8. Improving CSE Software through Reproducibility Requirements. (Conference) |

      Office of Scientific and Technical Information (OSTI)

SciTech Connect. Title: Improving CSE Software through Reproducibility Requirements. Abstract not provided. Authors: Heroux, Michael Allen. Publication Date: 2011-02-01. OSTI Identifier: 1109282. Report Number(s): SAND2011-1158C 471476. DOE Contract Number: AC04-94AL85000. Resource Type: Conference. Resource Relation: Conference: 4th International Workshop on Software Engineering for Computational

    9. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at

    10. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

Government-owned buildings of all types had, on average, more than one computer per person (1,104 computers per thousand employees). They also had a fairly high ratio of...

    11. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    12. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    13. Getting Computer Accounts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts When you first arrive at the lab, you will be presented with lots of forms that must be read and signed in order to get an ID and computer access. You must ensure...

    14. Requirements Management Database

      Energy Science and Technology Software Center (OSTI)

      2009-08-13

This application is a simplified and customized version of the RBA and CTS databases, used to capture federal, site, and facility requirements and to link them to the actions that must be performed to maintain compliance with contractual and other requirements.

    15. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    16. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    17. Conditions and Requirements

      Broader source: Energy.gov [DOE]

      Conditions and requirements for Energy Efficiency and Renewable Energy (EERE) Postdoctoral Research Awards are spelled out below:

    18. ARM - Reporting Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Statistics Reporting Requirements 2016 Quarterly Reports First Quarter (PDF) Second Quarter (PDF) Third Quarter (PDF) Fourth Quarter (PDF) Past Quarterly Reports Historical Statistics Field Campaigns Operational Visitors and Accounts Data Archive and Usage (October 1995 - Present) Reporting Requirements As a matter of government policy, all U.S. Department of Energy user facilities, including the ARM Climate Research Facility, have a number of reporting requirements. The Facility is required to

    19. Housing standards: change to HUD 4930. 2 Intermediate Minimum Property Standard (IMPS) supplement for solar heating and domestic hot water systems

      SciTech Connect (OSTI)

      Not Available

      1982-08-17

This rule is made to provide an updating, clarification, and improvement of requirements contained in HUD Handbook 4930.2, Intermediate Minimum Property Standards (IMPS) Supplement, concerning solar heating and domestic hot water systems. Changes pertain to fire protection, penetration, roof covering, conditions of use, thermal stability, rain resistance, ultraviolet stability, and compatibility with transfer medium. Additional changes cover applicable standards, labeling, flash point, chemical and physical compatibility, flame spread classification, lightning protection, and parts of a solar energy system. Altogether, there are over 50 changes, some of which apply to tables and worksheets. Footnotes are included.

    20. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences and Engineering The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    1. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    2. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
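The abstract describes a data structure more than an algorithm. A minimal sketch of one plausible shape for it (a hypothetical API of ours, not the patented implementation) logs each event together with its inverse action, supports search over the history, and undoes selected events most-recent-first:

```
# Hypothetical sketch of a logbook with search and undo; not the patent's code.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:
    description: str
    undo: Callable[[], None]   # inverse action recorded alongside the event

@dataclass
class Logbook:
    history: List[Event] = field(default_factory=list)

    def log(self, description: str, undo: Callable[[], None]) -> None:
        self.history.append(Event(description, undo))

    def search(self, term: str) -> List[Event]:
        return [e for e in self.history if term in e.description]

    def undo_events(self, events: List[Event]) -> None:
        for e in reversed(events):      # undo most recent first
            e.undo()
            self.history.remove(e)

# Usage: log a file rename together with the action that reverses it.
book = Logbook()
book.log("renamed a.txt -> b.txt", undo=lambda: print("renaming b.txt -> a.txt"))
book.undo_events(book.search("renamed"))
```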

    3. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology Search Site submit Contacts | Sponsors Mathematical and Computational Epidemiology Los Alamos National Laboratory change this image and alt text Menu About Contact Sponsors Research Agent-based Modeling Mixing Patterns, Social Networks Mathematical Epidemiology Social Internet Research Uncertainty Quantification Publications People Mathematical and Computational Epidemiology (MCEpi) Quantifying model uncertainty in agent-based simulations for

    4. NERSC Requirements Workshop November

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      to show major growth is studies of Higgs particle physics. A burst of computational activity about 15 years ago died out, but anything other than a standard model Higgs at...

    5. Computational Fluid Dynamics Library

      Energy Science and Technology Software Center (OSTI)

      2005-03-04

CFDLib05 is the Los Alamos Computational Fluid Dynamics LIBrary. This is a collection of hydrocodes using a common data structure and a common numerical method, for problems ranging from single-field, incompressible flow, to multi-species, multi-field, compressible flow. The data structure is multi-block, with a so-called structured grid in each block. The numerical method is a Finite-Volume scheme employing a state vector that is fully cell-centered. This means that the integral form of the conservation laws is solved on the physical domain that is represented by a mesh of control volumes. The typical control volume is an arbitrary quadrilateral in 2D and an arbitrary hexahedron in 3D. The Finite-Volume scheme is for time-unsteady flow and remains well coupled by means of time and space centered fluxes; if a steady state solution is required, the problem is integrated forward in time until the user is satisfied that the state is stationary.
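To make the description concrete, here is a minimal one-dimensional analogue of the scheme (our own toy example; CFDLib itself is multi-block and handles multi-field, compressible flow): a fully cell-centered state advanced by finite-volume flux differences and integrated forward in time until the state is judged stationary.

```
# 1D linear advection with a cell-centered finite-volume (upwind) update.
import numpy as np

nx, a, cfl = 100, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
u = np.where(np.linspace(0.0, 1.0, nx) < 0.5, 1.0, 0.0)  # cell-averaged state

for step in range(20000):
    flux = a * np.roll(u, 1)                  # upwind flux at each cell's left face
    u_next = u - dt / dx * (np.roll(flux, -1) - flux)  # right-face minus left-face
    if np.max(np.abs(u_next - u)) < 1e-12:    # stationary: steady state reached
        break
    u = u_next
```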

    6. Extreme Scale Computing to Secure the Nation

      SciTech Connect (OSTI)

      Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

      2009-11-10

Since the dawn of modern electronic computing in the mid-1940s, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high-performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program under the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today.
In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT), together with the U.S. administration's promise of a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile, all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence in its safety and reliability, without reliance upon calibration with past or future test data, is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be de

    7. Cyber Security Process Requirements Manual

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2008-08-12

      The Manual establishes minimum implementation standards for cyber security management processes throughout the Department. Admin Chg 1 dated 9-1-09; Admin Chg 2 dated 12-22-09. Canceled by DOE O 205.1B. No cancellations.

    8. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr{sup 3+}, regularly located in the lattice of the orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    9. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    10. Computation Directorate 2007 Annual Report

      SciTech Connect (OSTI)

      Henson, V E; Guse, J A

      2008-03-06

      If there is a single word that both characterized 2007 and dominated the thoughts and actions of many Laboratory employees throughout the year, it is transition. Transition refers to the major shift that took place on October 1, when the University of California relinquished management responsibility for Lawrence Livermore National Laboratory (LLNL), and Lawrence Livermore National Security, LLC (LLNS), became the new Laboratory management contractor for the Department of Energy's (DOE's) National Nuclear Security Administration (NNSA). In the 55 years under the University of California, LLNL amassed an extraordinary record of significant accomplishments, clever inventions, and momentous contributions in the service of protecting the nation. This legacy provides the new organization with a built-in history, a tradition of excellence, and a solid set of core competencies from which to build the future. I am proud to note that in the nearly seven years I have had the privilege of leading the Computation Directorate, our talented and dedicated staff has made far-reaching contributions to the legacy and tradition we passed on to LLNS. Our place among the world's leaders in high-performance computing, algorithmic research and development, applications, and information technology (IT) services and support is solid. I am especially gratified to report that through all the transition turmoil, and it has been considerable, the Computation Directorate continues to produce remarkable achievements. Our most important asset--the talented, skilled, and creative people who work in Computation--has continued a long-standing Laboratory tradition of delivering cutting-edge science even in the face of adversity. The scope of those achievements is breathtaking, and in 2007, our accomplishments span an amazing range of topics. From making an important contribution to a Nobel Prize-winning effort to creating tools that can detect malicious codes embedded in commercial software; from expanding BlueGene/L, the world's most powerful computer, by 60% and using it to capture the most prestigious prize in the field of computing, to helping create an automated control system for the National Ignition Facility (NIF) that monitors and adjusts more than 60,000 control and diagnostic points; from creating a microarray probe that rapidly detects virulent high-threat organisms, natural or bioterrorist in origin, to replacing large numbers of physical computer servers with small numbers of virtual servers, reducing operating expense by 60%, the people in Computation have been at the center of weighty projects whose impacts are felt across the Laboratory and the DOE community. The accomplishments I just mentioned, and another two dozen or so, make up the stories contained in this report. While they form an exceptionally diverse set of projects and topics, it is what they have in common that excites me. They share the characteristic of being central, often crucial, to the mission-driven business of the Laboratory. Computational science has become fundamental to nearly every aspect of the Laboratory's approach to science and even to the conduct of administration. It is difficult to consider how we would proceed without computing, which occurs at all scales, from handheld and desktop computing to the systems controlling the instruments and mechanisms in the laboratories to the massively parallel supercomputers. The reasons for the dramatic increase in the importance of computing are manifest. 
Practical, fiscal, or political realities make the traditional approach to science, the cycle of theoretical analysis leading to experimental testing, leading to adjustment of theory, and so on, impossible, impractical, or forbidden. How, for example, can we understand the intricate relationship between human activity and weather and climate? We cannot test our hypotheses by experiment, which would require controlled use of the entire earth over centuries. It is only through extremely intricate, detailed computational simulation that we can test our theories, and simulati

    11. THE TURBULENT CASCADE AND PROTON HEATING IN THE SOLAR WIND DURING SOLAR MINIMUM

      SciTech Connect (OSTI)

      Coburn, Jesse T.; Smith, Charles W.; Vasquez, Bernard J.; Stawarz, Joshua E.; Forman, Miriam A. E-mail: Charles.Smith@unh.edu E-mail: Joshua.Stawarz@Colorado.edu

      2012-08-01

The recently protracted solar minimum provided years of interplanetary data largely free of association with observed large-scale transient behavior on the Sun. With large-scale shear at 1 AU generally isolated to corotating interaction regions, it is reasonable to ask whether the solar wind is significantly turbulent at this time. We perform a series of third-moment analyses using data from the Advanced Composition Explorer. We show that the solar wind at 1 AU is just as turbulent as at any other time in the solar cycle. Specifically, the turbulent cascade of energy scales in the same manner, proportional to the product of wind speed and temperature. Energy cascade rates during solar minimum average a factor of 2-4 higher than during solar maximum, but we contend that this is likely the result of having a different admixture of high-latitude sources.
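Third-moment analyses of this kind rest on exact scaling laws descended from Kolmogorov's 4/5 law. In the isotropic hydrodynamic form (a sketch of the idea, not the paper's exact MHD expressions), the measured third moment scales linearly with the lag, and the slope gives the cascade (heating) rate:

```
D_3(\ell) \;\equiv\; \left\langle \delta u_\parallel(\ell)\,\lvert \delta\mathbf{u}(\ell)\rvert^{2} \right\rangle
\;=\; -\tfrac{4}{3}\,\epsilon\,\ell ,
```

so fitting the observed D_3 against \ell yields \epsilon directly, which is how cascade rates at solar minimum and maximum can be compared.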

    12. Cielo Computational Environment Usage Model With Mappings to ACE

      Office of Scientific and Technical Information (OSTI)

Requirements for the General Availability User Environment Capabilities Release Version 1.1 (Technical Report) | SciTech Connect. Title: Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1.

    14. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing...

    15. ASCR Requirements Review 2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ASCR Program Office. These requirements will serve as input to the ESnet architecture and planning processes, and will help ensure that ESnet continues to provide world-class...

    16. BES Requirements Review 2014

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      BES Program Office. These requirements will serve as input to the ESnet architecture and planning processes, and will help ensure that ESnet continues to provide world-class...

    17. Residential Solar Permit Requirements

      Broader source: Energy.gov [DOE]

      Washington's State Building Code sets requirements for the installation, inspection, maintenance and repair of solar photovoltaic (PV) energy systems. Local jurisdictions have the authority to...

    18. Required Annual Notices

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Required Annual Notices The Women's Health and Cancer Rights Act of 1998 (WHCRA) The medical programs sponsored by LANS will not restrict benefits if you or your dependent...

    19. HPC Requirements Reviews

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Reviews: Target 2017 Requirements Reviews: Target 2014 Overview Published Reports Case Study FAQs NERSC HPC Achievement Awards Accelerator Science Astrophysics & Cosmology...

    20. Transuranic Waste Requirements

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1999-07-09

      The guide provides criteria for determining if a waste is to be managed in accordance with DOE M 435.1-1, Chapter III, Transuranic Waste Requirements.

    1. Internal combustion engines: Computer applications. (Latest citations from the EI Compendex plus database). Published Search

      SciTech Connect (OSTI)

      Not Available

      1993-10-01

      The bibliography contains citations concerning the application of computers and computerized simulations in the design, analysis, operation, and evaluation of various types of internal combustion engines and associated components and apparatus. Special attention is given to engine control and performance. (Contains a minimum of 67 citations and includes a subject term index and title list.)

    2. Estimate of Technical Potential for Minimum Efficiency Performance Standards in 13 Major World Economies

      SciTech Connect (OSTI)

      Letschert, Virginie; Desroches, Louis-Benoit; Ke, Jing; McNeil, Michael

      2012-07-01

As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world's major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The best available technology (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.

    3. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the RIKEN/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long range plans to provide Leadership-class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    4. Genome informatics: Requirements and challenges

      SciTech Connect (OSTI)

      Robbins, R.J.

      1993-12-31

Informatics of some kind will play a role in every aspect of the Human Genome Project (HGP): data acquisition, data analysis, data exchange, data publication, and data visualization. What are the real requirements and challenges? The primary requirement is clear thinking and the main challenge is design. If good design is lacking, the price will be failure of genome informatics and ultimately failure of the genome project itself. Scientists need good designs to deliver the tools necessary for acquiring and analyzing DNA sequences. As these tools become more efficient, they will need new tools for comparative genomic analyses. To make the tools work, the scientists will need to address and solve nomenclature issues that are essential, if also tedious. They must devise systems that will scale gracefully with the increasing flow of data. The scientists must be able to move data easily from one system to another, with no loss of content. As scientists, they will have failed in their responsibility to share results, should repeating experiments ever become preferable to searching the literature. Their databases must become a new kind of scientific literature and the scientists must develop ways to make electronic data publishing as routine as traditional journal publishing. Ultimately, they must build systems so advanced that they are virtually invisible. In summary, the HGP can be considered the most ambitious, most audacious information-management project ever undertaken. In the HGP, computers will not merely serve as tools for cataloging existing knowledge. Rather, they will serve as instruments, helping to create new knowledge by changing the way the scientists see the biological world. Computers will allow them to see genomes, just as radio telescopes let them see quasars and electron microscopes let them see viruses.

    5. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    6. General Responsibilities and Requirements

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1999-07-09

      The material presented in this guide provides suggestions and acceptable ways of implementing DOE M 435.1-1 and should not be viewed as additional or mandatory requirements. The objective of the guide is to ensure that responsible individuals understand what is necessary and acceptable for implementing the requirements of DOE M 435.1-1.

    7. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    8. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

    9. Announcement of Computer Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      All Other Editions Are Obsolete UNITED STATES DEPARTMENT OF ENERGY ANNOUNCEMENT OF COMPUTER SOFTWARE OMB Control Number 1910-1400 (OMB Burden Disclosure Statement is on last...

    10. CFD [computational fluid dynamics] And Safety Factors. Computer modeling of complex processes needs old-fashioned experiments to stay in touch with reality.

      SciTech Connect (OSTI)

      Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.

      2012-10-07

      Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged, centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied significantly from the average.
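The correction the authors argue for can be illustrated with made-up numbers (not the Savannah River data): derive a safety factor from the spread of the measured blending times rather than trusting the CFD mean alone.

```
# Illustrative only: a bounding safety factor from experimental scatter.
import numpy as np

cfd_mean = 100.0                                    # CFD-predicted blending time (s)
measured = np.array([ 82.,  95., 118., 141., 103.,  # bench-scale blending times (s)
                     128.,  90., 150., 110.,  99.])

safety_factor = measured.max() / cfd_mean           # simplest bounding choice
print(f"use t_blend = {safety_factor:.2f} x CFD prediction")
```

A statistical tolerance interval on the measurements would be a less conservative alternative to taking the worst observed case; either way, the factor is grounded in experiment, which is the article's point.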

    11. Techno-Economic Analysis of the Deacetylation and Disk Refining Process. Characterizing the Effect of Refining Energy and Enzyme Usage on Minimum Sugar Selling Price and Minimum Ethanol Selling Price

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Chen, Xiaowen; Shekiro, Joseph; Pschorn, Thomas; Sabourin, Marc; Tucker, Melvin P.; Tao, Ling

      2015-10-29

A novel, highly efficient deacetylation and disk refining (DDR) process to liberate fermentable sugars from biomass was recently developed at the National Renewable Energy Laboratory (NREL). The DDR process consists of a mild, dilute alkaline deacetylation step followed by low-energy-consumption disk refining. The DDR corn stover substrates achieved high process sugar conversion yields, at low to modest enzyme loadings, and also produced high sugar concentration syrups at high initial insoluble solid loadings. The sugar syrups derived from corn stover are highly fermentable due to low concentrations of fermentation inhibitors. The objective of this work is to evaluate the economic feasibility of the DDR process through a techno-economic analysis (TEA). A large array of experiments designed using a response surface methodology was carried out to investigate the two major cost-driven operational parameters of the novel DDR process: refining energy and enzyme loadings. The boundary conditions for refining energy (128–468 kWh/ODMT), cellulase (Novozyme's CTec3) loading (11.6–28.4 mg total protein/g of cellulose), and hemicellulase (Novozyme's HTec3) loading (0–5 mg total protein/g of cellulose) were chosen to cover the most commercially practical operating conditions. The sugar and ethanol yields were modeled with good adequacy, showing a positive linear correlation between those yields and refining energy and enzyme loadings. The ethanol yields ranged from 77 to 89 gallons/ODMT of corn stover. The minimum sugar selling price (MSSP) ranged from $0.191 to $0.212 per lb of 50 % concentrated monomeric sugars, while the minimum ethanol selling price (MESP) ranged from $2.24 to $2.54 per gallon of ethanol. The DDR process concept is evaluated for economic feasibility through TEA. The MSSP and MESP of the DDR process falls within a range similar to that found with the deacetylation/dilute acid pretreatment process modeled in NREL's 2011 design report. The DDR process is a much simpler process that requires less capital and maintenance costs when compared to conventional chemical pretreatments with pressure vessels. As a result, we feel the DDR process should be considered as an option for future biorefineries with great potential to be more cost-effective.
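As an illustration of the response-surface approach (synthetic numbers of ours, not NREL's measurements), fitting yield as a linear function of refining energy and enzyme loading takes a single least-squares call:

```
# Toy linear response-surface fit: yield vs. refining energy and enzyme loading.
import numpy as np

energy = np.array([128., 128., 298., 298., 468., 468.])  # kWh/ODMT
ctec3  = np.array([11.6, 28.4, 11.6, 28.4, 11.6, 28.4])  # mg protein/g cellulose
yield_ = np.array([77.0, 81.0, 80.0, 85.0, 83.0, 89.0])  # gal ethanol/ODMT (made up)

X = np.column_stack([np.ones_like(energy), energy, ctec3])
coef, *_ = np.linalg.lstsq(X, yield_, rcond=None)
print("intercept, d(yield)/d(energy), d(yield)/d(enzyme):", coef)
```

The fitted slopes are what feed the TEA: they translate extra refining energy or enzyme dollars into extra gallons, from which the MSSP and MESP trade-off follows.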

    12. Techno-Economic Analysis of the Deacetylation and Disk Refining Process. Characterizing the Effect of Refining Energy and Enzyme Usage on Minimum Sugar Selling Price and Minimum Ethanol Selling Price

      SciTech Connect (OSTI)

      Chen, Xiaowen; Shekiro, Joseph; Pschorn, Thomas; Sabourin, Marc; Tucker, Melvin P.; Tao, Ling

      2015-10-29

A novel, highly efficient deacetylation and disk refining (DDR) process to liberate fermentable sugars from biomass was recently developed at the National Renewable Energy Laboratory (NREL). The DDR process consists of a mild, dilute alkaline deacetylation step followed by low-energy-consumption disk refining. The DDR corn stover substrates achieved high process sugar conversion yields, at low to modest enzyme loadings, and also produced high sugar concentration syrups at high initial insoluble solid loadings. The sugar syrups derived from corn stover are highly fermentable due to low concentrations of fermentation inhibitors. The objective of this work is to evaluate the economic feasibility of the DDR process through a techno-economic analysis (TEA). A large array of experiments designed using a response surface methodology was carried out to investigate the two major cost-driven operational parameters of the novel DDR process: refining energy and enzyme loadings. The boundary conditions for refining energy (128-468 kWh/ODMT), cellulase (Novozyme's CTec3) loading (11.6-28.4 mg total protein/g of cellulose), and hemicellulase (Novozyme's HTec3) loading (0-5 mg total protein/g of cellulose) were chosen to cover the most commercially practical operating conditions. The sugar and ethanol yields were modeled with good adequacy, showing a positive linear correlation between those yields and refining energy and enzyme loadings. The ethanol yields ranged from 77 to 89 gallons/ODMT of corn stover. The minimum sugar selling price (MSSP) ranged from $0.191 to $0.212 per lb of 50 % concentrated monomeric sugars, while the minimum ethanol selling price (MESP) ranged from $2.24 to $2.54 per gallon of ethanol. The DDR process concept is evaluated for economic feasibility through TEA. The MSSP and MESP of the DDR process falls within a range similar to that found with the deacetylation/dilute acid pretreatment process modeled in NREL's 2011 design report. The DDR process is a much simpler process that requires less capital and maintenance costs when compared to conventional chemical pretreatments with pressure vessels. As a result, we feel the DDR process should be considered as an option for future biorefineries with great potential to be more cost-effective.

    13. User Requirements Gathered for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

...B (30X); BES: 3.0 B (21X); FES: 1.9 B (28X); HEP: 2.4 B (27X); NP: 4.9 B (81X); TOTAL: 15.6 B (34X). Historical Trend: Advanced Scientific Computing Research, Biological and Environmental...

    14. Allocation Management | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Allocation Management Determining Allocation Requirements Querying Allocations Using cbank Mira/Cetus/Vesta Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Allocation Management Allocations require management - balance checks, resource allocation, requesting more time, etc. Checking for an active allocation To determine if there is an active allocation, check Running Jobs. For

    15. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

60 Years of Computing

    16. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Capabilities Information Science, Computing, Applied Math National security ...

    17. Requirements | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Requirements Requirements Statutes 42 U.S.C. 4321: National Environmental Policy Act of 1969 42 U.S.C. 4371: Environmental Quality Improvement Act of 1970 42 U.S.C. 7401: Clean Air Act, Section 309 CEQ Regulations 40 CFR Part 1500-1508: CEQ - Regulations for Implementing NEPA DOE Regulations and Orders 10 CFR Part 1021: NEPA Implementing Procedures 10 CFR Part 1021: NEPA Rulemaking Process 10 CFR Part 1022: Compliance with Floodplain and Wetland Environmental Review Requirements DOE O 451.1B:

    18. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.

    19. SC e-journals, Computer Science

      Office of Scientific and Technical Information (OSTI)

      & Mathematical Organization Theory Computational Complexity Computational Economics Computational Management ... Technology EURASIP Journal on Information Security ...

    20. Selected Guidance & Requirements

      Broader source: Energy.gov [DOE]

      This page contains the most requested NEPA guidance and requirement documents and those most often recommended by the Office of NEPA Policy and Compliance. Documents are listed by agency, in...

    1. Promulgating Nuclear Safety Requirements

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1996-05-15

      Applies to all Nuclear Safety Requirements Adopted by the Department to Govern the Conduct of its Nuclear Activities. Cancels DOE P 410.1. Canceled by DOE N 251.85.

    2. Green Building Requirement

      Broader source: Energy.gov [DOE]

      The new standards are phased in over the course of several years with publicly-owned buildings being the first required to comply. All new construction and substantial improvements of non...

    3. Toward Molecular Catalysts by Computer

      SciTech Connect (OSTI)

      Raugei, Simone; DuBois, Daniel L.; Rousseau, Roger J.; Chen, Shentan; Ho, Ming-Hsun; Bullock, R. Morris; Dupuis, Michel

      2015-02-17

Rational design of molecular catalysts requires a systematic approach to designing ligands with specific functionality and precisely tailored electronic and steric properties. It then becomes possible to devise computer protocols to predict accurately the required properties and ultimately to design catalysts by computer. In this account we first review how thermodynamic properties such as oxidation-reduction potentials (E0), acidities (pKa), and hydride donor abilities (ΔGH-) form the basis for a systematic design of molecular catalysts for reactions that are critical for a secure energy future (hydrogen evolution and oxidation, oxygen and nitrogen reduction, and carbon dioxide reduction). We highlight how density functional theory allows us to determine and predict these properties within "chemical" accuracy (~0.06 eV for redox potentials, ~1 pKa unit for pKa values, and ~1.5 kcal/mol for hydricities). These quantities determine free energy maps and profiles associated with catalytic cycles, i.e., the relative energies of intermediates, and help us distinguish between desirable and high-energy pathways and mechanisms. Good catalysts have flat profiles that avoid high activation barriers due to low- and high-energy intermediates. We illustrate how the criterion of a flat energy profile lends itself to the prediction of design points by computer for optimum catalysts. This research was carried out in the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences. Pacific Northwest National Laboratory (PNNL) is operated for the DOE by Battelle.
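The link between computed free energies and the properties named above is given by standard thermodynamic relations (textbook forms, not quoted from the paper). For an n-electron redox couple and an acid dissociation:

```
E^{0} = -\frac{\Delta G^{0}_{\mathrm{redox}}}{nF},
\qquad
\mathrm{p}K_{a} = \frac{\Delta G^{0}_{\mathrm{deprot}}}{RT \ln 10},
```

with the hydricity ΔGH- defined analogously as the free energy of releasing H- from the metal hydride. The quoted accuracies translate between these scales: ~0.06 eV in ΔG corresponds to ~1 pKa unit at room temperature, since RT ln 10 ≈ 0.059 eV.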

    4. Regulators, Requirements, Statutes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Regulators, Requirements, Statutes Regulators, Requirements, Statutes The Laboratory must comply with environmental laws and regulations that apply to Laboratory operations. Contact Environmental Communication & Public Involvement P.O. Box 1663 MS M996 Los Alamos, NM 87545 (505) 667-0216 Email Environmental laws and regulations LANL complies with more than 30 state and federal regulations and policies designed to protect human health and the environment. Regulators Regulators Environmental

    5. Applications in Data-Intensive Computing

      SciTech Connect (OSTI)

      Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.; Cannon, William R.; Chavarra-Miranda, Daniel; Choudhury, Sutanay; Gorton, Ian; Gracio, Deborah K.; Halter, Todd D.; Jaitly, Navdeep; Johnson, John R.; Kouzes, Richard T.; Macduff, Matt C.; Marquez, Andres; Monroe, Matthew E.; Oehmen, Christopher S.; Pike, William A.; Scherrer, Chad; Villa, Oreste; Webb-Robertson, Bobbie-Jo M.; Whitney, Paul D.; Zuljevic, Nino

      2010-04-01

This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data-intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data-intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

    6. Confronting Regulatory Cost and Quality Expectations. An Exploration of Technical Change in Minimum Efficiency Performance Standards

      SciTech Connect (OSTI)

      Taylor, Margaret; Spurlock, C. Anna; Yang, Hung-Chia

      2015-09-21

The dual purpose of this project was to contribute to basic knowledge about the interaction between regulation and innovation and to inform the cost and benefit expectations related to technical change which are embedded in the rulemaking process of an important area of national regulation. The area of regulation focused on here is minimum efficiency performance standards (MEPS) for appliances and other energy-using products. Relevant both to U.S. climate policy and energy policy for buildings, MEPS remove from the market product models that do not meet specified efficiency thresholds.

    7. Exotic equilibria of Harary graphs and a new minimum degree lower bound for synchronization

      SciTech Connect (OSTI)

Canale, Eduardo A.; Monzón, Pablo

      2015-02-15

This work is concerned with stability of equilibria in the homogeneous (equal frequencies) Kuramoto model of weakly coupled oscillators. In 2012 [R. Taylor, J. Phys. A: Math. Theor. 45, 115 (2012)], a sufficient condition for almost global synchronization was found in terms of the minimum degree-order ratio of the graph. In this work, a new lower bound for this ratio is given. The improvement is achieved by a concrete infinite sequence of regular graphs. In addition, nonstandard unstable equilibria of the graphs studied in Wiley et al. [Chaos 16, 015103 (2006)] are shown to exist, as conjectured in that work.
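For reference, the homogeneous Kuramoto model on a graph G takes the standard form below (notation ours; the paper's may differ), with each oscillator coupled only to its graph neighbors N(i):

```
\dot{\theta}_i \;=\; \omega \;+\; K \sum_{j \in N(i)} \sin\!\left(\theta_j - \theta_i\right),
\qquad i = 1, \dots, n .
```

Taylor's sufficient condition asks the minimum vertex degree to exceed a fixed fraction (roughly 94%) of n - 1; the abstract's "degree-order ratio" is min_i |N(i)| / n, and the paper tightens the lower bound on the critical value of that ratio.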

    8. Method for selecting minimum width of leaf in multileaf adjustable collimator while inhibiting passage of particle beams of radiation through sawtooth joints between collimator leaves

      DOE Patents [OSTI]

      Ludewigt, Bernhard (Berkeley, CA); Bercovitz, John (Hayward, CA); Nyman, Mark (Berkeley, CA); Chu, William (Lafayette, CA)

      1995-01-01

      A method is disclosed for selecting the minimum width of individual leaves of a multileaf adjustable collimator having sawtooth top and bottom surfaces between adjacent leaves of a first stack of leaves and sawtooth end edges which are capable of intermeshing with the corresponding sawtooth end edges of leaves in a second stack of leaves of the collimator. The minimum width of individual leaves in the collimator, each having a sawtooth configuration in the surface facing another leaf in the same stack and a sawtooth end edge, is selected to comprise the sum of the penetration depth or range of the particular type of radiation comprising the beam in the particular material used for forming the leaf; plus the total path length across all the air gaps in the area of the joint at the edges between two leaves defined between lines drawn across the peaks of adjacent sawtooth edges; plus at least one half of the length or period of a single sawtooth. To accomplish this, in accordance with the method of the invention, the penetration depth of the particular type of radiation in the particular material to be used for the collimator leaf is first measured. Then the distance or gap between adjoining or abutting leaves is selected, and the ratio of this distance to the height of the sawteeth is selected. Finally the number of air gaps through which the radiation will pass between sawteeth is determined by selecting the number of sawteeth to be formed in the joint. The measurement and/or selection of these parameters will permit one to determine the minimum width of the leaf which is required to prevent passage of the beam through the sawtooth joint.
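Written compactly (our notation, consolidating the sum the abstract spells out), the selection rule is:

```
w_{\min} \;=\; d_{\mathrm{pen}} \;+\; \sum_{k=1}^{N_{\mathrm{gaps}}} g_k \;+\; \frac{\lambda}{2},
```

where d_pen is the penetration depth of the beam in the leaf material, g_k are the air-gap path lengths crossed in the joint (measured between lines drawn across the sawtooth peaks), and λ is the period of a single sawtooth.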

    9. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic 'machine,' is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that, on the one hand, people can and do abuse computer systems and, on the other hand, people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services, and information. The latter can be exemplified by violation of privacy, health hazards, and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people, and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    10. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applications of Parallel Computers, UCB CS267, Spring 2015, Tuesday & Thursday, 9:30-11:00 Pacific Time. Applications of Parallel Computers, CS267, is a graduate-level course offered at the University of California, Berkeley. The course is being taught by UC Berkeley professor and LBNL Faculty Scientist Jim Demmel. CS267 is broadcast live over the internet and all NERSC users are invited to monitor the broadcast course, but course credit is available only to students registered for the

    11. Requirements | Photosynthetic Antenna Research Center

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Students must earn a total of 11 points from the following options. Please note: to receive points toward the certificate, students are required to submit...

    12. NP Science Network Requirements

      SciTech Connect (OSTI)

      Dart, Eli; Rotman, Lauren; Tierney, Brian

      2011-08-26

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. To support SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2011, ESnet and the Office of Nuclear Physics (NP), of the DOE SC, organized a workshop to characterize the networking requirements of the programs funded by NP. The requirements identified at the workshop are summarized in the Findings section, and are described in more detail in the body of the report.

    13. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G.

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    14. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to address the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    15. Theory, Modeling and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Theory, modeling, and simulation will be enhanced not only by the wealth of data available from MaRIE but also by the increased computational capacity made possible by the advent of extreme...

    16. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility Annual Report 2012. Contents include the Director's Message, About ALCF, and an introduction to Mira.

    17. Minimum separation distances for natural gas pipeline and boilers in the 300 area, Hanford Site

      SciTech Connect (OSTI)

      Daling, P.M.; Graham, T.M.

      1997-08-01

      The U.S. Department of Energy (DOE) is proposing actions to reduce energy expenditures and improve energy system reliability at the 300 Area of the Hanford Site. These actions include replacing the centralized heating system with heating units for individual buildings or groups of buildings, constructing a new natural gas distribution system to provide a fuel source for many of these units, and constructing a central control building to operate and maintain the system. The individual heating units will include steam boilers that are to be housed in individual annex buildings located at some distance away from nearby 300 Area nuclear facilities. This analysis develops the basis for siting the package boilers and natural gas distribution systems to be used to supply steam to 300 Area nuclear facilities. The effects of four potential fire and explosion scenarios involving the boiler and natural gas pipeline were quantified to determine minimum separation distances that would reduce the risks to nearby nuclear facilities. The resulting minimum separation distances are shown in Table ES.1.

    18. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale. Group Leader: Linn Collins; Deputy Group Leader (Acting): Bryan Lally. Climate modeling visualization: results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These

    19. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental health, cleaner energy, and national security. Group Leader: Carl Gable; Deputy Group Leader: Gilles Bussod. Hari Viswanathan inspects a microfluidic cell used to study the extraction of hydrocarbon fuels from a complex fracture network. EES-16's Subsurface Flow

    20. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms. Image captions: growth and emissivity of a young galaxy hosting a supermassive black hole, as calculated in the cosmological code ENZO and post-processed with the radiative transfer code AURORA; Rayleigh-Taylor turbulence imaging, the largest turbulence simulations to date. Topics: advanced multi-scale modeling; turbulence datasets; density iso-surfaces

    1. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed-upon duration. Typically this is used for interactive debugging at scale or real-time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    2. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster, now named Phoenix. Zephyr was acquired from Atipa Technologies and is a 92-node system, with each node having two 16-core, 2.3 GHz AMD processors and 32 GB of memory. See also Computing Resources.

    3. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      Advanced Simulation and Computing Program Plan FY09 (NA-ASC-117R-09-Vol.1-Rev.0), October 2008. ASC Focal Point: Robert Meisner, Director, DOE/NNSA NA-121.2, 202-586-0908. Program Plan Focal Point for NA-121.2: Njema Frazier, DOE/NNSA NA-121.2, 202-586-5789. A publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents begin with an Executive Summary and Introduction.

    4. Requirements Definition Stage

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1997-05-21

      This chapter addresses development of a Software Configuration Management Plan to track and control work products, analysis of the system owner/users' business processes and needs, translation of those processes and needs into formal requirements, and planning the testing activities to validate the performance of the software product.

    5. Requirements for Xenon International

      SciTech Connect (OSTI)

      Hayes, James C.; Ely, James H.; Haas, Derek A.; Harper, Warren W.; Heimbigner, Tom R.; Hubbard, Charles W.; Humble, Paul H.; Madison, Jill C.; Morris, Scott J.; Panisko, Mark E.; Ripplinger, Mike D.; Stewart, Timothy L.

      2013-09-26

      This document defines the requirements for the new Xenon International radioxenon system. The output of this project will be a Pacific Northwest National Laboratory (PNNL) developed prototype and a manufacturer-developed production prototype. The two prototypes are intended to be as close to matching as possible; this will be facilitated by overlapping development cycles and open communication between PNNL and the manufacturer.

    6. Performance Modeling for 3D Visualization in a Heterogeneous Computing Environment

      Office of Scientific and Technical Information (OSTI)

      The visualization of large, remotely located data sets necessitates the development of a distributed computing pipeline in order to reduce the data, in stages, to a manageable size. The required baseline infrastructure for launching such

    7. A New Computational Paradigm in Multiscale Simulations: Application to Brain Blood Flow

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Authors: Grinberg, L.; Insley, J.A.; Morozov, V.; Papka, M.E.; Karniadakis, G.E.; Fedosov, D.; Kumaran, K. Interfacing atomistic-based with continuum-based simulation codes is now required in many multiscale physical and biological systems. We present the computational advances that have enabled the first multiscale simulation on 190,740 processors

    8. Computational Tools to Assess Turbine Biological Performance

      SciTech Connect (OSTI)

      Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.; Strickler, Brad; Weisbeck, Molly; Dotson, Curtis L.

      2014-07-24

      Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
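
      The dose-response folding at the heart of the BioPA method can be sketched in a few lines. The distributions and logistic curve below are invented placeholders, not BioPA outputs:

        # Draw simulated exposure doses (standing in for CFD-derived samples),
        # then fold them through a dose-response curve to estimate the
        # probability of fish injury for each candidate design.
        import numpy as np

        def injury_probability(doses, dose_response):
            """Mean injury frequency over simulated exposure doses."""
            return float(np.mean([dose_response(d) for d in doses]))

        def logistic_response(dose, d50=120.0, slope=0.05):
            # Hypothetical lab-derived dose-response for one injury mechanism.
            return 1.0 / (1.0 + np.exp(-slope * (dose - d50)))

        rng = np.random.default_rng(1)
        baseline = rng.gamma(shape=2.0, scale=40.0, size=10_000)  # existing runner
        proposed = rng.gamma(shape=2.0, scale=30.0, size=10_000)  # candidate design
        print("baseline:", injury_probability(baseline, logistic_response))
        print("proposed:", injury_probability(proposed, logistic_response))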

    9. Computing and Computational Sciences Directorate - Joint Institute for Computational Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      To help realize the full potential of new-generation computers for advancing scientific discovery, the University of Tennessee (UT) and Oak Ridge National Laboratory (ORNL) have created the Joint Institute for Computational Sciences (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and focuses these skills on

    10. Computing and Computational Sciences Directorate - National Center for Computational Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The National Center for Computational Sciences (NCCS), formed in 1992, is home to two of Oak Ridge National Laboratory's (ORNL's) high-performance computing projects: the Oak Ridge Leadership Computing Facility (OLCF) and the National Climate-Computing Research Center (NCRC). The OLCF (www.olcf.ornl.gov) was established at ORNL in 2004 with the mission of standing up a supercomputer 100 times more powerful than the leading

    11. DOE Office of Science Computing Facility Operational Assessment Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Presentation by Stephane Ethier, Princeton Plasma Physics Lab, NUG Meeting, 17 Sep 2007. Objective: the DOE Office of Science is required to conduct an Operational Assessment (OA) Review of the efficiencies in the steady-state operations of each of the DOE Office of Science High Performance Computing (HPC) Facilities. This is an OMB requirement for capital planning once an asset is procured and operational, and it focuses on the measurement of customer results, business

    12. iSSH v. Auditd: Intrusion Detection in High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer System, Cluster, and Networking Summer Institute. David Karns, New Mexico State University; Katy Protin,...

    13. communications requirements | Department of Energy

      Broader source: Energy.gov (indexed) [DOE]

      Communications requirements (PDF). Related documents and publications: Re: NBP RFI: Communications Requirements - Implementing the National Broadband Plan by Studying the Communications Requirements of Electric Utilities to Inform Federal Smart Grid Policy; Re: NBP RFI: Communications Requirements; NBP RFI: Communications Requirements - Comments of Lake Region Electric Cooperative, Minnesota.

    14. Cheaper Adjoints by Reversing Address Computations

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Hascoët, L.; Utke, J.; Naumann, U.

      2008-01-01

      The reverse mode of automatic differentiation is widely used in science and engineering. A severe bottleneck for the performance of the reverse mode, however, is the necessity to recover certain intermediate values of the program in reverse order. Among these values are computed addresses, which traditionally are recovered through forward recomputation and storage in memory. We propose an alternative approach for recovery that uses inverse computation based on dependency information. Address storage constitutes a significant portion of the overall storage requirements. An example illustrates substantial gains that the proposed approach yields, and we show use cases in practical applications.
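
      A toy example (ours, not the paper's implementation) of the trade-off: when an index computation is invertible, the adjoint sweep can reconstruct addresses instead of replaying them from a tape.

        # Option 1 stores every computed address on a tape; option 2 recovers
        # the loop index from the address through the inverse map, so no tape
        # entry is needed for it.
        def idx(i, n):
            return (3 * i + 1) % n                 # an invertible address computation

        def inverse_idx(j, n):
            return (pow(3, -1, n) * (j - 1)) % n   # needs gcd(3, n) == 1

        n = 7
        tape = [idx(i, n) for i in range(n)]       # option 1: address storage
        recovered = [inverse_idx(j, n) for j in tape]
        assert recovered == list(range(n))         # option 2: inverse computation
        print("taped addresses:", tape)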

    15. Should Title 24 Ventilation Requirements Be Amended to include an Indoor Air Quality Procedure?

      Office of Scientific and Technical Information (OSTI)

      Minimum outdoor air ventilation rates (VRs) for buildings are specified in standards, including California's Title 24 standards. The ASHRAE ventilation standard includes two options for

    16. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis, including creating publication-quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    17. GPU COMPUTING FOR PARTICLE TRACKING

      SciTech Connect (OSTI)

      Nishimura, Hiroshi; Song, Kai; Muriki, Krishna; Sun, Changchun; James, Susan; Qin, Yong

      2011-03-25

      This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing GPU are also discussed. General Purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multiprocessors (MPs) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
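
      The launch geometry described above amounts to simple arithmetic, sketched here in Python (our illustration; the thread-per-block choice is arbitrary):

        # Each thread tracks one particle, so the global thread id doubles as
        # the particle id; blocks of threads tile the particle set.
        import math

        def launch_config(n_particles, threads_per_block=32):
            n_blocks = math.ceil(n_particles / threads_per_block)
            return n_blocks, threads_per_block

        def global_thread_id(block_id, thread_id, threads_per_block):
            return block_id * threads_per_block + thread_id

        blocks, tpb = launch_config(10_000)
        print(f"{blocks} blocks x {tpb} threads covers 10,000 particles "
              f"scheduled over the C2050's 448 cores")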

    18. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Related capabilities: Computer, Computational, and Statistical Sciences (CCS); High Performance Computing (HPC); Extreme Scale Computing, Co-design.

    19. Required Annual Notices

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Required Annual Notices: The Women's Health and Cancer Rights Act of 1998 (WHCRA). The medical programs sponsored by LANS will not restrict benefits if you or your dependent receives benefits for a mastectomy and elects breast reconstruction in connection with the mastectomy. Benefits will not be restricted provided that the breast reconstruction is performed in a manner determined in consultation with you or your dependent's physician and may include: all stages of reconstruction of the

    20. Experiment Safety Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Safety at the ALS: The mission of the ALS is to "Support users in doing outstanding science in a safe environment." How do I: complete an Experiment Safety Sheet (do this upon receiving beam time); complete safety training; bring and use electrical equipment at the ALS; determine what Personal Protective Equipment (PPE) to wear; get authorization to work with lasers at the ALS; ship radioactive materials to LBNL for use at the ALS; ship samples

    8. LASSO* - Science Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      *LES ARM Symbiotic Simulation and Observation (LASSO) workflow. Andy Vogelmann (1), William I. Gustafson Jr. (2), Zhijin Li (3,4), Xiaoping Cheng (3), Satoshi Endo (1), Tami Toto (1), and Heng Xiao (2); affiliations: (1) Brookhaven National Laboratory, (2) Pacific Northwest National Laboratory, (3) University of California Los Angeles, (4) NASA Jet Propulsion Laboratory. And tons of people from the rest of ARM! LASSO webpage: http://www.arm.gov/science/themes/lasso LASSO e-mail list sign up:

    9. MarFS-Requirements-Design-Configuration-Admin

      SciTech Connect (OSTI)

      Kettering, Brett Michael; Grider, Gary Alan

      2015-07-08

      This document is organized into sections defined by the requirements for a file system that presents a near-POSIX (Portable Operating System Interface) interface to the user, but whose data is stored in whatever form is most efficient for the type of data being stored. After each requirement is defined, the design for meeting it is explained. Finally, there are sections on configuring and administering this file system. More and more, data dominates the computing world. There is a sea of data out there in many different formats that needs to be managed and used. "Mar" means "sea" in Spanish; thus, this product is dubbed MarFS, a file system for a sea of data.

    10. Computing contingency statistics in parallel.

      SciTech Connect (OSTI)

      Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

      2010-09-01

      Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
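
      The map-reduce pattern the authors describe can be sketched with ordinary dictionaries (an illustration of the idea, not the paper's open source code):

        # Each process builds a local contingency table over its chunk;
        # tables merge by cell-wise addition; derived statistics such as
        # point-wise mutual information come from the final table.
        import math
        from collections import Counter

        def local_table(pairs):                  # "map" on one processor's chunk
            return Counter(pairs)

        def merge(table_a, table_b):             # "reduce": tables add cell-wise
            return table_a + table_b

        def pmi(table, x, y):
            n = sum(table.values())
            nx = sum(v for (a, _), v in table.items() if a == x)
            ny = sum(v for (_, b), v in table.items() if b == y)
            return math.log((table[(x, y)] / n) / ((nx / n) * (ny / n)))

        chunk1 = [("a", 0), ("a", 1), ("b", 1)]
        chunk2 = [("a", 0), ("b", 0), ("b", 1)]
        table = merge(local_table(chunk1), local_table(chunk2))
        print("PMI(a, 0) =", pmi(table, "a", 0))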

    11. The turbulent cascade and proton heating in the solar wind during solar minimum

      SciTech Connect (OSTI)

      Coburn, Jesse T.; Smith, Charles W.; Vasquez, Bernard J.; Stawarz, Joshua E.; Forman, Miriam A.

      2013-06-13

      Solar wind measurements at 1 AU during the recent solar minimum, together with previous studies of solar maximum, provide an opportunity to study the effects of the changing solar cycle on in situ heating. Our interest is to compare the levels of activity associated with turbulence and proton heating. Large-scale flow shear caused by transient activity drives turbulence that heats the solar wind, but as the solar cycle progresses the dynamics that drive the turbulence and heat the medium are likely to change. The application of third-moment theory to Advanced Composition Explorer (ACE) data gives the turbulent energy cascade rate, which is not seen to vary with the solar cycle. Likewise, an empirical heating rate shows no significant changes in proton heating over the cycle.
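
      For context, the third-moment relation typically applied in such studies is the Politano-Pouquet law, quoted here from the MHD turbulence literature rather than from the paper itself:

        \langle \delta z^{\mp}_{L}\, |\delta z^{\pm}|^{2} \rangle = -\tfrac{4}{3}\, \epsilon^{\pm} L ,

      where \delta z^{\pm} are Elsasser-field increments over separation L and \epsilon^{\pm} are the corresponding energy cascade rates.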

    12. On the minimum dark matter mass testable by neutrinos from the Sun

      SciTech Connect (OSTI)

      Busoni, Giorgio; De Simone, Andrea; Huang, Wei-Chih

      2013-07-01

      We discuss a limitation on extracting bounds on the scattering cross section of dark matter with nucleons, using neutrinos from the Sun. If the dark matter particle is sufficiently light (less than about 4 GeV), the effect of evaporation is not negligible and the capture process goes in equilibrium with the evaporation. In this regime, the flux of solar neutrinos of dark matter origin becomes independent of the scattering cross section and therefore no constraint can be placed on it. We find the minimum values of dark matter masses for which the scattering cross section on nucleons can be probed using neutrinos from the Sun. We also provide simple and accurate fitting functions for all the relevant processes of GeV-scale dark matter in the Sun.
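
      For orientation, the balance the abstract alludes to can be written in the standard form used in the solar dark-matter literature (notation ours, not necessarily the paper's):

        \frac{dN}{dt} = C_{\mathrm{cap}} - C_{\mathrm{ann}} N^{2} - C_{\mathrm{evap}} N ,
        \qquad N_{\mathrm{eq}} \simeq \frac{C_{\mathrm{cap}}}{C_{\mathrm{evap}}}
        \quad \text{(evaporation-dominated)} .

      Since the capture and evaporation coefficients both scale with the scattering cross section, the equilibrium population, and hence the annihilation-driven neutrino flux, loses its dependence on that cross section, which is the limitation described above.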

    13. A fast tomographic method for searching the minimum free energy path

      SciTech Connect (OSTI)

      Chen, Changjun; Huang, Yanzhao; Xiao, Yi; Jiang, Xuewei

      2014-10-21

      The Minimum Free Energy Path (MFEP) provides important information about a chemical reaction, such as the free energy barrier, the location of the transition state, and the relative stability of reactant and product. With the MFEP, one can study the mechanism of the reaction in an efficient way. Due to the large number of degrees of freedom, searching for the MFEP is a very time-consuming process. Here, we present a fast tomographic method to perform the search. Our approach first calculates the free energy surfaces in a sequence of hyperplanes perpendicular to a transition path. Based on an objective function and the free energy gradient, the transition path is then optimized iteratively in the collective variable space. Applications of the present method to model systems show that our method is practical. It can be an alternative approach for finding the state-to-state MFEP.
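
      A schematic of the iterative path relaxation (a simplified sketch using plain gradient relaxation with reparametrization, not the authors' tomographic objective):

        # Relax a discretized path in collective-variable space along the
        # free-energy gradient, redistributing images evenly along arc length
        # so the path converges toward the MFEP through the saddle.
        import numpy as np

        def relax_path(path, grad_f, step=0.01, n_iter=500):
            path = np.array(path, dtype=float)
            for _ in range(n_iter):
                path[1:-1] -= step * np.array([grad_f(p) for p in path[1:-1]])
                seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
                s = np.concatenate([[0.0], np.cumsum(seg)])
                u = np.linspace(0.0, s[-1], len(path))
                path = np.column_stack([np.interp(u, s, path[:, d])
                                        for d in range(path.shape[1])])
            return path

        # Toy free energy F(x, y) = (x^2 - 1)^2 + 2 y^2, minima at (+-1, 0).
        grad = lambda p: np.array([4 * p[0] * (p[0] ** 2 - 1), 4 * p[1]])
        t = np.linspace(0.0, 1.0, 20)
        init = np.column_stack([2 * t - 1, 0.5 * np.sin(np.pi * t)])
        print(relax_path(init, grad)[10])  # midpoint relaxes toward the saddle (0, 0)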

    14. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New facility and methods support conserving water and creating recycled products. Using reverse...

    15. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

      OpenEI Reference Library entry: Computer simulation. Author: Wikipedia; published: Wikipedia, 2013; DOI not provided...

    16. NREL: Computational Science Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Capabilities include high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

    17. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Strategic Computing Complex (SCC) at the Los Alamos National Laboratory houses a computer room that is an open room about three-fourths the size of a football field...

    18. Present and Future Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Theoretical Particle Physics: Michele Papucci (LBNL) and Stefan Hoeche (SLAC), NERSC HEP Requirements Review. Slide deck on theoretical particle physics and HPC ...

    19. Human-computer interface

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-12-21

      The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.

    20. Observed Minimum Illuminance Threshold for Night Market Vendors in Kenya who use LED Lamps

      SciTech Connect (OSTI)

      Johnstone, Peter; Jacobson, Arne; Mills, Evan; Radecsky, Kristen

      2009-03-21

      Creation of light for work, socializing, and general illumination is a fundamental application of technology around the world. For those who lack access to electricity, an emerging and diverse range of LED-based lighting products holds promise for replacing and/or augmenting their current fuel-based lighting sources, which are costly and dirty. Along with analysis of environmental factors, economic models for the total cost-of-ownership of LED lighting products are an important tool for studying the impacts of these products as they emerge in markets of developing countries. One important metric in those models is the minimum illuminance demanded by end-users for a given task before recharging the lamp or replacing batteries. It impacts the lighting service cost per unit time if charging is done with purchased electricity, batteries, or charging services. The concept is illustrated in figure 1: LED lighting products are generally brightest immediately after the battery is charged or replaced, and the illuminance degrades as the battery is discharged. When a minimum threshold level of illuminance is reached, the operational time for the battery charge cycle is over. The cost to recharge depends on the method utilized; these include charging at a shop at a fixed price per charge, charging on personal grid connections, using solar chargers, and purchasing dry cell batteries. This Research Note reports on the observed "charge-triggering" illuminance threshold for night market vendors who use LED lighting products to provide general and task-oriented illumination. All the study participants charged with AC power, either at a fixed-price charge shop or with electricity at their home.
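
      The role the threshold plays in such models can be sketched as follows (a hedged illustration; the decay curve, charge price, and threshold are invented):

        # The charge-triggering threshold fixes the usable hours per charge
        # cycle, which in turn sets the recharge cost per hour of light.
        import math

        def hours_per_cycle(illuminance, threshold_lux, dt_hours=0.1):
            t = 0.0
            while illuminance(t) >= threshold_lux:
                t += dt_hours
            return t

        def cost_per_hour(charge_cost, illuminance, threshold_lux):
            return charge_cost / hours_per_cycle(illuminance, threshold_lux)

        # Hypothetical lamp: 50 lux fresh off the charger, exponential decay.
        curve = lambda t: 50.0 * math.exp(-t / 12.0)
        print(cost_per_hour(0.20, curve, threshold_lux=10.0))  # cost per lit hour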

    1. Thirty-Year Solid Waste Generation Maximum and Minimum Forecast for SRS

      SciTech Connect (OSTI)

      Thomas, L.C.

      1994-10-01

      This report is the third phase (Phase III) of the Thirty-Year Solid Waste Generation Forecast for Facilities at the Savannah River Site (SRS). Phase I of the forecast, Thirty-Year Solid Waste Generation Forecast for Facilities at SRS, forecasts the yearly quantities of low-level waste (LLW), hazardous waste, mixed waste, and transuranic (TRU) wastes generated over the next 30 years by operations, decontamination and decommissioning, and environmental restoration (ER) activities at the Savannah River Site. The Phase II report, Thirty-Year Solid Waste Generation Forecast by Treatability Group (U), provides a 30-year forecast by waste treatability group for operations, decontamination and decommissioning, and ER activities. In addition, a 30-year forecast by waste stream has been provided for operations in Appendix A of the Phase II report. The solid wastes stored or generated at SRS must be treated and disposed of in accordance with federal, state, and local laws and regulations. To evaluate, select, and justify the use of promising treatment technologies and to evaluate the potential impact to the environment, the generic waste categories described in the Phase I report were divided into smaller classifications with similar physical, chemical, and radiological characteristics. These smaller classifications, defined within the Phase II report as treatability groups, can then be used in the Waste Management Environmental Impact Statement process to evaluate treatment options. The waste generation forecasts in the Phase II report include existing waste inventories. Existing waste inventories, which include waste streams from continuing operations and stored wastes from discontinued operations, were not included in the Phase I report. Maximum and minimum forecasts serve as upper and lower boundaries for waste generation. This report provides the maximum and minimum forecast by waste treatability group for operations, decontamination and decommissioning, and ER activities.

    2. NEWLY DISCOVERED GLOBAL TEMPERATURE STRUCTURES IN THE QUIET SUN AT SOLAR MINIMUM

      SciTech Connect (OSTI)

      Huang, Zhenguang; Frazin, Richard A.; Landi, Enrico; Manchester, Ward B.; Gombosi, Tamas I.; Vasquez, Alberto M.

      2012-08-20

      Magnetic loops are building blocks of the closed-field corona. While active region loops are readily seen in images taken at EUV and X-ray wavelengths, quiet-Sun (QS) loops are seldom identifiable and are therefore difficult to study on an individual basis. The first analysis of solar minimum (Carrington Rotation 2077) QS coronal loops utilizing a novel technique called the Michigan Loop Diagnostic Technique (MLDT) is presented. This technique combines Differential Emission Measure Tomography and a potential field source surface (PFSS) model, and consists of tracing PFSS field lines through the tomographic grid on which the local differential emission measure is determined. As a result, the electron temperature T_e and density N_e at each point along each individual field line can be obtained. Using data from STEREO/EUVI and SOHO/MDI, the MLDT identifies two types of QS loops in the corona: so-called up loops, in which the temperature increases with height, and so-called down loops, in which the temperature decreases with height. Up loops are expected; down loops, however, are a surprise, and furthermore they are ubiquitous in the low-latitude corona. Up loops dominate the QS at higher latitudes. The MLDT allows independent determination of the empirical pressure and density scale heights, and the differences between the two remain to be explained. The down loops appear to be a newly discovered property of the solar minimum corona that may shed light on the physics of coronal heating. The results are shown to be robust to the calibration uncertainties of the EUVI instrument.
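
      For reference, the two scale heights being compared are defined as follows (standard coronal-physics definitions, not equations quoted from the paper): the density scale height H_N comes from an exponential fit to the tomographic density, while the hydrostatic pressure scale height follows from the electron temperature,

        N_e(r) \propto e^{-(r - R_{\odot})/H_N} ,
        \qquad H_p = \frac{k_B T_e}{\mu m_H g_{\odot}} ,

      so the two agree only for an isothermal corona in hydrostatic equilibrium, which is why their difference carries physical information.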

    3. Belle-II Experiment Network Requirements

      SciTech Connect (OSTI)

      Asner, David; Bell, Greg; Carlson, Tim; Cowley, David; Dart, Eli; Erwin, Brock; Godang, Romulus; Hara, Takanori; Johnson, Jerry; Johnson, Ron; Johnston, Bill; Kleese van Dam, Kerstin; Kaneko, Toshiaki; Kubota, Yoshihiro; Kuhr, Thomas; McCoy, John; Miyake, Hideki; Monga, Inder; Nakamura, Motonori; Piilonen, Leo; Pordes, Ruth; Ray, Douglas; Russell, Richard; Schram, Malachi; Schroeder, Jim; Sevior, Martin; Singh, Surya; Suzuki, Soh; Sasaki, Takashi; Williams, Jim

      2013-05-28

      The Belle experiment, part of a broad-based search for new physics, is a collaboration of ~400 physicists from 55 institutions across four continents. The Belle detector is located at the KEKB accelerator in Tsukuba, Japan, and was operated at the asymmetric electron-positron collider KEKB from 1999 to 2010. The detector accumulated more than 1 ab⁻¹ of integrated luminosity, corresponding to more than 2 PB of data near 10 GeV center-of-mass energy. Recently, KEK has initiated a $400 million accelerator upgrade to be called SuperKEKB, designed to produce instantaneous and integrated luminosity two orders of magnitude greater than KEKB. The new international collaboration at SuperKEKB is called Belle II. The first data from Belle II/SuperKEKB are expected in 2015. In October 2012, senior members of the Belle II collaboration gathered at PNNL to discuss the computing and networking requirements of the Belle II experiment with ESnet staff and other computing and networking experts. The day-and-a-half-long workshop characterized the instruments and facilities used in the experiment, the process of science for Belle II, and the computing and networking equipment and configuration requirements needed to realize the full scientific potential of the collaboration's work.

    4. Computing and Computational Sciences Directorate - Visitor Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ORNL welcomes visitors to the Laboratory. However, because of increased security requirements, we've made some changes in how the site is accessed. Bethel Valley Road, which is the main access route to Oak Ridge National Laboratory from both directions, is now closed to the public. If you are planning a visit to ORNL, your host will arrange for you to proceed past entrance stations on Bethel Valley Road leading to the Laboratory's Visitor

    5. BES Science Network Requirements

      SciTech Connect (OSTI)

      Biocca, Alan; Carlson, Rich; Chen, Jackie; Cotter, Steve; Tierney, Brian; Dattoria, Vince; Davenport, Jim; Gaenko, Alexander; Kent, Paul; Lamm, Monica; Miller, Stephen; Mundy, Chris; Ndousse, Thomas; Pederson, Mark; Perazzo, Amedeo; Popescu, Razvan; Rouson, Damian; Sekine, Yukiko; Sumpter, Bobby; Dart, Eli; Wang, Cai-Zhuang -Z; Whitelam, Steve; Zurawski, Jason

      2011-02-01

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years.

    6. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
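
      The arithmetic of the scheme is easy to simulate (our toy model, not the patent's implementation):

        # The root sends a pulse at wall-clock time 0; node i receives it
        # latency[i] later and sets its time base to latency[i], so at any
        # later wall-clock instant every node's clock shows the same value.
        latencies = {"root": 0.0, "node1": 1.5, "node2": 2.25, "node3": 3.0}

        def clock_reading(latency_to_node, wall_time):
            # The time base was set to latency_to_node at wall time latency_to_node.
            return latency_to_node + (wall_time - latency_to_node)

        for name, lat in latencies.items():
            print(name, "reads", clock_reading(lat, wall_time=10.0))  # all read 10.0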

    7. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    8. Computer Security Risk Assessment

      Energy Science and Technology Software Center (OSTI)

      1992-02-11

      LAVA/CS (LAVA for Computer Security) is an application of the Los Alamos Vulnerability Assessment (LAVA) methodology specific to computer and information security. The software serves as a generic tool for identifying vulnerabilities in computer and information security safeguards systems. Although it does not perform a full risk assessment, the results from its analysis may provide valuable insights into security problems. LAVA/CS assumes that the system is exposed both to natural and environmental hazards and to deliberate malevolent actions by either insiders or outsiders. The user, in the process of answering the LAVA/CS questionnaire, identifies missing safeguards in 34 areas ranging from password management to personnel security and internal audit practices. Specific safeguards protecting a generic set of assets (or targets) from a generic set of threats (or adversaries) are considered. There are four generic assets: the facility, the organization's environment; the hardware, all computer-related hardware; the software, the information in machine-readable form stored either on-line or on transportable media; and the documents and displays, the information in human-readable form stored as hard-copy materials (manuals, reports, listings in full-size or microform), film, and screen displays. Two generic threats are considered: natural and environmental hazards, such as storms, fires, power abnormalities, and water and accidental maintenance damage; and on-site human threats, both intentional and accidental acts attributable to a perpetrator on the facility's premises.

    9. MHD computations for stellarators

      SciTech Connect (OSTI)

      Johnson, J.L.

      1985-12-01

      Considerable progress has been made in the development of computational techniques for studying the magnetohydrodynamic equilibrium and stability properties of three-dimensional configurations. Several different approaches have evolved to the point where comparison of results determined with different techniques shows good agreement. 55 refs., 7 figs.

    10. Sandia National Laboratories: Advanced Simulation and Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Systems & Software Environment program, part of Advanced Simulation and Computing (ASC), builds integrated,...

    11. DOE Office of Science Exascale Requirements Reviews: Target 2020-2025

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Official Office of Science Computing and Data Requirements in the Exascale Age website. The three DOE Office of Advanced Scientific

    12. Computer-Aided dispatching system design specification

      SciTech Connect (OSTI)

      Briggs, M.G.

      1996-05-03

      This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a commercial off-the-shelf computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting systems within the Hanford Facility. The system also provides expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center, and provides back-up capabilities for the Plutonium Processing Facility.

    13. Determining Memory Use | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Determining the amount of memory available during the execution of the program requires the use of

    14. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational co-design may facilitate revolutionary designs ...

    15. Visitor Hanford Computer Access Request - Hanford Site

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Visitor Hanford Computer Access Request form for the Hanford Site.

    16. Equipment Operational Requirements

      SciTech Connect (OSTI)

      Greenwalt, B; Henderer, B; Hibbard, W; Mercer, M

      2009-06-11

      The Iraq Department of Border Enforcement is rich in personnel but poor in equipment. An effective border control system must include detection, discrimination, decision, tracking and interdiction, capture, identification, and disposition. An equipment solution that addresses only a part of this will not succeed; likewise, equipment by itself is not the answer without considering the personnel and how they would employ it. The solution should take advantage of the existing in-place system and address all of the critical functions. The solutions are envisioned as being implemented in a phased manner, where Solution 1 is followed by Solution 2 and eventually by Solution 3. This allows adequate time for training and gaining operational experience with successively more complex equipment. Detailed descriptions of the components follow the solution descriptions. Solution 1: this solution is based on changes to CONOPs and does not have a technology component. It consists of observers at the forts and annexes, forward patrols along the swamp edge, in-depth patrols approximately 10 kilometers inland from the swamp, and checkpoints on major roads. Solution 2: this solution adds a ground sensor array to the Solution 1 system. Solution 3: this solution is based around installing a radar/video camera system on each fort. It employs the CONOPs from Solution 1, but uses minimal ground sensors deployed only in areas with poor radar/video camera coverage (such as canals and streams shielded by vegetation), or by roads covered by radar but outside the range of the radar-associated cameras. This document provides broad operational requirements for major equipment components along with sufficient operational details to allow the technical community to identify potential hardware candidates. Continuing analysis will develop the quantities required and more detailed tactics, techniques, and procedures.

    17. Software and High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Contact: Kathleen McDonald, Head of Intellectual Property, Business Development Executive, Richard P. Feynman Center for Innovation, (505) 667-5844. Software: computational physics, computer science, applied mathematics, statistics and the ...

    18. Magellan: A Cloud Computing Testbed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cloud computing is gaining a foothold in the business world, but can clouds meet the specialized needs of scientists? That was one of the questions NERSC's Magellan cloud computing testbed explored between 2009 and 2011. The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office ...

    19. Competition Requirements | Department of Energy

      Office of Environmental Management (EM)


    20. Effects of minimum monitor unit threshold on spot scanning proton plan quality

      SciTech Connect (OSTI)

      Howard, Michelle; Beltran, Chris; Mayo, Charles S.; Herman, Michael G.

      2014-09-15

      Purpose: To investigate the influence of the minimum monitor unit (MU) limit on the quality of clinical treatment plans for scanned proton therapy. Methods: Delivery system characteristics limit the minimum number of protons that can be delivered per spot, resulting in a min-MU limit, which can impact plan quality. Two sites were used to investigate the impact of min-MU on treatment plans: a pediatric brain tumor at a depth of 5–10 cm and a head and neck tumor at a depth of 1–20 cm. Three-field, intensity modulated spot scanning proton plans were created for each site with the following parameter variations: a min-MU limit ranging from 0.0000 to 0.0060 and spot spacing ranging from 2 to 8 mm. Comparisons were based on target homogeneity and normal tissue sparing. For the pediatric brain, two versions of the treatment planning system were also compared to judge the effects of the min-MU limit based on when it is accounted for in the optimization process (Eclipse v.10 and v.13, Varian Medical Systems, Palo Alto, CA). Results: Increasing the min-MU limit at a fixed spot spacing decreases plan quality, both in homogeneous target coverage and in the avoidance of critical structures. Both head and neck and pediatric brain plans show a 20% increase in relative dose for the hot spot in the CTV and a 10% increase in key critical structures when comparing min-MU limits of 0.0000 and 0.0060 at a fixed spot spacing of 4 mm. The DVHs of CTVs show that min-MU limits of 0.0000 and 0.0010 produce similar plan quality, and quality decreases as the min-MU limit increases beyond 0.0020. As spot spacing approaches 8 mm, degradation in plan quality is observed even when no min-MU limit is imposed. Conclusions: Given a fixed spot spacing of ≤4 mm, plan quality decreases as min-MU increases beyond 0.0020. The effect of min-MU needs to be taken into consideration when planning proton therapy treatments.
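
      Abstracts like this one suggest a simple way to emulate the deliverability constraint in a planning study: post-process the optimized spot list so that every retained spot meets the minimum. A minimal sketch (Python; the spot-list format, threshold, and rounding policy are hypothetical illustrations, not the paper's planning-system logic):

        # Illustrative post-processing of an optimized spot list under a min-MU
        # constraint: spots below min_mu are rounded up or dropped entirely.
        def apply_min_mu(spots, min_mu, mode="round"):
            """spots: list of (x, y, energy, mu) tuples; returns a new spot list."""
            out = []
            for (x, y, energy, mu) in spots:
                if mu >= min_mu:
                    out.append((x, y, energy, mu))
                elif mode == "round" and mu >= 0.5 * min_mu:
                    out.append((x, y, energy, min_mu))  # round near-threshold spots up
                # else: drop the spot entirely
            return out

        plan = [(0, 0, 120.0, 0.0008), (2, 0, 120.0, 0.0031), (4, 0, 120.0, 0.0015)]
        print(apply_min_mu(plan, min_mu=0.0020))

      Either policy perturbs the optimized weights, which is one mechanism by which a larger min-MU limit degrades target homogeneity.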

    1. Refurbishment program of HANARO control computer system

      SciTech Connect (OSTI)

      Kim, H. K.; Choe, Y. S.; Lee, M. W.; Doo, S. K.; Jung, H. S. [Korea Atomic Energy Research Inst., 989-111 Daedeok-daero, Yuseong, Daejeon, 305-353 (Korea, Republic of)

      2012-07-01

      HANARO, an open-tank-in-pool type research reactor with 30 MW thermal power, achieved its first criticality in 1995. The programmable controller system MLC (Multi Loop Controller), manufactured by MOORE, has been used to control and regulate HANARO since 1995. We made a plan to replace the control computer because the system supplier no longer provided technical support and thus no spare parts were available; aged and obsolete equipment and the shortage of spare parts could have caused serious problems. The first consideration of a replacement for the control computer dates back to 2007, when the supplier stopped producing MLC components, so that the system could no longer be guaranteed. We established the upgrade and refurbishment program in 2009 so as to keep HANARO up to date in terms of safety. We designed the new control computer system that would replace the MLC: HCCS (HANARO Control Computer System). The refurbishing activity is in progress and will finish in 2013. The goal of the refurbishment program is a functional replacement of the reactor control system, in consideration of suitable interfaces, compliance with no special outage for installation and commissioning, and no change to the well-proven operation philosophy. HCCS is a DCS (Discrete Control System) using PLCs manufactured by RTP. To enhance reliability, we adopt a triple processor system, a double I/O system and a hot-swapping function. This paper describes the refurbishment program of the HANARO control system, including the design requirements of HCCS. (authors)

    2. Stellar Astrophysics Requirements NERSC Forecast

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Slide excerpts: (Copernicus Center, Warsaw). Computing cycles: DOE NERSC. FLASH WDM parallel performance (strong/weak scaling). Example 2: Core-Collapse SN ...

    3. Computer Algebra System

      Energy Science and Technology Software Center (OSTI)

      1992-05-04

      DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.
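
      The classes of operations listed can be sketched for modern readers in Python with the sympy library (an illustrative analogue chosen here; DOE-MACSYMA's own ALGOL-like syntax differs):

        import sympy as sp

        x = sp.symbols("x")
        f = sp.sin(x) * sp.exp(x)

        print(sp.diff(f, x))                  # differentiation
        print(sp.integrate(f, x))             # indefinite integration
        print(sp.limit(sp.sin(x) / x, x, 0))  # limits
        print(sp.series(sp.exp(x), x, 0, 4))  # Taylor series expansion
        print(sp.solve(x**2 - 2, x))          # solving a polynomial equation
        print(sp.factor(x**2 - 2*x + 1))      # factoring a polynomial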

    4. GPU Computational Screening

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GPU Computational Screening of Carbon Capture Materials. J. Kim, A. Koniges, R. Martin, M. Haranczyk (Lawrence Berkeley National Laboratory, Berkeley, CA 94720), J. Swisher and B. Smit (Department of Chemical Engineering, University of California, Berkeley). E-mail: jihankim@lbl.gov. Abstract: In order to reduce the current costs associated with carbon capture technologies, novel materials such as zeolites and metal-organic frameworks that are based on ...

    5. Cloud Computing Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    6. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    7. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      News highlights: scientists at the University of Washington are using Mira to virtually design unique artificial peptides, or short proteins, for medicines and materials; and 10 science highlights celebrate 10 years of the Argonne Leadership Computing Facility. ...

    8. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    9. From Federal Computer Week:

      National Nuclear Security Administration (NNSA)

      From Federal Computer Week: Energy agency launches performance-based pay system. By Richard W. Walker, published on March 27, 2008. The Energy Department's National Nuclear Security Administration has launched a new performance-based pay system involving about 2,000 of its 2,500 employees. NNSA officials described the effort as a pilot project that will test the feasibility of the new system, which collapses the traditional 15 General Schedule pay bands into broader pay bands. The new structure ...

    10. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed comes from computer code simulations describing airborne contaminant transport. The three evaluated programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence, subsequent work, and this report, describe the implementation and testing of NCSA Image on both Apple Mac II and Sun 4 computers. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

    11. Use of finite volume radiation for predicting the Knudsen minimum in 2D channel flow

      SciTech Connect (OSTI)

      Malhotra, Chetan P.; Mahajan, Roop L.

      2014-12-09

      In an earlier paper we employed an analogy between surface-to-surface radiation and free-molecular flow to model Knudsen flow through tubes and onto planes. In the current paper we extend the analogy between thermal radiation and molecular flow to model the flow of a gas in a 2D channel across all regimes of rarefaction. To accomplish this, we break down the problem of gaseous flow into three sub-problems (self-diffusion, mass-motion and generation of pressure gradient) and use the finite volume method for modeling radiation through participating media to model the transport in each sub-problem as a radiation problem. We first model molecular self-diffusion in the stationary gas by modeling the transport of the molecular number density through the gas, starting from the analytical asymptote for free-molecular flow to the kinetic theory limit of gaseous self-diffusion. We then model the transport of momentum through the gas at unit pressure gradient to predict Poiseuille flow and slip flow in the 2D gas. Lastly, we predict the generation of pressure gradient within the gas due to molecular collisions by modeling the transport of the forces generated due to collisions per unit volume of gas. We then proceed to combine the three radiation problems to predict flow of the gas over the entire Knudsen number regime from free-molecular to transition to continuum flow, and successfully capture the Knudsen minimum at Kn ≈ 1.
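
      For orientation, the Knudsen number governing these regimes is the standard ratio of molecular mean free path to channel height (a textbook definition, not quoted from the paper):

        \mathrm{Kn} = \frac{\lambda}{H}, \qquad \text{continuum: } \mathrm{Kn} \ll 1, \quad \text{transition: } \mathrm{Kn} \sim 1, \quad \text{free-molecular: } \mathrm{Kn} \gg 1.

      The Knudsen minimum is the dip in the nondimensional mass flow rate that appears near Kn ≈ 1 as rarefaction increases.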

    12. Approaching the Minimum Thermal Conductivity in Rhenium-Substituted Higher Manganese Silicides

      SciTech Connect (OSTI)

      Chen, Xi [University of Texas at Austin]; Girard, S. N. [University of Wisconsin, Madison]; Meng, F. [University of Wisconsin, Madison]; Lara-Curzio, Edgar [ORNL]; Jin, S. [University of Wisconsin, Madison]; Goodenough, J. B. [University of Texas at Austin]; Zhou, J. S. [University of Texas at Austin]; Shi, L. [University of Texas at Austin]

      2014-01-01

      Higher manganese silicides (HMS) made of earth-abundant and non-toxic elements are regarded as promising p-type thermoelectric materials because their complex crystal structure results in low lattice thermal conductivity. It is shown here that the already low thermal conductivity of HMS can be reduced further, to approach the minimum thermal conductivity, via partial substitution of Mn with heavier rhenium (Re) to increase point defect scattering. The solubility limit of Re in the obtained RexMn1-xSi1.8 is determined to be about x = 0.18. Elemental inhomogeneity and the formation of ReSi1.75 inclusions of 50-200 nm size are found within the HMS matrix. It is found that the power factor does not change markedly at low Re content (x ≤ 0.04) before it drops considerably at higher Re contents. Compared to pure HMS, the reduced lattice thermal conductivity in RexMn1-xSi1.8 results in a 25% increase of the peak figure of merit ZT, which reaches 0.57 ± 0.08 at 800 K for x = 0.04. The suppressed thermal conductivity in RexMn1-xSi1.8 can enable further investigations of the ZT limit of this system by exploring different impurity doping strategies to optimize the carrier concentration and power factor.
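
      For reference, the figure of merit quoted above is the standard dimensionless ZT (a textbook definition, not specific to this paper), with S the Seebeck coefficient, \sigma the electrical conductivity, T the absolute temperature, and \kappa the total thermal conductivity:

        ZT = \frac{S^2 \sigma T}{\kappa_{\mathrm{lattice}} + \kappa_{\mathrm{electronic}}}.

      Lowering the lattice term, as the Re substitution does here, raises ZT provided the power factor S^2\sigma is not degraded.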

    13. Metaproteomics reveals differential modes of metabolic coupling among ubiquitous oxygen minimum zone microbes

      SciTech Connect (OSTI)

      Hawley, Alyse K.; Brewer, Heather M.; Norbeck, Angela D.; Pasa-Tolic, Ljiljana; Hallam, Steven J.

      2014-08-05

      Oxygen minimum zones (OMZs) are intrinsic water column features arising from respiratory oxygen demand during organic matter degradation in stratified marine waters. Currently OMZs are expanding due to global climate change. This expansion alters marine ecosystem function and the productivity of fisheries due to habitat compression and changes in biogeochemical cycling, leading to fixed nitrogen loss and greenhouse gas production. Here we use metaproteomics to chart spatial and temporal patterns of gene expression along defined redox gradients in a seasonally anoxic fjord, Saanich Inlet, to better understand microbial community responses to OMZ expansion. The expression of metabolic pathway components for nitrification, anaerobic ammonium oxidation (anammox), denitrification and inorganic carbon fixation predominantly co-varied with abundance and distribution patterns of Thaumarchaeota, Nitrospira, Planctomycetes and SUP05/ARCTIC96BD-19 Gammaproteobacteria. Within these groups, pathways mediating inorganic carbon fixation and nitrogen and sulfur transformations were differentially expressed across the redoxcline. Nitrification and inorganic carbon fixation pathways affiliated with Thaumarchaeota dominated dysoxic waters, and denitrification, sulfur-oxidation and inorganic carbon fixation pathways affiliated with SUP05 dominated suboxic and anoxic waters. Nitrite oxidation and anammox pathways, affiliated with Nitrospina and Planctomycetes respectively, also exhibited redox partitioning between dysoxic and suboxic waters. The differential expression of these pathways under changing water column redox conditions has quantitative implications for coupled biogeochemical cycling, linking different modes of inorganic carbon fixation with distributed nitrogen and sulfur-based energy metabolism extensible to coastal and open ocean OMZs.

    14. Solid Waste Information and Tracking System (SWITS) Software Requirements Specification

      SciTech Connect (OSTI)

      MAY, D.L.

      2000-03-22

      This document is the primary document establishing requirements for the Solid Waste Information and Tracking System (SWITS) as it is converted to a client-server architecture. The purpose is to provide the customer and the performing organizations with the requirements for SWITS in the new environment. This Software Requirements Specification (SRS) describes the system requirements for the SWITS project, and follows the PHMC Engineering Requirements, HNF-PRO-1819, and Computer Software Quality Assurance Requirements, HNF-PRO-309, policies. This SRS includes sections on general description, specific requirements, references, appendices, and an index. The SWITS system defined in this document stores information about the solid waste inventory on the Hanford Site. Waste is tracked as it is generated, analyzed, shipped, stored, and treated. In addition to inventory reports, a number of reports for regulatory agencies are produced.

    15. High Performance Computing at the Oak Ridge Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Outline: Our Mission; Computer Systems: Present, Past, Future; Challenges Along the Way; Resources for Users. Our Mission: world's most powerful computing facility; nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; nation's most diverse energy ...

    16. computational-hydraulics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis, March 21-22, 2012, Argonne, Illinois. Dr. Steven Lottes. A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis will be held at TRACC from March 21-22, 2012. The course assumes a basic knowledge of fluid mechanics and will make extensive use of hands-on tutorials. CD-adapco will issue ...

    17. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost® 3 on the TRACC Cluster, Oct. 21-22, 2010, Argonne TRACC. Dr. Cezary Bojanowski, Dr. Ronald F. Kulak. The LS-PrePost Introductory Course was held October 21-22, 2010 at TRACC in West Chicago, with interactive participation on-site as well as remotely via the Internet. Intended primarily for finite element analysts with ...

    18. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing, from local clusters to global networks, are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as Grid) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with "SETI@home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations, with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high-performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege o...
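
      The talks mention Monte Carlo techniques imported from physics; a minimal sketch of why such workloads suit grids (Python; a European call under geometric Brownian motion, with hypothetical parameters; each grid worker can run an independent seed and the results average):

        import math, random

        def mc_european_call(s0, k, r, sigma, t, n_paths, seed=0):
            """Price a European call by Monte Carlo under geometric Brownian motion."""
            rng = random.Random(seed)
            drift = (r - 0.5 * sigma**2) * t
            vol = sigma * math.sqrt(t)
            payoff_sum = 0.0
            for _ in range(n_paths):
                s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
                payoff_sum += max(s_t - k, 0.0)
            return math.exp(-r * t) * payoff_sum / n_paths

        # Embarrassingly parallel: each worker runs a different seed, then average.
        print(mc_european_call(s0=100, k=100, r=0.05, sigma=0.2, t=1.0, n_paths=100_000))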

    19. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, William C.

      1998-01-01

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them.
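
      For background, the steep-angle Littrow geometry mentioned above follows from the standard grating equation (general optics, not from the patent text). With the diffracted beam retracing the incident beam, the order-m relation d(\sin\theta_i + \sin\theta_m) = m\lambda reduces to

        2 d \sin\theta = |m| \lambda, \qquad |m| = 1 \text{ for the } -1 \text{ order,}

      so the patch grating period d and the readout laser wavelength \lambda together fix the steep probing and reading angle \theta.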

    20. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, W.C.

      1998-03-17

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers is disclosed. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them. 5 figs.

    1. Scanning computed confocal imager

      DOE Patents [OSTI]

      George, John S. (Los Alamos, NM)

      2000-03-14

      There is provided a confocal imager comprising a light source emitting light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light, directs it onto a target, and passes light reflected from the target to a video capturing device, which receives the reflected light and transfers a digital image of it to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation the invention omits the beam splitter and captures light passed through the target.

    2. Introduction to High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      June 10, 2013. Downloads: Gerber-HPC-2.pdf ...

    3. Computer Wallpaper | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      We've incorporated the tagline, Creating Materials and Energy Solutions, into a computer wallpaper so you can display it on your desktop as a constant reminder. ...

    4. Institutional Computing Executive Group Review of Multi-programmatic & Institutional Computing, Fiscal Year 2005 and 2006

      SciTech Connect (OSTI)

      Langer, S; Rotman, D; Schwegler, E; Folta, P; Gee, R; White, D

      2006-12-18

      The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflect the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data-intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.

    5. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New facility and methods support conserving water and creating recycled products. Using reverse osmosis to "super purify" water allows the system to reuse water and cool down our powerful yet thirsty computers. January 30, 2014. LANL's Sanitary Effluent Reclamation Facility is key to reducing the Lab's discharge of liquid. Millions of gallons of industrial ...

    6. Fermilab | Science at Fermilab | Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing is indispensable to science at Fermilab. High-energy physics experiments generate an astounding amount of data that physicists need to store, analyze and communicate with others. Cutting-edge technology allows scientists to work quickly and efficiently to advance our understanding of the world. Fermilab's Computing Division is recognized for its expertise in handling huge amounts of data, its success in high-speed parallel computing and its willingness to take its craft in ...

    7. History | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Argonne Leadership Computing Facility (ALCF) was established at Argonne National Laboratory in 2004 as part of a U.S. Department of Energy (DOE) initiative dedicated to enabling leading-edge computational capabilities to advance fundamental discovery and understanding in a broad range of scientific and engineering disciplines. Supported by the Advanced Scientific Computing Research (ASCR) program within DOE's Office of Science, the ALCF is one half of the DOE Leadership ...

    8. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.
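
      A minimal sketch of the agent-per-entity idea (Python; the class names and message protocol are hypothetical illustrations, not the patented architecture):

        # Each microgrid entity (load, source, storage, switch) gets its own agent;
        # agents exchange state by a shared protocol, with no central master.
        class Agent:
            def __init__(self, entity_id, kind):
                self.entity_id, self.kind = entity_id, kind

            def broadcast_state(self):
                return {"id": self.entity_id, "kind": self.kind}

            def step(self, messages):
                # Protocol stub: e.g., a source agent could curtail output when
                # the aggregate load reported by the other agents drops.
                pass

        agents = [Agent("pv1", "source"), Agent("bldg7", "load"), Agent("batt2", "storage")]
        inbox = [a.broadcast_state() for a in agents]
        for a in agents:
            a.step(inbox)

      The key design point the patent emphasizes is that control survives the loss of any single device, because every entity carries its own decision-making agent.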

    9. Regulatory Requirements | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Executive Order 13423, Strengthening Federal Environment, Energy, and Transportation Management (January 26, 2007) and Executive Order 13514, Federal ...

    10. Quality Work Plan Training Requirement

      Broader source: Energy.gov [DOE]

      Describes the Weatherization Assistance Program's comprehensive Quality Work Plan training requirement and the resources available to meet this obligation in the field.

    11. Noise tolerant spatiotemporal chaos computing

      SciTech Connect (OSTI)

      Kia, Behnam; Kia, Sarvenaz; Ditto, William L.; Lindner, John F.; Sinha, Sudeshna

      2014-12-01

      We introduce and design a noise-tolerant chaos computing system based on a coupled map lattice (CML) and the noise reduction capabilities inherent in coupled dynamical systems. The resulting spatiotemporal chaos computing system is more robust to noise than a single-map chaos computing system. In this CML-based approach to computing, under the coupled dynamics, the local noise from different nodes of the lattice diffuses across the lattice, and the noise contributions attenuate one another's effects, resulting in a system with less noise content and a more robust chaos computing architecture.
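
      A minimal sketch of the kind of lattice described (Python; the diffusively coupled logistic-map update below is the standard CML form, assumed here since the abstract does not reproduce the paper's exact map):

        import random

        def logistic(x, r=4.0):
            return r * x * (1.0 - x)  # fully chaotic logistic map

        def cml_step(lattice, eps, noise_sigma=0.0):
            """One update of a diffusively coupled map lattice with additive noise."""
            n = len(lattice)
            fx = [logistic(x) for x in lattice]
            new = []
            for i in range(n):
                v = ((1.0 - eps) * fx[i]
                     + 0.5 * eps * (fx[(i - 1) % n] + fx[(i + 1) % n])
                     + random.gauss(0.0, noise_sigma))
                new.append(min(max(v, 0.0), 1.0))  # clamp to the map's domain
            return new

        state = [random.random() for _ in range(64)]
        for _ in range(100):
            state = cml_step(state, eps=0.3, noise_sigma=1e-3)
        print(state[:4])

      The coupling term is what spreads each node's local noise over its neighbors, the averaging effect the abstract credits for the improved robustness.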

    12. Investigating an API for resilient exascale computing. (Technical Report) |

      Office of Scientific and Technical Information (OSTI)

      Title: Investigating an API for resilient exascale computing. Increased HPC capability comes with increased complexity, part counts, and fault occurrences. Increasing the resilience of systems and applications to faults is a critical requirement facing the viability of exascale systems, as the overhead of traditional checkpoint/restart is projected to outweigh its ...

    13. Investigating an API for resilient exascale computing. (Technical Report) |

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Title: Investigating an API for resilient exascale computing. Increased HPC capability comes with increased complexity, part counts, and fault occurrences. Increasing the resilience of systems and applications to faults is a critical requirement facing the viability of exascale systems, as the overhead of traditional checkpoint/restart is projected to outweigh its benefits due to fault ...

    14. Name Center for Applied Scientific Computing month day, 1998

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Presenters: Bosl, Art Mirin, Phil Duffy, Lawrence Livermore National Lab, Climate and Carbon Cycle Modeling Group, Center for Applied Scientific Computing, April 24, 2003. Title: High Resolution Climate Simulation and Regional Water Supplies. Slide highlights: high-performance computing for climate modeling as a planning tool; global warming is here, so now what? How will climate change really affect societies? Effects of global climate change are local; some effects of climate change can be mitigated; requires accurate ...

    15. John Shalf Gives Talk at San Francisco High Performance Computing Meetup

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In his role as NERSC's chief technology officer, John Shalf gave a talk on "Converging Interconnect Requirements for HPC and Warehouse Scale Computing" at the San Francisco High Performance Computing Meetup. The September 17 meeting was held at GeekdomSF in downtown San Francisco. The group, which describes ...

    16. AMRITA -- A computational facility

      SciTech Connect (OSTI)

      Shepherd, J.E.; Quirk, J.J.

      1998-02-23

      Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; and performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

    17. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers, in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior, using a coding protocol which describes when relationships should be maintained and when they should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality, in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous 'valid state' was noted.
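
      The strong-versus-breakable link idea has a close analogue in mainstream languages; a minimal sketch using Python's weakref module (an analogy for illustration, not the patented pointer mechanism):

        import weakref

        class Node:
            def __init__(self, name):
                self.name = name
                self.children = []   # strong links: children stay alive with the parent
                self._parent = None  # weak link: does not keep the parent alive

            def add_child(self, child):
                self.children.append(child)
                child._parent = weakref.ref(self)  # breakable back-reference

            @property
            def parent(self):
                return self._parent() if self._parent else None

        root = Node("root")
        leaf = Node("leaf")
        root.add_child(leaf)
        print(leaf.parent.name)  # "root"
        del root                 # the only strong reference is gone; root is collected
        print(leaf.parent)       # None: the weak link is broken automatically

      Encoding use-status into the link itself, as here, is what lets collection happen without a separate cycle-tracing pass.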

    18. Open-Source Software in Computational Research: A Case Study

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Syamlal, Madhava; O'Brien, Thomas J.; Benyahia, Sofiane; Gel, Aytekin; Pannala, Sreekanth

      2008-01-01

      A case study of open-source (OS) development of the computational research software MFIX, used for multiphase computational fluid dynamics simulations, is presented here. The verification and validation steps required for constructing modern computational software and the advantages of OS development in those steps are discussed. The infrastructure used for enabling the OS development of MFIX is described. The impact of OS development on computational research and education in gas-solids flow, as well as the dissemination of information to other areas such as geophysical and volcanology research, is demonstrated. This study shows that the advantages of OS development were realized in the case of MFIX: verification by many users, which enhances software quality; the use of software as a means for accumulating and exchanging information; the facilitation of peer review of the results of computational research.

    19. Introduction to High Performance Computers Richard Gerber NERSC User Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      What are the main parts of a computer? Merit Badge Requirements ... 4. Explain the following to your counselor: a. The five major parts of a computer. ... The Boy Scouts of America offer a Computers Merit Badge. What are the "5 major parts"? Answers differ by source:
        eHow.com: CPU, RAM, Hard Drive, Video Card, ...
        Answers.com: CPU, Monitor, Printer, Mouse, ...
        Fluther.com: CPU, RAM, Storage, Keyboard/Mouse, ...
        Yahoo!: CPU, RAM, Power Supply, Video Card, ...
        Wikipedia: Motherboard, Power Supply, Removable Media, Secondary Storage, ...

    20. Using the NEPA Requirements and Guidance - Search Index

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Using the NEPA Requirements and Guidance - Search Index. Step 1: Download and Set Up. 1. Locate the downloaded file, right click on it, select "Extract all", and extract it to any location on your computer or USB drive. 2. Locate and open the extracted folder "NEPA Requirements and Guidance - Search Index". 3. Locate and open the .PDX file titled "Search - NEPA Requirements and Guidance" to open the search form. Step 2: Entering a Search Term or Phrase. Please note: the search form ...

    1. Measures of agreement between computation and experiment:validation metrics.

      SciTech Connect (OSTI)

      Barone, Matthew Franklin; Oberkampf, William Louis

      2005-08-01

      With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables and sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric and also features that should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
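
      A minimal sketch of a confidence-interval-based comparison (Python; the t-based interval on the mean model-experiment difference is a standard construction used here for illustration, not necessarily the paper's exact metric):

        import math, statistics

        def validation_metric(model, experiment, t_crit=2.776):
            """Mean model-experiment difference with a t-based confidence half-width;
            t_crit=2.776 is the 95% two-sided value for 4 degrees of freedom (n=5)."""
            diffs = [m - e for m, e in zip(model, experiment)]
            n = len(diffs)
            mean_err = statistics.fmean(diffs)
            half_width = t_crit * statistics.stdev(diffs) / math.sqrt(n)
            return mean_err, (mean_err - half_width, mean_err + half_width)

        model      = [10.1, 11.9, 14.2, 16.0, 18.3]
        experiment = [10.0, 12.2, 13.8, 16.5, 18.0]
        err, ci = validation_metric(model, experiment)
        print(f"estimated model error {err:+.3f}, 95% CI {ci}")

      Unlike a side-by-side plot, such a measure is a single computable number with a stated uncertainty, which is the property the authors argue a validation metric should have.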

    2. Part B - Requirements & Funding Information PART B - Requirements...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      (Highlighted text is instructions; remove the instructions from the interagency agreement.) Attachment 3.a, Part B. PART B - Requirements & Funding Information. B.1. Purpose. This is for ...

    3. Biological and Environmental Research Network Requirements

      SciTech Connect (OSTI)

      Balaji, V.; Boden, Tom; Cowley, Dave; Dart, Eli; Dattoria, Vince; Desai, Narayan; Egan, Rob; Foster, Ian; Goldstone, Robin; Gregurick, Susan; Houghton, John; Izaurralde, Cesar; Johnston, Bill; Joseph, Renu; Kleese-van Dam, Kerstin; Lipton, Mary; Monga, Inder; Pritchard, Matt; Rotman, Lauren; Strand, Gary; Stuart, Cory; Tatusova, Tatiana; Tierney, Brian; Thomas, Brian; Williams, Dean N.; Zurawski, Jason

      2013-09-01

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet be a highly successful enabler of scientific discovery for over 25 years. In November 2012, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the BER program office. Several key findings resulted from the review. Among them: 1) The scale of data sets available to science collaborations continues to increase exponentially. This has broad impact, both on the network and on the computational and storage systems connected to the network. 2) Many science collaborations require assistance to cope with the systems and network engineering challenges inherent in managing the rapid growth in data scale. 3) Several science domains operate distributed facilities that rely on high-performance networking for success. Key examples illustrated in this report include the Earth System Grid Federation (ESGF) and the Systems Biology Knowledgebase (KBase). This report expands on these points, and addresses others as well. The report contains a findings section as well as the text of the case studies discussed at the review.

    4. Other World Computing | Open Energy Information

      Open Energy Info (EERE)

      Name: Other World Computing; Facility: Other World Computing; Sector: Wind energy; Facility Type: Community Wind; Facility Status: In Service ...

    5. CLAMR (Compute Language Adaptive Mesh Refinement)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CLAMR (Compute Language Adaptive Mesh Refinement) is being developed as a DOE ...

    6. Computer_Vision

      Energy Science and Technology Software Center (OSTI)

      2002-10-04

      The Computer_Vision software performs object recognition using a novel multi-scale characterization and matching algorithm. To understand the multi-scale characterization and matching software, it is first necessary to understand some details of the Computer Vision (CV) Project. This project has focused on providing algorithms and software that provide an end-to-end toolset for image processing applications. At a high level, this end-to-end toolset focuses on 7 key steps. The first steps are geometric transformations. 1) Image Segmentation. This step essentially classifies pixels in the input image as either being of interest or not of interest. We have also used GENIE segmentation output for this Image Segmentation step. 2) Contour Extraction (patent submitted). This takes the output of Step 1 and extracts contours for the blobs consisting of pixels of interest. 3) Constrained Delaunay Triangulation. This is a well-known geometric transformation that creates triangles inside the contours. 4) Chordal Axis Transform (CAT). This patented geometric transformation takes the triangulation output from Step 3 and creates a concise and accurate structural representation of a contour. From the CAT, we create a linguistic string, with associated metrical information, that provides a detailed structural representation of a contour. 5) Normalization. This takes an attributed linguistic string output from Step 4 and balances it. This ensures that the linguistic representation accurately represents the major sections of the contour. Steps 6 and 7 are implemented by the multi-scale characterization and matching software. 6) Multi-scale Characterization. This takes as input the attributed linguistic string output from Normalization. Rules from a context-free grammar are applied in reverse to create a tree-like representation for each contour. For example, one of the grammar's rules is L -> (LL). When an (LL) is seen in a string, a parent node is created that points to the four child symbols '(', 'L', 'L', and ')'. Levels in the tree can then be thought of as coarser (towards the root) or finer (towards the leaves) representations of the same contours. 7) Multi-scale Matching. Having a multi-scale characterization allows us to compare objects at a coarser level before matching at finer levels of detail. Matching at a coarser level not only increases the speed of the matching process (you're comparing fewer symbols), but also increases accuracy, since small variations along contours do not significantly detract from two objects' similarity.
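
      Step 6 can be pictured with a toy reducer that applies the quoted rule in reverse, producing successively coarser strings (Python; only the rule L -> (LL) comes from the abstract, the rest is illustrative):

        # Toy multi-scale characterization: repeatedly replace "(LL)" with "L",
        # so each pass yields a coarser description of the contour string.
        def coarsen(s, rule_lhs="L", rule_rhs="(LL)"):
            levels = [s]
            while rule_rhs in levels[-1]:
                levels.append(levels[-1].replace(rule_rhs, rule_lhs))
            return levels  # levels[0] finest ... levels[-1] coarsest

        for level in coarsen("((LL)(LL))"):
            print(level)
        # ((LL)(LL))  ->  (LL)  ->  L

      Comparing two contours at the coarse end of such a hierarchy first is what makes the matching in Step 7 both faster and more tolerant of small contour variations.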

    7. Cluster computing software for GATE simulations

      SciTech Connect (OSTI)

      Beenhouwer, Jan de; Staelens, Steven; Kruecker, Dirk; Ferrer, Ludovic; D'Asseler, Yves; Lemahieu, Ignace; Rannou, Fernando R.

      2007-06-15

      Geometry and tracking (GEANT4) is a Monte Carlo package designed for high energy physics experiments. It is used as the basis layer for Monte Carlo simulations of nuclear medicine acquisition systems in GEANT4 Application for Tomographic Emission (GATE). GATE allows the user to realistically model experiments using accurate physics models and time synchronization for detector movement through a script language contained in a macro file. The downside of this high accuracy is long computation time. This paper describes a platform independent computing approach for running GATE simulations on a cluster of computers in order to reduce the overall simulation time. Our software automatically creates fully resolved, nonparametrized macros accompanied with an on-the-fly generated cluster specific submit file used to launch the simulations. The scalability of GATE simulations on a cluster is investigated for two imaging modalities, positron emission tomography (PET) and single photon emission computed tomography (SPECT). Due to a higher sensitivity, PET simulations are characterized by relatively high data output rates that create rather large output files. SPECT simulations, on the other hand, have lower data output rates but require a long collimator setup time. Both of these characteristics hamper scalability as a function of the number of CPUs. The scalability of PET simulations is improved here by the development of a fast output merger. The scalability of SPECT simulations is improved by greatly reducing the collimator setup time. Accordingly, these two new developments result in higher scalability for both PET and SPECT simulations and reduce the computation time to more practical values.
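
      The paper's cluster approach can be pictured as seed-splitting plus merging. A minimal sketch (Python; the macro directives, file names, and submit command are hypothetical placeholders, not GATE's actual macro syntax or the authors' tool):

        # Split one long simulation into independent jobs: each job gets a distinct
        # random seed and 1/N of the total events; outputs are merged afterward.
        N_TOTAL, N_JOBS = 10_000_000, 40
        MACRO_TEMPLATE = "setSeed {seed}\nrunEvents {events}\noutput job{job}.dat\n"

        for job in range(N_JOBS):
            with open(f"job{job}.mac", "w") as fh:
                fh.write(MACRO_TEMPLATE.format(seed=1000 + job,
                                               events=N_TOTAL // N_JOBS, job=job))
            print(f"qsub run_gate.sh job{job}.mac")  # placeholder submit line

        # After all jobs finish, a merger concatenates or sums the per-job outputs.

      The abstract's two bottlenecks map directly onto this picture: PET scaling is limited by the size of the per-job outputs to be merged, SPECT scaling by the fixed collimator setup cost repeated in every job.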

    8. Managing System of Systems Requirements with a Requirements Screening Group

      SciTech Connect (OSTI)

      Ronald R. Barden

      2012-07-01

      Figuring out an effective and efficient way to manage not only your Requirements Baseline, but also the development of all your individual requirements during a program's or project's Conceptual and Development Life Cycle Stages, can be both daunting and difficult. This is especially so when you are dealing with a complex and large System of Systems (SoS) program with potentially thousands and thousands of top-level requirements, as well as an equal number of lower-level system, subsystem and configuration item requirements that need to be managed. The task is made even more overwhelming when you add in integration with multiple requirements development teams (e.g., Integrated Product Development Teams (IPTs)) and/or numerous system/subsystem design teams. One solution for tackling this difficult activity on a recent large System of Systems program was to develop and make use of a Requirements Screening Group (RSG). This group is essentially a team made up of co-chairs from the various stakeholders with an interest in the program of record, who are enabled and accountable for requirements development on the program/project. The RSG co-chairs, often with the help of individual support teams, work together as a program board to monitor, make decisions on, and provide guidance on all requirements development activities during the Conceptual and Development Life Cycle Stages of a program/project. In addition, the RSG can establish and maintain the Requirements Baseline, monitor and enforce requirements traceability across the entire program, and work with other elements of the program/project to ensure integration and coordination.

    9. An estimate for the sum of a Dirichlet series in terms of the minimum of its modulus on a vertical line segment

      SciTech Connect (OSTI)

      Gaisin, Ahtyar M; Rakhmatullina, Zhanna G

      2011-12-31

      The behaviour of the sum of an entire Dirichlet series is analyzed in terms of the minimum of its modulus on a system of vertical line segments. A more general problem, connected with the Pólya conjecture, is also posed and solved; it concerns the minimum modulus of an entire function with Fabry gaps and its growth along curves going to infinity. Bibliography: 33 titles.
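
      For orientation (the standard setting, not quoted from the paper): an entire Dirichlet series with positive, increasing exponents has the form

        F(s) = \sum_{n=1}^{\infty} a_n e^{s \lambda_n}, \qquad 0 < \lambda_n \uparrow \infty,

      and the quantity of interest is the minimum modulus m(\sigma) = \min_t |F(\sigma + it)|, taken over a vertical line segment, which is then used to bound the sum F itself.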

    10. HEP Science Network Requirements--Final Report

      SciTech Connect (OSTI)

      Bakken, Jon; Barczyk, Artur; Blatecky, Alan; Boehnlein, Amber; Carlson, Rich; Chekanov, Sergei; Cotter, Steve; Cottrell, Les; Crawford, Glen; Crawford, Matt; Dart, Eli; Dattoria, Vince; Ernst, Michael; Fisk, Ian; Gardner, Rob; Johnston, Bill; Kent, Steve; Lammel, Stephan; Loken, Stewart; Metzger, Joe; Mount, Richard; Ndousse-Fetter, Thomas; Newman, Harvey; Schopf, Jennifer; Sekine, Yukiko; Stone, Alan; Tierney, Brian; Tull, Craig; Zurawski, Jason

      2010-04-27

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009 ESnet and the Office of High Energy Physics (HEP), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by HEP. The international HEP community has been a leader in data intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years for HEP program related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans-Atlantic connectivity is more complex than network engineering for intra-US connectivity. This is because transoceanic circuits have lower reliability and longer repair times when compared with land-based circuits. Therefore, trans-Atlantic connectivity requires greater deployed bandwidth and diversity to ensure reliability and service continuity of the user-level required data transfer rates. (4) Trans-Atlantic traffic load and patterns must be monitored, and projections adjusted if necessary. There is currently a shutdown planned for the LHC in 2012 that may affect projections of trans-Atlantic bandwidth requirements. (5) There is a significant need for network tuning and troubleshooting during the establishment of new LHC Tier-2 and Tier-3 facilities. ESnet will work with the HEP community to help new sites effectively use the network. (6) SLAC is building the CCD camera for the LSST. This project will require significant bandwidth (up to 30Gbps) to NCSA over the next few years. (7) The accelerator modeling program at SLAC could require the movement of 1PB simulation data sets from the Leadership Computing Facilities at Argonne and Oak Ridge to SLAC. The data sets would need to be moved overnight, and moving 1PB in eight hours requires more than 300Gbps of throughput. This requirement is dependent on the deployment of analysis capabilities at SLAC, and is about five years away. (8) It is difficult to achieve high data transfer throughput to sites in China.
Projects that need to transfer data in or out of China are encouraged to deploy test and measurement infrastructure (e.g. perfSONAR) and allow time for performance tuning.
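
      The throughput figure in item (7) is straight arithmetic, and the same calculation is useful when sizing any bulk transfer. A minimal sketch (the 80% utilization factor is an illustrative assumption, not from the workshop report):

        # Sustained throughput needed to move a fixed data set in a fixed window.
        data_bits = 1e15 * 8          # 1 PB expressed in bits
        window_s = 8 * 3600           # overnight window: eight hours in seconds

        required_gbps = data_bits / window_s / 1e9
        print(f"{required_gbps:.0f} Gbps sustained")         # ~278 Gbps

        # Transfers never run at 100% of line rate; assuming ~80% utilization
        # (illustrative) gives the provisioned capacity the text calls for.
        print(f"{required_gbps / 0.8:.0f} Gbps provisioned") # ~347 Gbps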

    11. Bioinformatics Computing Consultant Position Available

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Bioinformatics Computing Consultant Position Available Bioinformatics Computing Consultant Position Available October 31, 2011 by Katie Antypas NERSC and the Joint Genome Institute (JGI) are searching for two individuals who can help biologists exploit advanced computing platforms. JGI provides production sequencing and genomics for the Department of Energy. These activities are critical to the DOE missions in areas related to clean energy generation and environmental characterization and

    12. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Parallel Computing Summer Research Internship: creates next-generation leaders in HPC research and applications development. Contacts: Program Co-Leads Robert (Bob) Robey, Gabriel Rockefeller, and Hai Ah Nam; Professional Staff Assistant Nickole Aguilar Garcia, (505) 665-3048. The Parallel Computing Summer Research Internship is an intense 10-week program aimed at providing students with a solid foundation in modern high performance

    13. computational-fluid-dynamics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Table of Contents (course, date, location):
      - Advanced Hydraulic and Aerodynamic Analysis Using CFD: March 27-28, 2013, Argonne TRACC, Argonne, IL
      - Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis: March 21-22, 2012, Argonne TRACC, Argonne, IL
      - Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis: March 30-31, 2011, Argonne TRACC, Argonne, IL
      - Computational Hydraulics for Transportation Workshop: September 23-24, 2009, Argonne TRACC, West Chicago, IL

    14. Careers | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Careers at Argonne Looking for a unique opportunity to work at the forefront of high-performance computing? At the Argonne Leadership Computing Facility, we are helping to redefine what's possible in computational science. With some of the most powerful supercomputers in the world and a talented and diverse team of experts, we enable researchers to pursue groundbreaking discoveries that would otherwise not be possible. Check out our open positions below. For the most current listing of

    15. Federal Requirements for the Web

      Broader source: Energy.gov [DOE]

      Federal laws and requirements govern the Office of Energy Efficiency and Renewable Energy (EERE) with regards to its websites and other digital media.

    16. Bioinformatics Computing Consultant Position Available

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      You can read more about the positions and apply at jobs.lbl.gov: Bioinformatics High Performance Computing Consultant (job number: 73194) and Software Developer for High...

    17. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      should have basic experience with a scientific computing language, such as C, C++, or Fortran, and with the Linux operating system. Duration & Location The program will last ten...

    18. Tukey | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Tukey: The primary purpose of...

    19. Thrusts in High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Exascale computers (1000x Hopper) in the next decade: - Manycore processors using graphics, games, embedded cores, or other low power designs offer 100x in power efficiency -...

    20. Advanced Simulation and Computing Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The SSP mission is to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality. ASC works in partnership with computer ...

    1. Efficient parallel global garbage collection on massively parallel computers

      SciTech Connect (OSTI)

      Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori

      1994-12-31

      On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient garbage collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) admits minimal pause times in ongoing computations, and (4) has been shown to scale up to 1024-node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods are used for confirming the arrival of pending messages: one counts the number of messages and the other uses network "bulldozing." Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, the Fujitsu AP1000, reveals various favorable properties of the algorithm.
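
      The "global weight counting scheme" mentioned above is in the family of weighted reference counting, whose key property is that copying a remote reference requires no message to the owner. A minimal single-process sketch of that idea (class and method names are illustrative, not from the paper):

        # Weighted reference counting: the owner tracks outstanding weight;
        # copying a reference splits its weight locally (no owner message),
        # and only dropping a reference sends weight back to the owner.
        class OwnedObject:
            def __init__(self, total_weight=256):
                self.weight = total_weight      # weight still held by references

            def return_weight(self, w):
                self.weight -= w
                if self.weight == 0:
                    print("object reclaimed")   # no references remain anywhere

        class Reference:
            def __init__(self, obj, weight):
                self.obj, self.weight = obj, weight

            def copy(self):
                half = self.weight // 2         # split weight; owner not contacted
                self.weight -= half
                return Reference(self.obj, half)

            def drop(self):
                self.obj.return_weight(self.weight)  # one message to the owner
                self.weight = 0

        obj = OwnedObject()
        r1 = Reference(obj, 256)
        r2 = r1.copy()   # a remote copy costs no owner message
        r1.drop()
        r2.drop()        # outstanding weight reaches zero -> "object reclaimed"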

    2. THE ROCHE LIMIT FOR CLOSE-ORBITING PLANETS: MINIMUM DENSITY, COMPOSITION CONSTRAINTS, AND APPLICATION TO THE 4.2 hr PLANET KOI 1843.03

      SciTech Connect (OSTI)

      Rappaport, Saul; Sanchis-Ojeda, Roberto; Winn, Joshua N.; Rogers, Leslie A.; Levine, Alan E-mail: sar@mit.edu E-mail: larogers@caltech.edu

      2013-08-10

      The requirement that a planet must orbit outside of its Roche limit gives a lower limit on the planet's mean density. The minimum density depends almost entirely on the orbital period and is immune to systematic errors in the stellar properties. We consider the implications of this density constraint for the newly identified class of small planets with periods shorter than half a day. When the planet's radius is accurately known, this lower limit to the density can be used to restrict the possible combinations of iron and rock within the planet. Applied to KOI 1843.03, a 0.6 Earth-radius planet with the shortest known orbital period of 4.245 hr, the planet's mean density must be greater than approximately 7 g cm^-3. By modeling the planetary interior subject to this constraint, we find that the composition of the planet must be mostly iron, with at most a modest fraction of silicates (less than approximately 30% by mass).
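
      The period-density scaling can be reproduced with the textbook fluid-body Roche limit, a_R = 2.44 R (rho_star/rho_planet)^(1/3), combined with Kepler's third law; the coefficient 2.44 is the classical fluid value and is an assumption here (the paper's detailed interior modeling yields the slightly lower bound quoted above). A minimal sketch:

        import math

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        def roche_min_density(period_hr, coeff=2.44):
            """Minimum mean density (kg/m^3) of a body orbiting at its fluid
            Roche limit; via Kepler's third law the stellar mass and radius
            cancel, leaving a function of the orbital period alone."""
            P = period_hr * 3600.0
            return coeff**3 * 3.0 * math.pi / (G * P**2)

        rho = roche_min_density(4.245)       # KOI 1843.03's orbital period
        print(f"{rho / 1000:.1f} g/cm^3")    # ~8.8 with this crude coefficient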

    3. Optimizing minimum free-energy crossing points in solution: Linear-response free energy/spin-flip density functional theory approach

      SciTech Connect (OSTI)

      Minezawa, Noriyuki

      2014-10-28

      Examining photochemical processes in solution requires understanding the solvent effects on the potential energy profiles near conical intersections (CIs). For that purpose, the CI point in solution is determined as the crossing between nonequilibrium free energy surfaces. In this work, the nonequilibrium free energy is described using the combined method of linear-response free energy and collinear spin-flip time-dependent density functional theory. The proposed approach reveals the solvent effects on the CI geometries of stilbene in an acetonitrile solution and those of thymine in water. Polar acetonitrile decreases the energy difference between the twisted minimum and the twisted-pyramidalized CI of stilbene. For thymine in water, hydrogen bond formation significantly stabilizes the CI puckered at the carbonyl carbon atom. The result is consistent with the recent simulation showing that the reaction path via this geometry is open in water. Therefore, the present method is a promising way of identifying the free-energy crossing points that play an essential role in the photochemistry of solvated molecules.
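
      Locating minimum-energy (or minimum-free-energy) crossing points is, at its core, a constrained optimization: minimize the average of two surfaces while driving their gap to zero. The sketch below shows the generic penalty-function formulation on toy analytic surfaces; it is not the paper's linear-response free energy/spin-flip TDDFT machinery, just the optimization step in miniature:

        import numpy as np
        from scipy.optimize import minimize

        # Toy 2-D energy surfaces standing in for two electronic states.
        def E1(x): return (x[0] - 1.0)**2 + x[1]**2
        def E2(x): return (x[0] + 1.0)**2 + x[1]**2 + 0.5 * x[1]

        def objective(x, sigma=50.0):
            # Mean energy plus a quadratic penalty on the gap; as sigma grows,
            # the minimizer approaches the minimum-energy crossing point.
            return 0.5 * (E1(x) + E2(x)) + sigma * (E1(x) - E2(x))**2

        res = minimize(objective, x0=np.zeros(2))
        print(res.x, abs(E1(res.x) - E2(res.x)))  # gap is driven near zero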

    4. Towards a Real-Time Cluster Computing Infrastructure

      SciTech Connect (OSTI)

      Hui, Peter SY; Chikkagoudar, Satish; Chavarría-Miranda, Daniel; Johnston, Mark R.

      2011-11-01

      Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored, largely due to the fact that until now, there has not been a need for such an environment. In this paper, we motivate the need for a cluster computing infrastructure capable of supporting computation over large datasets in real-time. Our motivating example is an analytical framework to support the next generation North American power grid, which is growing both in size and complexity. With streaming sensor data in the future power grid potentially reaching rates on the order of terabytes per day, the task of analyzing this data subject to real-time guarantees becomes a daunting task which will require the power of high-performance cluster computing capable of functioning under real-time constraints. One specific challenge that such an environment presents is the need for real-time networked communication between cluster nodes. In this paper, we discuss the need for real-time high-performance cluster computation, along with our work-in-progress towards an infrastructure which will ultimately enable such an environment.

    5. Manufacturing Energy and Carbon Footprint - Sector: Computer...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computers, Electronics and Electrical Equipment (NAICS 334, 335) Process Energy ... Carbon Footprint Sector: Computers, Electronics and Electrical Equipment (NAICS 334, ...

    6. Section H: Special Contract Requirements

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      H SPECIAL CONTRACT REQUIREMENTS Request for Proposal # DE-RP36-07GO97036 PART I SECTION H SPECIAL CONTRACT REQUIREMENTS TABLE OF CONTENTS: H.1 No Third Party Beneficiaries; H.2 Workforce Transition; H.3 Employee Compensation: Pay and

    7. Computer modeling of the global warming effect

      SciTech Connect (OSTI)

      Washington, W.M.

      1993-12-31

      The state of knowledge of global warming will be presented and two aspects examined: observational evidence and a review of the state of computer modeling of climate change due to anthropogenic increases in greenhouse gases. Observational evidence, indeed, shows global warming, but it is difficult to prove that the changes are unequivocally due to the greenhouse-gas effect. Although observational measurements of global warming are subject to "correction," researchers are showing consistent patterns in their interpretation of the data. Since the 1960s, climate scientists have been making their computer models of the climate system more realistic. Models started as atmospheric models and, through the addition of oceans, surface hydrology, and sea-ice components, they then became climate-system models. Because of computer limitations and the limited understanding of the degree of interaction of the various components, present models require substantial simplification. Nevertheless, in their present state of development climate models can reproduce most of the observed large-scale features of the real system, such as wind, temperature, precipitation, ocean current, and sea-ice distribution. The use of supercomputers to advance the spatial resolution and realism of earth-system models will also be discussed.

    8. Computation Directorate 2008 Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2009-03-25

      Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

    9. High-Performance Computing for Advanced Smart Grid Applications

      SciTech Connect (OSTI)

      Huang, Zhenyu; Chen, Yousu

      2012-07-06

      The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed, and the information infrastructure is fundamentally improved by a large number of smart meters and sensors that produce amounts of data several orders of magnitude larger than before. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in the smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

    10. Automotive Turbocharging: Industrial Requirements and Technology...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Turbocharging: Industrial Requirements and Technology Developments Automotive Turbocharging: Industrial Requirements and Technology Developments Significant improvements in...

    11. Radiant energy required for infrared neural stimulation

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Tan, Xiaodong; Rajguru, Suhrud; Young, Hunter; Xia, Nan; Stock, Stuart R.; Xiao, Xianghui; Richter, Claus-Peter

      2015-08-25

      Infrared neural stimulation (INS) has been proposed as an alternative method to electrical stimulation because of its spatially selective stimulation. Independent of the mechanism for INS, to translate the method into a device it is important to determine the energy for stimulation required at the target structure. Custom-designed, flat and angle-polished fibers were used to deliver the photons. By rotating the angle-polished fibers, the orientation of the radiation beam in the cochlea could be changed. INS-evoked compound action potentials and single unit responses in the central nucleus of the inferior colliculus (ICC) were recorded. X-ray computed tomography was used to determine the orientation of the optical fiber. Maximum responses were observed when the radiation beam was directed towards the spiral ganglion neurons (SGNs), whereas little response was seen when the beam was directed towards the basilar membrane. The radiant exposure required at the SGNs to evoke compound action potentials (CAPs) or ICC responses was on average 18.9 ± 12.2 or 10.3 ± 4.9 mJ/cm2, respectively. For cochlear INS it has been debated whether the radiation directly stimulates the SGNs or evokes a photoacoustic effect. The results support the view that a direct interaction between neurons and radiation dominates the response to INS.

    12. Radiant energy required for infrared neural stimulation

      SciTech Connect (OSTI)

      Tan, Xiaodong; Rajguru, Suhrud; Young, Hunter; Xia, Nan; Stock, Stuart R.; Xiao, Xianghui; Richter, Claus-Peter

      2015-08-25

      Infrared neural stimulation (INS) has been proposed as an alternative method to electrical stimulation because of its spatially selective stimulation. Independent of the mechanism for INS, to translate the method into a device it is important to determine the energy for stimulation required at the target structure. Custom-designed, flat and angle-polished fibers were used to deliver the photons. By rotating the angle-polished fibers, the orientation of the radiation beam in the cochlea could be changed. INS-evoked compound action potentials and single unit responses in the central nucleus of the inferior colliculus (ICC) were recorded. X-ray computed tomography was used to determine the orientation of the optical fiber. Maximum responses were observed when the radiation beam was directed towards the spiral ganglion neurons (SGNs), whereas little response was seen when the beam was directed towards the basilar membrane. The radiant exposure required at the SGNs to evoke compound action potentials (CAPs) or ICC responses was on average 18.9 ± 12.2 or 10.3 ± 4.9 mJ/cm2, respectively. For cochlear INS it has been debated whether the radiation directly stimulates the SGNs or evokes a photoacoustic effect. The results support the view that a direct interaction between neurons and radiation dominates the response to INS.

    13. BGE Communications Requirements | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      BGE Communications Requirements BGE Communications Requirements Chart of BGE Communications Requirements PDF icon BGE Communications Requirements More Documents & Publications Chart of communications requirements Lower Colorado River Authority Lower Colorado River Authority

    14. Chart of communications requirements | Department of Energy

      Office of Environmental Management (EM)

      Chart of communications requirements Chart of communications requirements Chart of communications requirements for BGE PDF icon Chart of communications requirements More Documents & Publications BGE Communications Requirements Lower Colorado River Authority Lower Colorado River Authority

    15. Multicore Challenges and Benefits for High Performance Scientific Computing

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Nielsen, Ida M.B.; Janssen, Curtis L.

      2008-01-01

      Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller-Plesset perturbation theory.
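
      The intra-node half of the hybrid message-passing/multi-threading model can be previewed in miniature: one process owns a block-row decomposition of the product and fans the work out across threads, while the inter-node message-passing layer (omitted here for brevity) would exchange blocks between processes. A minimal sketch:

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        # Thread-level piece of a hybrid matrix multiply: this process owns
        # all of A and B and computes C = A @ B one block-row per thread.
        n, blocks = 1024, 8
        A, B = np.random.rand(n, n), np.random.rand(n, n)

        def block_row(i):
            lo, hi = i * n // blocks, (i + 1) * n // blocks
            return A[lo:hi] @ B   # NumPy releases the GIL inside the product

        with ThreadPoolExecutor(max_workers=blocks) as pool:
            C = np.vstack(list(pool.map(block_row, range(blocks))))

        print(np.allclose(C, A @ B))  # True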

    16. Scalable Computation of Streamlines on Very Large Datasets

      SciTech Connect (OSTI)

      Pugmire, David; Childs, Hank; Garth, Christoph; Ahern, Sean; Weber, Gunther H.

      2009-09-01

      Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We perform performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme is able to perform well in different settings.
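
      Whatever the parallelization strategy, the kernel being distributed is numerical integration of dx/dt = v(x) through the vector field; the data-dependent access pattern of that integration is what makes load balancing hard. A minimal serial sketch with classical fourth-order Runge-Kutta (the analytic circular field stands in for simulation data):

        import numpy as np

        def velocity(p):
            # Stand-in field (solid-body rotation); a real pipeline samples
            # the simulation's data blocks here, which drives the I/O and
            # communication costs the parallel algorithms must balance.
            x, y = p
            return np.array([-y, x])

        def streamline(seed, h=0.01, steps=1000):
            """Trace a streamline with classical 4th-order Runge-Kutta."""
            pts = [np.asarray(seed, dtype=float)]
            for _ in range(steps):
                p = pts[-1]
                k1 = velocity(p)
                k2 = velocity(p + 0.5 * h * k1)
                k3 = velocity(p + 0.5 * h * k2)
                k4 = velocity(p + h * k3)
                pts.append(p + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4))
            return np.array(pts)

        curve = streamline([1.0, 0.0])
        print(np.linalg.norm(curve[-1]))  # ~1.0: the curve hugs the unit circle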

    17. Meeting Federal Energy Security Requirements

      Broader source: Energy.gov [DOE]

      Presentation—given at at the Fall 2012 Federal Utility Partnership Working Group (FUPWG) meeting—discusses the opportunity to increase the scope of federal-utility partnerships for meeting energy security requirements.

    18. Oak Ridge National Laboratory - Computing and Computational Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Directorate. Oak Ridge to acquire next generation supercomputer: The U.S. Department of Energy's (DOE) Oak Ridge Leadership Computing Facility (OLCF) has signed a contract with IBM to bring a next-generation supercomputer to Oak Ridge National Laboratory (ORNL). The OLCF's new hybrid CPU/GPU computing system, Summit, will be delivered in 2017.

    19. Introduction to computers: Reference guide

      SciTech Connect (OSTI)

      Ligon, F.V.

      1995-04-01

      The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical (SET) careers. Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation for this introduction to computers course.

    20. Power throttling of collections of computing elements

      DOE Patents [OSTI]

Bellofatto, Ralph E. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Crumley, Paul G. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Gooding, Thomas M. (Rochester, MN); Haring, Rudolf A. (Cortlandt Manor, NY); Megerian, Mark G. (Rochester, MN); Ohmacht, Martin (Yorktown Heights, NY); Reed, Don D. (Mantorville, MN); Swetz, Richard A. (Mahopac, NY); Takken, Todd (Brewster, NY)

      2011-08-16

      An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
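
      The claim describes a feedback loop: sensors report power draw, and a control device raises or lowers a throttle to hold the collection of computers under budget. A toy simulation of that loop (the plant model, budget, and step sizes are illustrative, not from the patent):

        import random

        POWER_BUDGET_W = 500.0

        def read_power_sensor(throttle):
            # Stand-in for the hardware sensors: modeled draw falls ~5% per
            # throttle step (purely illustrative plant model).
            return random.uniform(520.0, 560.0) * (1.0 - 0.05 * throttle)

        throttle = 0   # 0 = full speed; larger values run the nodes slower
        for step in range(20):
            power = read_power_sensor(throttle)
            if power > POWER_BUDGET_W and throttle < 10:
                throttle += 1          # over budget: throttle down
            elif power < 0.95 * POWER_BUDGET_W and throttle > 0:
                throttle -= 1          # headroom available: speed back up
            print(f"step {step:2d}: {power:6.1f} W  throttle={throttle}")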

    1. Federal and State Ethanol and Biodiesel Requirements (released in AEO2007)

      Reports and Publications (EIA)

      2007-01-01

      The Energy Policy Act of 2005 requires that the use of renewable motor fuels be increased from the 2004 level of just over 4 billion gallons to a minimum of 7.5 billion gallons in 2012, after which the requirement grows at a rate equal to the growth of the gasoline pool. The law does not require that every gallon of gasoline or diesel fuel be blended with renewable fuels. Refiners are free to use renewable fuels, such as ethanol and biodiesel, in geographic regions and fuel formulations that make the most sense, as long as they meet the overall standard. Conventional gasoline and diesel can be blended with renewables without any change to the petroleum components, although fuels used in areas with air quality problems are likely to require adjustment to the base gasoline or diesel fuel if they are to be blended with renewables.

    2. DOE SC Exascale Requirements Review: High Energy Physics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SC Exascale Requirements Review: High Energy Physics Bethesda Hyatt, June 10, 2015 Jim Siegrist Associate Director for High Energy Physics Office of Science, U.S. Department of Energy HEP Computing and Data Challenges * What's new? * In May 2014, the U.S. particle physics community updated its vision for the future - The P5 (Particle Physics Project Prioritization Panel) report presents a strategy for the next decade and beyond that enables discovery and maintains our position as a global leader

    3. Quantum Computing: Solving Complex Problems

      ScienceCinema (OSTI)

      DiVincenzo, David [IBM Watson Research Center

      2009-09-01

      One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

    4. SSRL Computer Account Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SSRL/LCLS Computer Account Request Form August 2009 Fill in this form and sign the security statement mentioned at the bottom of this page to obtain an account. Your Name: __________________________________________________________ Institution: ___________________________________________________________ Mailing Address: ______________________________________________________ Email Address: _______________________________________________________ Telephone:

    5. Computing at SSRL Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      contents you are looking for have moved. You will be redirected to the new location automatically in 5 seconds. Please bookmark the correct page at http://www-ssrl.slac.stanford.edu/content/staff-resources/computer-networking-group

    6. Events | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2:00 PM Finding Multiple Local Minima of Computationally Expensive Simulations Jeffery Larson Postdoctoral Appointee, MCS Building 240, Room 4301

    7. SSRL Computer Account Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SSRL/LCLS Computer Account Request Form August 2009 Fill in this form and sign the security statement mentioned at the bottom of this page to obtain an account. Your Name:...

    8. Computer Assisted Virtual Environment - CAVE

      ScienceCinema (OSTI)

      Erickson, Phillip; Podgorney, Robert; Weingartner, Shawn; Whiting, Eric

      2014-06-09

      Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

    9. Secure computing for the 'Everyman'

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Secure computing for the 'Everyman' Secure computing for the 'Everyman' If implemented on a wide scale, quantum key distribution technology could ensure truly secure commerce, banking, communications and data transfer. September 2, 2014 This small device developed at Los Alamos National Laboratory uses the truly random spin of light particles as defined by laws of quantum mechanics to generate a random number for use in a cryptographic key that can be used to securely transmit information

    10. computational-hydraulics-for-transportation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Transportation Workshop Sept. 23-24, 2009 Argonne TRACC Dr. Steven Lottes Announcement The Transportation Research and Analysis Computing Center at Argonne National Laboratory will hold a workshop on the use of computational hydraulics for transportation applications. The goals of the workshop are: Bring together people who are using or would benefit from the use of high performance cluster

    11. Computational Sciences and Engineering Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      If you have questions or comments regarding any of our research and development activities, how to work with ORNL and the Computational Sciences and Engineering (CSE) Division, or the content of this website please contact one of the following people: If you have questions regarding CSE technologies and capabilities, job opportunities, working with ORNL and the CSE Division, intellectual property, etc., contact, Shaun S. Gleason, Ph.D. Division Director, Computational Sciences and Engineering

    12. Computational Sciences and Engineering Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Sciences and Engineering Division is a major research division at the Department of Energy's Oak Ridge National Laboratory. CSED develops and applies creative information technology and modeling and simulation research solutions for National Security and National Energy Infrastructure needs. The mission of the Computational Sciences and Engineering Division is to enhance the country's capabilities in achieving important objectives in the areas of national defense, homeland

    13. Mira | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mira Ushers in a New Era of Scientific Supercomputing As one of the fastest supercomputers, Mira, our 10-petaflops IBM Blue Gene/Q system, is capable of 10 quadrillion calculations per second. With this computing power, Mira can do in one day what it would take

    14. Cooley | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cooley The primary purpose of Cooley is to analyze and visualize data produced on Mira. Equipped with state-of-the-art graphics processing units (GPUs), Cooley converts computational data from Mira

    15. LAMMPS | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      LAMMPS Overview LAMMPS is a general-purpose molecular dynamics software package for massively parallel computers. It is written in an exceptionally clean style that makes it one of the most popular codes for users to extend and

    16. Automatic computation of transfer functions

      DOE Patents [OSTI]

      Atcitty, Stanley; Watson, Luke Dale

      2015-04-14

      Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
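
      The patent's data model is not reproduced here, but the core idea, deriving a transfer function from a matrix (netlist) description of a physical system, can be sketched with ordinary nodal analysis on an RC low-pass filter; the element stamps and node numbering below are illustrative:

        import numpy as np

        # Netlist-like description of an RC low-pass: (type, node+, node-, value).
        # Node 0 is ground, node 1 is the driven input, node 2 is the output.
        netlist = [("R", 1, 2, 1e3),    # 1 kOhm series resistor
                   ("C", 2, 0, 1e-6)]   # 1 uF shunt capacitor

        def transfer(netlist, freqs, in_node=1, out_node=2, n_nodes=2):
            H = []
            for f in freqs:
                s = 2j * np.pi * f
                Y = np.zeros((n_nodes, n_nodes), dtype=complex)
                for kind, a, b, val in netlist:   # stamp each element
                    y = 1.0 / val if kind == "R" else s * val
                    for n in (a, b):
                        if n:
                            Y[n - 1, n - 1] += y
                    if a and b:
                        Y[a - 1, b - 1] -= y
                        Y[b - 1, a - 1] -= y
                # Drive in_node with an ideal 1 V source: move its column to
                # the right-hand side and solve for the remaining nodes.
                keep = [n for n in range(n_nodes) if n != in_node - 1]
                rhs = -Y[np.ix_(keep, [in_node - 1])].ravel()
                v = np.linalg.solve(Y[np.ix_(keep, keep)], rhs)
                H.append(v[keep.index(out_node - 1)])
            return np.array(H)

        f = np.array([10.0, 159.15, 1e4])     # 159.15 Hz = 1/(2*pi*RC)
        print(np.abs(transfer(netlist, f)))   # ~[1.00, 0.71, 0.016]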

    17. HEP/NP Requirements Review 2013

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HEP/NP Requirements Review 2013: documents and background materials from the ESnet network requirements review, including the 2013 HEP attendee list, case studies, and requirements review reports.

    18. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

      SciTech Connect (OSTI)

      Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

      2010-08-01

      Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

    19. Authorization basis requirements comparison report

      SciTech Connect (OSTI)

      Brantley, W.M.

      1997-08-18

      The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.

    20. A directory service for configuring high-performance distributed computations

      SciTech Connect (OSTI)

      Fitzgerald, S.; Kesselman, C.; Foster, I.

      1997-08-01

      High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.

    1. National Ignition Facility sub-system design requirements integrated timing system SSDR 1.5.3

      SciTech Connect (OSTI)

      Wiedwald, J.; Van Aersau, P.; Bliss, E.

      1996-08-26

      This System Design Requirement document establishes the performance, design, development, and test requirements for the Integrated Timing System, WBS 1.5.3 which is part of the NIF Integrated Computer Control System (ICCS). The Integrated Timing System provides all temporally-critical hardware triggers to components and equipment in other NIF systems.

    2. Johnson Noise Thermometry System Requirements

      SciTech Connect (OSTI)

      Britton Jr, Charles L; Roberts, Michael; Ezell, N Dianne Bull; Qualls, A L; Holcomb, David Eugene

      2013-01-01

      This document is intended to capture the requirements for the architecture of the developmental electronics for the ORNL-led drift-free Johnson Noise Thermometry (JNT) project conducted under the Instrumentation, Controls, and Human-Machine Interface (ICHMI) research pathway of the U.S. Department of Energy (DOE) Advanced Small Modular Reactor (SMR) Research and Development (R&D) program. The requirements include not only the performance of the system but also the allowable measurement environment of the probe and the allowable physical environment of the associated electronics. A more extensive project background, including the project rationale, is available in the initial project report [1].
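
      Johnson noise thermometry rests on the Nyquist relation <V^2> = 4 kB T R df: the mean-square noise voltage of a resistor is set by its temperature, resistance, and measurement bandwidth alone, which is why the technique is drift-free in principle. A minimal sketch inverting the relation (resistor value and bandwidth are illustrative):

        KB = 1.380649e-23   # Boltzmann constant, J/K

        def temperature_from_noise(v_rms, resistance_ohm, bandwidth_hz):
            # Invert <V^2> = 4 * kB * T * R * df for the temperature T.
            return v_rms**2 / (4.0 * KB * resistance_ohm * bandwidth_hz)

        # A 1 kOhm sensing resistor read over a 100 kHz bandwidth at 300 K
        # produces roughly 1.3 microvolts RMS of Johnson noise:
        v_rms = (4.0 * KB * 300.0 * 1e3 * 1e5) ** 0.5
        print(f"{v_rms * 1e6:.2f} uV RMS -> "
              f"{temperature_from_noise(v_rms, 1e3, 1e5):.1f} K")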

    3. Project X functional requirements specification

      SciTech Connect (OSTI)

      Holmes, S.D.; Henderson, S.D.; Kephart, R.; Kerby, J.; Kourbanis, I.; Lebedev, V.; Mishra, S.; Nagaitsev, S.; Solyak, N.; Tschirhart, R.; /Fermilab

      2012-05-01

      Project X is a multi-megawatt proton facility being developed to support a world-leading program in Intensity Frontier physics at Fermilab. The facility is designed to support programs in elementary particle and nuclear physics, with possible applications to nuclear energy research. A Functional Requirements Specification has been developed in order to establish performance criteria for the Project X complex in support of these multiple missions, and to assure that the facility is designed with sufficient upgrade capability to provide U.S. leadership for many decades to come. This paper will briefly review the previously described Functional Requirements, and then discuss their recent evolution.

    4. OMB Requirements | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      OMB Requirements OMB Requirements Acquisitions OMB Circular A-109, Acquisition of Major Systems (04-05-76) (Available in hard copy only) OMB M-04-08, Maximizing Use of SmartBuy and Avoiding Duplication of Agency Activities with the President's 24 E-Gov Initiatives (02-25-2004) (pdf) OMB M-04-16, Software Acquisition (07-01-2004) Budget/Capital Planning OMB Circular A-11 OMB M-05-23, Improving Informational Technology (IT) Project Planning and Execution (8-04-2005) (pdf) Cyber Security &

    5. Microsoft Word - S07566_Requirements

      Office of Environmental Management (EM)

      Long-Term Surveillance and Maintenance Requirements for Remediated FUSRAP Sites (LMS/FUSRAP/S07566, March 2011). This document supersedes DOE-LM/GJ1242-2006, Long-Term Surveillance and Maintenance Needs Assessment for the 25 DOE FUSRAP Sites (S01649), December 2006.

    6. Computer-based and web-based radiation safety training

      SciTech Connect (OSTI)

      Owen, C., LLNL

      1998-03-01

      The traditional approach to delivering radiation safety training has been to provide a stand-up lecture on the topic, with the possible aid of video, and to repeat the same material periodically. New approaches to meeting training requirements are needed to address the advent of flexible work hours and telecommuting, and to better accommodate individuals learning at their own pace. Computer-based and web-based radiation safety training can provide this alternative. Computer-based and web-based training is an interactive form of learning that the student controls, resulting in enhanced and focused learning at a time most often chosen by the student.

    7. Surveillance & Maintenance: The Requirements Based Surveillance...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Surveillance & Maintenance: The Requirements Based Surveillance and Maintenance Review Process (RBSM) Surveillance & Maintenance: The Requirements Based Surveillance and ...

    8. Snowmass Computing Frontier I2: Distributed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Snowmass Computing Frontier I2: Distributed Computing and Facility Infrastructures Ken Bloom Richard Gerber July 31, 2013 Computing Frontier I2: Distributed Computing and Facility Infrastructures 7/31/13 Who we are ‣ Ken Bloom, Associate Professor, Department of Physics and Astronomy, University of Nebraska-Lincoln ‣ Co-PI for the Nebraska CMS Tier-2 computing facility ‣ Tier-2 program manager and Deputy Manager of Software and Computing for US CMS ‣ Tier-2

    9. Yuri Alexeev | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Yuri Alexeev Assistant Computational Scientist Yury Alekseev Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 1126 Argonne IL, 60439 630-252-0157 yuri@alcf.anl.gov Yuri Alexeev is an Assistant Computational Scientist at the Argonne Leadership Computing Facility where he applies his skills, knowledge and experience for using and enabling computational methods in chemistry and biology for high-performance computing on next-generation high-performance computers. Yuri is

    10. Impact of Rate Design Alternatives on Residential Solar Customer Bills. Increased Fixed Charges, Minimum Bills and Demand-based Rates

      SciTech Connect (OSTI)

      Bird, Lori; Davidson, Carolyn; McLaren, Joyce; Miller, John

      2015-09-01

      With rapid growth in energy efficiency and distributed generation, electric utilities are anticipating stagnant or decreasing electricity sales, particularly in the residential sector. Utilities are increasingly considering alternative rates structures that are designed to recover fixed costs from residential solar photovoltaic (PV) customers with low net electricity consumption. Proposed structures have included fixed charge increases, minimum bills, and increasingly, demand rates - for net metered customers and all customers. This study examines the electricity bill implications of various residential rate alternatives for multiple locations within the United States. For the locations analyzed, the results suggest that residential PV customers offset, on average, between 60% and 99% of their annual load. However, roughly 65% of a typical customer's electricity demand is non-coincidental with PV generation, so the typical PV customer is generally highly reliant on the grid for pooling services.
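
      The interaction between rate design and PV self-consumption is easy to see in a toy monthly bill calculation; all prices and quantities below are hypothetical, chosen only to illustrate how the rate structures discussed in the report treat the same low-net-consumption customer:

        # Hypothetical net-metered customer: high gross load, high PV output.
        load_kwh, pv_kwh = 900.0, 700.0
        net_kwh = load_kwh - pv_kwh                  # billed volumetric quantity

        volumetric = 0.12 * net_kwh                  # energy charge only
        fixed_charge = 20.0 + 0.10 * net_kwh         # higher fixed charge design
        minimum_bill = max(0.12 * net_kwh, 35.0)     # minimum-bill floor

        print(f"volumetric   ${volumetric:6.2f}")    # $24.00
        print(f"fixed charge ${fixed_charge:6.2f}")  # $40.00
        print(f"minimum bill ${minimum_bill:6.2f}")  # $35.00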

    11. Computer Science and Information Technology Student Pipeline

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Divisions recruit and hire promising undergraduate and graduate students in the areas of Computer Science, Information Technology, Management Information Systems, Computer...

    12. Applications for Postdoctoral Fellowship in Computational Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      at Berkeley Lab due November 26 October 15, 2012 by Francesca Verdier Researchers in computer science, applied mathematics or any computational science discipline who have...

    13. Sandia National Laboratories: Careers: Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Science Red Storm photo Sandia's supercomputing research is reaching for tomorrow's exascale performance while solving real-world problems today. Computer scientists and...

    14. Personal Computing Equipment | Open Energy Information

      Open Energy Info (EERE)

      Personal Computing Equipment: List of Personal Computing Equipment Incentives.

    15. Advanced Materials Development through Computational Design ...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Development through Computational Design Advanced Materials Development through Computational Design Presentation given at the 2007 Diesel Engine-Efficiency & Emissions Research ...

    16. Thermoelectric Materials by Design, Computational Theory and...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      by Design, Computational Theory and Structure Thermoelectric Materials by Design, Computational Theory and Structure 2009 DOE Hydrogen Program and Vehicle Technologies Program...

    17. Extreme Scale Computing, Co-Design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Extreme Scale Computing, Co-design Publications Publications Ramon Ravelo, Qi An, Timothy C. Germann, and Brad Lee Holian, ...

    18. Energy Storage Computational Tool | Open Energy Information

      Open Energy Info (EERE)

      Energy Storage Computational Tool - Tool Summary. Name: Energy Storage Computational Tool. Agency/Company/Organization: Navigant Consulting...

    19. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      lm012li2012o.pdf More Documents & Publications Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project Integrated Computational Materials...

    20. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      lm012li2011o.pdf More Documents & Publications Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project Integrated Computational Materials...

    1. Hybrid Rotaxanes: Interlocked Structures for Quantum Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      based on molecular magnets that may make them suitable as qubits for quantum computers. Chemistry Aids Quantum Computing Quantum bits or qubits are the fundamental...

    2. Compare Activities by Number of Computers

      U.S. Energy Information Administration (EIA) Indexed Site

      of Computers Office buildings contained the most computers per square foot, followed by education and outpatient health care buildings. Education buildings were the only type...

    3. Hybrid Rotaxanes: Interlocked Structures for Quantum Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Hybrid Rotaxanes: Interlocked Structures for Quantum Computing? Wednesday, 26 August 2009 00:00 Rotaxanes are...

    4. NERSC Enhances PDSF, Genepool Computing Capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Capabilities NERSC Enhances PDSF, Genepool Computing Capabilities Linux cluster expansion speeds data access and analysis January 3, 2014 Christmas came early for...

    5. Solvate Structures and Computational/Spectroscopic Characterization...

      Office of Scientific and Technical Information (OSTI)

      Solvate Structures and Computational/Spectroscopic Characterization of LiPF6 Electrolytes - Citation Details. Title: Solvate Structures and Computational...

    6. Solvate Structures and Computational/Spectroscopic Characterization...

      Office of Scientific and Technical Information (OSTI)

      Solvate Structures and Computational/Spectroscopic Characterization of LiBF4 Electrolytes - Citation Details. Title: Solvate Structures and Computational...

    7. Improved computer models support genetics research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      February: Simple computer models unravel genetic stress reactions in cells. Integrated biological and...

    8. Computer Accounts | Stanford Synchrotron Radiation Lightsource

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts Each user group must have a computer account. Additionally, all persons using these accounts are responsible for understanding and complying with the terms...

    9. LANL computer model boosts engine efficiency

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      LANL computer model boosts engine efficiency LANL computer model boosts engine efficiency The KIVA model has been instrumental in helping researchers and manufacturers understand...

    10. Sandia National Laboratories: Advanced Simulation Computing:...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      These collaborations help solve the challenges of developing computing platforms and simulation tools across a number of disciplines. Computer Science Research Institute The...

    11. Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering

      SciTech Connect (OSTI)

      Smith, Kandler; Graf, Peter; Jun, Myungsoo; Yang, Chuanbo; Li, Genong; Li, Shaoping; Hochman, Amit; Tselepidakis, Dimitrios

      2015-06-09

      This presentation provides an update on improvements in computational efficiency in a nonlinear multiscale battery model for computer aided engineering.

    12. ASCR NERSC Requirement presentation.pptx

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research: Research Priorities Overview. Karen Pao, Advanced Scientific Computing Research (ASCR), Karen.Pao@science.doe.gov, 5 January 2011. The mission of the Advanced Scientific Computing Research (ASCR) program is to discover, develop, and deploy the computational and networking capabilities that enable researchers to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. A particular challenge of this program is fulfilling the

    13. The Future Looks Bright for Teraflop Computing

      SciTech Connect (OSTI)

      Farber, Rob

      2007-11-01

      Wouldn't it be great to have a teraflop of computing power sitting in your lab, desktop workstation, or remote instrument server? Talk about simplifying workflows, eliminating competition for HPC resources, and allowing more scientists and technicians to get more work done! Well, the computer industry is marketing that capability now in the form of high-end video cards, at a bargain price, with more and better technology on the market horizon. As the industry evolves to become more oriented toward multi-core and multi-threaded hardware, video card manufacturers are attempting to transition from a niche market to a multi-purpose one. One of the products currently getting attention is the Nvidia Tesla family of products based on the Tesla GPGPU (general-purpose graphics processing unit). This card contains 128 processing cores advertised as delivering an aggregate 518 billion single-precision floating-point operations per second (518 Gflops), and is being introduced at a $1,499 MSRP price point. Nvidia also offers other commodity graphics cards, such as the GeForce 8800, which appear on paper to have roughly the same performance for roughly half the price, although with half the memory (768 MB vs. the Tesla's 1.5 GB). This highlights how the Tesla GPGPUs are essentially redesigned graphics cards (with no video capability, increased memory, and clock changes) that fit into PCI-Express slots in your motherboard. If you believe Nvidia's claims, two Tesla cards will, for the right applications, turn your lab workstation into a teraflop-capable supercomputer. Double-precision versions are projected for a late-2007 introduction with expected 2008 delivery. The Nvidia Tesla GPGPU is one step forward in the many-core revolution that is happening in the computer industry. Instead of making two or four processing cores available to the user, many-core processors offer tens or hundreds of processing cores. Many-core processors promise to provide very high performance-per-dollar and performance-per-watt for many computational workloads. Intel is working on its version of many-core processors, but delivery dates appear to be several years in the future. Last year Intel made a large splash with its proof-of-concept teraflop 80-core chip, which it announced might be available sometime in 2011. Intel is also working on something similar to the Nvidia Tesla, code-named Larrabee, which will perform in the teraflop range and has a release date of sometime around 2009 or 2010. Larrabee is supposed to have 16 to 24 cores and several nice features. Bottom line: a teraflop lab computer is feasible today, as the programmable Nvidia GeForce 8 and Quadro families of graphics cards are available now, Tesla cards will be shipping, and exciting many-core architectures are on the horizon from a number of vendors. The potential for parallel processing systems is definitely huge, and GPGPUs certainly provide parallel processing, but are there enough applications out there to take them mainstream and make them appealing to businesses beyond research firms? Only time will tell as more applications are developed to utilize this computational capability. Right now, programming is required. Recently Google purchased PeakStream, a firm whose software abstracted the task of running multiple threads, with specific applicability to GPGPUs. However, Google is a visionary software company.
      Instrument vendors and much of the software industry are still in the early stages of the transition to multi-threaded, many-core data processing. Applications that exploit the full potential of parallel processing systems, and GPGPUs in particular, don't yet exist in today's market. The development of Matlab plug-ins is a very positive sign for the future of GPGPUs and is indicative of Nvidia's sense of where the market is headed.
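
      The advertised 518 Gflops figure is consistent with contemporaneous published specifications for this class of card. A minimal sketch of the arithmetic in Python (the 1.35 GHz shader clock and 3 floating-point operations per core per cycle are assumptions drawn from those public specifications, not stated in the record):

      # Back-of-the-envelope check of the advertised single-precision peak.
      # Assumed (not stated in the record): 128 stream processors, a 1.35 GHz
      # shader clock, and 3 flops per core per cycle (dual-issue MAD + MUL).
      cores = 128
      clock_hz = 1.35e9
      flops_per_cycle = 3  # multiply-add (2 flops) plus a multiply (1 flop)

      peak_gflops = cores * clock_hz * flops_per_cycle / 1e9
      print(f"peak single precision: {peak_gflops:.1f} Gflops")  # ~518.4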

    14. Closed loop computer control for an automatic transmission

      DOE Patents [OSTI]

      Patil, Prabhakar B.

      1989-01-01

      In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during, and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics, employing the mathematical model, is used to study the effects on shift quality of changes in the values of the parameters established from closed-loop control of the clutch hydraulic pressure and the power source torque. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.
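
      A minimal sketch of the closed-loop idea described above: a feedback controller commands clutch pressure so that the transmission output torque tracks a smooth reference trajectory during the shift. The first-order torque response, the PI control law, and all numeric values below are illustrative assumptions, not the patent's actual powertrain model or control strategy.

      # Illustrative closed-loop shift-control simulation (assumed model,
      # not the patent's). Output torque is taken to follow the commanded
      # clutch pressure through a first-order lag.
      dt = 0.001          # time step [s]
      tau = 0.05          # assumed torque response time constant [s]
      k = 1.0             # assumed N*m of torque per unit clutch pressure
      kp, ki = 5.0, 40.0  # assumed PI gains

      torque, integral = 200.0, 0.0
      log = []
      for step in range(500):                         # simulate a 0.5 s upshift
          t = step * dt
          target = 200.0 - 100.0 * min(t / 0.3, 1.0)  # smooth reference ramp
          error = target - torque
          integral += error * dt
          pressure = kp * error + ki * integral + target / k  # PI + feedforward
          torque += dt / tau * (k * pressure - torque)        # plant response
          log.append((t, target, torque))
      print(f"final tracking error: {log[-1][1] - log[-1][2]:.3f} N*m")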

    15. System for computer controlled shifting of an automatic transmission

      DOE Patents [OSTI]

      Patil, Prabhakar B.

      1989-01-01

      In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during, and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics, employing the mathematical model, is used to study the effects on shift quality of changes in the values of the parameters established from closed-loop control of the clutch hydraulic pressure and the power source torque. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

    16. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility 2013 Science Highlights. Contents: About ALCF; Mira; Science Director's Message.

    17. Supporting collaborative computing and interaction

      SciTech Connect (OSTI)

      Agarwal, Deborah; McParland, Charles; Perry, Marcia

      2002-05-22

      To enable collaboration on the daily tasks involved in scientific research, collaborative frameworks should provide lightweight and ubiquitous components that support a wide variety of interaction modes. We envision a collaborative environment as one that provides a persistent space within which participants can locate each other, exchange synchronous and asynchronous messages, share documents and applications, share workflow, and hold videoconferences. We are developing the Pervasive Collaborative Computing Environment (PCCE) as such an environment. The PCCE will provide integrated tools to support shared computing and task control and monitoring. This paper describes the PCCE and the rationale for its design.

    18. Future missions require improving LANSCE

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Future missions require improving LANSCE capabilities to support five principal research areas. 1) Improving capabilities at the Lujan Center (to use neutrons to probe soft materials) will improve understanding of the performance and aging of weapons materials, and will support development of the broad spectrum of materials needed for stockpile stewardship and threat reduction. 2) Enhancing high-accuracy nuclear cross-section measurements of actinide and short-lived isotopes for higher-fidelity

    19. New Solutions Require New Thinking

      Office of Environmental Management (EM)

      Solutions Require New Thinking America's demand for power threatens to overburden an already congested electric system. The U.S. Department of Energy is addressing these energy challenges with innovative solutions to energy generation. Its Renewable and Distributed Systems Integration (RDSI) Program is helping to alleviate congestion, reduce greenhouse gas emissions, and improve reliability by investigating answers such as * Microgrid technologies * Distributed generation * Two-way communication

    20. Energy and crude oil input requirements for the production of reformulated gasolines

      SciTech Connect (OSTI)

      Singh, M.; McNutt, B.

      1993-11-01

      The energy and crude oil requirements for the production of reformulated gasolines (RFG) are estimated. Both the energy and crude oil embodied in the final product and the process energy required to manufacture the RFG and its components are included. The effects on energy and crude oil use of employing various oxygenates to meet the minimum oxygen content level required by the Clean Air Act Amendments are evaluated. The analysis illustrates that production of RFG requires more total energy than that of conventional gasoline but uses less crude oil. The energy and crude oil requirements of the different RFGs vary considerably. For the same emissions performance level, RFG with ethanol requires substantially more total energy and crude oil than RFG with MTBE or ETBE. A specific proposal by the EPA designed to allow the use of ethanol in RFG would increase the total energy required to produce RFG by 2% and the total crude oil required by 2.0 to 2.5% over that for the base RFG with MTBE.

    1. Energy and crude oil input requirements for the production of reformulated gasolines

      SciTech Connect (OSTI)

      Singh, M.; McNutt, B.

      1993-10-01

      The energy and crude oil requirements for the production of reformulated gasoline (RFG) are estimated. The scope of the study includes both the energy and crude oil embodied in the final product and the process energy required to manufacture the RFG and its components. The effects on energy and crude oil use of employing various oxygenates to meet the minimum oxygen-content level required by the Clean Air Act Amendments are evaluated. The analysis shows that production of RFG requires more total energy, but uses less crude oil, than that of conventional gasoline. The energy and crude oil use requirements of the different RFGs vary considerably. For the same emissions performance level, RFG with ethanol requires substantially more total energy and crude oil than does RFG with methyl tertiary butyl ether (MTBE) or ethyl tertiary butyl ether. A specific proposal by the US Environmental Protection Agency, designed to allow the use of ethanol in RFG, would increase the total energy required to produce RFG by 2% and the total crude oil required by 2.0 to 2.5% over the corresponding values for the base RFG with MTBE.

    2. Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... The Challenge is project-based learning geared to teaching a wide range of skills. A simulation of vortex induced motion shows how ocean currents affect offshore oil rigs. ...

    3. Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      This turbulent transport is caused by drift-wave instabilities, driven by free energy in plasma temperature and density gradients. * Unavoidable: These instabilities will persist ...

    4. Hanford general employee training: Computer-based training instructor's manual

      SciTech Connect (OSTI)

      Not Available

      1990-10-01

      The Computer-Based Training portion of the Hanford General Employee Training course is designed to be used in a classroom setting with a live instructor. Future references to "this course" refer only to the computer-based portion of the whole. This course covers the basic Safety, Security, and Quality issues that pertain to all employees of Westinghouse Hanford Company. The topics that are covered were taken from the recommendations and requirements for General Employee Training as set forth by the Institute of Nuclear Power Operations (INPO) in INPO 87-004, Guidelines for General Employee Training, applicable US Department of Energy orders, and Westinghouse Hanford Company procedures and policy. Besides presenting fundamental concepts, this course also contains information on resources that are available to assist students. It does this using Interactive Videodisk technology, which combines computer-generated text and graphics with audio and video provided by a videodisk player.

    5. PERTURBATION APPROACH FOR QUANTUM COMPUTATION

      SciTech Connect (OSTI)

      G. P. BERMAN; D. I. KAMENEV; V. I. TSIFRINOVICH

      2001-04-01

      We discuss how to simulate errors in the implementation of simple quantum logic operations in a nuclear spin quantum computer with many qubits, using radio-frequency pulses. We verify our perturbation approach using the exact solutions for a relatively small number of qubits (L = 10).
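
      The flavor of the comparison (a simulated implementation error checked against an exact solution) can be illustrated on a single spin. The sketch below is a toy example, not the authors' perturbation method, and the 1% Rabi-frequency error is an arbitrary assumption:

      # Toy illustration: error of an imperfect radio-frequency pi-pulse on
      # a single spin, computed from the exact rotation unitaries.
      import numpy as np

      sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X

      def rf_pulse(theta):
          """Exact unitary for a resonant pulse of rotation angle theta."""
          return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sx

      ideal = rf_pulse(np.pi)                  # perfect NOT gate
      eps = 0.01                               # assumed 1% Rabi-frequency error
      actual = rf_pulse(np.pi * (1 + eps))     # slightly over-rotated pulse

      overlap = np.trace(ideal.conj().T @ actual) / 2
      print(f"gate error: {1 - abs(overlap)**2:.2e}")  # ~ sin^2(pi*eps/2)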

    6. New challenges in computational biochemistry

      SciTech Connect (OSTI)

      Honig, B.

      1996-12-31

      The new challenges in computational biochemistry to which the title refers include the prediction of the relative binding free energy of different substrates to the same protein, conformational sampling, and other examples of theoretical predictions matching known protein structure and behavior.

    7. Radiological Worker Computer Based Training

      Energy Science and Technology Software Center (OSTI)

      2003-02-06

      Argonne National Laboratory has developed an interactive computer based training (CBT) version of the standardized DOE Radiological Worker training program. This CD-ROM based program utilizes graphics, animation, photographs, sound and video to train users in ten topical areas: radiological fundamentals, biological effects, dose limits, ALARA, personnel monitoring, controls and postings, emergency response, contamination controls, high radiation areas, and lessons learned.

    8. Experimental Mathematics and Computational Statistics

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2009-04-30

      The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include applications of experimental mathematics in statistics as well as statistical methods applied to computational mathematics.

    9. ALCC Quarterly Report Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ALCC Quarterly Report Policy The Department of Energy (DOE) requires the Argonne Leadership Computing Facility (ALCF) to report the progress and scientific accomplishments of all ALCC projects. ALCF, in turn, requires PIs from all ALCC projects to complete a quarterly report and a final end-of-project (EOP) report. Due dates for the 2015-2016 ALCC quarterly and the EOP reports are: * October 1, 2015 (CY2015 - Q3) * January 1, 2016 (CY2015 - Q4) * April 1, 2016 (CY2016 - Q1) * August 15, 2016

    10. Computer Science and Information Technology Student Pipeline

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science and Information Technology Student Pipeline Program Description Los Alamos National Laboratory's High Performance Computing and Information Technology Divisions recruit and hire promising undergraduate and graduate students in the areas of Computer Science, Information Technology, Management Information Systems, Computer Security, Software Engineering, Computer Engineering, and Electrical Engineering. Students are provided a mentor and challenging projects to demonstrate their

    11. Lawrence Livermore National Laboratory Emergency Response Capability Baseline Needs Assessment Requirement Document

      SciTech Connect (OSTI)

      Sharry, J A

      2009-12-30

      This revision of the LLNL Fire Protection Baseline Needs Assessment (BNA) was prepared by John A. Sharry, LLNL Fire Marshal and LLNL Division Leader for Fire Protection, and reviewed by Martin Gresho, Sandia/CA Fire Marshal. The document follows and expands upon the format and contents of the DOE Model Fire Protection Baseline Capabilities Assessment document contained on the DOE Fire Protection Web Site, but only addresses emergency response. The original LLNL BNA was created on April 23, 1997 as a means of collecting all requirements concerning emergency response capabilities at LLNL (including response to emergencies at Sandia/CA) into one BNA document. The original BNA documented the basis for emergency response, emergency personnel staffing, and emergency response equipment over the years. The BNA has been updated and reissued five times since: in 1998, 1999, 2000, 2002, and 2004. A significant format change was made in the 2004 update in that it was 'zero based': starting with the requirement documents, the 2004 BNA evaluated the requirements and determined minimum needs without regard to previous evaluations. This 2010 update maintains the same basic format and requirements as the 2004 BNA. In this 2010 BNA, as in the previous BNA, the document has been intentionally divided into two separate documents: the needs assessment (1) and the compliance assessment (2). The needs assessment will be referred to as the BNA and the compliance assessment will be referred to as the BNA Compliance Assessment. The primary driver for the separation is that the needs assessment identifies the detailed applicable regulations (primarily NFPA standards) for emergency response capabilities based on the hazards present at LLNL and Sandia/CA and the geographical location of the facilities. The needs assessment also identifies areas where modification of the requirements in the applicable NFPA standards is appropriate, given the improved fire protection provided and the remote location and low population density of some of the facilities. As such, the needs assessment contains equivalencies to the applicable requirements. The compliance assessment contains no such equivalencies; it simply assesses the existing emergency response resources against the requirements of the BNA and can be updated as compliance changes, independent of the BNA update schedule. There are numerous NFPA codes and standards and other requirements and guidance documents that address the subject of emergency response. These requirements documents are not always well coordinated and may contain duplicative or conflicting requirements, or even coverage gaps. Left unaddressed, this regulatory situation results in frequent interpretation of requirements documents, and different interpretations can then lead to inconsistent implementation. This BNA addresses the situation by compiling applicable requirements from all identified sources (see Section 5) and analyzing them collectively to address conflict and overlap as applicable to the hazards presented by the LLNL and Sandia/CA sites (see Section 7). The BNA also generates requirements when needed to fill any identified gaps in regulatory coverage.
Finally, the BNA produces a customized, simple set of requirements appropriate for the DOE protection goals (such as those defined in DOE O 420.1B), the hazard level, the population density, the topography, and the site layout at LLNL and Sandia/CA. This set will be used as the baseline requirements set, the 'baseline needs', for emergency response at LLNL and Sandia/CA. A template approach is utilized to accomplish this evaluation for each of the nine topical areas that comprise the baseline needs for emergency response. The basis for conclusions reached in determining the baseline needs for each of the topical areas is presented in Sections 7.1 through 7.9. This BNA identifies only mandatory requirements and establishes the minimum performance criteria. The minimum performance criteria may not be the level of performance desired by Lawrence Livermore National Laboratory or Sandia/CA.

    12. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

      2013-09-03

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
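
      A minimal sketch of the claimed buffering scheme (class and method names are illustrative, not from the patent): the messaging unit pre-allocates one buffer per expected process at boot, parks messages that arrive before their target process exists, and hands them over when the process initializes.

      # Illustrative sketch of pre-initialization message buffering.
      class MessagingUnit:
          def __init__(self, num_processes):
              # At boot: pre-allocate one message buffer per expected process.
              self.buffers = {rank: [] for rank in range(num_processes)}

          def receive(self, target_rank, message):
              # Park early arrivals in the target process's dedicated buffer.
              self.buffers[target_rank].append(message)

          def drain(self, target_rank):
              # On process init: hand over buffered messages for copying
              # into the process's buffer in main memory.
              parked, self.buffers[target_rank] = self.buffers[target_rank], []
              return parked

      mu = MessagingUnit(num_processes=4)
      mu.receive(2, b"early message")     # arrives before process 2 exists
      main_memory_buffer = mu.drain(2)    # process 2 initializes and copies
      print(main_memory_buffer)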

    13. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

      2014-02-11

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

    14. Link failure detection in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-11-09

      Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
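
      A minimal sketch of the test pattern (illustrative, not the patented implementation; it also models a single link per node pair rather than the pair of links described above). The checkerboard two-coloring guarantees every link joins a first-group node to a second-group node, so one round of test messages covers all links.

      # Illustrative link-failure detection on a small 2D mesh.
      ROWS, COLS = 4, 4

      def group(r, c):
          return (r + c) % 2                 # 0 = first group, 1 = second group

      def neighbors(r, c):
          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              nr, nc = r + dr, c + dc
              if 0 <= nr < ROWS and 0 <= nc < COLS:
                  yield nr, nc

      broken = {((0, 0), (0, 1))}            # simulate one failed link

      # First-group nodes send a test message over every adjacent link.
      received = set()
      for r in range(ROWS):
          for c in range(COLS):
              if group(r, c) == 0:
                  for nbr in neighbors(r, c):
                      if ((r, c), nbr) not in broken and (nbr, (r, c)) not in broken:
                          received.add(((r, c), nbr))

      # Second-group nodes report any neighbor whose message never arrived.
      for r in range(ROWS):
          for c in range(COLS):
              if group(r, c) == 1:
                  for nbr in neighbors(r, c):
                      if (nbr, (r, c)) not in received:
                          print(f"link failure suspected between {nbr} and {(r, c)}")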

    15. Broadcasting a message in a parallel computer

      DOE Patents [OSTI]

      Berg, Jeremy E. (Rochester, MN); Faraj, Ahmad A. (Rochester, MN)

      2011-08-02

      Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
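
      A minimal sketch of the path construction and relay (illustrative only; a serpentine ordering is one way to obtain a Hamiltonian path in a rectangular mesh plane, and a real implementation would pipeline the transfer rather than relay whole messages):

      # Illustrative Hamiltonian-path broadcast in one plane of a mesh.
      ROWS, COLS = 3, 4

      def serpentine_path():
          # Visit row 0 left-to-right, row 1 right-to-left, and so on, so
          # every hop is a nearest-neighbor transfer and each node appears once.
          path = []
          for r in range(ROWS):
              cols = range(COLS) if r % 2 == 0 else range(COLS - 1, -1, -1)
              path.extend((r, c) for c in cols)
          return path

      path = serpentine_path()          # Hamiltonian path through the plane
      root = path[0]                    # logical root starts the broadcast

      inbox = {root: "root payload"}
      for sender, receiver in zip(path, path[1:]):
          inbox[receiver] = inbox[sender]   # point-to-point relay along the path

      assert all(node in inbox for node in path)
      print(f"{len(inbox)} nodes received the broadcast")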

    16. 2011 Computation Directorate Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2012-04-11

      From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile, far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products.
In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global marketplace by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.

    17. NERSC Enhances PDSF, Genepool Computing Capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Enhances PDSF, Genepool Computing Capabilities NERSC Enhances PDSF, Genepool Computing Capabilities Linux cluster expansion speeds data access and analysis January 3, 2014 Christmas came early for users of the Parallel Distributed Systems Facility (PDSF) and Genepool systems at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). Throughout November members of NERSC's Computational Systems Group were busy expanding the Linux computing resources that support PDSF's

    18. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math » Extreme Scale Computing, Co-design Extreme Scale Computing, Co-design Computational co-design may facilitate revolutionary designs in the next generation of supercomputers. Computational co-design involves developing the interacting components of a

    19. NERSC seeks Computational Systems Group Lead

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      seeks Computational Systems Group Lead NERSC seeks Computational Systems Group Lead January 6, 2011 by Katie Antypas Note: This position is now closed. The Computational Systems Group provides production support and advanced development for the supercomputer systems at NERSC. Manage the Computational Systems Group (CSG) which provides production support and advanced development for the supercomputer systems at NERSC (National Energy Research Scientific Computing Center). These systems, which

    20. Presentation: High Performance Computing Applications | Department of

      Energy Savers [EERE]

      Energy High Performance Computing Applications Presentation: High Performance Computing Applications A briefing to the Secretary's Energy Advisory Board on High Performance Computing Applications delivered by Frederick H. Streitz, Lawrence Livermore National Laboratory. PDF icon High Performance Computing More Documents & Publications Presentation: QER Energy Topics 2011_INCITE_Fact_Sheets.pdf DOE's Effort to Reduce Truck Aerodynamic Drag through Joint Experiments and Computations