National Library of Energy BETA

Sample records for minimum computer requirements

  1. HEAT Loan Minimum Standards and Requirements

    Broader source: Energy.gov [DOE]

    Presents additional resources on loan standards and requirements from Elise Avers' presentation on HEAT Loan Minimum Standards and Requirements.

  2. DOE CYBER SECURITY EBK: MINIMUM CORE COMPETENCY TRAINING REQUIREMENTS...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE CYBER SECURITY EBK: MINIMUM CORE COMPETENCY TRAINING REQUIREMENTS ...

  3. HEAT Loan Minimum Standards and Requirements | Department of...

    Broader source: Energy.gov (indexed) [DOE]

    Presents additional resources on loan standards and requirements from Elise Avers' presentation on HEAT Loan Minimum Standards and Requirements. PDF icon Minimum Standards and ...

  4. HEAT Loan Minimum Standards and Requirements

    Energy Savers [EERE]

    you must meet the following minimum standards listed below. * New natural gas or propane boilers must be at least 90% AFUE to be eligible. * New oil boilers must be at least...

  5. Minimum Efficiency Requirements Tables for Heating and Cooling...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Minimum Efficiency Requirements Tables for Heating and Cooling Product Categories The Federal Energy Management Program (FEMP) created tables that mirror American Society of ...

  6. Minimum Velocity Required to Transport Solid Particles from the...

    Office of Scientific and Technical Information (OSTI)

    Required to Transport Solid Particles from the 2H-Evaporator to the Tank Farm ...

  7. Incorporate Minimum Efficiency Requirements for Heating and Cooling

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Products into Federal Acquisition Documents | Department of Energy. The Federal Energy Management Program (FEMP) organized information about FEMP-designated and ENERGY STAR-qualified heating, ventilating, and air conditioning (HVAC) and water heating products into tables

  8. Minimum Efficiency Requirements Tables for Heating and Cooling Product Categories

    Broader source: Energy.gov [DOE]

    The Federal Energy Management Program (FEMP) created tables that mirror American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1-2013 tables, which include minimum efficiency requirements for FEMP-designated and ENERGY STAR-qualified heating and cooling product categories. Download the tables below to incorporate FEMP and ENERGY STAR purchasing requirements into federal product acquisition documents.

  9. Present and Future Computing Requirements for PETSc

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Future Computing Requirements for PETSc Jed Brown jedbrown@mcs.anl.gov Mathematics and Computer Science Division, Argonne National Laboratory Department of Computer Science, ...

  10. Intro to computer programming, no computer required! | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... "Computational thinking requires you to think in abstractions," said Papka, who spoke to computer science and computer-aided design students at Kaneland High School in Maple Park about ...

  11. Can Cloud Computing Address the Scientific Computing Requirements for DOE

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Researchers? Well, Yes, No and Maybe. January 30, 2012. Jon Bashor, Jbashor@lbl.gov, +1 510-486-5849. Magellan at NERSC. After a two-year study of the feasibility of cloud computing systems for meeting the ever-increasing computational needs of scientists,

  12. Present and Future Computing Requirements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Cosmology SciDAC-3 Project Ann Almgren (LBNL) Nick Gnedin (FNAL) Dave Higdon (LANL) Rob Ross (ANL) Martin White (UC Berkeley LBNL) Large Scale Production Computing and Storage...

  13. ASHRAE Minimum Efficiency Requirements Tables for Heating and Cooling Product Categories

    Broader source: Energy.gov [DOE]

    The Federal Energy Management Program (FEMP) created tables that mirror American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1-2013 tables, which include minimum efficiency requirements for FEMP-designated and ENERGY STAR-qualified heating and cooling product categories. Download the tables below to incorporate FEMP and ENERGY STAR purchasing requirements into federal product acquisition documents.

  14. Determining Allocation Requirements | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Determining Allocation Requirements: Estimating CPU-Hours for ALCF Blue Gene/Q Systems. When estimating CPU-hours for the ALCF Blue Gene/Q systems, it is important to take into consideration the unique aspects of the Blue Gene
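
    The snippet above points to ALCF guidance on estimating CPU-hours for the Blue Gene/Q systems. As a rough illustration of that arithmetic (a hedged sketch, not ALCF's official accounting rules; the job sizes below are hypothetical), a Blue Gene/Q compute node exposes 16 cores, so a common back-of-the-envelope estimate multiplies nodes, cores per node, wall-clock hours, and number of runs:

```python
# Hypothetical back-of-the-envelope core-hour estimate for a Blue Gene/Q
# allocation request. Assumes the common convention of charging
# nodes x 16 cores/node x wall-clock hours; the facility's actual accounting
# rules may differ (see the ALCF documentation referenced above).

CORES_PER_NODE = 16  # Blue Gene/Q compute nodes expose 16 cores each

def core_hours(nodes: int, wall_hours: float, runs: int = 1) -> float:
    """Estimate core-hours consumed by `runs` jobs of `nodes` nodes each."""
    return nodes * CORES_PER_NODE * wall_hours * runs

# Example: 2,048 nodes for 6 hours, repeated 25 times (hypothetical campaign)
print(f"Estimated allocation: {core_hours(2048, 6.0, 25):,.0f} core-hours")
```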

  15. Minimum 186 Basin levels required for operation of ECS and CWS pumps

    SciTech Connect (OSTI)

    Reeves, K.K.; Barbour, K.L.

    1992-10-01

    Operation of K Reactor with a cooling tower requires that 186 Basin loss of inventory transients be considered during Design Basis Accident analyses requiring ECS injection, such as the LOCA and LOPA. Since the cooling tower systems are not considered safety systems, credit is not taken for their continued operation during a LOPA or LOCA even though they would likely continue to operate as designed. Without the continued circulation of cooling water to the 186 Basin by the cooling tower pumps, the 186 Basin will lose inventory until additional make-up can be obtained from the river water supply system. Increasing the make-up to the 186 Basin from the river water system may require the opening of manually operated valves, the starting of additional river water pumps, and adjustments of the flow to L Area. In the time required for these actions a loss of basin inventory could occur. The ECS and CWS pumps are supplied by the 186 Basin. A reduction in the basin level will result in decreased pump suction head. This reduction in suction head will result in decreased output from the pumps and, if severe enough, could lead to pump cavitation for some configurations. The subject of this report is the minimum 186 Basin level required to prevent ECS and CWS pump cavitation. The reduction in ECS flow due to a reduced 186 Basin level without cavitation is part of a separate study.
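
    The report's question of a minimum basin level is, at bottom, a pump-cavitation criterion: the basin must stay high enough that the available net positive suction head (NPSH) at the ECS/CWS pump inlets exceeds the pumps' required NPSH. The sketch below illustrates only that generic criterion; the function names and all numbers are hypothetical and are not taken from the cited analysis.

```python
# Minimal sketch of the cavitation criterion behind a "minimum basin level"
# analysis: the available net positive suction head (NPSH_a) at the pump inlet
# must stay above the pump's required NPSH. All values are hypothetical
# placeholders, not figures from the cited report.

G = 9.81      # m/s^2
RHO = 998.0   # kg/m^3, water near 20 C

def npsh_available(p_atm_pa: float, p_vapor_pa: float,
                   static_head_m: float, friction_loss_m: float) -> float:
    """NPSH_a in meters of water at the pump suction."""
    return (p_atm_pa - p_vapor_pa) / (RHO * G) + static_head_m - friction_loss_m

def min_basin_level(npsh_required_m: float, p_atm_pa: float, p_vapor_pa: float,
                    friction_loss_m: float, pump_centerline_elev_m: float) -> float:
    """Lowest basin water elevation (m) that still satisfies NPSH_a >= NPSH_required."""
    static_head_needed = (npsh_required_m
                          - (p_atm_pa - p_vapor_pa) / (RHO * G)
                          + friction_loss_m)
    return pump_centerline_elev_m + static_head_needed

# Hypothetical example: pump needing 6 m of NPSH with 1.5 m of suction-line losses
print(min_basin_level(6.0, 101_325.0, 2_339.0, 1.5, pump_centerline_elev_m=0.0))
```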

  16. Advanced Scientific Computing Research Network Requirements

    SciTech Connect (OSTI)

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  17. Large Scale Computing and Storage Requirements for Advanced Scientific

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Research: Target 2014. An ASCR / NERSC Review, January 5-6, 2011. Final Report: Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research, Report of the Joint ASCR / NERSC Workshop conducted January 5-6, 2011. Goals: This workshop is being

  18. Large Scale Computing and Storage Requirements for Fusion Energy...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Home Science at NERSC HPC Requirements Reviews Requirements Reviews: Target 2014 Fusion Energy Sciences (FES) Large Scale Computing and Storage Requirements for Fusion ...

  19. Large Scale Computing and Storage Requirements for Advanced Scientific...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014 ... This workshop is being organized by the Department of Energy's Office of ...

  20. Large Scale Computing and Storage Requirements for High Energy...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    * Thermal, mechanical, and electromagnetic; "virtual prototyping" * Supporting ... variety of approaches resulting to wide spectrum of computational requirements * Beam ...

  1. Large Scale Production Computing and Storage Requirements for Fusion Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sciences: Target 2017. The NERSC Program Requirements Review "Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences" is organized by the Department of Energy's Office of Fusion Energy Sciences (FES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to

  2. Large Scale Production Computing and Storage Requirements for High Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017. The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

  3. Can Cloud Computing Address the Scientific Computing Requirements...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    the ever-increasing computational needs of scientists, Department of Energy ... and as the largest funder of basic scientific research in the U.S., DOE was interested in ...

  4. Architectural requirements for the Red Storm computing system. (Technical

    Office of Scientific and Technical Information (OSTI)

    Report). This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This

  5. Large Scale Computing and Storage Requirements for Basic Energy Sciences:

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Target 2014. Final Report: Large Scale Computing and Storage Requirements for Basic Energy Sciences, Report of the Joint BES / ASCR / NERSC Workshop conducted February 9-10, 2010. Workshop Agenda: The agenda for this workshop is presented here, including presentation times and speaker information. Workshop Presentations: Large Scale Computing and Storage Requirements for Basic

  6. Large Scale Production Computing and Storage Requirements for Advanced

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scientific Computing Research: Target 2017. This is an invitation-only review organized by the Department of Energy's Office of Advanced Scientific Computing Research (ASCR) and NERSC. The general goal is to determine production high-performance computing, storage, and services that will be needed for ASCR to achieve its science goals through 2017. A specific focus

  7. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Requirements for Advanced Scientific Computing Research: Target 2017. This is an invitation-only review organized by the Department of Energy's Office of Advanced ...

  8. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2013. Hilton Washington DC/Rockville Hotel and Executive Meeting Center, 1750 Rockville Pike, Rockville, MD 20852-1699. Final Report: Large Scale Computing and Storage Requirements...

  9. Large Scale Production Computing and Storage Requirements for Basic Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sciences: Target 2017. This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The goal is to determine production high-performance computing, storage, and services that will be needed for BES to

  10. Large Scale Production Computing and Storage Requirements for Nuclear

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017. This invitation-only review is organized by the Department of Energy's Offices of Nuclear Physics (NP) and Advanced Scientific Computing Research (ASCR) and by NERSC. The goal is to determine production high-performance computing, storage, and services that will be needed for NP to achieve its science goals through 2017. The review brings together DOE Program Managers,

  11. PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy

    SciTech Connect (OSTI)

    Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah

    2009-12-01

    In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to insure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken. 
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.

  12. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  13. Large Scale Production Computing and Storage Requirements for Biological

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Environmental Research: Target 2017. September 11-12, 2012, Hilton Rockville Hotel and Executive Meeting Center, 1750 Rockville Pike, Rockville, MD 20852-1699, TEL: 1-301-468-1100. Sponsored by: U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research (ASCR), Office of Biological and Environmental Research (BER), National Energy

  14. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  15. Present and Future Computing Requirements Radiative Transfer of Astrophysical Explosions

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Requirements Radiative Transfer of Astrophysical Explosions Daniel Kasen (UCB/LBNL) SciDAC computational astrophysics consortium Stan Woosley, Ann Almgren, John Bell, Haitao Ma, Peter Nugent, Rollin Thomas, Weiquin Zhang, Adam Burrows, Jason Nordhaus, Louis Howell, Mike Zingale topics and open questions * thermonuclear supernova: What are the progenitors: 1 or 2 white dwarfs? How does the nuclear runaway ignite and develop? How regular are these "standard candles" for cosmology? * core

  16. Scientific Application Requirements for Leadership Computing at the Exascale

    SciTech Connect (OSTI)

    Ahern, Sean; Alam, Sadaf R; Fahey, Mark R; Hartman-Baker, Rebecca J; Barrett, Richard F; Kendall, Ricky A; Kothe, Douglas B; Mills, Richard T; Sankaran, Ramanan; Tharrington, Arnold N; White III, James B

    2007-12-01

    The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which beg further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software. We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware-software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy, possess fault tolerance, exploit asynchronism, and are power-consumption aware. On the other hand, we must also provide application scientists with the ability to develop software without having to become experts in the computer science components. Numerical algorithms are scattered broadly across science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused. Structured grids and dense linear algebra continue to dominate, but other algorithm categories will become more common. A significant increase is projected for Monte Carlo algorithms, unstructured grids, sparse linear algebra, and particle methods, and a relative decrease is foreseen in fast Fourier transforms. These projections reflect the expectation of much higher architecture concurrency and the resulting need for very high scalability. The new algorithm categories that application scientists expect to be increasingly important in the next decade include adaptive mesh refinement, implicit nonlinear systems, data assimilation, agent-based methods, parameter continuation, and optimization. The attributes of leadership computing systems expected to increase most in priority over the next decade are (in order of importance) interconnect bandwidth, memory bandwidth, mean time to interrupt, memory latency, and interconnect latency. The attributes expected to decrease most in relative priority are disk latency, archival storage capacity, disk bandwidth, wide area network bandwidth, and local storage capacity.
These choices by application developers reflect the expected needs of applications or the expected reality of available hardware. One interpretation is that the increasing priorities reflect the desire to increase computational efficiency to take advantage of increasing peak flops [floating point operations per second], while the decreasing priorities reflect the expectation that computational efficiency will not increase. Per-core requirements appear to be relatively static, while aggregate requirements will grow with the system. This projection is consistent with a relatively small increase in performance per core with a dramatic increase in the number of cores. Leadership system software must face and overcome issues that will undoubtedly be exacerbated at the exascale. The operating system (OS) must be as unobtrusive as possible and possess more stability, reliability, and fault tolerance during application execution. As applications will be more likely at the exascale to experience loss of resources during an execution, the OS must mitigate such a loss with a range of responses. New fault tolerance paradigms must be developed and integrated into applications. Just as application input and output must not be an afterthought in hardware design, job management, too, must not be an afterthought in system software design. Efficient scheduling of those resources will be a major obstacle faced by leadership computing centers at the exas...

  17. Architectural requirements for the Red Storm computing system...

    Office of Scientific and Technical Information (OSTI)

    ... Computer architecture; Supercomputers; Accelerated Strategic Computing Initiative (ASCI) ...

  18. User's manual for RATEPAC: a digital-computer program for revenue requirements and rate-impact analysis

    SciTech Connect (OSTI)

    Fuller, L.C.

    1981-09-01

    The RATEPAC computer program is designed to model the financial aspects of an electric power plant or other investment requiring capital outlays and having annual operating expenses. The program produces incremental pro forma financial statements showing how an investment will affect the overall financial statements of a business entity. The code accepts parameters required to determine capital investment and expense as a function of time and sums these to determine minimum revenue requirements (cost of service). The code also calculates present worth of revenue requirements and required return on rate base. This user's manual includes a general description of the code as well as the instructions for input data preparation. A complete example case is appended.
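
    The abstract describes summing capital-related charges and annual expenses into minimum revenue requirements and discounting them to a present worth. The sketch below illustrates that discounting arithmetic only; it is not RATEPAC's actual model, fixed-charge treatment, or input format, and the figures are hypothetical.

```python
# Minimal sketch of "present worth of revenue requirements": each year's
# revenue requirement (capital-related charges plus operating expense) is
# discounted back to year zero. This shows the arithmetic only; it is not
# RATEPAC's model or input format.

def present_worth(revenue_requirements: list[float], discount_rate: float) -> float:
    """Discount a stream of annual revenue requirements to year 0."""
    return sum(rr / (1.0 + discount_rate) ** year
               for year, rr in enumerate(revenue_requirements, start=1))

# Hypothetical 5-year stream ($M/yr): capital charges declining, O&M roughly flat
annual_rr = [120.0, 115.0, 110.0, 106.0, 103.0]
print(f"PW of revenue requirements: ${present_worth(annual_rr, 0.08):.1f}M")
```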

  19. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the ...

  20. Large Scale Computing and Storage Requirements for Biological...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    the requirements input from the workshop attendees. Workshop attendees should review the case study update document and other background materials on the Reference Materials page....

  1. Large Scale Computing Requirements for Basic Energy Sciences...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    significant solution acceleration order of magnitude OFF SHORE BRAZIL CSEM DATA 3D Image Processing Requirements 3D Data and Imaging Volumes - nearly 1 million data points,...

  2. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    SciTech Connect (OSTI)

    Spann, J.; VanArsdall, P.; Bliss, E.

    1996-09-05

    This System Design Requirement document establishes the performance, design, development and test requirements for the Computer System, WBS 1.5.1 which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in ICCS (WBS 1.5) which is the document directly above.

  3. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    I. INTRODUCTION This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  4. Barbara Helland Advanced Scientific Computing Research NERSC-HEP Requirements Review

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    7-28, 2012. Barbara Helland, Advanced Scientific Computing Research, NERSC-HEP Requirements Review. Science case studies drive discussions. Program Requirements Reviews: program offices are evaluated every two to three years; participants include program managers, PIs/scientists, and ESnet/NERSC staff and management; user-driven discussion of science opportunities and needs. What: instruments and facilities, data scale, computational requirements. How: science process, data analysis,

  5. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    SciTech Connect (OSTI)

    DOE Office of Science, Biological and Environmental Research Program Office

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  6. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a brief summary of those relevant to issues raised by researchers at the workshop.

  7. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    SciTech Connect (OSTI)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  8. ASC Computational Environment (ACE) requirements version 8.0 final report.

    SciTech Connect (OSTI)

    Larzelere, Alex R. (Exagrid Engineering, Alexandria, VA); Sturtevant, Judith E.

    2006-11-01

    A decision was made early in the Tri-Lab Usage Model process, that the collection of the user requirements be separated from the document describing capabilities of the user environment. The purpose in developing the requirements as a separate document was to allow the requirements to take on a higher-level view of user requirements for ASC platforms in general. In other words, a separate ASC user requirement document could capture requirements in a way that was not focused on ''how'' the requirements would be fulfilled. The intent of doing this was to create a set of user requirements that were not linked to any particular computational platform. The idea was that user requirements would endure from one ASC platform user environment to another. The hope was that capturing the requirements in this way would assist in creating stable user environments even though the particular platforms would be evolving and changing. In order to clearly make the separation, the Tri-lab S&CS program decided to create a new title for the requirements. The user requirements became known as the ASC Computational Environment (ACE) Requirements.

  9. Harvey Wasserman! Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Harvey Wasserman. Large Scale Computing and Storage Requirements for High Energy Physics Research: Target 2017. Meeting Goals & Process, December 3, 2012. Logistics: Schedule. Agenda on workshop web page: http://www.nersc.gov/science/requirements/HEP. Mid-morning / afternoon break, lunch. Self-organization for dinner. Multiple science areas, one workshop: science-focused but crosscutting discussion; explore areas of common need (within HEP) ...

  10. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2015-01-20

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final in the second round of reviews that covered the six Office of Science program offices. This report is the result of that review.

  11. Program Evaluation: Minimum EERE Requirements | Department of...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... Program Manager Review and Response. Before the report is finalized and goes to senior management, the office leadership will add written responses to peer reviewer findings and ...

  12. Improved initial guess for minimum energy path calculations

    SciTech Connect (OSTI)

    Smidstrup, Søren; Pedersen, Andreas; Stokbro, Kurt

    2014-06-07

    A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained, for example, by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to a minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. The method is illustrated with three examples: (1) rotation of a methyl group in an ethane molecule, (2) an exchange of atoms in an island on a crystal surface, and (3) an exchange of two Si-atoms in amorphous silicon. In all three cases, the computational effort in finding the minimum energy path with DFT was reduced by a factor ranging from 50% to an order of magnitude by using an IDPP path as the initial path. The time required for parallel computations was reduced even more because of load imbalance when linear interpolation of Cartesian coordinates was used.
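
    As a minimal sketch of the image dependent pair potential (IDPP) idea described above: target pairwise distances for each image are linear interpolations between the initial and final geometries, and an objective function penalizes deviations of the image's actual pairwise distances from those targets (the 1/d^4 weighting follows the published method). The full nudged-elastic-band minimization on this surface is not shown.

```python
# Minimal sketch of the IDPP objective for one image along a path. Target
# pairwise distances are interpolated linearly between the initial and final
# geometries, and deviations of the image's actual pairwise distances from
# those targets are penalized with a 1/d^4 weight. A nudged-elastic-band
# minimization of this objective (not shown) yields the initial path.

import numpy as np

def pair_distances(coords: np.ndarray) -> np.ndarray:
    """All pairwise distances for an (N, 3) array of atomic coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def idpp_objective(image: np.ndarray, initial: np.ndarray, final: np.ndarray,
                   fraction: float) -> float:
    """IDPP objective for one image a `fraction` of the way along the path."""
    d_target = (1.0 - fraction) * pair_distances(initial) + fraction * pair_distances(final)
    d_image = pair_distances(image)
    iu = np.triu_indices(len(image), k=1)   # unique atom pairs only
    weights = 1.0 / d_target[iu] ** 4       # emphasize short distances
    return float(np.sum(weights * (d_target[iu] - d_image[iu]) ** 2))
```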

  13. Requirements for Control Room Computer-Based Procedures for use in Hybrid Control Rooms

    SciTech Connect (OSTI)

    Le Blanc, Katya Lee; Oxstrand, Johanna Helene; Joe, Jeffrey Clark

    2015-05-01

    Many plants in the U.S. are currently undergoing control room modernization. The main drivers for modernization are the aging and obsolescence of existing equipment, which typically results in a like-for-like replacement of analogue equipment with digital systems. However, the modernization efforts present an opportunity to employ advanced technology that would not only extend the life, but enhance the efficiency and cost competitiveness of nuclear power. Computer-based procedures (CBPs) are one example of near-term advanced technology that may provide enhanced efficiencies above and beyond like for like replacements of analog systems. Researchers in the LWRS program are investigating the benefits of advanced technologies such as CBPs, with the goal of assisting utilities in decision making during modernization projects. This report will describe the existing research on CBPs, discuss the unique issues related to using CBPs in hybrid control rooms (i.e., partially modernized analog control rooms), and define the requirements of CBPs for hybrid control rooms.

  14. QCD Thermodynamics at High Temperature Peter Petreczky Large Scale Computing and Storage Requirements for Nuclear Physics (NP),

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    QCD Thermodynamics at High Temperature Peter Petreczky Large Scale Computing and Storage Requirements for Nuclear Physics (NP), Bethesda MD, April 29-30, 2014 NY Center for Computational Science 2 Defining questions of nuclear physics research in US: Nuclear Science Advisory Committee (NSAC) "The Frontiers of Nuclear Science", 2007 Long Range Plan "What are the phases of strongly interacting matter and what roles do they play in the cosmos ?" "What does QCD predict for

  15. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  16. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing: Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multi-disciplines from bioscience, sustainable

  17. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ...

  18. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  19. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, ...

  20. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    SciTech Connect (OSTI)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real time data from plant status databases. Without the ability for logical operations the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the underlying data structure for such CBPS. The objective of the research effort is to develop guidance on how to design both the user interface and the underlying schema. This paper will describe the result and insights gained from the research activities conducted to date.
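
    As a purely hypothetical illustration of the idea described above, a procedure step can be stored as XML whose attributes tell the computer-based procedure system what behavior to generate (display information, request a decision, or accept input). The element and attribute names below are invented for this example and are not the schema developed at Idaho National Laboratory.

```python
# Hypothetical illustration of an XML-encoded procedure step whose attributes
# drive the behavior a computer-based procedure system generates. The
# element/attribute names are invented for the example, not the INL schema.

import xml.etree.ElementTree as ET

STEP_XML = """
<step id="4.2" type="input" unit="psig">
  <instruction>Record suction pressure for pump P-101.</instruction>
  <expectedRange low="45" high="60"/>
</step>
"""

step = ET.fromstring(STEP_XML)
if step.get("type") == "input":
    rng = step.find("expectedRange")
    print(step.findtext("instruction"))
    print(f"Enter a value between {rng.get('low')} and {rng.get('high')} {step.get('unit')}")
```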

    1. Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computations - Sandia Energy ...

    2. Requirements for Computer Based-Procedures for Nuclear Power Plant Field Operators Results from a Qualitative Study

      SciTech Connect (OSTI)

      Katya Le Blanc; Johanna Oxstrand

      2012-05-01

      Although computer-based procedures (CBPs) have been investigated as a way to enhance operator performance on procedural tasks in the nuclear industry for almost thirty years, they are not currently widely deployed at United States utilities. One of the barriers to the wide scale deployment of CBPs is the lack of operational experience with CBPs that could serve as a sound basis for justifying the use of CBPs for nuclear utilities. Utilities are hesitant to adopt CBPs because of concern over potential costs of implementation, and concern over regulatory approval. Regulators require a sound technical basis for the use of any procedure at the utilities; without operating experience to support the use of CBPs, it is difficult to establish such a technical basis. In an effort to begin the process of developing a technical basis for CBPs, researchers at Idaho National Laboratory are partnering with industry to explore CBPs with the objective of defining requirements for CBPs and developing an industry-wide vision and path forward for the use of CBPs. This paper describes the results from a qualitative study aimed at defining requirements for CBPs to be used by field operators and maintenance technicians.

    3. Theoretical minimum energies to produce steel for selected conditions

      SciTech Connect (OSTI)

      Fruehan, R. J.; Fortini, O.; Paxton, H. W.; Brindle, R.

      2000-03-01

      An ITP study has determined the theoretical minimum energy requirements for producing steel from ore, scrap, and direct reduced iron. Dr. Richard Fruehan's report, Theoretical Minimum Energies to Produce Steel for Selected Conditions, provides insight into the potential energy savings (and associated reductions in carbon dioxide emissions) for ironmaking, steelmaking, and rolling processes (PDF, 459 KB).

    4. GMTI radar minimum detectable velocity.

      SciTech Connect (OSTI)

      Richards, John Alfred

      2011-04-01

      Minimum detectable velocity (MDV) is a fundamental consideration for the design, implementation, and exploitation of ground moving-target indication (GMTI) radar imaging modes. All single-phase-center air-to-ground radars are characterized by an MDV, or a minimum radial velocity below which motion of a discrete nonstationary target is indistinguishable from the relative motion between the platform and the ground. Targets with radial velocities less than MDV are typically overwhelmed by endoclutter ground returns, and are thus not generally detectable. Targets with radial velocities greater than MDV typically produce distinct returns falling outside of the endoclutter ground returns, and are thus generally discernible using straightforward detection algorithms. This document provides a straightforward derivation of MDV for an air-to-ground single-phase-center GMTI radar operating in an arbitrary geometry.
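
      A commonly cited rule of thumb for the broadside geometry (not the report's derivation, which treats arbitrary geometry) is that the clutter Doppler spread across the azimuth beamwidth sets the threshold: clutter occupies roughly 2*v_platform*beamwidth/wavelength of Doppler bandwidth, a target at radial velocity v_r sits at 2*v_r/wavelength, so MDV is on the order of v_platform*wavelength/(2*aperture). The sketch below uses hypothetical numbers.

```python
# Rule-of-thumb sketch of minimum detectable velocity (MDV) for a
# single-phase-center GMTI radar at broadside. Clutter Doppler spread is about
# 2*v_p*theta_bw/lambda; a target Doppler of 2*v_r/lambda must exceed roughly
# half of that spread. This is an approximation, not the report's derivation.

def mdv_broadside(platform_speed_mps: float, wavelength_m: float,
                  aperture_m: float) -> float:
    """Approximate MDV (m/s), using azimuth beamwidth ~ wavelength / aperture."""
    beamwidth_rad = wavelength_m / aperture_m
    return 0.5 * platform_speed_mps * beamwidth_rad

# Hypothetical X-band example: 150 m/s platform, 3 cm wavelength, 1.5 m antenna
print(f"MDV ~ {mdv_broadside(150.0, 0.03, 1.5):.1f} m/s")
```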

    5. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

      SciTech Connect (OSTI)

      Katya Le Blanc; Johanna Oxstrand

      2012-04-01

      The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a more unknown application for computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

    6. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos

    7. Incorporate Minimum Efficiency Requirements for Heating and Cooling...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      FEMP-designated and ENERGY STAR-qualified heating, ventilating, and air conditioning (HVAC) and water heating products into tables that mirror American Society of Heating, ...

    8. "Table A52. Nonswitchable Minimum Requirements and Maximum...

      U.S. Energy Information Administration (EIA) Indexed Site

      ... for which the switching status was not ascertained. Notes: To obtain a RSE percentage for any table cell, multiply the cell's corresponding RSE column and RSE row factors. ...

    9. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule), 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    10. Minimum Day Time Load Calculation and Screening

      Broader source: Energy.gov (indexed) [DOE]

      ... DPU 36 Supplemental Review Screens * Penetration screen (Minimum load screen) * Power quality and voltage screen * Safety and reliability screen 37 Federal Energy Regulatory ...

    11. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Grid Computing Center interior. Grid Computing Center interior. Computing Grid Computing As high-energy physics experiments grow larger in scope, they require more computing power to process and analyze data. Laboratories purchase rooms full of computer nodes for experiments to use. But many experiments need even more capacity during peak periods. And some experiments do not need to use all of their computing power all of the time. In the early 2000s, members of Fermilab's Computing Division

    12. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

      SciTech Connect (OSTI)

      Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

      2014-04-15

      Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations that minimize the sample size required for estimating ED to desired precision and confidence. The minimization is subject to a constraint on the variation of the estimated ED and is solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen-pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence increases and as the anticipated ED decreases. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops, enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
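
      A minimal sketch of the kind of precision/confidence sample-size calculation described above, using a plain normal-approximation formula rather than the paper's constrained Lagrange-multiplier scheme; the relative spread and targets below are assumed values.

      import math
      from scipy.stats import norm

      # Crude sample-size sketch (normal approximation), NOT the paper's constrained
      # Lagrange-multiplier scheme; all numbers are assumptions for illustration.
      def sample_size(rel_sd, precision, confidence):
          """Scans needed so the mean ED lies within +/- precision (relative) of the
          true ED with the stated confidence, assuming i.i.d. approximately normal readings."""
          z = norm.ppf(0.5 + confidence / 2.0)
          return math.ceil((z * rel_sd / precision) ** 2)

      # Example: 14% relative spread between repeated scans, 5% precision, 95% confidence
      print(sample_size(rel_sd=0.14, precision=0.05, confidence=0.95))  # about 31 scans with these assumptions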

    13. Designing a minimum-functionality neutron and gamma measurement instrument with a focus on authentication

      SciTech Connect (OSTI)

      Karpius, Peter J; Williams, Richard B

      2009-01-01

      During the design and construction of the Next-Generation Attribute-Measurement System, which included a largely commercial off-the-shelf (COTS), nondestructive assay (NDA) system, we realized that commercial NDA equipment tends to include numerous features that are not required for an attribute-measurement system. Authentication of the hardware, firmware, and software in these instruments is still required, even for those features not used in this application. However, such a process adds to the complexity, cost, and time required for authentication. To avoid these added authentication difficulties, we began to design NDA systems capable of performing neutron multiplicity and gamma-ray spectrometry measurements by using simplified hardware and software that avoids unused features and complexity. This paper discusses one possible approach to this design: A hardware-centric system that attempts to perform signal analysis as much as possible in the hardware. Simpler processors and minimal firmware are used because computational requirements are kept to a bare minimum. By hard-coding the majority of the device's operational parameters, we could cull large sections of flexible, configurable hardware and software found in COTS instruments, thus yielding a functional core that is more straightforward to authenticate.

    14. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B 174. Use

    15. ITP Steel: Theoretical Minimum Energies to Produce Steel for...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000 ITP Steel: Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000 PDF ...

    16. Theoretical Minimum Energies to Produce Steel for Selected Conditions

      SciTech Connect (OSTI)

      Fruehan, R.J.; Fortini, O.; Paxton, H.W.; Brindle, R.

      2000-05-01

      The energy used to produce liquid steel in today's integrated and electric arc furnace (EAF) facilities is significantly higher than the theoretical minimum energy requirements. This study presents the absolute minimum energy required to produce steel from ore and mixtures of scrap and scrap alternatives. Additional cases in which the assumptions are changed to more closely approximate actual operating conditions are also analyzed. The results, summarized in Table E-1, should give insight into the theoretical and practical potentials for reducing steelmaking energy requirements. The energy values have also been converted to carbon dioxide (CO2) emissions in order to indicate the potential for reduction in emissions of this greenhouse gas (Table E-2). The study showed that increasing scrap melting has the largest impact on energy consumption. However, scrap should be viewed as having "invested" energy since at one time it was produced by reducing ore. Increasing scrap melting in the BOF may or may not decrease energy if the "invested" energy in scrap is considered.

    17. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world's highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab's Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.

    18. Minimum Day Time Load Calculation and Screening

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Minimum Day Time Load Calculation and Screening" Dora Nakafuji and Anthony Hong, Hawaiian Electric Co. Babak Enayati, DG Technical Standards Review Group April 30, 2014 2 Speakers Babak Enayati Chair of Massachusetts DG Technical Standards Review Group Dora Nakafuji Director of Renewable Energy Planning Hawaiian Electric Company (HECO) Kristen Ardani Solar Analyst, (today's moderator) NREL Anthony Hong Director of Distribution Planning Hawaiian Electric Company (HECO) Standardization of

    19. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1 GB, you should specify that in your job submission and the batch system will run your job on an

    20. Minimum wear tube support hole design

      DOE Patents [OSTI]

      Glatthorn, Raymond H. (St. Petersburg, FL)

      1986-01-01

      A minimum-wear through-bore (16) is defined within a heat exchanger tube support plate (14) so as to have an hourglass configuration as determined by means of a constant radiused surface curvature (18) as defined by means of an external radius (R3), wherein the surface (18) extends between the upper surface (20) and lower surface (22) of the tube support plate (14). When a heat exchange tube (12) is disposed within the tube support plate (14) so as to pass through the through-bore (16), the heat exchange tube (12) is always in contact with a smoothly curved or radiused portion of the through-bore surface (16) whereby unacceptably excessive wear upon the heat exchange tube (12), as normally developed by means of sharp edges, lands, ridges, or the like conventionally part of the tube support plates, is eliminated or substantially reduced.

    1. Energy and IAQ Implications of Alternative Minimum Ventilation Rates in California Retail and School Buildings

      SciTech Connect (OSTI)

      Dutton, Spencer M.; Fisk, William J.

      2015-01-01

      For a stand-alone retail building, a primary school, and a secondary school in each of the 16 California climate zones, the EnergyPlus building energy simulation model was used to estimate how minimum mechanical ventilation rates (VRs) affect energy use and indoor air concentrations of an indoor-generated contaminant. The modeling indicates large changes in heating energy use, but only moderate changes in total building energy use, as minimum VRs in the retail building are changed. For example, predicted state-wide heating energy consumption in the retail building decreases by more than 50% and total building energy consumption decreases by approximately 10% as the minimum VR decreases from the Title 24 requirement to no mechanical ventilation. The primary and secondary schools have notably higher internal heat gains than the retail building models, resulting in significantly reduced demand for heating. The school heating energy use was correspondingly less sensitive to changes in the minimum VR. The modeling indicates that minimum VRs influence HVAC energy and total energy use in schools by only a few percent. For both the retail building and the school buildings, minimum VRs substantially affected the predicted annual-average indoor concentrations of an indoor-generated contaminant, with larger effects in schools. The shape of the curves relating contaminant concentrations to VRs illustrates the importance of avoiding particularly low VRs.

    2. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, ...

    3. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Extensive combinatorial results and ongoing basic...

    4. ITP Steel: Theoretical Minimum Energies to Produce Steel for Selected

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Conditions, March 2000 | Department of Energy Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000 ITP Steel: Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000 PDF icon theoretical_minimum_energies.pdf More Documents & Publications Ironmaking Process Alternatives Screening Study ITP Steel: Steel Industry Marginal Opportunity Study September 2005 ITP Steel: Steel Industry Energy Bandwidth Study October 2004

    5. Optimizing minimum free-energy crossing points in solution: Linear...

      Office of Scientific and Technical Information (OSTI)

      Optimizing minimum free-energy crossing points in solution: Linear-response free energy/spin-flip density functional theory approach Citation Details In-Document Search Title:...

    6. 02-HellandNERSC-Requirements.pptx

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      5, 2014 Barbara Helland Advanced Scientific Computing Research NERSC-ASCR Requirements Review 1 ASCR 2 ... Production Computing for the Office of Science * Characterized by a ...

    7. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

      SciTech Connect (OSTI)

      Kashyap, Vinay L.; Siemiginowska, Aneta [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Van Dyk, David A.; Xu Jin [Department of Statistics, University of California, Irvine, CA 92697-1250 (United States); Connors, Alanna [Eureka Scientific, 2452 Delmer Street, Suite 100, Oakland, CA 94602-3017 (United States); Freeman, Peter E. [Department of Statistics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Zezas, Andreas, E-mail: vkashyap@cfa.harvard.ed, E-mail: asiemiginowska@cfa.harvard.ed, E-mail: dvd@ics.uci.ed, E-mail: jinx@ics.uci.ed, E-mail: aconnors@eurekabayes.co, E-mail: pfreeman@cmu.ed, E-mail: azezas@cfa.harvard.ed [Physics Department, University of Crete, P.O. Box 2208, GR-710 03, Heraklion, Crete (Greece)

      2010-08-10

      A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
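
      The two-step recipe described above can be illustrated for a simple Poisson counting experiment; the background rate and the Type I/Type II error levels below are assumptions, and the recipe in the paper is more general than this sketch.

      from scipy.stats import poisson

      # Minimal sketch of the two-step upper-limit recipe for a Poisson counting
      # experiment; background, alpha, and beta are assumed values for illustration.
      background = 3.0   # expected background counts (assumed)
      alpha = 0.01       # acceptable Type I (false positive) probability
      beta = 0.10        # acceptable Type II (false negative) probability

      # Step 1: detection threshold = smallest count n with P(N >= n | background) <= alpha
      threshold = int(poisson.ppf(1.0 - alpha, background)) + 1

      # Step 2: upper limit = smallest source intensity that is detected
      # (N >= threshold) with probability at least 1 - beta.
      s = 0.0
      while poisson.sf(threshold - 1, background + s) < 1.0 - beta:
          s += 0.01
      print(f"detection threshold = {threshold} counts, upper limit ~ {s:.2f} source counts")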

    8. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Science and Mathematics Division Citation: For exemplary administrative secretarial support to the Computer Science and Mathematics Division and to the ORNL ...

    9. Present and Future Computational Requirements General Plasma...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HDF5, PETSc, (Parallel Python?) visualization cluster (Paraview, python, matlab) with large memory and fast disk access Kai Germaschewski and Homa Karimabadi CICART...

    10. BER Science Network Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      BER Science Network Requirements Report of the Biological and Environmental Research Network Requirements Workshop Conducted July 26 and 27, 2007 BER Science Network Requirements Workshop Biological and Environmental Research Program Office, DOE Office of Science Energy Sciences Network Bethesda, MD - July 26 and 27, 2007 ESnet is funded by the US Dept. of Energy, Office of Science, Advanced Scientific Computing Research (ASCR) program. Dan Hitchcock is the ESnet Program Manager. ESnet is

    11. BES Science Network Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Network Requirements Report of the Basic Energy Sciences Network Requirements Workshop Conducted June 4-5, 2007 BES Science Network Requirements Workshop Basic Energy Sciences Program Office, DOE Office of Science Energy Sciences Network Washington, DC - June 4 and 5, 2007 ESnet is funded by the US Dept. of Energy, Office of Science, Advanced Scientific Computing Research (ASCR) program. Dan Hitchcock is the ESnet Program Manager. ESnet is operated by Lawrence Berkeley National Laboratory, which

    12. Computing Frontier: Distributed Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and using specialized high-speed, low-latency networks to communicate partial results ... possibly requiring the use of multiple Web browsers and a number of utility programs ...

    13. RMACS software requirements specification

      SciTech Connect (OSTI)

      Gneiting, B.C.

      1996-10-01

      This document defines the essential user (or functional) requirements of the Requirements Management and Assured Compliance System (RMACS), which is used by the Tank Waste Remediation System program (TWRS). RMACS provides a computer-based environment that TWRS management and systems engineers can use to identify, define, and document requirements. The intent of the system is to manage information supporting definition of the TWRS technical baseline using a structured systems engineering process. RMACS has the capability to effectively manage a complete set of complex requirements and relationships in a manner that satisfactorily assures compliance with the program requirements over the TWRS life-cycle.

    14. Requirement-Reviews.pptx

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      1, 2013 Requirements Reviews * 1½-day reviews with each Program Office * Computing and storage requirements for next 5 years * Participants - DOE ADs & Program Managers - Leading scientists using NERSC & key potential users - NERSC staff 2 High Energy Physics Fusion Research Reports From 6 Requirements Reviews Have Been Published 3 http://www.nersc.gov/science/requirements-reviews/final-reports/ * Computing and storage requirements for 2013/2014 * Executive Summary of

    15. DOE Requires Manufacturers to Halt Sales of Heat Pumps and Air Conditioners Violating Minimum Appliance Standards

      Broader source: Energy.gov [DOE]

      Today, the Department of Energy announced that three manufacturers -- Aspen Manufacturing, Inc., Summit Manufacturing, and Advanced Distributor Products -- must stop distributing 61 heat pump...

    16. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

      SciTech Connect (OSTI)

      Paiz, Mary Rose

      2015-04-01

      The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
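
      A minimal sketch of the two-threshold idea, assuming synthetic data, a stationary GEV fit (the report uses a non-stationary GEV), and a weekly-seasonal ARIMA; the function and parameter choices are illustrative only.

      import numpy as np
      from scipy.stats import genextreme
      from statsmodels.tsa.arima.model import ARIMA

      # Synthetic placeholder data: daily minimum and daily average response times (ms).
      rng = np.random.default_rng(0)
      daily_minimums = rng.gamma(shape=4.0, scale=20.0, size=365)
      daily_averages = 120 + 10 * np.sin(np.arange(365) * 2 * np.pi / 7) + rng.normal(0, 5, 365)

      # Lower threshold: e.g. the 1st percentile of a GEV fitted to the daily minimums.
      c, loc, scale = genextreme.fit(daily_minimums)
      lower_threshold = genextreme.ppf(0.01, c, loc=loc, scale=scale)

      # Upper threshold: upper bound of the one-step forecast interval from a
      # seasonal ARIMA fitted to the daily averages (weekly seasonality assumed).
      model = ARIMA(daily_averages, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit()
      upper_threshold = model.get_forecast(steps=1).conf_int(alpha=0.05)[0, 1]
      print(lower_threshold, upper_threshold)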

    17. Apparatus and method for closed-loop control of reactor power in minimum time

      DOE Patents [OSTI]

      Bernard, Jr., John A.

      1988-11-01

      Closed-loop control law for altering the power level of nuclear reactors in a safe manner, without overshoot, and in minimum time. Apparatus is provided for moving a fast-acting control element such as a control rod or a control drum for altering the nuclear reactor power level. A computer computes at short time intervals either the function: $\dot{\rho} = (\beta - \rho)\omega - \lambda_e'\rho - \sum_i \beta_i(\lambda_i - \lambda_e') + l^*\dot{\omega} + l^*[\omega^2 + \lambda_e'\omega]$ or the function: $\dot{\rho} = (\beta - \rho)\omega - \lambda_e\rho - (\dot{\lambda}_e/\lambda_e)(\beta - \rho) + l^*\dot{\omega} + l^*[\omega^2 + \lambda_e\omega - (\dot{\lambda}_e/\lambda_e)\omega]$ These functions each specify the rate of change of reactivity that is necessary to achieve a specified rate of change of reactor power. The direction and speed of motion of the control element are altered so as to provide the rate of reactivity change calculated using either or both of these functions, thereby resulting in the attainment of a new power level without overshoot and in minimum time. These functions are computed at intervals of approximately 0.01-1.0 seconds depending on the specific application.
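
      A minimal numerical sketch of evaluating the first control-law expression above; the delayed-neutron data, effective decay constant, and prompt neutron lifetime are assumed values, not those of any particular reactor.

      # Sketch of evaluating the first control law; all kinetics parameters are assumed.
      beta_i    = [0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273]  # delayed-neutron fractions (assumed)
      lambda_i  = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]                    # precursor decay constants, 1/s (assumed)
      beta      = sum(beta_i)
      lambda_e  = 0.1        # effective one-group decay constant lambda_e', 1/s (assumed)
      l_star    = 1.0e-4     # prompt neutron lifetime, s (assumed)

      def reactivity_rate(rho, omega, omega_dot):
          """Rate of change of reactivity needed for inverse period omega and its rate omega_dot."""
          return ((beta - rho) * omega
                  - lambda_e * rho
                  - sum(b * (lam - lambda_e) for b, lam in zip(beta_i, lambda_i))
                  + l_star * omega_dot
                  + l_star * (omega ** 2 + lambda_e * omega))

      # Example: current reactivity 0.001, demanded inverse period 0.02 1/s, held constant.
      print(reactivity_rate(rho=0.001, omega=0.02, omega_dot=0.0))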

    18. Reporting Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Reporting Requirements Reporting Requirements Contacts Director Albert Migliori Deputy Franz Freibert 505 667-6879 Email Professional Staff Assistant Susan Ramsay 505 665 0858...

    19. ELECTRONIC DIGITAL COMPUTER

      DOE Patents [OSTI]

      Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

      1957-10-01

      The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
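
      The "Von Seidel" successive-approximation method referred to in the patent is what is now usually called Gauss-Seidel iteration; below is a minimal sketch for a small diagonally dominant system, which is the regime in which the iteration converges rapidly.

      # Gauss-Seidel (the patent's "Von Seidel" successive approximation) for A x = b.
      def gauss_seidel(A, b, iterations=25):
          n = len(b)
          x = [0.0] * n
          for _ in range(iterations):
              for i in range(n):
                  s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                  x[i] = (b[i] - s) / A[i][i]   # update in place, using the newest values
          return x

      A = [[4.0, 1.0, 0.0],
           [1.0, 5.0, 2.0],
           [0.0, 2.0, 6.0]]
      b = [9.0, 20.0, 22.0]
      print(gauss_seidel(A, b))   # converges to approximately [1.612, 2.551, 2.816]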

    20. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing Computing Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL.

    1. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes Compute nodes Click here to see a more detailed hierarchical map of the topology of a compute node. Last edited: 2016-04-29 11:35:0

    2. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http://isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    3. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Exascale Computing Moving forward into the exascale era, ...

    4. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Research Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. Scientific Networking

    5. Animation Requirements

      Broader source: Energy.gov [DOE]

      Animations include dynamic elements such as interactive images and games. For developing animations, follow these design and coding requirements.

    6. Theoretical solution of the minimum charge problem for gaseous detonations

      SciTech Connect (OSTI)

      Ostensen, R.W.

      1990-12-01

      A theoretical model was developed for the minimum charge to trigger a gaseous detonation in spherical geometry as a generalization of the Zeldovich model. Careful comparisons were made between the theoretical predictions and experimental data on the minimum charge to trigger detonations in propane-air mixtures. The predictions are an order of magnitude too high, and there is no apparent resolution to the discrepancy. A dynamic model, which takes into account the experimentally observed oscillations in the detonation zone, may be necessary for reliable predictions. 27 refs., 9 figs.

    7. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information From here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop. Kerberos. AFS. Printing. Recommended applications for various common tasks. Running CPU- or IO-intensive programs (batch jobs) Commonly encountered problems Computing support within BooNE Bringing a computer to FNAL, or purchasing a new one. Laptops. The Computer Security Program Plan for MiniBooNE The

    8. The"minimum information about an environmental sequence" (MIENS) specification

      SciTech Connect (OSTI)

      Yilmaz, P.; Kottmann, R.; Field, D.; Knight, R.; Cole, J.R.; Amaral-Zettler, L.; Gilbert, J.A.; Karsch-Mizrachi, I.; Johnston, A.; Cochrane, G.; Vaughan, R.; Hunter, C.; Park, J.; Morrison, N.; Rocca-Serra, P.; Sterk, P.; Arumugam, M.; Baumgartner, L.; Birren, B.W.; Blaser, M.J.; Bonazzi, V.; Bork, P.; Buttigieg, P. L.; Chain, P.; Costello, E.K.; Huot-Creasy, H.; Dawyndt, P.; DeSantis, T.; Fierer, N.; Fuhrman, J.; Gallery, R.E.; Gibbs, R.A.; Giglio, M.G.; Gil, I. San; Gonzalez, A.; Gordon, J.I.; Guralnick, R.; Hankeln, W.; Highlander, S.; Hugenholtz, P.; Jansson, J.; Kennedy, J.; Knights, D.; Koren, O.; Kuczynski, J.; Kyrpides, N.; Larsen, R.; Lauber, C.L.; Legg, T.; Ley, R.E.; Lozupone, C.A.; Ludwig, W.; Lyons, D.; Maguire, E.; Methe, B.A.; Meyer, F.; Nakieny, S.; Nelson, K.E.; Nemergut, D.; Neufeld, J.D.; Pace, N.R.; Palanisamy, G.; Peplies, J.; Peterson, J.; Petrosino, J.; Proctor, L.; Raes, J.; Ratnasingham, S.; Ravel, J.; Relman, D.A.; Assunta-Sansone, S.; Schriml, L.; Sodergren, E.; Spor, A.; Stombaugh, J.; Tiedje, J.M.; Ward, D.V.; Weinstock, G.M.; Wendel, D.; White, O.; Wikle, A.; Wortman, J.R.; Glockner, F.O.; Bushman, F.D.; Charlson, E.; Gevers, D.; Kelley, S.T.; Neubold, L.K.; Oliver, A.E.; Pruesse, E.; Quast, C.; Schloss, P.D.; Sinha, R.; Whitely, A.

      2010-10-15

      We present the Genomic Standards Consortium's (GSC) 'Minimum Information about an ENvironmental Sequence' (MIENS) standard for describing marker genes. Adoption of MIENS will enhance our ability to analyze natural genetic diversity across the Tree of Life as it is currently being documented by massive DNA sequencing efforts from myriad ecosystems in our ever-changing biosphere.

    9. NERSC HPC Program Requirements Review Reports

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Published Reports NERSC HPC Program Requirements Review Reports These publications comprise the final reports from the HPC requirements reviews presented to the Department of Energy. Downloads ASCR2017Final.pdf | Adobe Acrobat PDF file Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research - Target 2017 NerscBES2017ReqRevFinal.pdf | Adobe Acrobat PDF file Large Scale Computing and Storage Requirements for Basic Energy Sciences - Target 2017

    10. Competition Requirements

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      ---------------------------------------- Chapter 6.1 (July 2011) 1 Competition Requirements [Reference: FAR 6 and DEAR 906] Overview This section discusses competition requirements and provides a model Justification for Other than Full and Open Competition (JOFOC). Background The Competition in Contracting Act (CICA) of 1984 requires that all acquisitions be made using full and open competition. Seven exceptions to using full and open competition are specifically identified in Federal

    11. Eligibility Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Eligibility Requirements Eligibility Requirements A comprehensive benefits package with plan options for health care and retirement to take care of our employees today and tomorrow. Contact Benefits Office (505) 667-1806 Email Eligibility and required supporting documentation The Laboratory offers an extensive benefits package to full and part time employees. Casual employees (excluding High School Coop, Lab Associates and Craft Employees) are eligible to enroll in the HDHP medical plan. NOTE:

    12. Reporting Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Reporting Requirements Reporting Requirements Contacts Director Albert Migliori Deputy Franz Freibert 505 667-6879 Email Professional Staff Assistant Susan Ramsay 505 665 0858 Email The Fellow will be required to participate in the Actinide Science lecture series by both attending lectures and presenting a scientific lecture on actinide science in this series. Submission of a viewgraph and brief write-up of the project. Provide metrics information as requested. Submission of an overview article

    13. Video Requirements

      Broader source: Energy.gov [DOE]

      All EERE videos, including webinar recordings, must meet Section 508's requirements for accessibility. All videos should be hosted on the DOE YouTube channel.

    14. Deployment Requirements

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Placement, Physical Size, Control Circuitry, Power Requirement, Electronic Interface, Pneumatic Design, Shelf Life, Maturity/Availability, Regulations (Codes), Alarm Set Points * ...

    15. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cluster-Image TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    16. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB DDR3 800 MHz memory per node Peak Gflop rate 9.2 Gflops/core 36.8 Gflops/node 352 Tflops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively 2 MB L3 cache shared among the 4 cores Compute Node Software By default the compute nodes run a restricted low-overhead

    17. New York City- Energy Conservation Requirements for Existing Buildings

      Office of Energy Efficiency and Renewable Energy (EERE)

      Council Bill No. 564-A (Local Law 85 of 2009): Requires that renovations of existing buildings meet minimum energy conservation standards. The result of this law is essentially a city energy code ...

    18. Competition Requirements

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      --------------------------- Chapter 6.5 (January 2011) 1 Competition Advocate Responsibilities [Reference: FAR 6.5, FAR 7 and DEAR 906.501] Overview This section discusses the competition advocate requirements and provides a Federal Procurement Data System-New Generation (FPDS-NG) coding assistance sheet and screen shots for the FPDS-NG Competition Report. Background FAR Part 6.5, "Competition Advocates," implements section 20 of the Office of Federal Procurement Policy Act, which requires

    19. Minimum information about a marker gene sequence (MIMARKS) and minimum information about any (x) sequence (MIxS) specifications.

      SciTech Connect (OSTI)

      Yilmaz, P.; Kottmann, R.; Field, D.; Knight, R.; Cole, J. R.; Amaral-Zettler, L.; Gilbert, J. A.

      2011-05-01

      Here we present a standard developed by the Genomic Standards Consortium (GSC) for reporting marker gene sequences - the minimum information about a marker gene sequence (MIMARKS). We also introduce a system for describing the environment from which a biological sample originates. The 'environmental packages' apply to any genome sequence of known origin and can be used in combination with MIMARKS and other GSC checklists. Finally, to establish a unified standard for describing sequence data and to provide a single point of entry for the scientific community to access and learn about GSC checklists, we present the minimum information about any (x) sequence (MIxS). Adoption of MIxS will enhance our ability to analyze natural genetic diversity documented by massive DNA sequencing efforts from myriad ecosystems in our ever-changing biosphere.
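
      To make the checklist-plus-environmental-package idea concrete, the sketch below assembles a minimal MIMARKS-style record as a plain dictionary; the field names are paraphrased from memory and are illustrative only, so consult the published GSC/MIxS checklists for the authoritative descriptor names.

      # Illustrative MIMARKS-style marker-gene record; field names are approximate
      # and shown only to convey the checklist idea, not as the normative schema.
      record = {
          "investigation_type": "mimarks-survey",
          "project_name": "example 16S rRNA survey",
          "target_gene": "16S rRNA",
          "seq_meth": "pyrosequencing",
          # Environmental package / origin of the sample
          "geo_loc_name": "USA: example site",
          "lat_lon": "38.98 N 77.11 W",
          "collection_date": "2010-06-15",
          "env_biome": "temperate grassland biome",
          "env_feature": "agricultural field",
          "env_material": "soil",
      }

      # Trivial completeness check against an assumed minimal set of required descriptors.
      required = ["investigation_type", "project_name", "lat_lon", "collection_date"]
      missing = [k for k in required if k not in record]
      print("missing required fields:", missing or "none")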

    20. Is ""predictability"" in computational sciences a myth?

      SciTech Connect (OSTI)

      Hemez, Francois M [Los Alamos National Laboratory

      2011-01-31

      Within the last two decades, Modeling and Simulation (M&S) has become the tool of choice to investigate the behavior of complex phenomena. Successes encountered in 'hard' sciences are prompting interest in applying a similar approach to Computational Social Sciences in support, for example, of national security applications faced by the Intelligence Community (IC). This manuscript attempts to contribute to the debate on the relevance of M&S to IC problems by offering an overview of what it takes to reach 'predictability' in computational sciences. Even though models developed in 'soft' and 'hard' sciences are different, useful analogies can be drawn. The starting point is to view numerical simulations as 'filters' capable of representing information only within specific length, time or energy bandwidths. This simplified view leads to the discussion of resolving versus modeling, which motivates the need for sub-scale modeling. The role that modeling assumptions play in 'hiding' our lack-of-knowledge about sub-scale phenomena is explained, which leads to a discussion of uncertainty in simulations. It is argued that the uncertainty caused by resolution and modeling assumptions should be dealt with differently than uncertainty due to randomness or variability. The corollary is that a predictive capability cannot be defined solely as accuracy, or ability of predictions to match the available physical observations. We propose that 'predictability' is the demonstration that predictions from a class of 'equivalent' models are as consistent as possible. Equivalency stems from defining models that share a minimum requirement of accuracy, while being equally robust to the sources of lack-of-knowledge in the problem. Examples in computational physics and engineering are given to illustrate the discussion.

    1. DOE Requires Manufacturers to Halt Sales of Heat Pumps and Air Conditioners

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Violating Minimum Appliance Standards | Department of Energy Requires Manufacturers to Halt Sales of Heat Pumps and Air Conditioners Violating Minimum Appliance Standards DOE Requires Manufacturers to Halt Sales of Heat Pumps and Air Conditioners Violating Minimum Appliance Standards June 3, 2010 - 12:00am Addthis Washington, DC - Today, the Department of Energy announced that three manufacturers -- Aspen Manufacturing, Inc., Summit Manufacturing, and Advanced Distributor Products -- must

    2. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Laboratory (pdf) DOE/NNSA Laboratories Fulfill National Mission with Trinity and Cielo Petascale Computers (pdf) Exascale Co-design Center for Materials in Extreme...

    3. Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Advanced Materials Laboratory Center for Integrated Nanotechnologies Combustion Research Facility Computational Science Research Institute Joint BioEnergy Institute About EC News ...

    4. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    5. Optimization of Operating Parameters for Minimum Mechanical Specific...

      Office of Scientific and Technical Information (OSTI)

      in maximum Rate of Penetration. Current methods for computing MSE make it possible to ... Mathematical relationships between the parameters were established, and the conventional ...

    6. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center ...

    7. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and...

    8. The transition from the open minimum to the ring minimum on the ground state and on the lowest excited state of like symmetry in ozone: A configuration interaction study

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Theis, Daniel; Ivanic, Joseph; Windus, Theresa L.; Ruedenberg, Klaus

      2016-03-10

      The metastable ring structure of the ozone 1 ¹A₁ ground state, which theoretical calculations have shown to exist, has so far eluded experimental detection. An accurate prediction for the energy difference between this isomer and the lower open structure is therefore of interest, as is a prediction for the isomerization barrier between them, which results from interactions between the lowest two ¹A₁ states. In the present work, valence correlated energies of the 1 ¹A₁ state and the 2 ¹A₁ state were calculated at the 1 ¹A₁ open minimum, the 1 ¹A₁ ring minimum, the transition state between these two minima, the minimum of the 2 ¹A₁ state, and the conical intersection between the two states. The geometries were determined at the full-valence multi-configuration self-consistent-field level. Configuration interaction (CI) expansions up to quadruple excitations were calculated with triple-zeta atomic basis sets. The CI expansions based on eight different reference configuration spaces were explored. To obtain some of the quadruple excitation energies, the method of Correlation Energy Extrapolation by Intrinsic Scaling was generalized to the simultaneous extrapolation for two states. This extrapolation method was shown to be very accurate. On the other hand, none of the CI expansions were found to have converged to millihartree (mh) accuracy at the quadruple excitation level. The data suggest that convergence to mh accuracy is probably attained at the sextuple excitation level. On the 1 ¹A₁ state, the present calculations yield the estimates of (ring minimum − open minimum) ~45–50 mh and (transition state − open minimum) ~85–90 mh. For the (2 ¹A₁ − 1 ¹A₁) excitation energy, the estimate of ~130–170 mh is found at the open minimum and 270–310 mh at the ring minimum. At the transition state, the difference (2 ¹A₁ − 1 ¹A₁) is found to be between 1 and 10 mh. The geometry of the transition state on the 1 ¹A₁ surface and that of the minimum on the 2 ¹A₁ surface nearly coincide. More accurate predictions of the energy differences also require CI expansions to at least sextuple excitations with respect to the valence space. Furthermore, for every wave function considered, the omission of the correlations of the 2s oxygen orbitals, which is a widely used approximation, was found to cause errors of about ±10 mh with respect to the energy differences.

    9. Proposal for grid computing for nuclear applications

      SciTech Connect (OSTI)

      Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; and others

      2014-02-12

      The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power needed to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

    10. HUD (Housing and Urban Development) Intermediate Minimum Property Standards Supplement 4930.2 (1989 edition). Solar heating and domestic hot water systems

      SciTech Connect (OSTI)

      Not Available

      1989-12-01

      The Minimum Property Standards for Housing 4910.1 were developed to provide a sound technical basis for housing under numerous programs of the Department of Housing and Urban Development (HUD). These Intermediate Minimum Property Standards for Solar Heating and Domestic Hot Water Systems are intended to provide a companion technical basis for the planning and design of solar heating and domestic hot water systems. These standards have been prepared as a supplement to the Minimum Property Standards (MPS) and deal only with aspects of planning and design that are different from conventional housing by reason of the solar systems under consideration. The document contains requirements and standards applicable to one- and two-family dwellings, multifamily housing, and nursing homes and intermediate care facilities. References made in the text to the MPS refer to the same section in the Minimum Property Standards for Housing 4910.1.

    11. Computer, Computational, and Statistical Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Directed Research and Development (LDRD) Defense Advanced Research Projects Agency (DARPA) Defense Threat Reduction Agency (DTRA) Research Applied Computer Science Co-design ...

    12. Minimum length, extra dimensions, modified gravity and black hole remnants

      SciTech Connect (OSTI)

      Maziashvili, Michael

      2013-03-01

      We construct a Hilbert space representation of the minimum-length deformed uncertainty relation in the presence of extra dimensions. Following this construction, we study corrections to the gravitational potential (back reaction on gravity) with the use of the correspondingly modified propagator in the presence of two (spatial) extra dimensions. Interestingly enough, for r→0 the gravitational force approaches zero, and the horizon for the modified Schwarzschild-Tangherlini space-time disappears when the mass approaches the quantum-gravity energy scale. This result points to the existence of zero-temperature black hole remnants in the ADD brane-world model.

    13. Competition Requirements

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      ---- ----------------------------------------------- Chapter 5.2 (April 2008) Synopsizing Proposed Non-Competitive Contract Actions Citing the Authority of FAR 6.302-1 [Reference: FAR 5 and DEAR 905] Overview This section discusses publicizing sole source actions as part of the approval of a Justification for Other than Full and Open Competition (JOFOC) using the authority of FAR 6.302-1. Background The Competition in Contracting Act (CICA) of 1984 requires that all acquisitions be made using

    14. Level III Mentoring Requirement | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Level III Mentoring Requirement Level III Mentoring Requirement Level III applicants must be mentored (minimum of six months) by a Level III or IV FPD or demonstrate equivalency (see below Competency 3.12.2 in the PMCDP's CEG). A formal mentoring agreement must be signed by both parties detailing the goals and activities of the mentoring arrangement, and a signed copy of the agreement must be submitted with the certification application when it is presented to the PMCDP. Applications will be

    15. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    16. Nuclear Forces and High-Performance Computing: The Perfect Match...

      Office of Scientific and Technical Information (OSTI)

      We give estimates of computational requirements needed to obtain certain milestones and describe the scientific and computational challenges of this field. Authors: Luu, T ; Walker...

    17. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    18. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Compute Node Configuration 6,384 nodes 2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node (see die image to the right and schematic below) 24 cores per node (153,216 total cores) 32 GB DDR3 1333-MHz memory per node (6,000 nodes) 64 GB DDR3 1333-MHz memory per node (384 nodes) Peak Gflop/s rate: 8.4 Gflops/core 201.6 Gflops/node 1.28 Peta-flops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512KB respectively One 6-MB

    19. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable in driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    20. Visitor Hanford Computer Access Request - Hanford Site

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Visitor Hanford Computer Access Request The U.S. Department of Energy (DOE), Richland Operations Office (RL), in compliance with the 'Tri-Party Agreement Databases, Access Mechanism and Procedures' document, DOE/RL-93-69, Revision 5; set forth the requirements for access to the Hanford Site computer

    1. PREPARING FOR EXASCALE: ORNL Leadership Computing Application...

      Office of Scientific and Technical Information (OSTI)

      ... Requirements elicitation, analysis, validation, and management comprise a difficult and ... Research Org: Oak Ridge National Laboratory (ORNL); Oak Ridge Leadership Computing ...

    2. Institutional computing (IC) information session

      SciTech Connect (OSTI)

      Koch, Kenneth R; Lally, Bryan R

      2011-01-19

      The LANL Institutional Computing Program (IC) will host an information session about the current state of unclassified Institutional Computing at Los Alamos, exciting plans for the future, and the current call for proposals for science and engineering projects requiring computing. Program representatives will give short presentations and field questions about the call for proposals and future planned machines, and discuss technical support available to existing and future projects. Los Alamos has started making a serious institutional investment in open computing available to our science projects, and that investment is expected to increase even more.

    3. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...

    4. Data Crosscutting Requirements Review

      SciTech Connect (OSTI)

      Kleese van Dam, Kerstin; Shoshani, Arie; Plata, Charity

      2013-04-01

      In April 2013, a diverse group of researchers from the U.S. Department of Energy (DOE) scientific community assembled to assess data requirements associated with DOE-sponsored scientific facilities and large-scale experiments. Participants in the review included facilities staff, program managers, and scientific experts from the offices of Basic Energy Sciences, Biological and Environmental Research, High Energy Physics, and Advanced Scientific Computing Research. As part of the meeting, review participants discussed key issues associated with three distinct aspects of the data challenge: 1) processing, 2) management, and 3) analysis. These discussions identified commonalities and differences among the needs of varied scientific communities. They also helped to articulate gaps between current approaches and future needs, as well as the research advances that will be required to close these gaps. Moreover, the review provided a rare opportunity for experts from across the Office of Science to learn about their collective expertise, challenges, and opportunities. The "Data Crosscutting Requirements Review" generated specific findings and recommendations for addressing large-scale data crosscutting requirements.

    5. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab --- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo maintained by webmaster@jlab.org...

    6. Analysis of Minimum Efficiency Performance Standards for Residential General Service Lighting in Chile

      SciTech Connect (OSTI)

      Letschert, Virginie E.; McNeil, Michael A.; Leiva Ibanez, Francisco Humberto; Ruiz, Ana Maria; Pavon, Mariana; Hall, Stephen

      2011-06-01

      Minimum Efficiency Performance Standards (MEPS) have been chosen as part of Chile's national energy efficiency action plan. As a first MEPS, the Ministry of Energy has decided to focus on a regulation for lighting that would ban the sale of inefficient bulbs, effectively phasing out the use of incandescent lamps. Following major economies such as the US (EISA, 2007), the EU (Ecodesign, 2009) and Australia (AS/NZS, 2008), which planned phase-outs based on minimum efficacy requirements, the Ministry of Energy has undertaken the impact analysis of a MEPS on the residential lighting sector. Fundacion Chile (FC) and Lawrence Berkeley National Laboratory (LBNL) collaborated with the Ministry of Energy and the National Energy Efficiency Program (Programa Pais de Eficiencia Energetica, or PPEE) in order to produce a techno-economic analysis of this future policy measure. LBNL has developed for CLASP (CLASP, 2007) a spreadsheet tool called the Policy Analysis Modeling System (PAMS) that allows for evaluation of costs and benefits at the consumer level as well as a wide range of impacts at the national level, such as energy savings, net present value of savings, greenhouse gas (CO2) emission reductions and avoided generation capacity due to a specific policy. Because historically Chile has followed European schemes in energy efficiency programs (test procedures, labelling program definitions), we take the Ecodesign commission regulation No 244/2009 as a starting point when defining our phase-out program, which means a tiered phase-out based on minimum efficacy per lumen category. The following data were collected in order to perform the techno-economic analysis: (1) Retail prices, efficiency and wattage category in the current market, (2) Usage data (hours of lamp use per day), and (3) Stock data, penetration of efficient lamps in the market. Using these data, PAMS calculates the costs and benefits of efficiency standards from two distinct but related perspectives: (1) The Life-Cycle Cost (LCC) calculation examines costs and benefits from the perspective of the individual household; and (2) The National Perspective projects the total national costs and benefits, including financial benefits, energy savings, and environmental benefits. The national perspective calculations are called the National Energy Savings (NES) and the Net Present Value (NPV) calculations. PAMS also calculates total emission mitigation and avoided generation capacity. This paper describes the data and methodology used in PAMS and presents the results of the proposed phase-out of incandescent bulbs in Chile.
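
      As a rough illustration of the two calculation perspectives named above (a per-household life-cycle cost comparison and a national net present value of savings), the short Python sketch below mimics the structure of such an analysis. It is not the PAMS spreadsheet itself, and every lamp price, tariff, lifetime, discount rate, and stock figure in it is a hypothetical placeholder rather than a number from the Chilean study.

        # Minimal sketch of the consumer (LCC) and national (NPV) perspectives;
        # all figures below are hypothetical placeholders, not data from the Chilean analysis.

        def annualized_cost(price, lifetime_years, watts, hours_per_day, tariff_kwh):
            """Amortized lamp purchase price plus annual electricity cost for one socket."""
            annual_kwh = watts / 1000.0 * hours_per_day * 365.0
            return price / lifetime_years + annual_kwh * tariff_kwh

        def national_npv(annual_savings, years, discount_rate):
            """Net present value of a constant stream of national savings."""
            return sum(annual_savings / (1.0 + discount_rate) ** t for t in range(1, years + 1))

        # Consumer perspective: incandescent versus efficient lamp for one socket.
        incandescent = annualized_cost(price=0.5, lifetime_years=1.0, watts=60, hours_per_day=3, tariff_kwh=0.15)
        efficient = annualized_cost(price=3.0, lifetime_years=6.0, watts=14, hours_per_day=3, tariff_kwh=0.15)
        saving_per_lamp = incandescent - efficient            # currency units per lamp per year

        # National perspective: scale the per-lamp saving to an assumed affected lamp stock.
        lamp_stock = 30e6
        print("annual saving per lamp:", round(saving_per_lamp, 2))
        print("NPV of national savings:", round(national_npv(saving_per_lamp * lamp_stock, years=10, discount_rate=0.06)))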

    7. RATIO COMPUTER

      DOE Patents [OSTI]

      Post, R.F.

      1958-11-11

      An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals and each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of input signals depending upon the relation of input to fixed signals in the first mentioned channel.
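
      To make the quotient mode of operation concrete, the toy Python sketch below servos a shared gain until the reference channel reproduces a fixed comparison signal, after which the second channel's output is proportional to the ratio of the two inputs. The loop, gain variable, and numeric values are illustrative stand-ins for the analog feedback the patent describes, not a model of the actual circuit.

        # Toy digital analogue of the two-channel scheme: a shared gain g is adjusted until
        # channel 1 matches the fixed reference, so channel 2's output becomes ~ v_ref * v2 / v1.
        def ratio_computer(v1, v2, v_ref=1.0, steps=2000, rate=0.05):
            g = 1.0                                  # shared amplification factor (illustrative)
            for _ in range(steps):
                error = v_ref - g * v1               # difference signal from the comparison channel
                g += rate * error                    # negative feedback drives the error toward zero
            return g * v2                            # output of the second channel

        print(ratio_computer(2.0, 6.0))              # prints ~3.0, i.e. proportional to v2 / v1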

    8. Shape-memory transformations of NiTi: Minimum-energy pathways...

      Office of Scientific and Technical Information (OSTI)

      Accepted Manuscript: Shape-memory transformations of NiTi: Minimum-energy pathways between austenite, martensites, and kinetically limited intermediate states

    9. IDAPA 37.03.03 - Rules and Minimum Standards for the Construction...

      Open Energy Info (EERE)

      3 - Rules and Minimum Standards for the Construction and Use of Injection Wells Jump to: navigation, search OpenEI Reference LibraryAdd to library Legal Document-...

    10. Accounts Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Accounts Policy All holders of user accounts must abide by all appropriate Argonne Leadership Computing Facility and Argonne National Laboratory computing usage policies. These are described at the time of the account request and include requirements such as using a sufficiently strong password, appropriate use of the system, and so on. Any user not following these requirements will have their account disabled. Furthermore, ALCF resources are intended to be used as a computing resource for

    11. Covered Product Category: Computers | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computers Covered Product Category: Computers The Federal Energy Management Program (FEMP) provides acquisition guidance for computers, a product category covered by the ENERGY STAR program. Federal laws and requirements mandate that agencies buy ENERGY STAR-qualified products in all product categories covered by this program and any acquisition actions that are not specifically exempted by law. MEETING EFFICIENCY REQUIREMENTS FOR FEDERAL PURCHASES The U.S. Environmental Protection Agency (EPA)

    12. Cyber Security Process Requirements Manual

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2008-08-12

      The Manual establishes the minimum implementation standards for cyber security management processes throughout the Department. No cancellation.

    13. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, have provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincaré conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.
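
      In the exploratory spirit the authors describe, a few lines of code can check a nontrivial identity to high precision long before a formal argument is attempted. The sketch below uses the mpmath library to verify the Bailey-Borwein-Plouffe series for pi numerically; the working precision and term count are arbitrary choices made for the illustration.

        # Experimental check of the Bailey-Borwein-Plouffe series for pi to ~60 digits.
        from mpmath import mp, mpf

        mp.dps = 60                                   # working precision in decimal digits
        total = mpf(0)
        for k in range(80):                           # 80 terms are far more than enough here
            total += (mpf(1) / 16**k) * (mpf(4) / (8*k + 1) - mpf(2) / (8*k + 4)
                                         - mpf(1) / (8*k + 5) - mpf(1) / (8*k + 6))
        print(abs(total - mp.pi))                     # residual at the level of the working precision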

    14. NERSC HPC Program Requirements Reviews Overview

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Overview NERSC HPC Program Requirements Reviews Overview Scope These workshops are focused on determining the computational challenges facing research teams and the computational resources scientists will need to meet their research objectives. The goal is to assure that NERSC, the DOE Office of Science, and its program offices, will be able to provide the high performance computing and storage resources necessary to support the Office of Science's scientific goals. The merits of the scientific

    15. Computing and Computational Sciences Directorate - Joint Institute...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and ...

    16. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPC INL Logo Home High-Performance Computing INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    17. Multiprocessor computing for images

      SciTech Connect (OSTI)

      Cantoni, V.; Levialdi, S.

      1988-08-01

      A review of image processing systems developed until now is given, highlighting the weak points of such systems and the trends that have dictated their evolution through the years producing different generations of machines. Each generation may be characterized by the hardware architecture, the programmability features and the relative application areas. The need for multiprocessing hierarchical systems is discussed focusing on pyramidal architectures. Their computational paradigms, their virtual and physical implementation, their programming and software requirements, and capabilities by means of suitable languages, are discussed.

    18. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    19. TRIDAC host computer functional specification

      SciTech Connect (OSTI)

      Hilbert, S.M.; Hunter, S.L.

      1983-08-23

      The purpose of this document is to outline the baseline functional requirements for the Triton Data Acquisition and Control (TRIDAC) Host Computer Subsystem. The requirements presented in this document are based upon systems that currently support both the SIS and the Uranium Separator Technology Groups in the AVLIS Program at the Lawrence Livermore National Laboratory and upon the specific demands associated with the extended safe operation of the SIS Triton Facility.

    20. FES Requirements Review 2014

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      FES Requirements Review 2014 Science Engagement Move your data Programs & Workshops Science Requirements Reviews Network Requirements Reviews Documents and Background Materials FAQ for Case Study Authors BER Requirements Review 2015 ASCR Requirements Review 2015 Previous Reviews HEP/NP Requirements Review 2013 FES Requirements Review 2014 FES Attendees 2014 BES Requirements Review 2014 Requirements Review Reports Case Studies Contact Us Technical Assistance: 1 800-33-ESnet (Inside US) 1

    1. BES Requirements Review 2014

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      BES Requirements Review 2014 Science Engagement Move your data Programs & Workshops Science Requirements Reviews Network Requirements Reviews Documents and Background Materials FAQ for Case Study Authors BER Requirements Review 2015 ASCR Requirements Review 2015 Previous Reviews HEP/NP Requirements Review 2013 FES Requirements Review 2014 BES Requirements Review 2014 BES Attendees 2014 Requirements Review Reports Case Studies Contact Us Technical Assistance: 1 800-33-ESnet (Inside US) 1

    2. BER Requirements Review 2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Reviews Network Requirements Reviews Documents and Background Materials FAQ for Case Study Authors BER Requirements Review 2015 BER Attendees 2015 ASCR Requirements...

    3. Requirements for Wind Development

      Broader source: Energy.gov [DOE]

      In 2015 Oklahoma amended the Oklahoma Wind Energy Development Act. The amendments added new financial security requirements, setback requirements, and notification requirements for wind energy...

    4. Using the NEPA Requirements and Guidance - Search Index

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      your computer or USB drive. 2. Locate and Open the extracted folder "NEPA Requirements and Guidance - Search Index". 3. Locate and Open the .PDX file titled "Search - NEPA ...

    5. Large Scale Computing and Storage Requirements for Nuclear Physics...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      must respond to their e-mail invitation. The Group Registration Deadline for the hotel is May 4, 2011. An official letter of invitation is available (PDF). Workshop Agenda...

    6. ComPASS Present and Future Computing Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... TAO, Trilinos * Shared library support is very important ... and possibly cheaper energy frontier accelerators: * ... the experiment Current HPC usage Hours: 6M/year ...

    7. Present and Future Computing Requirements Sergey Syritsyn RIKEN...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      30% of the proton spin can be explained by quark spins 2012 1 value. It is up to workers in this field to solve this ... PDG 2012 Dirac Radius Magnetic Moment Pauli Radius ...

    8. Large Scale Computing and Storage Requirements for Basic Energy...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SciencesAn BES ASCR NERSC WorkshopFebruary 9-10, 2010... Read More Workshop Logistics Workshop location, directions, and registration information are included here......

    9. Large Scale Computing and Storage Requirements for High Energy...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      13, 2009. Hilton Washington DC/Rockville Executive Meeting Center 1750 Rockville Pike Rockville, MD 20852 Registration The workshop is by invitation only. There is no...

    10. Large Scale Production Computing and Storage Requirements for...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      11-12, 2012 Hilton Rockville Hotel and Executive Meeting Center 1750 Rockville Pike Rockville, MD, 20852-1699 TEL: 1-301-468-1100 Sponsored by: U.S. Department of Energy...

    11. Large Scale Production Computing and Storage Requirements for...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2012 Hilton Washington DC/Rockville Hotel & Executive Meeting Center 1750 Rockville Pike, Rockville, MD, 20852-1699 Final Report PDF Hotel Information Info on how to reserve a...

    12. Determining collective barrier operation skew in a parallel computer

      DOE Patents [OSTI]

      Faraj, Daniel A.

      2015-11-24

      Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
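
      The measurement loop claimed above can be summarized in a few lines. The Python sketch below only simulates the bookkeeping with synthetic timings (the delay, barrier cost, and jitter values are invented): each node is selected once as the delayed node, a barrier completion time is recorded for that selection, and the skew is the difference between the largest and smallest recorded times. It illustrates the calculation, not the patented parallel implementation.

        # Bookkeeping of the skew measurement, with synthetic timings standing in for a real barrier.
        import random

        def measure_barrier_skew(num_nodes, barrier_cost=1.0e-6, jitter=2.0e-7, delay=5.0e-6):
            completion_times = []
            for delayed_node in range(num_nodes):
                # All other nodes are already waiting in the barrier; it completes once the
                # delayed node arrives, plus a per-selection jitter standing in for system noise.
                completion_times.append(delay + barrier_cost + random.uniform(0.0, jitter))
            return max(completion_times) - min(completion_times)

        print("simulated barrier skew (seconds):", measure_barrier_skew(num_nodes=16))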

    13. Determining collective barrier operation skew in a parallel computer

      DOE Patents [OSTI]

      Faraj, Daniel A.

      2015-12-24

      Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by: identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.

    14. Feed tank transfer requirements

      SciTech Connect (OSTI)

      Freeman-Pollard, J.R.

      1998-09-16

      This document presents a definition of tank turnover; DOE responsibilities; TWRS DST permitting requirements; TWRS Authorization Basis (AB) requirements; TWRS AP Tank Farm operational requirements; unreviewed safety question (USQ) requirements; records and reporting requirements, and documentation which will require revision in support of transferring a DST in AP Tank Farm to a privatization contractor for use during Phase 1B.

    15. From Fjords to Open Seas: Ecological Genomics of Expanding Oxygen Minimum Zones (2010 JGI User Meeting)

      ScienceCinema (OSTI)

      Hallam, Steven

      2011-04-26

      Steven Hallam of the University of British Columbia talks "From Fjords to Open Seas: Ecological Genomics of Expanding Oxygen Minimum Zones" on March 24, 2010 at the 5th Annual DOE JGI User Meeting

    16. Title 43 CFR 3206.12 What are the Minimum and Maximum Lease Sizes...

      Open Energy Info (EERE)

      .12 What are the Minimum and Maximum Lease Sizes? Jump to: navigation, search OpenEI Reference LibraryAdd to library Legal Document- Federal RegulationFederal Regulation: Title 43...

    17. Revenue-requirement approach to analysis of financing alternatives

      SciTech Connect (OSTI)

      Ewers, B.J.; Wheaton, K.E.

      1984-07-19

      The minimum revenue requirement discipline (MRRD) is accepted throughout the utility industry as a tool to be used for economic decisions and rate making. At least one utility company has also used MRRD in the analysis of financing alternatives. This article was written to show the versatility of the revenue requirement discipline. It demonstrates that this methodology is appropriate not only for evaluating traditional capital budgeting decisions, but also for identifying the most economic financing alternatives. 5 references, 4 figures, 4 tables.
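
      The mechanics behind such a comparison are simple to state: discount each alternative's stream of annual revenue requirements and prefer the smaller present value. The sketch below shows only that generic arithmetic; the two cash-flow streams and the discount rate are invented for the example and are not taken from the article.

        # Generic present-value comparison of two financing alternatives (all figures invented).
        def present_value(revenue_requirements, discount_rate):
            return sum(r / (1.0 + discount_rate) ** t
                       for t, r in enumerate(revenue_requirements, start=1))

        debt_financing = [120, 118, 116, 114, 112]     # annual revenue requirements, $M (hypothetical)
        lease_financing = [105, 110, 115, 120, 125]

        pv_debt = present_value(debt_financing, 0.10)
        pv_lease = present_value(lease_financing, 0.10)
        print("lower-revenue-requirement alternative:", "debt" if pv_debt < pv_lease else "lease")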

    18. NERSC HPC Requirements Reviews: Target 2014

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Basic Energy Sciences (BES) Biological and Environmental Science (BER) Fusion Energy Sciences (FES) High Energy Physics (HEP) Nuclear Physics (NP) Overview Published Reports Case Study FAQs NERSC HPC Achievement Awards Share Your Research User Submitted Research Citations NERSC Citations Home » Science at NERSC » HPC Requirements Reviews » Requirements Reviews: Target 2014 NERSC HPC Requirements Reviews: Target 2014 NERSC and the Office of Advanced Computational Research held six program

    19. Shape-memory transformations of NiTi: Minimum-energy pathways between

      Office of Scientific and Technical Information (OSTI)

      austenite, martensites, and kinetically limited intermediate states (Journal Article) | DOE PAGES. NiTi is the most used shape-memory alloy; nonetheless, a lack of understanding remains regarding the associated

    20. Impact analysis on a massively parallel computer

      SciTech Connect (OSTI)

      Zacharia, T.; Aramayo, G.A.

      1994-06-01

      Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

    1. Cyber Security Process Requirements Manual

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2008-08-12

      The Manual establishes the minimum implementation standards for cyber security management processes throughout the Department. No cancellation. Admin Chg 1 dated 9-1-09.

    2. A common language for computer security incidents

      SciTech Connect (OSTI)

      John D. Howard; Thomas A Longstaff

      1998-10-01

      Much of the computer security information regularly gathered and disseminated by individuals and organizations cannot currently be combined or compared because a common language has yet to emerge in the field of computer security. A common language consists of terms and taxonomies (principles of classification) which enable the gathering, exchange and comparison of information. This paper presents the results of a project to develop such a common language for computer security incidents. This project results from cooperation between the Security and Networking Research Group at the Sandia National Laboratories, Livermore, CA, and the CERT® Coordination Center at Carnegie Mellon University, Pittsburgh, PA. This Common Language Project was not an effort to develop a comprehensive dictionary of terms used in the field of computer security. Instead, the authors developed a minimum set of high-level terms, along with a structure indicating their relationship (a taxonomy), which can be used to classify and understand computer security incident information. They hope these high-level terms and their structure will gain wide acceptance, be useful, and most importantly, enable the exchange and comparison of computer security incident information. They anticipate, however, that individuals and organizations will continue to use their own terms, which may be more specific both in meaning and use. They designed the common language to enable these lower-level terms to be classified within the common language structure.
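
      One way to picture such a common language is as a small structured record whose fields come from agreed vocabularies, so that incidents reported by different organizations can be compared mechanically. The sketch below is an illustrative encoding only; the field names and example values are placeholders, not the published taxonomy.

        # Illustrative structured incident record; field names and values are placeholders.
        from dataclasses import dataclass, asdict

        @dataclass
        class IncidentRecord:
            attacker: str        # e.g. "hacker", "insider"
            tool: str            # e.g. "script", "toolkit"
            action: str          # e.g. "probe", "flood", "read"
            target: str          # e.g. "account", "process", "network"
            result: str          # e.g. "denial of service", "disclosure of information"

        incident = IncidentRecord(attacker="hacker", tool="script", action="flood",
                                  target="network", result="denial of service")
        print(asdict(incident))  # a record two organizations could exchange and compare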

    3. Science Requirements Process

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science Requirements Reviews Network Requirements Reviews Requirements Review Reports Case Studies News & Publications ESnet News Publications and Presentations Galleries ESnet Awards and Honors Blog ESnet Live Home » Science Engagement » Science Requirements Reviews Science Engagement Move your data Programs & Workshops Science Requirements Reviews Network Requirements Reviews Requirements Review Reports Case Studies Contact Us Technical Assistance: 1 800-33-ESnet (Inside US) 1

    4. ASCR Requirements Review 2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ASCR Requirements Review 2015 ASCR Attendees 2015 Previous Reviews Requirements Review Reports Case Studies News & Publications ESnet News Publications and Presentations Galleries ESnet Awards and Honors Blog ESnet Live Home » Science Engagement » Science Requirements Reviews » Network Requirements Reviews » ASCR Requirements Review 2015 Science Engagement Move your data Programs & Workshops Science Requirements Reviews Network Requirements Reviews Documents and Background Materials

    5. BER Requirements Review 2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      BER Attendees 2015 ASCR Requirements Review 2015 Previous Reviews Requirements Review Reports Case Studies News & Publications ESnet News Publications and Presentations Galleries ESnet Awards and Honors Blog ESnet Live Home » Science Engagement » Science Requirements Reviews » Network Requirements Reviews » BER Requirements Review 2015 Science Engagement Move your data Programs & Workshops Science Requirements Reviews Network Requirements Reviews Documents and Background Materials

    6. ASCR Science Network Requirements

      SciTech Connect (OSTI)

      Dart, Eli; Tierney, Brian

      2009-08-24

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In April 2009 ESnet and the Office of Advanced Scientific Computing Research (ASCR), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by ASCR. The ASCR facilities anticipate significant increases in wide area bandwidth utilization, driven largely by the increased capabilities of computational resources and the wide scope of collaboration that is a hallmark of modern science. Many scientists move data sets between facilities for analysis, and in some cases (for example the Earth System Grid and the Open Science Grid), data distribution is an essential component of the use of ASCR facilities by scientists. Due to the projected growth in wide area data transfer needs, the ASCR supercomputer centers all expect to deploy and use 100 Gigabit per second networking technology for wide area connectivity as soon as that deployment is financially feasible. In addition to the network connectivity that ESnet provides, the ESnet Collaboration Services (ECS) are critical to several science communities. ESnet identity and trust services, such as the DOEGrids certificate authority, are widely used both by the supercomputer centers and by collaborations such as Open Science Grid (OSG) and the Earth System Grid (ESG). Ease of use is a key determinant of the scientific utility of network-based services. Therefore, a key enabling aspect for scientists beneficial use of high performance networks is a consistent, widely deployed, well-maintained toolset that is optimized for wide area, high-speed data transfer (e.g. GridFTP) that allows scientists to easily utilize the services and capabilities that the network provides. Network test and measurement is an important part of ensuring that these tools and network services are functioning correctly. One example of a tool in this area is the recently developed perfSONAR, which has already shown its usefulness in fault diagnosis during the recent deployment of high-performance data movers at NERSC and ORNL. On the other hand, it is clear that there is significant work to be done in the area of authentication and access control - there are currently compatibility problems and differing requirements between the authentication systems in use at different facilities, and the policies and mechanisms in use at different facilities are sometimes in conflict. Finally, long-term software maintenance was of concern for many attendees. Scientists rely heavily on a large deployed base of software that does not have secure programmatic funding. Software packages for which this is true include data transfer tools such as GridFTP as well as identity management and other software infrastructure that forms a critical part of the Open Science Grid and the Earth System Grid.

    7. Computation Modeling and Assessment of Nanocoatings for Ultra Supercritical Boilers

      SciTech Connect (OSTI)

      J. Shingledecker; D. Gandy; N. Cheruvu; R. Wei; K. Chan

      2011-06-21

      Forced outages and boiler unavailability of coal-fired fossil plants are most often caused by fire-side corrosion of boiler waterwalls and tubing. Reliable coatings are required for Ultrasupercritical (USC) application to mitigate corrosion since these boilers will operate at much higher temperatures and pressures than in supercritical (565 C at 24 MPa) boilers. Computational modeling efforts have been undertaken to design and assess potential Fe-Cr-Ni-Al systems to produce stable nanocrystalline coatings that form a protective, continuous scale of either Al2O3 or Cr2O3. The computational modeling results identified a new series of Fe-25Cr-40Ni with or without 10 wt.% Al nanocrystalline coatings that maintain long-term stability by forming a diffusion barrier layer at the coating/substrate interface. The computational modeling predictions of microstructure, formation of continuous Al2O3 scale, inward Al diffusion, grain growth, and sintering behavior were validated with experimental results. Advanced coatings, such as MCrAl (where M is Fe, Ni, or Co) nanocrystalline coatings, have been processed using different magnetron sputtering deposition techniques. Several coating trials were performed and among the processing methods evaluated, the DC pulsed magnetron sputtering technique produced the best quality coating with a minimum number of shallow defects, and the results of multiple deposition trials showed that the process is repeatable. The cyclic oxidation test results revealed that the nanocrystalline coatings offer better oxidation resistance, in terms of weight loss, localized oxidation, and formation of mixed oxides in the Al2O3 scale, than widely used MCrAlY coatings. However, the ultra-fine grain structure in these coatings, consistent with the computational model predictions, resulted in accelerated Al diffusion from the coating into the substrate. An effective diffusion barrier interlayer coating was developed to prevent inward Al diffusion. The fire-side corrosion test results showed that the nanocrystalline coatings with a minimum number of defects have a great potential in providing corrosion protection. The coating tested in the most aggressive environment showed no evidence of coating spallation and/or corrosion attack after 1050 hours exposure. In contrast, evidence of coating spallation in isolated areas and corrosion attack of the base metal in the spalled areas were observed after 500 hours. These contrasting results after 500 and 1050 hours exposure suggest that the premature coating spallation in isolated areas may be related to the variation of defects in the coating between the samples. It is suspected that the cauliflower-type defects in the coating were presumably responsible for coating spallation in isolated areas. Thus, a defect-free, good-quality coating is the key for the long-term durability of nanocrystalline coatings in corrosive environments. Additional process optimization work is required to produce defect-free coatings prior to development of a coating application method for production parts.

    8. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers Applications of Parallel Computers UCB CS267 Spring 2015 Tuesday & Thursday, 9:30-11:00 Pacific Time Applications of Parallel Computers, CS267, is a graduate-level course...

    9. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
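
      The routing idea can be illustrated on a toy graph: when a link in the first network is marked defective, a path search is allowed to fall back on the links of the independent second network. The breadth-first search below and the two small example topologies are hypothetical; they sketch the concept rather than the patented mechanism.

        # Toy illustration of routing around a defective link using the union of two networks.
        from collections import deque

        def route_around(network_a, network_b, defective_link, src, dst):
            """Shortest-hop path from src to dst avoiding the defective link of network A,
            while allowed to use any link of the independent network B."""
            usable = [e for e in network_a if set(e) != set(defective_link)] + list(network_b)
            adjacency = {}
            for u, v in usable:
                adjacency.setdefault(u, set()).add(v)
                adjacency.setdefault(v, set()).add(u)
            queue, seen = deque([[src]]), {src}
            while queue:
                path = queue.popleft()
                if path[-1] == dst:
                    return path
                for nxt in adjacency.get(path[-1], ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(path + [nxt])
            return None

        # Two small, independent example networks over compute nodes 0-3 (hypothetical topology).
        torus = [(0, 1), (1, 2), (2, 3), (3, 0)]
        tree = [(0, 2), (1, 3)]
        print(route_around(torus, tree, defective_link=(1, 2), src=1, dst=2))  # e.g. [1, 0, 2]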

    10. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    11. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      & Computational Math - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us ... Twitter Google + Vimeo GovDelivery SlideShare Applied & Computational Math HomeEnergy ...

    12. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6 Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental ...

    13. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Partnerships Shifter: User Defined Images Archive APEX Home R & D Energy Aware Computing Energy Aware Computing Dynamic Frequency Scaling One means to lower the energy ...

    14. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    15. Feed tank transfer requirements

      SciTech Connect (OSTI)

      Freeman-Pollard, J.R.

      1998-09-16

      This document presents a definition of tank turnover. Also, DOE and PC responsibilities; TWRS DST permitting requirements; TWRS Authorization Basis (AB) requirements; TWRS AP Tank Farm operational requirements; unreviewed safety question (USQ) requirements are presented for two cases (i.e., tank modifications occurring before tank turnover and tank modification occurring after tank turnover). Finally, records and reporting requirements, and documentation which will require revision in support of transferring a DST in AP Tank Farm to a privatization contractor are presented.

    16. Requirements Review Reports

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Requirements Review Reports Case Studies News & Publications ESnet News Publications and Presentations Galleries ESnet Awards and Honors Blog ESnet Live Home » Science Engagement » Science Requirements Reviews » Requirements Review Reports Science Engagement Move your data Programs & Workshops Science Requirements Reviews Network Requirements Reviews Requirements Review Reports Case Studies Contact Us Technical Assistance: 1 800-33-ESnet (Inside US) 1 800-333-7638 (Inside US) 1

    17. Regulators, Requirements, Statutes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Air Act (CAA) Requirements for air quality and air emissions from facility operations Clean Water Act (CWA) Requirements for water quality and water discharges from facility...

    18. ALCF Data Science Program | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ALCF Data Science Program The ALCF Data Science Program (ADSP) is targeted at "big data" science problems that require the scale and performance of leadership computing resources. ...

    19. ALCC Quarterly Report Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ALCC Quarterly Report Policy The Department of Energy (DOE) requires the Argonne Leadership Computing Facility (ALCF) to report the progress and scientific accomplishments of all...

    20. Fermilab computing at the Intensity Frontier

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Group, Craig; Fuess, S.; Gutsche, O.; Kirby, M.; Kutschke, R.; Lyon, A.; Norman, A.; Perdue, G.; Sexton-Kennedy, E.

      2015-12-23

      The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

    1. Richard Gerber! Harvey Wasserman! Requirements Reviews Organizers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      9, 2014. Requirements Reviews: 1½-day reviews with each Program Office covering computing and storage requirements for the next 5 years. Participants: DOE ADs & Program Managers, leading NERSC users & key potential users, and NERSC staff. High Energy Physics, Fusion Research, Adv. Comp. Science Research (Jan. 2014); Basic Energy Sciences (Oct 2014). Reports from 8 requirements reviews have been published: http://www.nersc.gov/science/hpc-requirements-reviews/reports/ Computing and s...

    2. Point sensitive NMR imaging system using a magnetic field configuration with a spatial minimum

      DOE Patents [OSTI]

      Eberhard, Philippe H. (El Cerrito, CA)

      1985-01-01

      A point-sensitive NMR imaging system (10) in which a main solenoid coil (11) produces a relatively strong and substantially uniform magnetic field and a pair of perturbing coils (PZ1 and PZ2) powered by current in the same direction superimposes a pair of relatively weak perturbing fields on the main field to produce a resultant point of minimum field strength at a desired location in a direction along the Z-axis. Two other pairs of perturbing coils (PX1, PX2; PY1, PY2) superimpose relatively weak field gradients on the main field in directions along the X- and Y-axes to locate the minimum field point at a desired location in a plane normal to the Z-axis. An RF generator (22) irradiates a tissue specimen in the field with radio frequency energy so that desired nuclei in a small volume at the point of minimum field strength will resonate.
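
      As a purely numerical illustration of locating a point of minimum field strength when weak perturbations are superposed on a strong, nearly uniform main field, the toy Python sketch below builds an on-axis field profile and finds its minimum. The Gaussian perturbation profiles and all numbers are arbitrary stand-ins, not the coil geometry of the patent.

        # Toy model: weak perturbations superposed on a uniform main field; locate the minimum.
        import numpy as np

        z = np.linspace(-5.0, 5.0, 2001)                   # position along the solenoid axis (arbitrary units)
        b_main = np.full_like(z, 1.0)                      # strong, nearly uniform main field
        perturb = -0.02 * (np.exp(-(z - 1.0) ** 2 / 4.0) + np.exp(-(z + 1.0) ** 2 / 4.0))
        b_total = b_main + perturb                         # weak dips superimposed on the main field

        z_min = z[np.argmin(np.abs(b_total))]
        print("field minimum located at z =", round(float(z_min), 3))   # 0.0 for this symmetric choice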

    3. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    4. Computational Science and Engineering

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science and Engineering NETL's Computational Science and Engineering competency consists of conducting applied scientific research and developing physics-based simulation models, methods, and tools to support the development and deployment of novel process and equipment designs. Research includes advanced computations to generate information beyond the reach of experiments alone by integrating experimental and computational sciences across different length and time scales. Specific

    5. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing ? from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.3. Opportunities for gLite in finance and related industriesAdam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd.gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance communities compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK University and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications first conference in computational Finance.4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zrich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. 
From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team are engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.
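
      The "embarrassingly parallel" character of basic Monte Carlo simulation mentioned above is easy to show in miniature: independent scenario batches can be generated in separate worker processes and merged at the end. The Python sketch below does this for a deliberately crude portfolio-loss model; the exposures, default probability, scenario counts, and quantile are all invented for the illustration and are unrelated to the bank's actual framework.

        # Minimal embarrassingly parallel Monte Carlo: independent batches in worker processes.
        import random
        from multiprocessing import Pool

        EXPOSURES = [100.0] * 50          # 50 identical loans (hypothetical portfolio)
        DEFAULT_P = 0.02                  # per-loan default probability (hypothetical)

        def simulate_batch(args):
            """Simulate one independent batch of portfolio-loss scenarios."""
            seed, n_scenarios = args
            rng = random.Random(seed)
            return [sum(e for e in EXPOSURES if rng.random() < DEFAULT_P) for _ in range(n_scenarios)]

        if __name__ == "__main__":
            with Pool(4) as pool:                          # batches run in independent processes
                batches = pool.map(simulate_batch, [(seed, 25_000) for seed in range(4)])
            losses = sorted(loss for batch in batches for loss in batch)
            print("99.9% loss quantile:", losses[int(0.999 * len(losses))])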

    6. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24


    7. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with the PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, the means by which those challenges are being addressed within UBS (such as Grid) as well as the limitations of some of these solutions, and will assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with SETI@home. Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. The talk discusses Grid computing and its evolution into application virtualization, and how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software.
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in Finance and Related Industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship with the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational finance. 4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial services industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework.
From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he advised international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team engineer and operate high performance cluster applications for computationally intensive problems in financial risk management.
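      The following is a minimal sketch, in Python, of the embarrassingly parallel Monte Carlo pattern described in the abstract above: independent chunks of trials are farmed out to worker processes and only their partial sums are combined. The single-factor loss model, exposures, thresholds, and chunk sizes are hypothetical illustrations, not part of the ZKB framework.

      import numpy as np
      from multiprocessing import Pool

      def simulate_chunk(args):
          # Simulate one independent chunk of toy portfolio-loss trials; return partial sums only.
          n_trials, seed = args
          rng = np.random.default_rng(seed)
          exposures = np.array([1.0, 2.5, 0.7, 3.2])            # hypothetical exposures per obligor
          default_threshold = -1.5                               # hypothetical default threshold
          z = rng.standard_normal((n_trials, exposures.size))    # latent credit factors
          losses = (z < default_threshold).astype(float) @ exposures
          return losses.sum(), (losses ** 2).sum(), n_trials

      if __name__ == "__main__":
          chunks = [(250_000, seed) for seed in range(8)]        # 8 independent trial streams
          with Pool(processes=4) as pool:
              partials = pool.map(simulate_chunk, chunks)        # embarrassingly parallel map
          total, total_sq, n = (sum(x) for x in zip(*partials))
          mean = total / n
          stderr = (total_sq / n - mean ** 2) ** 0.5 / n ** 0.5
          print(f"expected loss per trial ~ {mean:.4f} +/- {stderr:.4f}")

      On a distributed memory cluster the same pattern applies, with the process pool replaced by MPI ranks and the partial sums combined by a reduction.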

    8. Molecular Science Computing: 2010 Greenbook

      SciTech Connect (OSTI)

      De Jong, Wibe A.; Cowley, David E.; Dunning, Thom H.; Vorpagel, Erich R.

      2010-04-02

      This 2010 Greenbook outlines the science drivers for performing integrated computational environmental molecular research at EMSL and defines the next-generation HPC capabilities that must be developed at the MSC to address this critical research. The EMSL MSC Science Panel used EMSL’s vision and science focus and white papers from current and potential future EMSL scientific user communities to define the scientific direction and resulting HPC resource requirements presented in this 2010 Greenbook.

    9. Network Requirements Reviews

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ESnet Science Engagement: Science Requirements Reviews and Network Requirements Reviews, including documents and background materials, an FAQ for case study authors, the BER Requirements Review 2015, the ASCR Requirements Review 2015, previous reviews, requirements review reports, and case studies. Technical assistance: 1-800-33-ESnet (1-800-333-7638) inside the US, or +1-510-486-7600 / +1-510-486-7607 globally; report network problems to trouble@es.net.

    10. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    11. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Research: Computational Fluid Dynamics. Overview of CFD (video clip with audio). Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical ...

    12. Improving CSE Software through Reproducibility Requirements. (Conference) |

      Office of Scientific and Technical Information (OSTI)

      SciTech Connect citation: Improving CSE Software through Reproducibility Requirements. Abstract not provided. Author: Heroux, Michael Allen. Publication date: 2011-02-01. OSTI Identifier: 1109282. Report Number(s): SAND2011-1158C; 471476. DOE Contract Number: AC04-94AL85000. Resource type: Conference. Conference: 4th International Workshop on Software Engineering for Computational ...

    13. Program Requirements | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      Participants. Academies: USAFA, USNA, USMA, USCGA, and USMMA cadets/midshipmen. NNSA sites: LANL, LLNL, SNL, NNSS, Pantex, KC Plant, Y-12 Plant, and Savannah River. NNSA Headquarters: Defense Programs management. Eligibility requirements: student in good standing; Secret security clearance, with some authorized to CNWDI (RD) desired; major in physics, chemistry, engineering, material science, life science, computer science, or social science (political science, psychology, and public affairs).

    14. Requirements Management Database

      Energy Science and Technology Software Center (OSTI)

      2009-08-13

      This application is a simplified and customized version of the RBA and CTS databases to capture federal, site, and facility requirements, link to actions that must be performed to maintain compliance with their contractual and other requirements.

    15. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw; Gokhale, Maya B.; McCabe, Kevin Peter

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    16. NERSC Requirements Workshop November

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      to show major growth is studies of Higgs particle physics. A burst of computational activity about 15 years ago died out, but anything other than a standard model Higgs at...

    17. Housing standards: change to HUD 4930. 2 Intermediate Minimum Property Standard (IMPS) supplement for solar heating and domestic hot water systems

      SciTech Connect (OSTI)

      Not Available

      1982-08-17

      This rule updates, clarifies, and improves the requirements contained in HUD Handbook 4930.2, Intermediate Minimum Property Standards (IMPS) Supplement, concerning solar heating and domestic hot water systems. Changes pertain to fire protection, penetration, roof covering, conditions of use, thermal stability, rain resistance, ultraviolet stability, and compatibility with the transfer medium. Additional changes cover applicable standards, labeling, flash point, chemical and physical compatibility, flame spread classification, lightning protection, and parts of a solar energy system. Altogether, there are over 50 changes, some of which apply to tables and worksheets. Footnotes are included.

    18. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing: High-performance Computing. A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at ...

    19. Cyber Security Process Requirements Manual

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2008-08-12

      The Manual establishes minimum implementation standards for cyber security management processes throughout the Department. Admin Chg 1 dated 9-1-09; Admin Chg 2 dated 12-22-09. Canceled by DOE O 205.1B. No cancellations.

    20. The National Environmental Policy Act (NEPA) requires that Federal agencies

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The National Environmental Policy Act (NEPA) requires that Federal agencies determine the impact of their actions on the natural and human environments and disclose those impacts to the public. Requested by stakeholders in 2008; driven by current missions and proposed changes in Nevada National Security Site activities; updates the environmental baseline. Log No. 2011-308. Process: prepare draft SWEIS; public comment period; prepare final SWEIS; minimum 30-day waiting period; Record of Decision; Notice of ...

    1. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost 3 on the TRACC Cluster Oct. ... with an emphasis on applying these capabilities to build computationally efficient models. ...

    2. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    3. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for the Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    4. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per person (1,104 computers per thousand employees). They also had a fairly high ratio of...

    5. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research: discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver ...

    6. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Research: Computational Structural Mechanics. Overview of CSM. Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is ...

    7. Other Requirements - DOE Directives, Delegations, and Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Listing of other requirements by type (Secretarial Memo, Program Office Memo, and Invoked Technical Standards), filterable by Office of Primary Interest (OPI).

    8. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
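      As a rough illustration of the logbook behavior described in the abstract (a history of logged events, search over that history, and undo of selected past events), the Python sketch below uses hypothetical class and method names; it is not drawn from the patent itself.

      from dataclasses import dataclass, field
      from typing import Callable, List
      import os

      @dataclass
      class Event:
          description: str
          undo_action: Callable[[], None]   # callable that reverses the logged event
          undone: bool = False

      @dataclass
      class Logbook:
          history: List[Event] = field(default_factory=list)

          def log(self, description: str, undo_action: Callable[[], None]) -> None:
              # Record an event as it occurs in the computing environment.
              self.history.append(Event(description, undo_action))

          def search(self, text: str) -> List[Event]:
              # Find past events whose description contains the given text.
              return [e for e in self.history if text.lower() in e.description.lower()]

          def undo(self, event: Event) -> None:
              # Undo a selected past event by running its stored reverse action.
              if not event.undone:
                  event.undo_action()
                  event.undone = True

      # Usage: log a file creation, search the history, and undo the selected event.
      book = Logbook()
      open("scratch.txt", "w").close()
      book.log("created scratch.txt", lambda: os.remove("scratch.txt"))
      for event in book.search("scratch"):
          book.undo(event)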

    9. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology (MCEpi), Los Alamos National Laboratory. Research areas: agent-based modeling; mixing patterns and social networks; mathematical epidemiology; social internet research; uncertainty quantification. Quantifying model uncertainty in agent-based simulations for ...

    10. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    11. Computational Fluid Dynamics Library

      Energy Science and Technology Software Center (OSTI)

      2005-03-04

      CFDLib05 is the Los Alamos Computational Fluid Dynamics LIBrary. This is a collection of hydrocodes using a common data structure and a common numerical method, for problems ranging from single-field, incompressible flow, to multi-species, multi-field, compressible flow. The data structure is multi-block, with a so-called structured grid in each block. The numerical method is a Finite-Volume scheme employing a state vector that is fully cell-centered. This means that the integral form of the conservation laws is solved on the physical domain that is represented by a mesh of control volumes. The typical control volume is an arbitrary quadrilateral in 2D and an arbitrary hexahedron in 3D. The Finite-Volume scheme is for time-unsteady flow and remains well coupled by means of time and space centered fluxes; if a steady state solution is required, the problem is integrated forward in time until the user is satisfied that the state is stationary.
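      To illustrate the cell-centered finite-volume idea in its simplest form (not CFDLib's multi-block, multi-field scheme), the Python sketch below advances cell-averaged values of a 1D linear advection equation using interface fluxes, so the cell-integrated quantity is conserved exactly on the mesh; all parameters are illustrative.

      import numpy as np

      def advect_step(u, a, dx, dt):
          # Advance cell-averaged values one step with upwind interface fluxes (assumes a > 0).
          flux = a * u                      # flux based on the cell-centered state
          flux_left = np.roll(flux, 1)      # flux at the left interface (periodic domain)
          return u - (dt / dx) * (flux - flux_left)

      nx, a = 200, 1.0
      dx = 1.0 / nx
      dt = 0.5 * dx / a                     # CFL-limited time step
      x = (np.arange(nx) + 0.5) * dx
      u0 = np.exp(-200.0 * (x - 0.3) ** 2)  # initial cell-averaged profile
      u = u0.copy()
      for _ in range(100):
          u = advect_step(u, a, dx, dt)
      # The flux-difference form conserves the cell-integrated quantity exactly.
      print("total conserved:", np.isclose(u.sum() * dx, u0.sum() * dx))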

    12. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications.

    13. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    14. Computational Systems & Software Environment | National Nuclear Security

      National Nuclear Security Administration (NNSA)

      The mission of this national sub-program is to build integrated, balanced, and scalable computational capabilities to meet the predictive simulation requirements of NNSA. This sub-program strives to provide users of ASC computing resources a stable and seamless computing environment for all ASC-deployed platforms. Along with these powerful systems, ASC will maintain and field the supporting software infrastructure that the ...

    15. Computational and Experimental Screening of Mixed-Metal Perovskite

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    16. ESnet Requirements Workshops

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Requirements Workshops Summary for Sites. Eli Dart, Network Engineer, ESnet Network Engineering Group. ESnet Site Coordinating Committee Meeting, Clemson, SC, February 2, 2011. Lawrence Berkeley National Laboratory, U.S. Department of Energy, Office of Science. Overview: what the ESnet requirements workshops are; common themes; discussion of a subset of the requirements learned, including examples of trends, success stories, and upcoming needs that are currently unmet; thoughts for discussion.

    17. User Requirements Gathered for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Requirements Gathered for the NERSC 7 Procurement. Richard Gerber, NERSC User Services, NUG 2012, February 3, 2012, NERSC Oakland Scientific Facility. User requirements for NERSC 7 were largely based on a series of NERSC workshops. Goal: ensure that NERSC continues to provide the world-class facilities and services needed to support DOE Office of Science research. Method: workshops to derive and document each DOE SC office's HPC requirements for NERSC in 2013-14. Deliverables: reports that ...

    18. ASCR Requirements Review 2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ASCR Program Office. These requirements will serve as input to the ESnet architecture and planning processes, and will help ensure that ESnet continues to provide world-class...

    19. BES Requirements Review 2014

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      BES Program Office. These requirements will serve as input to the ESnet architecture and planning processes, and will help ensure that ESnet continues to provide world-class...

    20. Transuranic Waste Requirements

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1999-07-09

      The guide provides criteria for determining if a waste is to be managed in accordance with DOE M 435.1-1, Chapter III, Transuranic Waste Requirements.

    1. HPC Requirements Reviews

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC HPC Requirements Reviews: Target 2017 and Target 2014. Overview, published reports, case study FAQs, NERSC HPC Achievement Awards, and science areas including Accelerator Science and Astrophysics & Cosmology...

    2. Environmental Requirements Management

      SciTech Connect (OSTI)

      Cusack, Laura J.; Bramson, Jeffrey E.; Archuleta, Jose A.; Frey, Jeffrey A.

      2015-01-08

      CH2M HILL Plateau Remediation Company (CH2M HILL) is the U.S. Department of Energy (DOE) prime contractor responsible for the environmental cleanup of the Hanford Site Central Plateau. As part of this responsibility, the CH2M HILL is faced with the task of complying with thousands of environmental requirements which originate from over 200 federal, state, and local laws and regulations, DOE Orders, waste management and effluent discharge permits, Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) response and Resource Conservation and Recovery Act (RCRA) corrective action documents, and official regulatory agency correspondence. The challenge is to manage this vast number of requirements to ensure they are appropriately and effectively integrated into CH2M HILL operations. Ensuring compliance with a large number of environmental requirements relies on an organization’s ability to identify, evaluate, communicate, and verify those requirements. To ensure that compliance is maintained, all changes need to be tracked. The CH2M HILL identified that the existing system used to manage environmental requirements was difficult to maintain and that improvements should be made to increase functionality. CH2M HILL established an environmental requirements management procedure and tools to assure that all environmental requirements are effectively and efficiently managed. Having a complete and accurate set of environmental requirements applicable to CH2M HILL operations will promote a more efficient approach to: • Communicating requirements • Planning work • Maintaining work controls • Maintaining compliance

    3. Residential Solar Permit Requirements

      Office of Energy Efficiency and Renewable Energy (EERE)

      Washington's State Building Code sets requirements for the installation, inspection, maintenance and repair of solar photovoltaic (PV) energy systems. Local jurisdictions have the authority to...

    4. Required Annual Notices

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Required Annual Notices The Women's Health and Cancer Rights Act of 1998 (WHCRA) The medical programs sponsored by LANS will not restrict benefits if you or your dependent...

    5. THE TURBULENT CASCADE AND PROTON HEATING IN THE SOLAR WIND DURING SOLAR MINIMUM

      SciTech Connect (OSTI)

      Coburn, Jesse T.; Smith, Charles W.; Vasquez, Bernard J.; Stawarz, Joshua E.; Forman, Miriam A. E-mail: Charles.Smith@unh.edu E-mail: Joshua.Stawarz@Colorado.edu

      2012-08-01

      The recently protracted solar minimum provided years of interplanetary data that were largely absent in any association with observed large-scale transient behavior on the Sun. With large-scale shear at 1 AU generally isolated to corotating interaction regions, it is reasonable to ask whether the solar wind is significantly turbulent at this time. We perform a series of third-moment analyses using data from the Advanced Composition Explorer. We show that the solar wind at 1 AU is just as turbulent as at any other time in the solar cycle. Specifically, the turbulent cascade of energy scales in the same manner proportional to the product of wind speed and temperature. Energy cascade rates during solar minimum average a factor of 2-4 higher than during solar maximum, but we contend that this is likely the result of having a different admixture of high-latitude sources.

    6. Estimate of Technical Potential for Minimum Efficiency Performance Standards in 13 Major World Economies

      SciTech Connect (OSTI)

      Letschert, Virginie; Desroches, Louis-Benoit; Ke, Jing; McNeil, Michael

      2012-07-01

      As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world's major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The best available technology (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.

    7. Genome informatics: Requirements and challenges

      SciTech Connect (OSTI)

      Robbins, R.J.

      1993-12-31

      Informatics of some kind will play a role in every aspect of the Human Genome Project (HGP); data acquisition, data analysis, data exchange, data publication, and data visualization. What are the real requirements and challenges? The primary requirement is clear thinking and the main challenge is design. If good design is lacking, the price will be failure of genome informatics and ultimately failure of the genome project itself. Scientists need good designs to deliver the tools necessary for acquiring and analyzing DNA sequences. As these tools become more efficient, they will need new tools for comparative genomic analyses. To make the tools work, the scientists will need to address and solve nomenclature issues that are essential, if also tedious. They must devise systems that will scale gracefully with the increasing flow of data. The scientists must be able to move data easily from one system to another, with no loss of content. As scientists, they will have failed in their responsibility to share results, should repeating experiments ever become preferable to searching the literature. Their databases must become a new kind of scientific literature and the scientists must develop ways to make electronic data publishing as routine as traditional journal publishing. Ultimately, they must build systems so advanced that they are virtually invisible. In summary, the HGP can be considered the most ambitious, most audacious information-management project ever undertaken. In the HGP, computers will not merely serve as tools for cataloging existing knowledge. Rather, they will serve as instruments, helping to create new knowledge by changing the way the scientists see the biological world. Computers will allow them to see genomes, just as radio telescopes let them see quasars and electron microscopes let them see viruses.

    8. Internal combustion engines: Computer applications. (Latest citations from the EI Compendex plus database). Published Search

      SciTech Connect (OSTI)

      Not Available

      1993-10-01

      The bibliography contains citations concerning the application of computers and computerized simulations in the design, analysis, operation, and evaluation of various types of internal combustion engines and associated components and apparatus. Special attention is given to engine control and performance. (Contains a minimum of 67 citations and includes a subject term index and title list.)

    9. General Responsibilities and Requirements

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1999-07-09

      The material presented in this guide provides suggestions and acceptable ways of implementing DOE M 435.1-1 and should not be viewed as additional or mandatory requirements. The objective of the guide is to ensure that responsible individuals understand what is necessary and acceptable for implementing the requirements of DOE M 435.1-1.

    10. Evaluating iterative reconstruction performance in computed tomography

      SciTech Connect (OSTI)

      Chen, Baiyu Solomon, Justin; Ramirez Giraldo, Juan Carlos; Samei, Ehsan

      2014-12-15

      Purpose: Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. Methods: The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d′). d′ was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1–4 mm), contrast levels (10–100 HU), and edge profiles (sharp and soft). Unique d′ values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDI{sub vol}: 3.4–64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d′ values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. Results: IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction potentials (11–54 mGy, 77%–84%), followed by SAFIRE3 (7–36 mGy, 50%–61%) and IRIS (6–26 mGy, 37%–50%). The dose reduction potentials highly depended on task size and task contrast, with tasks of lower contrasts and smaller sizes, i.e., more challenging tasks, indicating higher dose reductions. Softer edge profile showed higher dose reduction potentials with SAFIRE3 and SAFIRE5, but not with IRIS. The task-generic comparison between IR and FBP demonstrated the overall superiority of IR performance, as IR allowed a larger percent of tasks to exceed the threshold AUC: IRIS, 8%–12%; SAFIRE3, 10%–16%; and SAFIRE5, 20%–33%. The improvement with IR was generally more pronounced at lower dose levels. Conclusions: Expanding beyond traditional contrast and noise based assessments of IR, we performed both task-specific and task-generic evaluations of IR performance. The task-specific evaluation demonstrated the dependency of IR’s dose reduction potential on task attributes, which can be employed to optimize IR for clinical indications with specific range of size and contrast. The task-generic evaluation demonstrated IR’s overall superiority over FBP in terms of the range of tasks exceeding a threshold performance level, which can be employed for general comparisons between algorithms.
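      The study's observer model is more elaborate than can be shown here, but the translation from a detectability index d′ to an AUC, and the 0.9 AUC threshold test, can be sketched as below under a standard equal-variance Gaussian assumption (AUC = Φ(d′/√2)); the d′ values in the loop are hypothetical.

      from math import sqrt
      from statistics import NormalDist

      def auc_from_dprime(d_prime: float) -> float:
          # AUC = Phi(d' / sqrt(2)) for an equal-variance binormal observer.
          return NormalDist().cdf(d_prime / sqrt(2))

      def meets_threshold(d_prime: float, auc_threshold: float = 0.9) -> bool:
          # A task is counted as detectable when the predicted AUC reaches the threshold.
          return auc_from_dprime(d_prime) >= auc_threshold

      for d in (0.5, 1.0, 1.81, 2.5):       # hypothetical detectability indices
          print(f"d'={d:4.2f} -> AUC={auc_from_dprime(d):.3f}, passes 0.9 threshold: {meets_threshold(d)}")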

    11. Cielo Computational Environment Usage Model With Mappings to ACE

      Office of Scientific and Technical Information (OSTI)

      Technical Report (SciTech Connect): Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release, Version 1.1.

    12. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Sandia Energy: High Performance Computing, within the Advanced Scientific Computing Research (ASCR) energy research area.

    13. Techno-Economic Analysis of the Deacetylation and Disk Refining Process. Characterizing the Effect of Refining Energy and Enzyme Usage on Minimum Sugar Selling Price and Minimum Ethanol Selling Price

      SciTech Connect (OSTI)

      Chen, Xiaowen; Shekiro, Joseph; Pschorn, Thomas; Sabourin, Marc; Tucker, Melvin P.; Tao, Ling

      2015-10-29

      A novel, highly efficient deacetylation and disk refining (DDR) process to liberate fermentable sugars from biomass was recently developed at the National Renewable Energy Laboratory (NREL). The DDR process consists of a mild, dilute alkaline deacetylation step followed by low-energy-consumption disk refining. The DDR corn stover substrates achieved high process sugar conversion yields, at low to modest enzyme loadings, and also produced high sugar concentration syrups at high initial insoluble solid loadings. The sugar syrups derived from corn stover are highly fermentable due to low concentrations of fermentation inhibitors. The objective of this work is to evaluate the economic feasibility of the DDR process through a techno-economic analysis (TEA). A large array of experiments designed using a response surface methodology was carried out to investigate the two major cost-driven operational parameters of the novel DDR process: refining energy and enzyme loadings. The boundary conditions for refining energy (128–468 kWh/ODMT), cellulase (Novozyme’s CTec3) loading (11.6–28.4 mg total protein/g of cellulose), and hemicellulase (Novozyme’s HTec3) loading (0–5 mg total protein/g of cellulose) were chosen to cover the most commercially practical operating conditions. The sugar and ethanol yields were modeled with good adequacy, showing a positive linear correlation between those yields and refining energy and enzyme loadings. The ethanol yields ranged from 77 to 89 gallons/ODMT of corn stover. The minimum sugar selling price (MSSP) ranged from $0.191 to $0.212 per lb of 50 % concentrated monomeric sugars, while the minimum ethanol selling price (MESP) ranged from $2.24 to $2.54 per gallon of ethanol. The DDR process concept is evaluated for economic feasibility through TEA. The MSSP and MESP of the DDR process falls within a range similar to that found with the deacetylation/dilute acid pretreatment process modeled in NREL’s 2011 design report. The DDR process is a much simpler process that requires less capital and maintenance costs when compared to conventional chemical pretreatments with pressure vessels. As a result, we feel the DDR process should be considered as an option for future biorefineries with great potential to be more cost-effective.
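      A minimal sketch of the kind of linear response-surface fit described above (yield as a function of refining energy and enzyme loading) is shown below; the design points and yield values are placeholders chosen inside the reported operating ranges, not NREL data.

      import numpy as np

      # Placeholder design points inside the reported ranges (not NREL measurements).
      energy = np.array([128.0, 128.0, 300.0, 300.0, 468.0, 468.0])   # kWh/ODMT
      enzyme = np.array([11.6, 28.4, 11.6, 28.4, 11.6, 28.4])         # mg protein/g cellulose
      yield_gal = np.array([77.0, 82.0, 80.0, 86.0, 83.0, 89.0])      # placeholder gal ethanol/ODMT

      # Ordinary least squares fit of yield = b0 + b1*energy + b2*enzyme.
      X = np.column_stack([np.ones_like(energy), energy, enzyme])
      (b0, b1, b2), *_ = np.linalg.lstsq(X, yield_gal, rcond=None)
      print(f"yield ~ {b0:.1f} + {b1:.4f}*energy + {b2:.3f}*enzyme")
      print("predicted yield at 250 kWh/ODMT and 20 mg/g:", b0 + b1 * 250 + b2 * 20)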

    14. Techno-Economic Analysis of the Deacetylation and Disk Refining Process. Characterizing the Effect of Refining Energy and Enzyme Usage on Minimum Sugar Selling Price and Minimum Ethanol Selling Price

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Chen, Xiaowen; Shekiro, Joseph; Pschorn, Thomas; Sabourin, Marc; Tucker, Melvin P.; Tao, Ling

      2015-10-29


    15. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long range plans to provide Leadership class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    16. CFD [computational fluid dynamics] And Safety Factors. Computer modeling of complex processes needs old-fashioned experiments to stay in touch with reality.

      SciTech Connect (OSTI)

      Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.

      2012-10-07

      Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged, centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied significantly from the average.

    17. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Computer Security. NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or modification. Among NERSC's security goals are: 1. To protect NERSC systems from unauthorized access. 2. To prevent the interruption of services to its users. 3. To prevent misuse or abuse of NERSC resources. Security incidents: if you think there has been a computer security incident you should contact NERSC Security as soon as

    18. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    19. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Astrophysics Consortium 3 - Supernovae, Gamma-Ray Bursts and Nucleosynthesis (Technical Report, SciTech Connect). Final project report for UCSC's participation in the Computational Astrophysics Consortium - Supernovae, Gamma-Ray Bursts and Nucleosynthesis. As an appendix, the report of the entire Consortium is also appended.

    20. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The goal of the Computer Architecture Laboratory (CAL) is to engage in research and development into energy efficient and effective processor and memory architectures for DOE's Exascale program. CAL coordinates hardware architecture R&D activities across the DOE. CAL is a joint NNSA/SC activity involving Sandia National Laboratories (CAL-Sandia) and

    1. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

    2. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... for use in Advanced Strategic Computing codes Theory and modeling of dense plasmas in ICF and astrophysics environments Theory and modeling of astrophysics in support of NASA ...

    3. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    4. Computer-aided dispatching system design specification

      SciTech Connect (OSTI)

      Briggs, M.G.

      1997-12-16

      This document defines the performance requirements for a graphic display dispatching system to support the Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operation at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

    5. Allocation Management | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Allocation Management Determining Allocation Requirements Querying Allocations Using cbank Mira/Cetus/Vesta Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Allocation Management Allocations require management - balance checks, resource allocation, requesting more time, etc. Checking for an active allocation To determine if there is an active allocation, check Running Jobs. For

    6. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing

    7. ARM - Reporting Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      required to report to the DOE ARM Program Director, to the DOE's Office of Biological and Environmental Research, and to the White House Office of Management and Budget. A primary...

    8. Experiment Safety Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Experiment Safety Requirements Print Safety at the ALS The mission of the ALS is to "Support users in doing outstanding science in a safe environment." How Do I...? Complete an...

    9. Experiment Safety Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Experiment Safety Experiment Safety Requirements Print Safety at the ALS The mission of the ALS is to "Support users in doing outstanding science in a safe environment." How Do...

    10. Green Building Requirement

      Broader source: Energy.gov [DOE]

      The new standards are phased in over the course of several years with publicly-owned buildings being the first required to comply. All new construction and substantial improvements of non...

    11. Requirements for security signalling

      SciTech Connect (OSTI)

      Pierson, L.G.; Tarman, T.D.

      1995-02-05

      There has been some interest lately in the need for "authenticated signalling", and the development of signalling specifications by the ATM Forum that support this need. The purpose of this contribution is to show that if authenticated signalling is required, then supporting signalling facilities for directory services (i.e. key management) are also required. Furthermore, this contribution identifies other security-related mechanisms that may also benefit from ATM-level signalling accommodations. For each of the mechanisms outlined here, an overview of the signalling issues and a rough cut at the required fields for supporting Information Elements are provided. Finally, since each of these security mechanisms is specified by a number of different standards, issues pertaining to the selection of a particular security mechanism at connection setup time (i.e. specification of a required "Security Quality of Service") are also discussed.

    12. Promulgating Nuclear Safety Requirements

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1996-05-15

      Applies to all Nuclear Safety Requirements Adopted by the Department to Govern the Conduct of its Nuclear Activities. Cancels DOE P 410.1. Canceled by DOE N 251.85.

    13. Selected Guidance & Requirements

      Broader source: Energy.gov [DOE]

      This page contains the most requested NEPA guidance and requirement documents and those most often recommended by the Office of NEPA Policy and Compliance. Documents are listed by agency, in...

    14. Regulators, Requirements, Statutes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Regulators, Requirements, Statutes Regulators, Requirements, Statutes The Laboratory must comply with environmental laws and regulations that apply to Laboratory operations. Contact Environmental Communication & Public Involvement P.O. Box 1663 MS M996 Los Alamos, NM 87545 (505) 667-0216 Email Environmental laws and regulations LANL complies with more than 30 state and federal regulations and policies designed to protect human health and the environment. Regulators Regulators Environmental

    15. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math National security depends on science ...

    16. Trusted Computing Technologies, Intel Trusted Execution Technology.

      SciTech Connect (OSTI)

      Guise, Max Joseph; Wendt, Jeremy Daniel

      2011-01-01

      We describe the current state-of-the-art in Trusted Computing Technologies - focusing mainly on Intel's Trusted Execution Technology (TXT). This document is based on existing documentation and tests of two existing TXT-based systems: Intel's Trusted Boot and Invisible Things Lab's Qubes OS. We describe what features are lacking in current implementations, describe what a mature system could provide, and present a list of developments to watch. Critical systems perform operation-critical computations on high importance data. In such systems, the inputs, computation steps, and outputs may be highly sensitive. Sensitive components must be protected from both unauthorized release, and unauthorized alteration: Unauthorized users should not access the sensitive input and sensitive output data, nor be able to alter them; the computation contains intermediate data with the same requirements, and executes algorithms that the unauthorized should not be able to know or alter. Due to various system requirements, such critical systems are frequently built from commercial hardware, employ commercial software, and require network access. These hardware, software, and network system components increase the risk that sensitive input data, computation, and output data may be compromised.

    17. Theory, Simulation, and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ADTSC Theory, Simulation, and Computation Supporting the Laboratory's overarching strategy to provide cutting-edge tools to guide and interpret experiments and further our fundamental understanding and predictive capabilities for complex systems. Theory, modeling, informatics Suites of experiment data High performance computing, simulation, visualization Contacts Associate Director John Sarrao Deputy Associate Director Paul Dotson Directorate Office (505) 667-6645 Email Applying the Scientific

    18. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
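      The snippet below is a minimal Python sketch of the kind of allocation decision described above: healthy, unallocated processors are selected from a persistent record and marked as owned by a job. The record fields and the in-memory "database" are assumptions made for illustration; the actual CPA schema and interfaces are not described in the record above.

      # Hedged sketch of a processor-allocation decision over a persistent record.
      from dataclasses import dataclass
      from typing import List, Optional

      @dataclass
      class ProcessorRecord:
          node_id: int
          healthy: bool
          allocated_to: Optional[str]   # job name, or None if the processor is free

      def allocate(records: List[ProcessorRecord], job_name: str, count: int) -> List[int]:
          """Mark `count` healthy, unallocated processors as owned by `job_name`."""
          free = [r for r in records if r.healthy and r.allocated_to is None]
          if len(free) < count:
              raise RuntimeError("not enough healthy, unallocated processors")
          chosen = free[:count]
          for r in chosen:
              r.allocated_to = job_name   # the real CPA would persist this update in its database
          return [r.node_id for r in chosen]

      db = [ProcessorRecord(i, healthy=(i != 3), allocated_to=None) for i in range(8)]
      print(allocate(db, "jobA", 4))   # node 3 is marked unhealthy, so it is skipped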

    19. GALACTIC COSMIC-RAY ENERGY SPECTRA AND COMPOSITION DURING THE 2009-2010 SOLAR MINIMUM PERIOD

      SciTech Connect (OSTI)

      Lave, K. A.; Binns, W. R.; Israel, M. H.; Wiedenbeck, M. E.; Christian, E. R.; De Nolfo, G. A.; Von Rosenvinge, T. T.; Cummings, A. C.; Davis, A. J.; Leske, R. A.; Mewaldt, R. A.; Stone, E. C.

      2013-06-20

      We report new measurements of the elemental energy spectra and composition of galactic cosmic rays during the 2009-2010 solar minimum period using observations from the Cosmic Ray Isotope Spectrometer (CRIS) onboard the Advanced Composition Explorer. This period of time exhibited record-setting cosmic-ray intensities and very low levels of solar activity. Results are given for particles with nuclear charge 5 ≤ Z ≤ 28 in the energy range ~50-550 MeV/nucleon. Several recent improvements have been made to the earlier CRIS data analysis, and therefore updates of our previous observations for the 1997-1998 solar minimum and 2001-2003 solar maximum are also given here. For most species, the reported intensities changed by less than ~7%, and the relative abundances changed by less than ~4%. Compared with the 1997-1998 solar minimum relative abundances, the 2009-2010 abundances differ by less than 2σ, with a trend of fewer secondary species observed in the more recent time period. The new 2009-2010 data are also compared with results of a simple "leaky-box" galactic transport model combined with a spherically symmetric solar modulation model. We demonstrate that this model is able to give reasonable fits to the energy spectra and the secondary-to-primary ratios B/C and (Sc+Ti+V)/Fe. These results are also shown to be comparable to a GALPROP numerical model that includes the effects of diffusive reacceleration in the interstellar medium.

    20. Indirection and computer security.

      SciTech Connect (OSTI)

      Berg, Michael J.

      2011-09-01

      The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

    1. Exotic equilibria of Harary graphs and a new minimum degree lower bound for synchronization

      SciTech Connect (OSTI)

      Canale, Eduardo A.; Monzón, Pablo

      2015-02-15

      This work is concerned with stability of equilibria in the homogeneous (equal frequencies) Kuramoto model of weakly coupled oscillators. In 2012 [R. Taylor, J. Phys. A: Math. Theor. 45, 115 (2012)], a sufficient condition for almost global synchronization was found in terms of the minimum degree-to-order ratio of the graph. In this work, a new lower bound for this ratio is given. The improvement is achieved by a concrete infinite sequence of regular graphs. In addition, nonstandard unstable equilibria of the graphs studied in Wiley et al. [Chaos 16, 015103 (2006)] are shown to exist, as conjectured in that work.

    2. Confronting Regulatory Cost and Quality Expectations. An Exploration of Technical Change in Minimum Efficiency Performance Standards

      SciTech Connect (OSTI)

      Taylor, Margaret; Spurlock, C. Anna; Yang, Hung-Chia

      2015-09-21

      The dual purpose of this project was to contribute to basic knowledge about the interaction between regulation and innovation and to inform the cost and benefit expectations related to technical change which are embedded in the rulemaking process of an important area of national regulation. The area of regulation focused on here is minimum efficiency performance standards (MEPS) for appliances and other energy-using products. Relevant both to U.S. climate policy and energy policy for buildings, MEPS remove certain product models from the market that do not meet specified efficiency thresholds.

    3. Method for selecting minimum width of leaf in multileaf adjustable collimator while inhibiting passage of particle beams of radiation through sawtooth joints between collimator leaves

      DOE Patents [OSTI]

      Ludewigt, Bernhard; Bercovitz, John; Nyman, Mark; Chu, William

      1995-01-01

      A method is disclosed for selecting the minimum width of individual leaves of a multileaf adjustable collimator having sawtooth top and bottom surfaces between adjacent leaves of a first stack of leaves and sawtooth end edges which are capable of intermeshing with the corresponding sawtooth end edges of leaves in a second stack of leaves of the collimator. The minimum width of individual leaves in the collimator, each having a sawtooth configuration in the surface facing another leaf in the same stack and a sawtooth end edge, is selected to comprise the sum of the penetration depth or range of the particular type of radiation comprising the beam in the particular material used for forming the leaf; plus the total path length across all the air gaps in the area of the joint at the edges between two leaves defined between lines drawn across the peaks of adjacent sawtooth edges; plus at least one half of the length or period of a single sawtooth. To accomplish this, in accordance with the method of the invention, the penetration depth of the particular type of radiation in the particular material to be used for the collimator leaf is first measured. Then the distance or gap between adjoining or abutting leaves is selected, and the ratio of this distance to the height of the sawteeth is selected. Finally the number of air gaps through which the radiation will pass between sawteeth is determined by selecting the number of sawteeth to be formed in the joint. The measurement and/or selection of these parameters will permit one to determine the minimum width of the leaf which is required to prevent passage of the beam through the sawtooth joint.
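      A minimal sketch of the width rule stated in the abstract: the minimum leaf width is the penetration depth of the radiation in the leaf material, plus the total air-gap path length across the joint, plus at least one half of a sawtooth period. The numerical values in the example below are illustrative only and do not come from the patent.

      # Hedged sketch of the minimum-leaf-width rule described above.
      def minimum_leaf_width(penetration_depth_mm, gap_mm, n_air_gaps, sawtooth_period_mm):
          # total air-gap path = gap width times the number of gaps the beam crosses in the joint
          total_air_gap_path_mm = gap_mm * n_air_gaps
          return penetration_depth_mm + total_air_gap_path_mm + 0.5 * sawtooth_period_mm

      # Example: 40 mm penetration depth, a 0.1 mm gap crossed 4 times, and a 10 mm sawtooth period.
      print(minimum_leaf_width(40.0, 0.1, 4, 10.0))   # -> 45.4 mm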

    4. SU-F-18C-01: Minimum Detectability Analysis for Comprehensive Sized Based Optimization of Image Quality and Radiation Dose Across CT Protocols

      SciTech Connect (OSTI)

      Smitherman, C; Chen, B; Samei, E

      2014-06-15

      Purpose: This work involved a comprehensive modeling of task-based performance of CT across a wide range of protocols. The approach was used for optimization and consistency of dose and image quality within a large multi-vendor clinical facility. Methods: 150 adult protocols from the Duke University Medical Center were grouped into sub-protocols with similar acquisition characteristics. A size-based image quality phantom (Duke Mercury Phantom) was imaged using these sub-protocols for a range of clinically relevant doses on two CT manufacturer platforms (Siemens, GE). The images were analyzed to extract task-based image quality metrics such as the Task Transfer Function (TTF), Noise Power Spectrum, and Az based on designer nodule task functions. The data were analyzed in terms of the detectability of a lesion size/contrast as a function of dose, patient size, and protocol. A graphical user interface (GUI) was developed to predict image quality and dose to achieve a minimum level of detectability. Results: Image quality trends with variations in dose, patient size, and lesion contrast/size were evaluated, and the calculated data behaved as predicted. The GUI proved effective in predicting the Az values representing radiologist confidence for a targeted lesion, patient size, and dose. As an example, an abdomen-pelvis exam for the GE scanner, with a task size/contrast of 5-mm/50-HU, and an Az of 0.9 requires a dose of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm, the minimum detected lesion size at those dose levels would be 8.4, 5, and 3.9 mm, respectively. Conclusion: The designed CT protocol optimization platform can be used to evaluate minimum detectability across dose levels and patient diameters. The method can be used to improve individual protocols as well as to improve protocol consistency across CT scanners.
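      The abstract quotes three (diameter, dose) pairs for a single task; a plausible reading of what the GUI does is a lookup of the required dose for an arbitrary patient size. The sketch below interpolates between the published points; linear interpolation between them is an assumption of this sketch, not something stated in the abstract.

      # Hedged sketch of a dose-for-size lookup using only the example values quoted above
      # (GE scanner, 5-mm/50-HU task, Az = 0.9).
      import numpy as np

      diameters_cm = np.array([25.0, 30.0, 35.0])
      required_dose_mGy = np.array([4.0, 8.9, 16.9])

      def dose_for_diameter(d_cm):
          # linearly interpolated required dose for a patient of diameter d_cm
          return float(np.interp(d_cm, diameters_cm, required_dose_mGy))

      print(dose_for_diameter(28.0))   # interpolated dose for a 28 cm patient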

    5. Toward Molecular Catalysts by Computer

      SciTech Connect (OSTI)

      Raugei, Simone; DuBois, Daniel L.; Rousseau, Roger J.; Chen, Shentan; Ho, Ming-Hsun; Bullock, R. Morris; Dupuis, Michel

      2015-02-17

      Rational design of molecular catalysts requires a systematic approach to designing ligands with specific functionality and precisely tailored electronic and steric properties. It then becomes possible to devise computer protocols to predict accurately the required properties and ultimately to design catalysts by computer. In this account we first review how thermodynamic properties such as oxidation-reduction potentials (E0), acidities (pKa), and hydride donor abilities (ΔGH-) form the basis for a systematic design of molecular catalysts for reactions that are critical for a secure energy future (hydrogen evolution and oxidation, oxygen and nitrogen reduction, and carbon dioxide reduction). We highlight how density functional theory allows us to determine and predict these properties within “chemical” accuracy (~ 0.06 eV for redox potentials, ~ 1 pKa unit for pKa values, and ~ 1.5 kcal/mol for hydricities). These quantities determine free energy maps and profiles associated with catalytic cycles, i.e. the relative energies of intermediates, and help us distinguish between desirable and high-energy pathways and mechanisms. Good catalysts have flat profiles that avoid high activation barriers due to low and high energy intermediates. We illustrate how the criterion of a flat energy profile lends itself to the prediction of design points by computer for optimum catalysts. This research was carried out in the Center for Molecular Electro-catalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences. Pacific Northwest National Laboratory (PNNL) is operated for the DOE by Battelle.

    6. Computation Directorate 2007 Annual Report

      SciTech Connect (OSTI)

      Henson, V E; Guse, J A

      2008-03-06

      If there is a single word that both characterized 2007 and dominated the thoughts and actions of many Laboratory employees throughout the year, it is transition. Transition refers to the major shift that took place on October 1, when the University of California relinquished management responsibility for Lawrence Livermore National Laboratory (LLNL), and Lawrence Livermore National Security, LLC (LLNS), became the new Laboratory management contractor for the Department of Energy's (DOE's) National Nuclear Security Administration (NNSA). In the 55 years under the University of California, LLNL amassed an extraordinary record of significant accomplishments, clever inventions, and momentous contributions in the service of protecting the nation. This legacy provides the new organization with a built-in history, a tradition of excellence, and a solid set of core competencies from which to build the future. I am proud to note that in the nearly seven years I have had the privilege of leading the Computation Directorate, our talented and dedicated staff has made far-reaching contributions to the legacy and tradition we passed on to LLNS. Our place among the world's leaders in high-performance computing, algorithmic research and development, applications, and information technology (IT) services and support is solid. I am especially gratified to report that through all the transition turmoil, and it has been considerable, the Computation Directorate continues to produce remarkable achievements. Our most important asset--the talented, skilled, and creative people who work in Computation--has continued a long-standing Laboratory tradition of delivering cutting-edge science even in the face of adversity. The scope of those achievements is breathtaking, and in 2007, our accomplishments span an amazing range of topics. From making an important contribution to a Nobel Prize-winning effort to creating tools that can detect malicious codes embedded in commercial software; from expanding BlueGene/L, the world's most powerful computer, by 60% and using it to capture the most prestigious prize in the field of computing, to helping create an automated control system for the National Ignition Facility (NIF) that monitors and adjusts more than 60,000 control and diagnostic points; from creating a microarray probe that rapidly detects virulent high-threat organisms, natural or bioterrorist in origin, to replacing large numbers of physical computer servers with small numbers of virtual servers, reducing operating expense by 60%, the people in Computation have been at the center of weighty projects whose impacts are felt across the Laboratory and the DOE community. The accomplishments I just mentioned, and another two dozen or so, make up the stories contained in this report. While they form an exceptionally diverse set of projects and topics, it is what they have in common that excites me. They share the characteristic of being central, often crucial, to the mission-driven business of the Laboratory. Computational science has become fundamental to nearly every aspect of the Laboratory's approach to science and even to the conduct of administration. It is difficult to consider how we would proceed without computing, which occurs at all scales, from handheld and desktop computing to the systems controlling the instruments and mechanisms in the laboratories to the massively parallel supercomputers. The reasons for the dramatic increase in the importance of computing are manifest. 
Practical, fiscal, or political realities make the traditional approach to science, the cycle of theoretical analysis leading to experimental testing, leading to adjustment of theory, and so on, impossible, impractical, or forbidden. How, for example, can we understand the intricate relationship between human activity and weather and climate? We cannot test our hypotheses by experiment, which would require controlled use of the entire earth over centuries. It is only through extremely intricate, detailed computational simulation that we can test our theories, and simulating weather and climate over the entire globe requires the most massive high-performance computers that exist. Such extreme problems are found in numerous laboratory missions, including astrophysics, weapons programs, materials science, and earth science.

    7. NP Science Network Requirements

      SciTech Connect (OSTI)

      Dart, Eli; Rotman, Lauren; Tierney, Brian

      2011-08-26

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. To support SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2011, ESnet and the Office of Nuclear Physics (NP), of the DOE SC, organized a workshop to characterize the networking requirements of the programs funded by NP. The requirements identified at the workshop are summarized in the Findings section, and are described in more detail in the body of the report.

    8. Requirements | Photosynthetic Antenna Research Center

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Requirements Requirements Students must earn a total of 11 points from the following options: Please note: To receive points toward the certificate, students are required to submit...

    9. Applications in Data-Intensive Computing

      SciTech Connect (OSTI)

      Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.; Cannon, William R.; Chavarría-Miranda, Daniel; Choudhury, Sutanay; Gorton, Ian; Gracio, Deborah K.; Halter, Todd D.; Jaitly, Navdeep; Johnson, John R.; Kouzes, Richard T.; Macduff, Matt C.; Marquez, Andres; Monroe, Matthew E.; Oehmen, Christopher S.; Pike, William A.; Scherrer, Chad; Villa, Oreste; Webb-Robertson, Bobbie-Jo M.; Whitney, Paul D.; Zuljevic, Nino

      2010-04-01

      This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data-intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data-intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

    10. Minimum separation distances for natural gas pipeline and boilers in the 300 area, Hanford Site

      SciTech Connect (OSTI)

      Daling, P.M.; Graham, T.M.

      1997-08-01

      The U.S. Department of Energy (DOE) is proposing actions to reduce energy expenditures and improve energy system reliability at the 300 Area of the Hanford Site. These actions include replacing the centralized heating system with heating units for individual buildings or groups of buildings, constructing a new natural gas distribution system to provide a fuel source for many of these units, and constructing a central control building to operate and maintain the system. The individual heating units will include steam boilers that are to be housed in individual annex buildings located at some distance away from nearby 300 Area nuclear facilities. This analysis develops the basis for siting the package boilers and natural gas distribution systems to be used to supply steam to 300 Area nuclear facilities. The effects of four potential fire and explosion scenarios involving the boiler and natural gas pipeline were quantified to determine minimum separation distances that would reduce the risks to nearby nuclear facilities. The resulting minimum separation distances are shown in Table ES.1.

    11. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic "machine," is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that on the one hand people can and do abuse computer systems, and on the other hand people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services, and information. The latter can be exemplified by violation of privacy, health hazards, and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people, and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    12. Requirements Definition Stage

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1997-05-21

      This chapter addresses development of a Software Configuration Management Plan to track and control work products, analysis of the system owner/users' business processes and needs, translation of those processes and needs into formal requirements, and planning the testing activities to validate the performance of the software product.

    13. Requirements for Xenon International

      SciTech Connect (OSTI)

      Hayes, James C.; Ely, James H.; Haas, Derek A.; Harper, Warren W.; Heimbigner, Tom R.; Hubbard, Charles W.; Humble, Paul H.; Madison, Jill C.; Morris, Scott J.; Panisko, Mark E.; Ripplinger, Mike D.; Stewart, Timothy L.

      2013-09-26

      This document defines the requirements for the new Xenon International radioxenon system. The output of this project will be a Pacific Northwest National Laboratory (PNNL) developed prototype and a manufacturer-developed production prototype. The two prototypes are intended to be as close to matching as possible; this will be facilitated by overlapping development cycles and open communication between PNNL and the manufacturer.

    14. Contractor Legal Management Requirements

      Broader source: Energy.gov [DOE]

      The purpose of this flash is to inform you of the issuance of two new Acquisition Guide Chapters, Chapters 70-31 C and 31.3, both titled "Contractor Legal Management Requirements." (Chapter 31.3 simply refers you to Chapter 70-31 C.)

    15. Transportation Infrastructure Requirement Resources | Department...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Infrastructure Requirement Resources Transportation Infrastructure Requirement Resources ... Establish Alternative Fuel Infrastructure. Back to Transportation Policies and Programs.

    16. Present and Future Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Important for DOE Energy Frontier Mission 2 * TH HEP is new ... & PDSF (studies based on usage for end of Sep 2012 - Nov ... framework (Sherpa), and a library for the computation of ...

    17. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility Annual Report 2012. Contents include the Director's Message, About ALCF, and Introducing Mira.

    18. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to address the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    19. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G.

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    20. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Deployment of Edison was made possible in part by funding from DOE's Office of Science and the DARPA High Productivity Computing Systems program. DOE's Office of Science is the ...

    1. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Key facts about the Argonne Leadership Computing Facility: User support and services. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. Catalysts are computational scientists with domain expertise who work directly with project principal investigators to maximize discovery and reduce time-to-solution.

    2. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster, which has now been named Phoenix. Zephyr was acquired from Atipa Technologies, and it is a 92-node system with each node having two AMD 16-core, 2.3 GHz, 32 GB processors. See also Computing Resources.

    3. Computational Modeling | Bioenergy | NREL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Modeling NREL uses computational modeling to increase the efficiency of biomass conversion by rational design using multiscale modeling, applying theoretical approaches, and testing scientific hypotheses. model of enzymes wrapping on cellulose; colorful circular structures entwined through blue strands Cellulosomes are complexes of protein scaffolds and enzymes that are highly effective in decomposing biomass. This is a snapshot of a coarse-grain model of complex cellulosome

    4. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms Growth and emissivity of young galaxy hosting a supermassive black hole as calculated in cosmological code ENZO and post-processed with radiative transfer code AURORA. image showing detailed turbulence simulation, Rayleigh-Taylor Turbulence imaging: the largest turbulence simulations to date Advanced multi-scale modeling Turbulence datasets Density iso-surfaces

    5. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      NA-ASC-117R-09-Vol.1-Rev.0, Advanced Simulation and Computing Program Plan FY09, October 2008. ASC Focal Point: Robert Meisner, Director, DOE/NNSA NA-121.2, 202-586-0908. Program Plan Focal Point for NA-121.2: Njema Frazier, DOE/NNSA NA-121.2, 202-586-5789. A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary; I. Introduction

    6. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Reservation Request Form Compute Reservation Request Form Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    7. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7 Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Email Deputy Group Leader (Acting) Bryan Lally Email Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These colors were

    8. Audit of Desktop Computer Acquisitions at the Idaho National...

      Office of Environmental Management (EM)

      require them to pay the lowest possible prices for desktop computers needed to support ... However, the audit showed that Lockheed did not always pay the lowest possible prices for ...

    9. Computational Tools to Assess Turbine Biological Performance

      SciTech Connect (OSTI)

      Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.; Strickler, Brad; Weisbeck, Molly; Dotson, Curtis L.

      2014-07-24

      Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
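      A hedged sketch of the BioPA idea as described above: an exposure distribution derived from simulated fish paths is combined with a laboratory dose-response curve to estimate the probability of injury for a turbine design. All dose bins and probabilities below are placeholders, not Priest Rapids data.

      # Hedged sketch: expected injury probability from exposure and dose-response curves.
      import numpy as np

      dose_bins = np.array([0.0, 0.5, 1.0, 2.0])               # injury-mechanism dose levels (hypothetical units)
      p_exposure = np.array([0.70, 0.20, 0.07, 0.03])          # fraction of simulated paths in each bin (sums to 1)
      p_injury_given_dose = np.array([0.0, 0.01, 0.05, 0.20])  # laboratory dose-response values (hypothetical)

      expected_injury_probability = float(np.sum(p_exposure * p_injury_given_dose))
      print(f"estimated injury probability for this design: {expected_injury_probability:.4f}")

      Comparing this single number across proposed runner designs is, in spirit, how the abstract describes identifying the more-promising alternatives.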

    10. computing | National Nuclear Security Administration

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computing NNSA Announces Procurement of Penguin Computing Clusters to Support Stockpile Stewardship at National Labs The National Nuclear Security Administration's (NNSA's) Lawrence Livermore National Laboratory today announced the awarding of a subcontract to Penguin Computing - a leading developer of high-performance Linux cluster computing systems based in Silicon Valley - to bolster computing for stockpile

    11. Experiment Safety Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Experiment Safety Requirements Print Safety at the ALS The mission of the ALS is to "Support users in doing outstanding science in a safe environment." How Do I...? Complete an Experiment Safety Sheet? (Do this upon receiving beam time.) Complete Safety Training? Bring and Use Electrical Equipment at the ALS? Determine what Personal Protective Equipment (PPE) to Wear? Get Authorization to Work with Lasers at the ALS? Ship Radioactive Materials to LBNL for Use at the ALS? Ship Samples

    14. LASSO* - Science Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      LASSO: LES ARM Symbiotic Simulation and Observation (LASSO) workflow. Andy Vogelmann (Brookhaven National Laboratory), William I. Gustafson Jr. (Pacific Northwest National Laboratory), Zhijin Li (University of California Los Angeles and NASA Jet Propulsion Laboratory), Xiaoping Cheng (University of California Los Angeles), Satoshi Endo (Brookhaven National Laboratory), Tami Toto (Brookhaven National Laboratory), and Heng Xiao (Pacific Northwest National Laboratory), with many contributors from the rest of ARM. LASSO webpage: http://www.arm.gov/science/themes/lasso LASSO e-mail list sign up:

    18. Performance Modeling for 3D Visualization in a Heterogeneous Computing

      Office of Scientific and Technical Information (OSTI)

      Environment (Technical Report) | SciTech Connect Performance Modeling for 3D Visualization in a Heterogeneous Computing Environment Citation Details In-Document Search Title: Performance Modeling for 3D Visualization in a Heterogeneous Computing Environment The visualization of large, remotely located data sets necessitates the development of a distributed computing pipeline in order to reduce the data, in stages, to a manageable size. The required baseline infrastructure for launching such

    19. Extreme Scale Computing to Secure the Nation

      SciTech Connect (OSTI)

      Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

      2009-11-10

      Since the dawn of modern electronic computing in the mid-1940s, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high-performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design and numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance, and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program under the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today.
In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT) together with the U.S administration's promise for a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence of the safety and reliability without reliance upon calibration with past or future test data is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be developed that replace phenomenology with increased levels of scientific understanding together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article discusses not only the need for a future computing capability at the exascale for the SBSS program, but also considers high performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies as well as a rapid forensic capability for determining a nuclear weapons design from post-detonation evidence (nuclear counterterrorism).

    20. LEGACY MANAGEMENT REQUIRES INFORMATION

      SciTech Connect (OSTI)

      CONNELL, C.W.; HILDEBRAND, R.D.

      2006-12-14

      "Legacy Management Requires Information" describes the goal(s) of the US Department of Energy's Office of Legacy Management (LM) relative to maintaining critical records and the way those goals are being addressed at Hanford. The paper discusses the current practices for document control, as well as the use of modern databases for both storing and accessing the data to support cleanup decisions. In addition to the information goals of LM, the Hanford Federal Facility Agreement and Consent Order, known as the "Tri-Party Agreement" (TPA), is one of the main drivers in documentation and data management. The TPA, which specifies discrete milestones for cleaning up the Hanford Site, is a legally binding agreement among the US Department of Energy (DOE), the Washington State Department of Ecology (Ecology), and the US Environmental Protection Agency (EPA). The TPA requires that DOE provide the lead regulatory agency with the results of analytical laboratory and non-laboratory tests/readings to help guide them in making decisions. The Agreement also calls for each signatory to preserve, for at least ten years after the Agreement has ended, all of the records in its or its contractors' possession related to sampling, analysis, investigations, and monitoring conducted. The tools used at Hanford to meet TPA requirements are also the tools that can satisfy the needs of LM.

    1. MarFS-Requirements-Design-Configuration-Admin

      SciTech Connect (OSTI)

      Kettering, Brett Michael; Grider, Gary Alan

      2015-07-08

      This document is organized into sections defined by the requirements for a file system that presents a near-POSIX (Portable Operating System Interface) interface to the user but whose data is stored in whatever form is most efficient for the type of data being stored. After each requirement is defined, the design for meeting it is explained. Finally, there are sections on configuring and administering this file system. More and more, data dominates the computing world. There is a sea of data out there in many different formats that needs to be managed and used. "Mar" means sea in Spanish; thus, this product is dubbed MarFS, a file system for a sea of data.

    2. in High Performance Computing Computer System, Cluster, and Networking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    3. On the minimum dark matter mass testable by neutrinos from the Sun

      SciTech Connect (OSTI)

      Busoni, Giorgio; De Simone, Andrea; Huang, Wei-Chih

      2013-07-01

      We discuss a limitation on extracting bounds on the scattering cross section of dark matter with nucleons, using neutrinos from the Sun. If the dark matter particle is sufficiently light (less than about 4 GeV), the effect of evaporation is not negligible and the capture process reaches equilibrium with evaporation. In this regime, the flux of solar neutrinos of dark matter origin becomes independent of the scattering cross section, and therefore no constraint can be placed on it. We find the minimum values of dark matter masses for which the scattering cross section on nucleons can be probed using neutrinos from the Sun. We also provide simple and accurate fitting functions for all the relevant processes of GeV-scale dark matter in the Sun.

    4. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A

      SciTech Connect (OSTI)

      Gao, J. M.; Liu, Y.; Li, W.; Lu, J.; Dong, Y. B.; Xia, Z. W.; Yi, P.; Yang, Q. W.

      2013-09-15

      An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. Three-dimensional tomography, reduced to a two-dimensional problem by assuming toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil-beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness, and the local plasma emission is obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize the regularization parameter using the generalized cross-validation criterion. Finally, this method was applied to HL-2A experiments, yielding the plasma radiated power density distribution in limiter and divertor discharges.
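      The Python sketch below illustrates a regularized inversion in the spirit of the method described above: the chord-brightness system T f = b is solved with a smoothing penalty whose weight is updated from the current emissivity estimate, loosely mimicking a minimum Fisher scheme. The specific weighting, parameter value, and synthetic geometry are assumptions of the sketch, not details taken from the paper.

      # Hedged sketch of a minimum-Fisher-style regularized tomographic inversion:
      # solve (T^T T + alpha*W) f = T^T b, updating W from the current estimate.
      import numpy as np

      def minimum_fisher_invert(T, b, alpha=1e-2, iterations=5):
          n = T.shape[1]
          f = np.full(n, max(b.mean(), 1e-12))              # flat, positive starting guess
          for _ in range(iterations):
              W = np.diag(1.0 / np.clip(f, 1e-12, None))    # Fisher-information-like weight (~1/f)
              f = np.linalg.solve(T.T @ T + alpha * W, T.T @ b)
              f = np.clip(f, 0.0, None)                     # enforce non-negative emissivity
          return f

      # Tiny synthetic example: 6 chords viewing 4 emission cells.
      rng = np.random.default_rng(0)
      T = rng.random((6, 4))                                # geometry matrix (chord path lengths)
      f_true = np.array([0.0, 1.0, 2.0, 0.5])               # "true" local emission
      b = T @ f_true                                        # measured chord brightness
      print(minimum_fisher_invert(T, b))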

    5. A fast tomographic method for searching the minimum free energy path

      SciTech Connect (OSTI)

      Chen, Changjun; Huang, Yanzhao; Xiao, Yi; Jiang, Xuewei

      2014-10-21

      The Minimum Free Energy Path (MFEP) provides important information about a chemical reaction, such as the free energy barrier, the location of the transition state, and the relative stability of reactant and product. With the MFEP, one can study reaction mechanisms efficiently. Because of the large number of degrees of freedom, searching for the MFEP is a very time-consuming process. Here, we present a fast tomographic method to perform the search. Our approach first calculates the free energy surfaces in a sequence of hyperplanes perpendicular to a transition path. Based on an objective function and the free energy gradient, the transition path is then optimized iteratively in the collective variable space. Applications of the present method to model systems show that it is practical and can serve as an alternative approach for finding the state-to-state MFEP.
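      As a rough illustration of iterative path refinement in a collective-variable space, the sketch below relaxes a discretized path on an analytic two-dimensional double-well surface, moving interior images down the gradient and re-spacing them each pass. The toy surface and the plain gradient step stand in for the free-energy surfaces and the tomographic objective function of the paper; none of this is the authors' algorithm.

      # Hedged sketch: string-method-like relaxation of a discretized path on a toy surface.
      import numpy as np

      def gradient(x, y):
          # gradient of F(x, y) = (1 - x^2)^2 + 0.5*(y - x)^2, a toy double-well surface
          dx = -4 * x * (1 - x**2) - (y - x)
          dy = (y - x)
          return np.array([dx, dy])

      # Straight initial guess between the two minima at (-1, -1) and (1, 1).
      path = np.linspace([-1.0, -1.0], [1.0, 1.0], 20)

      step = 0.01
      for _ in range(500):
          for i in range(1, len(path) - 1):                 # keep the end points fixed
              path[i] -= step * gradient(*path[i])
          # re-parameterize so the images stay evenly spaced along the path
          seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
          s = np.concatenate([[0.0], np.cumsum(seg)])
          even = np.linspace(0.0, s[-1], len(path))
          path = np.column_stack([np.interp(even, s, path[:, 0]),
                                  np.interp(even, s, path[:, 1])])

      print(path[len(path) // 2])                           # approximate transition-state location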

    6. The turbulent cascade and proton heating in the solar wind during solar minimum

      SciTech Connect (OSTI)

      Coburn, Jesse T.; Smith, Charles W.; Vasquez, Bernard J.; Stawarz, Joshua E.; Forman, Miriam A.

      2013-06-13

      Solar wind measurements at 1 AU during the recent solar minimum and previous studies of solar maximum provide an opportunity to study the effects of the changing solar cycle on in situ heating. Our interest is to compare the levels of activity associated with turbulence and proton heating. Large-scale shears in the flow caused by transient activity are a source that drives turbulence that heats the solar wind, but as the solar cycle progresses the dynamics that drive the turbulence and heat the medium are likely to change. The application of third-moment theory to Advanced Composition Explorer (ACE) data gives the turbulent energy cascade rate, which is not seen to vary with the solar cycle. Likewise, an empirical heating rate shows no significant changes in proton heating over the cycle.

    7. Observed Minimum Illuminance Threshold for Night Market Vendors in Kenya who use LED Lamps

      SciTech Connect (OSTI)

      Johnstone, Peter; Jacobson, Arne; Mills, Evan; Radecsky, Kristen

      2009-03-21

      Creation of light for work, socializing, and general illumination is a fundamental application of technology around the world. For those who lack access to electricity, an emerging and diverse range of LED-based lighting products holds promise for replacing and/or augmenting their current fuel-based lighting sources that are costly and dirty. Along with analysis of environmental factors, economic models for total cost-of-ownership of LED lighting products are an important tool for studying the impacts of these products as they emerge in markets of developing countries. One important metric in those models is the minimum illuminance demanded by end-users for a given task before recharging the lamp or replacing batteries. It impacts the lighting service cost per unit time if charging is done with purchased electricity, batteries, or charging services. The concept is illustrated in figure 1: LED lighting products are generally brightest immediately after the battery is charged or replaced and the illuminance degrades as the battery is discharged. When a minimum threshold level of illuminance is reached, the operational time for the battery charge cycle is over. The cost to recharge depends on the method utilized; these include charging at a shop at a fixed price per charge, charging on personal grid connections, using solar chargers, and purchasing dry cell batteries. This Research Note reports on the observed "charge-triggering" illuminance level threshold for night market vendors who use LED lighting products to provide general and task-oriented illumination. All the study participants charged with AC power, either at a fixed-price charge shop or with electricity at their home.
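
      The role of the charge-triggering threshold in a cost-of-ownership model can be sketched in a few lines; all numbers below are invented for illustration and are not taken from the study.

        # Toy cost-of-light calculation; all numbers are invented for illustration.
        hours = [0, 1, 2, 3, 4, 5, 6]            # time since the battery was fully charged (h)
        lux = [60, 48, 40, 34, 29, 25, 22]       # illuminance on the work surface over that time
        threshold = 30                           # "charge-triggering" minimum illuminance (lux)
        cost_per_charge = 0.10                   # e.g. fixed fee at a charge shop (USD)

        # operational time per charge = last time the lamp is still at or above the threshold
        useful_hours = max(t for t, lx in zip(hours, lux) if lx >= threshold)
        print(f"useful light per charge: {useful_hours} h")
        print(f"lighting service cost:   {cost_per_charge / useful_hours:.3f} USD per hour of light")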

    8. Thirty-Year Solid Waste Generation Maximum and Minimum Forecast for SRS

      SciTech Connect (OSTI)

      Thomas, L.C.

      1994-10-01

      This report is the third phase (Phase III) of the Thirty-Year Solid Waste Generation Forecast for Facilities at the Savannah River Site (SRS). Phase I of the forecast, Thirty-Year Solid Waste Generation Forecast for Facilities at SRS, forecasts the yearly quantities of low-level waste (LLW), hazardous waste, mixed waste, and transuranic (TRU) wastes generated over the next 30 years by operations, decontamination and decommissioning, and environmental restoration (ER) activities at the Savannah River Site. The Phase II report, Thirty-Year Solid Waste Generation Forecast by Treatability Group (U), provides a 30-year forecast by waste treatability group for operations, decontamination and decommissioning, and ER activities. In addition, a 30-year forecast by waste stream has been provided for operations in Appendix A of the Phase II report. The solid wastes stored or generated at SRS must be treated and disposed of in accordance with federal, state, and local laws and regulations. To evaluate, select, and justify the use of promising treatment technologies and to evaluate the potential impact to the environment, the generic waste categories described in the Phase I report were divided into smaller classifications with similar physical, chemical, and radiological characteristics. These smaller classifications, defined within the Phase II report as treatability groups, can then be used in the Waste Management Environmental Impact Statement process to evaluate treatment options. The waste generation forecasts in the Phase II report include existing waste inventories. Existing waste inventories, which include waste streams from continuing operations and stored wastes from discontinued operations, were not included in the Phase I report. Maximum and minimum forecasts serve as upper and lower boundaries for waste generation. This report provides the maximum and minimum forecast by waste treatability group for operations, decontamination and decommissioning, and ER activities.

    9. NEWLY DISCOVERED GLOBAL TEMPERATURE STRUCTURES IN THE QUIET SUN AT SOLAR MINIMUM

      SciTech Connect (OSTI)

      Huang Zhenguang; Frazin, Richard A.; Landi, Enrico; Manchester, Ward B.; Gombosi, Tamas I.; Vasquez, Alberto M.

      2012-08-20

      Magnetic loops are building blocks of the closed-field corona. While active region loops are readily seen in images taken at EUV and X-ray wavelengths, quiet-Sun (QS) loops are seldom identifiable and are therefore difficult to study on an individual basis. The first analysis of solar minimum (Carrington Rotation 2077) QS coronal loops utilizing a novel technique called the Michigan Loop Diagnostic Technique (MLDT) is presented. This technique combines Differential Emission Measure Tomography and a potential field source surface (PFSS) model, and consists of tracing PFSS field lines through the tomographic grid on which the local differential emission measure is determined. As a result, the electron temperature T_e and density N_e at each point along each individual field line can be obtained. Using data from STEREO/EUVI and SOHO/MDI, the MLDT identifies two types of QS loops in the corona: so-called up loops in which the temperature increases with height and so-called down loops in which the temperature decreases with height. Up loops are expected; down loops, however, are a surprise, and furthermore they are ubiquitous in the low-latitude corona. Up loops dominate the QS at higher latitudes. The MLDT allows independent determination of the empirical pressure and density scale heights, and the differences between the two remain to be explained. The down loops appear to be a newly discovered property of the solar minimum corona that may shed light on the physics of coronal heating. The results are shown to be robust to the calibration uncertainties of the EUVI instrument.
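
      The up/down classification can be illustrated with a minimal sketch: given samples of electron temperature along a traced field line, label the loop by the sign of the temperature trend with height. The least-squares slope and the sample values are illustrative assumptions, not the authors' actual fitting procedure.

        import numpy as np

        def classify_loop(height, electron_temperature):
            """Label a traced field line 'up' if T_e increases with height, else 'down'.
            A least-squares slope stands in for whatever fit the authors actually used."""
            slope = np.polyfit(height, electron_temperature, 1)[0]
            return "up" if slope > 0 else "down"

        # hypothetical T_e samples (MK) at increasing heights (solar radii) along two loops
        print(classify_loop([1.00, 1.05, 1.10, 1.15], [1.0, 1.1, 1.2, 1.3]))   # -> up
        print(classify_loop([1.00, 1.05, 1.10, 1.15], [1.4, 1.3, 1.2, 1.1]))   # -> down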

    10. Cheaper Adjoints by Reversing Address Computations

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Hascoët, L.; Utke, J.; Naumann, U.

      2008-01-01

      The reverse mode of automatic differentiation is widely used in science and engineering. A severe bottleneck for the performance of the reverse mode, however, is the necessity to recover certain intermediate values of the program in reverse order. Among these values are computed addresses, which traditionally are recovered through forward recomputation and storage in memory. We propose an alternative approach for recovery that uses inverse computation based on dependency information. Address storage constitutes a significant portion of the overall storage requirements. An example illustrates substantial gains that the proposed approach yields, and we show use cases in practical applications.
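
      The sketch below illustrates what a "computed address" is and why the reverse sweep needs it: in this toy tape-based reverse mode the indices are stored during the forward sweep and read back in reverse order, which is the traditional storage approach the paper seeks to avoid; the inverse-computation recovery itself is not shown, and all names and values are illustrative.

        # Forward sweep: y = sum(x[p[i]] * w[i]).  The indices p[i] are "computed addresses";
        # the reverse sweep needs them again, in reverse order, to accumulate adjoints into x.
        # Storing them on a tape (shown here) is the traditional approach; the paper proposes
        # recovering them by inverse computation from dependency information instead.

        def forward(x, w, p, tape):
            y = 0.0
            for i in range(len(w)):
                a = p[i]                  # the computed address
                tape.append(a)            # traditional approach: record it during the forward sweep
                y += x[a] * w[i]
            return y

        def reverse(xbar, w, ybar, tape):
            for i in reversed(range(len(w))):
                a = tape[i]               # recover the address (here simply read back from the tape)
                xbar[a] += w[i] * ybar    # adjoint of "y += x[a] * w[i]"

        x, w, p = [1.0, 2.0, 3.0], [10.0, 20.0], [2, 0]
        tape, xbar = [], [0.0, 0.0, 0.0]
        print(forward(x, w, p, tape))     # 50.0
        reverse(xbar, w, 1.0, tape)
        print(xbar)                       # [20.0, 0.0, 10.0] = dy/dx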

    11. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem-solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication-quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    12. GPU COMPUTING FOR PARTICLE TRACKING

      SciTech Connect (OSTI)

      Nishimura, Hiroshi; Song, Kai; Muriki, Krishna; Sun, Changchun; James, Susan; Qin, Yong

      2011-03-25

      This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing GPU are also discussed. General purpose Computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from GPU due to its embarrassingly parallel feature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MPs) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
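
      The thread-per-particle structure can be sketched in plain Python using a process pool as a stand-in for CUDA threads; the one-turn map, the loss criterion, and all numbers below are invented for illustration and do not come from Tracy++ or TracyGPU.

        import numpy as np
        from multiprocessing import Pool

        def track_particle(initial_condition):
            """Toy stand-in for turn-by-turn tracking of one particle; on the GPU each
            CUDA thread would execute this body for its own initial condition."""
            x, px = initial_condition
            for _ in range(1000):                     # crude nonlinear map, illustration only
                px -= 0.01 * x + 0.001 * x ** 3
                x += 0.01 * px
                if abs(x) > 1.0:                      # particle lost: outside the dynamic aperture
                    return False
            return True

        if __name__ == "__main__":
            # a grid of initial conditions, one task ("thread") per particle
            initial = [(x0, 0.0) for x0 in np.linspace(0.0, 0.9, 64)]
            with Pool() as pool:
                survived = pool.map(track_particle, initial)
            aperture = max(x0 for (x0, _), ok in zip(initial, survived) if ok)
            print(f"largest surviving amplitude: {aperture:.3f}")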

    13. Belle-II Experiment Network Requirements

      SciTech Connect (OSTI)

      Asner, David; Bell, Greg; Carlson, Tim; Cowley, David; Dart, Eli; Erwin, Brock; Godang, Romulus; Hara, Takanori; Johnson, Jerry; Johnson, Ron; Johnston, Bill; Dam, Kerstin Kleese-van; Kaneko, Toshiaki; Kubota, Yoshihiro; Kuhr, Thomas; McCoy, John; Miyake, Hideki; Monga, Inder; Nakamura, Motonori; Piilonen, Leo; Pordes, Ruth; Ray, Douglas; Russell, Richard; Schram, Malachi; Schroeder, Jim; Sevior, Martin; Singh, Surya; Suzuki, Soh; Sasaki, Takashi; Williams, Jim

      2013-05-28

      The Belle experiment, part of a broad-based search for new physics, is a collaboration of ~400 physicists from 55 institutions across four continents. The Belle detector is located at the KEKB accelerator in Tsukuba, Japan. The Belle detector was operated at the asymmetric electron-positron collider KEKB from 1999 to 2010. The detector accumulated more than 1 ab^-1 of integrated luminosity, corresponding to more than 2 PB of data near 10 GeV center-of-mass energy. Recently, KEK has initiated a $400 million accelerator upgrade to be called SuperKEKB, designed to produce instantaneous and integrated luminosity two orders of magnitude greater than KEKB. The new international collaboration at SuperKEKB is called Belle II. The first data from Belle II/SuperKEKB is expected in 2015. In October 2012, senior members of the Belle-II collaboration gathered at PNNL to discuss the computing and networking requirements of the Belle-II experiment with ESnet staff and other computing and networking experts. The day-and-a-half-long workshop characterized the instruments and facilities used in the experiment, the process of science for Belle-II, and the computing and networking equipment and configuration requirements to realize the full scientific potential of the collaboration's work.

    14. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math /science-innovation/_assets/images/icon-science.jpg Information Science, Computing, Applied Math National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Computer, Computational, and Statistical Sciences (CCS)» High Performance Computing (HPC)» Extreme Scale Computing, Co-design» supercomputing

    15. Required Annual Notices

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Requesting A Token: Step One - Registering with the DOE's Management Information System (MIS). Before you request a DOE Digital Identity, you must register in DOE's Management Information System (MIS). Please note that DOE Federal employees are already registered and do not need to complete this step. They may skip to step two. During the registration process, you will be required to select a DOE sponsor. Your sponsor is the DOE employee who certifies that you have a

    16. BES Science Network Requirements

      SciTech Connect (OSTI)

      Biocca, Alan; Carlson, Rich; Chen, Jackie; Cotter, Steve; Tierney, Brian; Dattoria, Vince; Davenport, Jim; Gaenko, Alexander; Kent, Paul; Lamm, Monica; Miller, Stephen; Mundy, Chris; Ndousse, Thomas; Pederson, Mark; Perazzo, Amedeo; Popescu, Razvan; Rouson, Damian; Sekine, Yukiko; Sumpter, Bobby; Dart, Eli; Wang, Cai-Zhuang -Z; Whitelam, Steve; Zurawski, Jason

      2011-02-01

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years.

    17. Computing contingency statistics in parallel.

      SciTech Connect (OSTI)

      Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

      2010-09-01

      Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
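
      A minimal map-reduce-style sketch of the approach: each processor builds a partial contingency table for its chunk, the tables are merged (this is the communication step whose size grows with the data, unlike moment-based statistics), and the derived statistics are computed from the merged table. The function names and the tiny data set are illustrative assumptions, not the paper's implementation.

        import numpy as np
        from collections import Counter

        def partial_table(pairs):
            """Map step: contingency counts for one processor's chunk of (x, y) records."""
            return Counter(pairs)

        def merge(tables):
            """Reduce step: summing the partial tables is the inter-processor communication;
            their size grows with the number of distinct values in the data."""
            total = Counter()
            for t in tables:
                total += t
            return total

        def derived_stats(table):
            n = sum(table.values())
            xs = sorted({x for x, _ in table})
            ys = sorted({y for _, y in table})
            joint = np.zeros((len(xs), len(ys)))
            for (x, y), c in table.items():
                joint[xs.index(x), ys.index(y)] = c / n                        # joint probability
            px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)  # marginals
            expected = px @ py
            chi2 = n * np.sum((joint - expected) ** 2 / expected)              # chi-squared independence statistic
            pmi = np.log2(joint / expected, where=joint > 0, out=np.zeros_like(joint))
            return joint, pmi, chi2

        chunks = [[("a", 0), ("a", 1), ("b", 1)], [("b", 1), ("a", 0), ("b", 0)]]
        joint, pmi, chi2 = derived_stats(merge(partial_table(c) for c in chunks))
        print(chi2)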

    18. NREL: Computational Science Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

    19. computers | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      Sandia donates 242 computers to northern California schools Sandia National Laboratories electronics technologist Mitch Williams prepares the disassembly of 242 computers for ...

    20. Careers | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      At the Argonne Leadership Computing Facility, we are helping to redefine what's possible in computational science. With some of the most powerful supercomputers in the world and a ...

    1. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

      Computer simulation - OpenEI Reference Library. Web Site: Computer simulation. Author: wikipedia. Published: wikipedia, 2013. DOI Not Provided...

    2. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse ...

    3. Human-computer interface

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-12-21

      The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
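
      A toy force profile in the spirit of this description is sketched below; the linear ramp, the parameter values, and the function name are illustrative assumptions rather than the patent's actual force law.

        def boundary_force(distance_to_boundary, traversed=False, k=2.0, reach=0.1):
            """Toy force profile: resistance ramps up as the locus of interaction nears the
            boundary, then drops perceptibly once the boundary has been pushed through."""
            if not traversed and distance_to_boundary < reach:
                return k * (reach - distance_to_boundary) / reach   # 0 far away, k at the boundary
            return 0.0                                              # after traversal the force releases

        for d in (0.20, 0.05, 0.0):
            print(d, boundary_force(d))
        print("after traversal:", boundary_force(0.05, traversed=True))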

    4. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
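
      The sketch below walks through the arithmetic of the claim in a single process, with hypothetical latencies and units; real implementations run these steps concurrently, one instance per compute node, with the local and global barriers providing the ordering.

        # Hypothetical latencies and units; one instance of this logic runs per compute node.
        latency = {"root": 0.0, "n1": 3.0, "n2": 5.0, "n3": 8.0}   # root-to-node transmission latency

        def synchronize(latency, pulse_sent_at=100.0):
            common_origin = {}
            for node, lat in latency.items():
                arrival = pulse_sent_at + lat     # the pulse reaches this node after its latency
                time_base = lat                   # per the claim: time base := root-to-node latency
                # subtracting the time base from the arrival time recovers the same instant
                # (the pulse emission) on every node, which is what the barriers plus pulse provide
                common_origin[node] = arrival - time_base
            return common_origin

        print(synchronize(latency))               # every node reconstructs the same origin: 100.0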

    5. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    6. Equipment Operational Requirements

      SciTech Connect (OSTI)

      Greenwalt, B; Henderer, B; Hibbard, W; Mercer, M

      2009-06-11

      The Iraq Department of Border Enforcement is rich in personnel, but poor in equipment. An effective border control system must include detection, discrimination, decision, tracking and interdiction, capture, identification, and disposition. An equipment solution that addresses only a part of this will not succeed; likewise, equipment by itself is not the answer without considering the personnel and how they would employ the equipment. The solution should take advantage of the existing in-place system and address all of the critical functions. The solutions are envisioned as being implemented in a phased manner, where Solution 1 is followed by Solution 2 and eventually by Solution 3. This allows adequate time for training and gaining operational experience for successively more complex equipment. Detailed descriptions of the components follow the solution descriptions. Solution 1 - This solution is based on changes to CONOPs, and does not have a technology component. It consists of observers at the forts and annexes, forward patrols along the swamp edge, in-depth patrols approximately 10 kilometers inland from the swamp, and checkpoints on major roads. Solution 2 - This solution adds a ground sensor array to the Solution 1 system. Solution 3 - This solution is based around installing a radar/video camera system on each fort. It employs the CONOPS from Solution 1, but uses minimal ground sensors deployed only in areas with poor radar/video camera coverage (such as canals and streams shielded by vegetation), or by roads covered by radar but outside the range of the radar-associated cameras. This document provides broad operational requirements for major equipment components along with sufficient operational details to allow the technical community to identify potential hardware candidates. Continuing analysis will develop quantities required and more detailed tactics, techniques, and procedures.

    7. Computer Security Risk Assessment

      Energy Science and Technology Software Center (OSTI)

      1992-02-11

      LAVA/CS (LAVA for Computer Security) is an application of the Los Alamos Vulnerability Assessment (LAVA) methodology specific to computer and information security. The software serves as a generic tool for identifying vulnerabilities in computer and information security safeguards systems. Although it does not perform a full risk assessment, the results from its analysis may provide valuable insights into security problems. LAVA/CS assumes that the system is exposed to both natural and environmental hazards and to deliberate malevolent actions by either insiders or outsiders. The user, in the process of answering the LAVA/CS questionnaire, identifies missing safeguards in 34 areas ranging from password management to personnel security and internal audit practices. Specific safeguards protecting a generic set of assets (or targets) from a generic set of threats (or adversaries) are considered. There are four generic assets: the facility, the organization's environment; the hardware, all computer-related hardware; the software, the information in machine-readable form stored either on-line or on transportable media; and the documents and displays, the information in human-readable form stored as hard-copy materials (manuals, reports, listings in full-size or microform), film, and screen displays. Two generic threats are considered: natural and environmental hazards, such as storms, fires, power abnormalities, and water and accidental maintenance damage; and on-site human threats, both intentional and accidental acts attributable to a perpetrator on the facility's premises.

    8. MHD computations for stellarators

      SciTech Connect (OSTI)

      Johnson, J.L.

      1985-12-01

      Considerable progress has been made in the development of computational techniques for studying the magnetohydrodynamic equilibrium and stability properties of three-dimensional configurations. Several different approaches have evolved to the point where comparison of results determined with different techniques shows good agreement. 55 refs., 7 figs.

    9. communications requirements | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Utilities to Inform Federal Smart Grid Policy Re: NBP RFI: Communications Requirements NBP RFI: Communications Requirements- Comments of Lake Region Electric Cooperative- Minnesota

    10. Competition Requirements | Department of Energy

      Energy Savers [EERE]

      PDF icon Competition Requirements More Documents & Publications Microsoft Word - AG Chapter 6 1 Nov 2010 AcqGuide 5.2-OPAM Chapter 6 - Competition Requirements

    11. Effects of minimum monitor unit threshold on spot scanning proton plan quality

      SciTech Connect (OSTI)

      Howard, Michelle; Beltran, Chris; Mayo, Charles S.; Herman, Michael G.

      2014-09-15

      Purpose: To investigate the influence of the minimum monitor unit (MU) on the quality of clinical treatment plans for scanned proton therapy. Methods: Delivery system characteristics limit the minimum number of protons that can be delivered per spot, resulting in a min-MU limit. Plan quality can be impacted by the min-MU limit. Two sites were used to investigate the impact of min-MU on treatment plans: a pediatric brain tumor at a depth of 5–10 cm and a head and neck tumor at a depth of 1–20 cm. Three-field, intensity modulated spot scanning proton plans were created for each site with the following parameter variations: min-MU limit range of 0.0000–0.0060; and spot spacing range of 2–8 mm. Comparisons were based on target homogeneity and normal tissue sparing. For the pediatric brain, two versions of the treatment planning system were also compared to judge the effects of the min-MU limit based on when it is accounted for in the optimization process (Eclipse v.10 and v.13, Varian Medical Systems, Palo Alto, CA). Results: Increasing the min-MU limit at a fixed spot spacing decreases plan quality, both in the homogeneity of target coverage and in the avoidance of critical structures. Both head and neck and pediatric brain plans show a 20% increase in relative dose for the hot spot in the CTV and a 10% increase in key critical structures when comparing min-MU limits of 0.0000 and 0.0060 with a fixed spot spacing of 4 mm. The DVHs of CTVs show that min-MU limits of 0.0000 and 0.0010 produce similar plan quality, and that quality decreases as the min-MU limit increases beyond 0.0020. As spot spacing approaches 8 mm, degradation in plan quality is observed even when no min-MU limit is imposed. Conclusions: Given a fixed spot spacing of ≤4 mm, plan quality decreases as min-MU increases beyond 0.0020. The effect of min-MU needs to be taken into consideration while planning proton therapy treatments.
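
      A toy post-processing step shows how a min-MU limit distorts optimized spot weights; the drop-or-round-up rule and the numbers are illustrative assumptions, since the paper's planning system applies the limit inside or after its own optimization.

        import numpy as np

        def apply_min_mu(spot_mu, min_mu):
            """Drop-or-round-up post-processing of optimized spot weights under a min-MU limit.
            The 'round up if at least half the limit' rule is an assumption for illustration."""
            out = spot_mu.copy()
            low = out < min_mu
            round_up = out >= 0.5 * min_mu
            out[low & round_up] = min_mu          # deliverable, but hotter than optimized
            out[low & ~round_up] = 0.0            # dropped, so the target loses this dose
            return out

        spots = np.array([0.0005, 0.0012, 0.0018, 0.0031, 0.0044])
        print(apply_min_mu(spots, min_mu=0.0010))   # gentle limit: plan barely changes
        print(apply_min_mu(spots, min_mu=0.0060))   # aggressive limit: plan is heavily distorted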

    12. Sandia National Laboratories: Careers: Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced software research & development Collaborative technologies Computational science and mathematics High-performance computing Visualization and scientific computing Advanced ...

    13. Stellar Astrophysics Requirements NERSC Forecast

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (Copernicus Center, Warsaw). Computing cycles: DOE NERSC. FLASH WDM parallel performance (strong, peak, and weak scaling). Example 2: Core-Collapse SN...

    14. SSRL Computer & Networking Support Requests

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CNG Help Request To use this form - Please enter your contact information below and select a category for your request. Also, provide a brief description of your request. When purchasing items, please include an account number. Priority*: Normal Urgent Requestor: (Name of person to contact for this request) Email: Phone: Support Required: I don't know Computer Support Network Support Printer Support Select Type of Request I don't know Details of your request: Property Control #: PC# Account

    15. Computers and Monitors | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computerized Accident Incident Reporting System Computerized Accident Incident Reporting System CAIRS Database The Computerized Accident/Incident Reporting System is a database used to collect and analyze DOE and DOE contractor reports of injuries, illnesses, and other accidents that occur during DOE operations. CAIRS is a Government computer system and, as such, has security requirements that must be followed. Access to the database is open to DOE and DOE contractors. Additional information

    16. Computer-Aided dispatching system design specification

      SciTech Connect (OSTI)

      Briggs, M.G.

      1996-05-03

      This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a Commercial-Off-The-Shelf computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting systems within the Hanford Facility. This system also provides expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center, and provides back-up capabilities for the Plutonium Processing Facility.

    17. Two color laser fields for studying the Cooper minimum with phase-matched high-order harmonic generation

      SciTech Connect (OSTI)

      Ba Dinh, Khuong; Vu Le, Hoang; Hannaford, Peter; Van Dao, Lap

      2014-05-28

      We experimentally study the observation of the Cooper minimum in a semi-infinite argon-filled gas cell using two-color laser fields at wavelengths of 1400 nm and 800 nm. The experimental results show that the additional 800 nm field can change the macroscopic phase-matching condition through change of the atomic dipole phase associated with the electron in the continuum state and that this approach can be used to control the appearance of the Cooper minimum in the high-order harmonic spectrum in order to study the electronic structure of atoms and molecules.

    18. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Extreme Scale Computing, Co-design Extreme Scale Computing, Co-design Computational co-design may facilitate revolutionary designs ...

    19. Metaproteomics reveals differential modes of metabolic coupling among ubiquitous oxygen minimum zone microbes

      SciTech Connect (OSTI)

      Hawley, Alyse K.; Brewer, Heather M.; Norbeck, Angela D.; Pasa-Tolic, Ljiljana; Hallam, Steven J.

      2014-08-05

      Oxygen minimum zones (OMZs) are intrinsic water column features arising from respiratory oxygen demand during organic matter degradation in stratified marine waters. Currently OMZs are expanding due to global climate change. This expansion alters marine ecosystem function and the productivity of fisheries due to habitat compression and changes in biogeochemical cycling leading to fixed nitrogen loss and greenhouse gas production. Here we use metaproteomics to chart spatial and temporal patterns of gene expression along defined redox gradients in a seasonally anoxic fjord, Saanich Inlet to better understand microbial community responses to OMZ expansion. The expression of metabolic pathway components for nitrification, anaerobic ammonium oxidation (anammox), denitrification and inorganic carbon fixation predominantly co-varied with abundance and distribution patterns of Thaumarchaeota, Nitrospira, Planctomycetes and SUP05/ARCTIC96BD-19 Gammaproteobacteria. Within these groups, pathways mediating inorganic carbon fixation and nitrogen and sulfur transformations were differentially expressed across the redoxcline. Nitrification and inorganic carbon fixation pathways affiliated with Thaumarchaeota dominated dysoxic waters and denitrification, sulfur-oxidation and inorganic carbon fixation pathways affiliated with SUP05 dominated suboxic and anoxic waters. Nitrite-oxidation and anammox pathways affiliated with Nitrospina and Planctomycetes respectively, also exhibited redox partitioning between dysoxic and suboxic waters. The differential expression of these pathways under changing water column redox conditions has quantitative implications for coupled biogeochemical cycling linking different modes of inorganic carbon fixation with distributed nitrogen and sulfur-based energy metabolism extensible to coastal and open ocean OMZs.

    20. Use of finite volume radiation for predicting the Knudsen minimum in 2D channel flow

      SciTech Connect (OSTI)

      Malhotra, Chetan P.; Mahajan, Roop L.

      2014-12-09

      In an earlier paper we employed an analogy between surface-to-surface radiation and free-molecular flow to model Knudsen flow through tubes and onto planes. In the current paper we extend the analogy between thermal radiation and molecular flow to model the flow of a gas in a 2D channel across all regimes of rarefaction. To accomplish this, we break down the problem of gaseous flow into three sub-problems (self-diffusion, mass-motion and generation of pressure gradient) and use the finite volume method for modeling radiation through participating media to model the transport in each sub-problem as a radiation problem. We first model molecular self-diffusion in the stationary gas by modeling the transport of the molecular number density through the gas starting from the analytical asymptote for free-molecular flow to the kinetic theory limit of gaseous self-diffusion. We then model the transport of momentum through the gas at unit pressure gradient to predict Poiseuille flow and slip flow in the 2D gas. Lastly, we predict the generation of pressure gradient within the gas due to molecular collisions by modeling the transport of the forces generated due to collisions per unit volume of gas. We then proceed to combine the three radiation problems to predict flow of the gas over the entire Knudsen number regime from free-molecular to transition to continuum flow and successfully capture the Knudsen minimum at Kn ≈ 1.

    1. Searching for Minimum in Dependence of Squared Speed-of-Sound on Collision Energy

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Liu, Fu-Hu; Gao, Li-Na; Lacey, Roy A.

      2016-01-01

      Experimental results of the rapidity distributions of negatively charged pions produced in proton-proton (p-p) and beryllium-beryllium (Be-Be) collisions at different beam momenta, measured by the NA61/SHINE Collaboration at the Super Proton Synchrotron (SPS), are described by a revised (three-source) Landau hydrodynamic model. The squared speed-of-sound parameter c_s^2 is then extracted from the width of the rapidity distribution. There is a local minimum (knee point), indicating a softest point in the equation of state (EoS), at about 40A GeV/c (or 8.8 GeV) in the c_s^2 excitation function (the dependence of c_s^2 on incident beam momentum or center-of-mass energy). This knee point should be related to the search for the onset of quark deconfinement and the critical point of the quark-gluon plasma (QGP) phase transition.
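
      As a worked illustration, the sketch below inverts one widely quoted Landau-hydrodynamics relation between the rapidity width and c_s^2; the relation, the proton-mass value, and the example width are assumptions for illustration and differ in detail from the paper's revised three-source model.

        import numpy as np

        # Assumed, widely quoted Landau-hydrodynamics relation (cs2 denotes c_s^2):
        #   sigma_y^2 = (8/3) * cs2 / (1 - cs2^2) * ln( sqrt(s_NN) / (2 * m_p) )
        # Inverting it gives a quick estimate of cs2 from a measured rapidity width.

        def extract_cs2(sigma_y, sqrt_s_nn, m_p=0.938):
            L = np.log(sqrt_s_nn / (2.0 * m_p))
            K = sigma_y ** 2 / ((8.0 / 3.0) * L)          # K = cs2 / (1 - cs2^2)
            return (-1.0 + np.sqrt(1.0 + 4.0 * K ** 2)) / (2.0 * K)

        # hypothetical width of the pion rapidity distribution near the ~8.8 GeV knee point
        print(extract_cs2(sigma_y=1.1, sqrt_s_nn=8.8))    # ~0.27 in this made-up example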

    2. Approaching the Minimum Thermal Conductivity in Rhenium-Substituted Higher Manganese Silicides

      SciTech Connect (OSTI)

      Chen, Xi [University of Texas at Austin]; Girard, S. N. [University of Wisconsin, Madison]; Meng, F. [University of Wisconsin, Madison]; Lara-Curzio, Edgar [ORNL]; Jin, S. [University of Wisconsin, Madison]; Goodenough, J. B. [University of Texas at Austin]; Zhou, J. S. [University of Texas at Austin]; Shi, L. [University of Texas at Austin]

      2014-01-01

      Higher manganese silicides (HMS) made of earth-abundant and non-toxic elements are regarded as promising p-type thermoelectric materials because their complex crystal structure results in low lattice thermal conductivity. It is shown here that the already low thermal conductivity of HMS can be reduced further to approach the minimum thermal conductivity via partial substitution of Mn with heavier rhenium (Re) to increase point defect scattering. The solubility limit of Re in the obtained RexMn1-xSi1.8 is determined to be about x = 0.18. Elemental inhomogeneity and the formation of ReSi1.75 inclusions of 50-200 nm size are found within the HMS matrix. It is found that the power factor does not change markedly at low Re content of x ≤ 0.04 before it drops considerably at higher Re contents. Compared to pure HMS, the reduced lattice thermal conductivity in RexMn1-xSi1.8 results in a 25% increase of the peak figure of merit ZT, which reaches 0.57 ± 0.08 at 800 K for x = 0.04. The suppressed thermal conductivity in RexMn1-xSi1.8 can enable further investigations of the ZT limit of this system by exploring different impurity doping strategies to optimize the carrier concentration and power factor.

    3. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SCC: The Strategic Computing Complex SCC: The Strategic Computing Complex The Strategic Computing Complex (SCC) is a secured supercomputing facility that supports the calculation, modeling, simulation, and visualization of complex nuclear weapons data in support of the Stockpile Stewardship Program. The 300,000-square-foot, vault-type building features an unobstructed 43,500-square-foot computer room, which is an open room about three-fourths the size of a football field. The Strategic Computing

    4. Magellan: A Cloud Computing Testbed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Magellan: A Cloud Computing Testbed. Cloud computing is gaining a foothold in the business world, but can clouds meet the specialized needs of scientists? That was one of the questions NERSC's Magellan cloud computing testbed explored between 2009 and 2011. The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office

    5. Software and High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Software and High Performance Computing Software and High Performance Computing Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest Contact thumbnail of Kathleen McDonald Head of Intellectual Property, Business Development Executive Kathleen McDonald Richard P. Feynman Center for Innovation (505) 667-5844 Email Software Computational physics, computer science, applied mathematics, statistics and the

    6. Solid Waste Information and Tracking System (SWITS) Software Requirements Specification

      SciTech Connect (OSTI)

      MAY, D.L.

      2000-03-22

      This document is the primary document establishing requirements for the Solid Waste Information and Tracking System (SWITS) as it is converted to a client-server architecture. The purpose is to provide the customer and the performing organizations with the requirements for the SWITS in the new environment. This Software Requirements Specification (SRS) describes the system requirements for the SWITS Project, and follows the PHMC Engineering Requirements, HNF-PRO-1819, and Computer Software Quality Assurance Requirements, HNF-PRO-309, policies. This SRS includes sections on general description, specific requirements, references, appendices, and index. The SWITS system defined in this document stores information about the solid waste inventory on the Hanford site. Waste is tracked as it is generated, analyzed, shipped, stored, and treated. In addition to inventory reports, a number of reports for regulatory agencies are produced.

    7. Refurbishment program of HANARO control computer system

      SciTech Connect (OSTI)

      Kim, H. K.; Choe, Y. S.; Lee, M. W.; Doo, S. K.; Jung, H. S. [Korea Atomic Energy Research Inst., 989-111 Daedeok-daero, Yuseong, Daejeon, 305-353 (Korea, Republic of)

      2012-07-01

      HANARO, an open-tank-in-pool type research reactor with 30 MW thermal power, achieved its first criticality in 1995. The programmable controller system MLC (Multi Loop Controller) manufactured by MOORE has been used to control and regulate HANARO since 1995. We made a plan to replace the control computer because the system supplier no longer provided technical support and thus no spare parts were available. Aged and obsolete equipment and the shortage of spare parts could have caused serious problems. Replacement of the control computer was first considered in 2007, when the supplier stopped producing MLC components and could no longer guarantee the system. We established the upgrade and refurbishment program in 2009 so as to keep HANARO up to date in terms of safety. We designed the new control computer system that would replace MLC. The new computer system is HCCS (HANARO Control Computer System). The refurbishing activity is in progress and will finish in 2013. The goal of the refurbishment program is a functional replacement of the reactor control system in consideration of suitable interfaces, compliance with no special outage for installation and commissioning, and no change of the well-proven operation philosophy. HCCS is a DCS (Discrete Control System) using PLCs manufactured by RTP. To enhance reliability, we adopt a triple processor system, a double I/O system, and a hot-swapping function. This paper describes the refurbishment program of the HANARO control system including the design requirements of HCCS. (authors)

    8. BGE Communications Requirements | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      BGE Communications Requirements BGE Communications Requirements Chart of BGE Communications Requirements PDF icon BGE Communications Requirements More Documents & Publications ...

    9. Computer Algebra System

      Energy Science and Technology Software Center (OSTI)

      1992-05-04

      DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.

    10. computational fluid dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational fluid dynamics - Sandia Energy

    11. GPU Computational Screening

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GPU Computational Screening of Carbon Capture Materials J. Kim 1 , A Koniges 1 , R. Martin 1 , M. Haranczyk 1 , J. Swisher 2 , and B. Smit 1,2 1 Lawrence Berkeley National Laboratory, Berkeley, CA 94720 2 Department of Chemical Engineering, University of California, Berkeley, Berkeley, CA 94720 E-mail: jihankim@lbl.gov Abstract. In order to reduce the current costs associated with carbon capture technologies, novel materials such as zeolites and metal-organic frameworks that are based on

    12. Cloud Computing Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cloud Computing Services - Sandia Energy

    13. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed is from computer code simulations describing airborne contaminant transport. The three evaluation programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence, subsequent work and this report describe the implementation and testing of NCSA Image on both an Apple Mac II and a Sun 4 computer. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

    14. High Performance Computing at the Oak Ridge Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing at the Oak Ridge Leadership Computing Facility. Outline: Our Mission; Computer Systems: Present, Past, Future; Challenges Along the Way; Resources for Users. Our Mission: World's most powerful computing facility; Nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; Nation's most diverse energy

    15. Computational Electronics and Electromagnetics

      SciTech Connect (OSTI)

      DeFord, J.F.

      1993-03-01

      The Computational Electronics and Electromagnetics thrust area is a focal point for computer modeling activities in electronics and electromagnetics in the Electronics Engineering Department of Lawrence Livermore National Laboratory (LLNL). Traditionally, they have focused their efforts in technical areas of importance to existing and developing LLNL programs, and this continues to form the basis for much of their research. A relatively new and increasingly important emphasis for the thrust area is the formation of partnerships with industry and the application of their simulation technology and expertise to the solution of problems faced by industry. The activities of the thrust area fall into three broad categories: (1) the development of theoretical and computational models of electronic and electromagnetic phenomena, (2) the development of useful and robust software tools based on these models, and (3) the application of these tools to programmatic and industrial problems. In FY-92, they worked on projects in all of the areas outlined above. The object of their work on numerical electromagnetic algorithms continues to be the improvement of time-domain algorithms for electromagnetic simulation on unstructured conforming grids. The thrust area is also investigating various technologies for conforming-grid mesh generation to simplify the application of their advanced field solvers to design problems involving complicated geometries. They are developing a major code suite based on the three-dimensional (3-D), conforming-grid, time-domain code DSI3D. They continue to maintain and distribute the 3-D, finite-difference time-domain (FDTD) code TSAR, which is installed at several dozen university, government, and industry sites.

    16. Scanning computed confocal imager

      DOE Patents [OSTI]

      George, John S. (Los Alamos, NM)

      2000-03-14

      There is provided a confocal imager comprising a light source emitting a light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light, directs the scanned light onto a target, and passes light reflected from the target to a video capturing device for receiving the reflected light and transferring a digital image of the reflected light to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation the invention omits the beam splitter means and captures light passed through the target.

    17. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, W.C.

      1998-03-17

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers is disclosed. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them. 5 figs.

    18. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing, from local clusters to global networks, are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with the PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization are discussed, along with how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software.
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege o

    19. computational-hydraulics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and Aerodynamics using STAR-CCM+ for CFD Analysis, March 21-22, 2012, Argonne, Illinois. Dr. Steven Lottes. A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis will be held at TRACC from March 21-22, 2012. The course assumes a basic knowledge of fluid mechanics and will make extensive use of hands-on tutorials. CD-adapco will issue

    20. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, William C.

      1998-01-01

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them.
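      As a rough illustration of the readout geometry described above, the Littrow condition relates the feature pitch, the laser wavelength, and the readout angle. The sketch below is a back-of-envelope aid only; the 405 nm wavelength and the treatment of each patch as a simple grating are assumptions, not parameters from the patent.

      import math

      def littrow_angle_deg(pitch_nm, wavelength_nm, order=1):
          # Littrow condition: m * lambda = 2 * d * sin(theta)
          s = order * wavelength_nm / (2.0 * pitch_nm)
          if not 0.0 < s <= 1.0:
              raise ValueError("no propagating order at this pitch/wavelength")
          return math.degrees(math.asin(s))

      for pitch in (250.0, 150.0, 75.0):    # nm, spanning the quoted feature-size range
          try:
              print(f"pitch {pitch:5.0f} nm -> Littrow angle {littrow_angle_deg(pitch, 405.0):5.1f} deg")
          except ValueError as exc:
              print(f"pitch {pitch:5.0f} nm -> {exc}")

      Under these assumed numbers the 250 nm pitch already requires a readout angle above 50 degrees, in line with the steep-angle geometry mentioned in the abstract, while the finer features return no propagating first order at this wavelength, which illustrates why readout depends on using the proper wavelength and angles.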

    1. Announcement of Computer Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      F 241.4 (10-01) (Replaces ESTSC F1 and ESTSC F2) All Other Editions Are Obsolete UNITED STATES DEPARTMENT OF ENERGY ANNOUNCEMENT OF COMPUTER SOFTWARE OMB Control Number 1910-1400 (OMB Burden Disclosure Statement is on last page of Instructions) Record Status (Select One): New Package Software Revision H. Description/Abstract PART I: STI SOFTWARE DESCRIPTION A. Software Title SHORT NAME OR ACRONYM KEYWORDS IN CONTEXT (KWIC) TITLE B. Developer(s) E-MAIL ADDRESS(ES) C. Site Product Number 1. DOE

    2. Computer Wallpaper | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Wallpaper We've incorporated the tagline, Creating Materials and Energy Solutions, into a computer wallpaper so you can display it on your desktop as a constant reminder....

    3. Introduction to High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Introduction to High Performance Computing, June 10, 2013. Download: Gerber-HPC-2.pdf...

    4. Regulatory Requirements | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Regulatory Requirements Executive Order 13423, Strengthening Federal Environment, Energy, and Transportation Management (January 26, 2007) and Executive Order 13514, Federal...

    5. Institutional Computing Executive Group Review of Multi-programmatic & Institutional Computing, Fiscal Year 2005 and 2006

      SciTech Connect (OSTI)

      Langer, S; Rotman, D; Schwegler, E; Folta, P; Gee, R; White, D

      2006-12-18

      The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflect the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.

    6. Fermilab | Science at Fermilab | Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing is indispensable to science at Fermilab. High-energy physics experiments generate an astounding amount of data that physicists need to store, analyze and communicate with others. Cutting-edge technology allows scientists to work quickly and efficiently to advance our understanding of the world. Fermilab's Computing Division is recognized for its expertise in handling huge amounts of data, its success in high-speed parallel computing and its willingness to take its craft in

    7. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse osmosis to "super purify" water allows the system to reuse water and cool down our powerful yet thirsty computers. January 30, 2014 Super recycled water: quenching computers LANL's Sanitary Effluent Reclamation Facility, key to reducing the Lab's discharge of liquid. Millions of gallons of industrial

    8. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.
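      The agent-per-entity idea can be pictured with a few lines of code. The sketch below is a minimal illustration under assumed names and a made-up balancing rule, not the architecture claimed in the patent: each entity (load, source, storage, switch) gets an agent, the agents report their power, and the storage agent absorbs the mismatch.

      from dataclasses import dataclass

      @dataclass
      class Agent:
          name: str
          kind: str        # "load", "source", "storage", or "switch"
          power_kw: float  # positive produces, negative consumes

      def balance_round(agents):
          """One assumed protocol round: agents report power, storage absorbs the mismatch."""
          reports = {a.name: a.power_kw for a in agents}   # step 1: every agent reports
          mismatch = sum(reports.values())                  # step 2: shared computation
          for a in agents:                                  # step 3: local adjustment
              if a.kind == "storage":
                  a.power_kw -= mismatch                    # charge or discharge the battery
                  break
          return mismatch

      agents = [
          Agent("pv1", "source", 40.0),
          Agent("feeder_load", "load", -55.0),
          Agent("battery", "storage", 0.0),
      ]
      print("mismatch resolved:", balance_round(agents), "kW")
      print("battery setpoint:", next(a for a in agents if a.kind == "storage").power_kw, "kW")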

    10. Noise tolerant spatiotemporal chaos computing

      SciTech Connect (OSTI)

      Kia, Behnam; Kia, Sarvenaz; Ditto, William L.; Lindner, John F.; Sinha, Sudeshna

      2014-12-01

      We introduce and design a noise tolerant chaos computing system based on a coupled map lattice (CML) and the noise reduction capabilities inherent in coupled dynamical systems. The resulting spatiotemporal chaos computing system is more robust to noise than a single map chaos computing system. In this CML based approach to computing, under the coupled dynamics, the local noise from different nodes of the lattice diffuses across the lattice, and the noise contributions attenuate one another, resulting in a system with less noise content and a more robust chaos computing architecture.
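      To make the coupled-map-lattice idea concrete, the sketch below iterates a ring of logistic maps with nearest-neighbour diffusive coupling and additive noise. The map, coupling strength, and noise level are illustrative assumptions rather than the parameters used in the paper.

      import random

      def logistic(x, r=4.0):
          return r * x * (1.0 - x)

      def cml_step(lattice, eps=0.3, noise=0.01):
          """One synchronous update with nearest-neighbour diffusive coupling and local noise."""
          n = len(lattice)
          updated = [logistic(x) + random.gauss(0.0, noise) for x in lattice]
          coupled = []
          for i in range(n):
              left, right = updated[(i - 1) % n], updated[(i + 1) % n]
              x = (1.0 - eps) * updated[i] + 0.5 * eps * (left + right)
              coupled.append(min(max(x, 0.0), 1.0))   # keep each node's state in [0, 1]
          return coupled

      random.seed(1)
      state = [random.random() for _ in range(16)]
      for _ in range(50):
          state = cml_step(state)
      print("lattice state after 50 steps:", [round(x, 3) for x in state])

      The coupling term is what spreads each node's noise across its neighbours so that independent fluctuations partially cancel, which is the intuition behind the robustness claim in the abstract.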

    11. Investigating an API for resilient exascale computing (Technical Report)

      Office of Scientific and Technical Information (OSTI)

      Increased HPC capability comes with increased complexity, part counts, and fault occurrences. Increasing the resilience of systems and applications to faults is a critical requirement facing the viability of exascale systems, as the overhead of traditional checkpoint/restart is projected to outweigh its

    12. Center for Applied Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Presentation by Bosl, Art Mirin, and Phil Duffy (Lawrence Livermore National Lab, Climate and Carbon Cycle Modeling Group, Center for Applied Scientific Computing), April 24, 2003: "High Resolution Climate Simulation and Regional Water Supplies", on high-performance computing for climate modeling as a planning tool. Effects of global climate change are local, and some effects of climate change can be mitigated; assessing them requires accurate

    13. John Shalf Gives Talk at San Francisco High Performance Computing Meetup

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      John Shalf Gives Talk at San Francisco High Performance Computing Meetup, September 17, 2014. In his role as NERSC's chief technology officer, John Shalf gave a talk on "Converging Interconnect Requirements for HPC and Warehouse Scale Computing" at the San Francisco High Performance Computing Meetup. The Sept. 17 meeting was held at GeekdomSF in downtown San Francisco. The group, which describes

    14. Using the NEPA Requirements and Guidance - Search Index

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Using the NEPA Requirements and Guidance - Search Index. Step 1: Download and Set Up. 1. Locate the downloaded file, right-click on it, select "Extract all", and extract it to any location on your computer or USB drive. 2. Locate and open the extracted folder "NEPA Requirements and Guidance - Search Index". 3. Locate and open the .PDX file titled "Search - NEPA Requirements and Guidance" to open the search form. Step 2: Entering a Search Term or Phrase. Please Note: the search form

    15. Data Requirements from NERSC Requirements Reviews Richard Gerber...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      of Energy Scientists represented by the NERSC user community have growing requirements for data storage, IO bandwidth, networking bandwidth, and data software and services. ...

    16. AMRITA -- A computational facility

      SciTech Connect (OSTI)

      Shepherd, J.E.; Quirk, J.J.

      1998-02-23

      Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems, generates the plots shown; outputs the LATEX to typeset the notes; performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

    17. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior, using a coding protocol which describes when relationships should be maintained and when the relationships should be broken. In one aspect, the system allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous "valid state" was noted.
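      By way of analogy only (Python's weak references, not the patented pointer system), the sketch below shows the strong-link/weak-link distinction that makes automatic collection possible: the weak back-reference is broken as soon as the last strong link to its target goes away.

      import weakref

      class Node:
          def __init__(self, name):
              self.name = name
              self.strong_children = []   # ownership relationships keep children alive
              self.weak_parent = None     # back-reference that must not block collection

          def add_child(self, child):
              self.strong_children.append(child)
              child.weak_parent = weakref.ref(self)   # breakable relationship

      root = Node("model")
      leaf = Node("item")
      root.add_child(leaf)
      print(leaf.weak_parent().name)   # "model" while the parent is still referenced

      del root                          # drop the last strong link to the parent
      print(leaf.weak_parent())         # None: the weak link was broken so the parent could be collected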

    18. Biological and Environmental Research Network Requirements

      SciTech Connect (OSTI)

      Balaji, V.; Boden, Tom; Cowley, Dave; Dart, Eli; Dattoria, Vince; Desai, Narayan; Egan, Rob; Foster, Ian; Goldstone, Robin; Gregurick, Susan; Houghton, John; Izaurralde, Cesar; Johnston, Bill; Joseph, Renu; Kleese-van Dam, Kerstin; Lipton, Mary; Monga, Inder; Pritchard, Matt; Rotman, Lauren; Strand, Gary; Stuart, Cory; Tatusova, Tatiana; Tierney, Brian; Thomas, Brian; Williams, Dean N.; Zurawski, Jason

      2013-09-01

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet be a highly successful enabler of scientific discovery for over 25 years. In November 2012, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the BER program office. Several key findings resulted from the review. Among them: 1) The scale of data sets available to science collaborations continues to increase exponentially. This has broad impact, both on the network and on the computational and storage systems connected to the network. 2) Many science collaborations require assistance to cope with the systems and network engineering challenges inherent in managing the rapid growth in data scale. 3) Several science domains operate distributed facilities that rely on high-performance networking for success. Key examples illustrated in this report include the Earth System Grid Federation (ESGF) and the Systems Biology Knowledgebase (KBase). This report expands on these points, and addresses others as well. The report contains a findings section as well as the text of the case studies discussed at the review.

    19. Roadmap to the SRS computing architecture

      SciTech Connect (OSTI)

      Johnson, A.

      1994-07-05

      This document outlines the major steps that must be taken by the Savannah River Site (SRS) to migrate the SRS information technology (IT) environment to the new architecture described in the Savannah River Site Computing Architecture. This document proposes an IT environment that is "...standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship." Achieving this vision will require many substantial changes in the computing applications, systems, and supporting infrastructure at the site. This document consists of a set of roadmaps which provide explanations of the necessary changes for IT at the site and describes the milestones that must be completed to finish the migration.

    20. Open-Source Software in Computational Research: A Case Study

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Syamlal, Madhava; O'Brien, Thomas J.; Benyahia, Sofiane; Gel, Aytekin; Pannala, Sreekanth

      2008-01-01

      A case study of open-source (OS) development of the computational research software MFIX, used for multiphase computational fluid dynamics simulations, is presented here. The verification and validation steps required for constructing modern computational software and the advantages of OS development in those steps are discussed. The infrastructure used for enabling the OS development of MFIX is described. The impact of OS development on computational research and education in gas-solids flow, as well as the dissemination of information to other areas such as geophysical and volcanology research, is demonstrated. This study shows that the advantages of OS development were realized in the case of MFIX: verification by many users, which enhances software quality; the use of software as a means for accumulating and exchanging information; the facilitation of peer review of the results of computational research.

    1. Managing System of Systems Requirements with a Requirements Screening Group

      SciTech Connect (OSTI)

      Ronald R. Barden

      2012-07-01

      Figuring out an effective and efficient way to manage not only your Requirements Baseline, but also the development of all your individual requirements during a Program's/Project's Conceptual and Development Life Cycle Stages can be both daunting and difficult. This is especially so when you are dealing with a complex and large System of Systems (SoS) Program with potentially thousands and thousands of Top Level Requirements as well as an equal number of lower level System, Subsystem and Configuration Item requirements that need to be managed. This task is made even more overwhelming when you have to add in integration with multiple requirements development teams (e.g., Integrated Product Development Teams (IPTs)) and/or numerous System/Subsystem Design Teams. One solution for tackling this difficult activity on a recent large System of Systems Program was to develop and make use of a Requirements Screening Group (RSG). This group is essentially a team made up of co-chairs from the various Stakeholders with an interest in the Program of record, who are enabled and accountable for Requirements Development on the Program/Project. The RSG co-chairs, often with the help of individual support teams, work together as a Program Board to monitor, make decisions on, and provide guidance on all Requirements Development activities during the Conceptual and Development Life Cycle Stages of a Program/Project. In addition, the RSG can establish and maintain the Requirements Baseline, monitor and enforce requirements traceability across the entire Program, and work with other elements of the Program/Project to ensure integration and coordination.

    2. Introduction to High Performance Computers Richard Gerber NERSC User Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Slide content: "What are the main parts of a computer?" The Boy Scouts of America Computers Merit Badge asks scouts to explain the five major parts of a computer to their counselor. Different popular sources (eHow.com, Answers.com, Fluther.com, Yahoo!, Wikipedia) give different lists of the "5 major parts," variously including the CPU, motherboard, RAM, monitor, power supply, hard drive or other storage, removable media, video card, keyboard, and mouse.

    3. Sandia National Laboratories: Advanced Simulation and Computing:

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Simulation and Computing: Computational Systems & Software Environment. Related program elements include Integrated Codes, Physics & Engineering Models, Verification & Validation, Facilities Operation & User Support, and Research & Collaboration. Crack Modeling. The Computational Systems & Software Environment

    4. Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Regular participation in a student green group may also earn credit. Create a PARC Education & Outreach Video Earn 2 points Work with PARC Multimedia Specialist Dan Allen ...

    5. An estimate for the sum of a Dirichlet series in terms of the minimum of its modulus on a vertical line segment

      SciTech Connect (OSTI)

      Gaisin, Ahtyar M; Rakhmatullina, Zhanna G

      2011-12-31

      The behaviour of the sum of an entire Dirichlet series is analyzed in terms of the minimum of its modulus on a system of vertical line segments. A more general problem, connected with the Polya conjecture, is also posed and solved; it concerns the minimum modulus of an entire function with Fabry gaps and its growth along curves going to infinity. Bibliography: 33 titles.

    6. Benefits Forms & Required Notices

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Benefits Forms & Required Notices Benefits Forms & Required Notices A comprehensive benefits package with plan options for health care and retirement to take care of our employees today and tomorrow. Contacts Benefits Office (505) 667-1806 Email Benefits Forms & Required Notices Forms Benefits Enrollment Form, 1751a (pdf) Declaration of Domestic Partnership, 1925a (pdf) Declaration of Legal Ward as Eligible Dependent, 3028 (pdf) Declaration that Enrolled Dependent Meets IRS

    7. Customizable Computing at Datacenter Scale | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Facility Customizable Computing at Datacenter Scale Event Sponsor: Mathematics and Computer Science Division Seminar Start Date: May 2 2016 - 10:00am Building/Room: Building 240/Room 1416 Location: Argonne National Laboratory Speaker(s): Jason Cong Speaker(s) Title: UCLA Host: Marc Snir Customizable computing has been of interest to the research community for over three decades. The interest has intensified in the recent years as the power and energy become a significant limiting factor to

    8. HEP Science Network Requirements--Final Report

      SciTech Connect (OSTI)

      Bakken, Jon; Barczyk, Artur; Blatecky, Alan; Boehnlein, Amber; Carlson, Rich; Chekanov, Sergei; Cotter, Steve; Cottrell, Les; Crawford, Glen; Crawford, Matt; Dart, Eli; Dattoria, Vince; Ernst, Michael; Fisk, Ian; Gardner, Rob; Johnston, Bill; Kent, Steve; Lammel, Stephan; Loken, Stewart; Metzger, Joe; Mount, Richard; Ndousse-Fetter, Thomas; Newman, Harvey; Schopf, Jennifer; Sekine, Yukiko; Stone, Alan; Tierney, Brian; Tull, Craig; Zurawski, Jason

      2010-04-27

      The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009 ESnet and the Office of High Energy Physics (HEP), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by HEP. The International HEP community has been a leader in data intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years for HEP program related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans-Atlantic connectivity is more complex than network engineering for intra-US connectivity. This is because transoceanic circuits have lower reliability and longer repair times when compared with land-based circuits. Therefore, trans-Atlantic connectivity requires greater deployed bandwidth and diversity to ensure reliability and service continuity of the user-level required data transfer rates. (4) Trans-Atlantic traffic load and patterns must be monitored, and projections adjusted if necessary. There is currently a shutdown planned for the LHC in 2012 that may affect projections of trans-Atlantic bandwidth requirements. (5) There is a significant need for network tuning and troubleshooting during the establishment of new LHC Tier-2 and Tier-3 facilities. ESnet will work with the HEP community to help new sites effectively use the network. (6) SLAC is building the CCD camera for the LSST. This project will require significant bandwidth (up to 30 Gbps) to NCSA over the next few years. (7) The accelerator modeling program at SLAC could require the movement of 1 PB simulation data sets from the Leadership Computing Facilities at Argonne and Oak Ridge to SLAC. The data sets would need to be moved overnight, and moving 1 PB in eight hours requires more than 300 Gbps of throughput. This requirement is dependent on the deployment of analysis capabilities at SLAC, and is about five years away. (8) It is difficult to achieve high data transfer throughput to sites in China.
Projects that need to transfer data in or out of China are encouraged to deploy test and measurement infrastructure (e.g. perfSONAR) and allow time for performance tuning.
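      The 300 Gbps figure in item (7) follows from simple arithmetic, sketched below; the 10% overhead allowance is an assumption used only to show why the requirement is quoted as "more than" the raw average rate.

      # Back-of-envelope check of the "1 PB overnight needs > 300 Gbps" figure.
      PETABYTE_BITS = 1e15 * 8          # 1 PB expressed in bits (decimal petabyte)
      WINDOW_S = 8 * 3600               # an eight-hour overnight window

      raw_gbps = PETABYTE_BITS / WINDOW_S / 1e9
      effective_gbps = raw_gbps / 0.9   # assume roughly 90% usable throughput on the path

      print(f"raw average rate:   {raw_gbps:6.1f} Gbps")        # about 278 Gbps
      print(f"with ~10% overhead: {effective_gbps:6.1f} Gbps")  # just over 300 Gbps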

    9. Scalable Computational Chemistry: New Developments and Applications

      SciTech Connect (OSTI)

      Yuri Alexeev

      2002-12-31

      The computational part of the thesis is the investigation of titanium chloride (II) as a potential catalyst for the bis-silylation reaction of ethylene with hexachlorodisilane at different levels of theory. Bis-silylation is an important reaction for producing bis(silyl) compounds and new C-Si bonds, which can serve as monomers for silicon containing polymers and silicon carbides. Ab initio calculations on the steps involved in a proposed mechanism are presented. This choice of reactants allows them to study this reaction at reliable levels of theory without compromising accuracy. The calculations indicate that this is a highly exothermic barrierless reaction. The TiCl2 catalyst removes a 50 kcal/mol activation energy barrier required for the reaction without the catalyst. The first step is interaction of TiCl2 with ethylene to form an intermediate that is 60 kcal/mol below the energy of the reactants. This is the driving force for the entire reaction. Dynamic correlation plays a significant role because RHF calculations indicate that the net barrier for the catalyzed reaction is 50 kcal/mol. They conclude that divalent Ti has the potential to become an important industrial catalyst for silylation reactions. In the programming part of the thesis, parallelization of different quantum chemistry methods is presented. The parallelization of code is becoming an important aspect of quantum chemistry code development. Two trends contribute to it: the overall desire to study large chemical systems and the desire to employ highly correlated methods, which are usually computationally and memory expensive. In the presented distributed data algorithms, computation is parallelized and the largest arrays are evenly distributed among CPUs. First, the parallelization of the Hartree-Fock self-consistent field (SCF) method is considered. The SCF method is the most common starting point for more accurate calculations. The Fock build (a sub-step of SCF) from AO integrals is also often used to avoid MO integral computation. The presented distributed data SCF increases the size of chemical systems that can be calculated by using RHF and DFT. An important ab initio method to study bond formation and breaking as well as excited molecules is CASSCF. The presented distributed data CASSCF algorithm can significantly decrease computational time and memory requirements per node. Therefore, large CASSCF computations can be performed. The most time consuming operation in studying potential energy surfaces of reactions and chemical systems is the Hessian calculation. The distributed data parallelization of CPHF will allow scientists to carry out large analytic Hessian calculations.
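      The "largest arrays are evenly distributed among CPUs" pattern can be sketched with mpi4py. This is a generic illustration of distributed-data parallelism under assumed array contents, not the distributed-data code described in the thesis.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      N = 1_000_000                      # length of the (conceptually) global array
      lo = rank * N // size              # this rank's slice of the global index range
      hi = (rank + 1) * N // size

      local = np.arange(lo, hi, dtype=np.float64)   # only the local block is ever allocated
      local_sum = float(local.sum())                # local contribution (a stand-in for a Fock-like term)

      global_sum = comm.allreduce(local_sum, op=MPI.SUM)   # one collective; no large arrays are moved
      if rank == 0:
          print(f"{size} ranks, global sum = {global_sum:.3e}")

      Run with, for example, "mpiexec -n 4 python ddi_sketch.py" (the file name is just an example): each rank holds only its slice of the array, and only one scalar per rank crosses the network, which is the essence of keeping per-node memory requirements low.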

    10. Paul C. Messina | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      He led the Computational and Computer Science component of Caltech's research project funded by the Academic Strategic Alliances Program of the Accelerated Strategic Computing ...

    11. CLAMR (Compute Language Adaptive Mesh Refinement)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CLAMR (Compute Language Adaptive Mesh Refinement) CLAMR (Compute Language Adaptive Mesh Refinement) CLAMR (Compute Language Adaptive Mesh Refinement) is being developed as a DOE...

    12. Other World Computing | Open Energy Information

      Open Energy Info (EERE)

      Name: Other World Computing. Facility: Other World Computing. Sector: Wind energy. Facility Type: Community Wind. Facility Status: In Service...

    13. Computer_Vision

      Energy Science and Technology Software Center (OSTI)

      2002-10-04

      The Computer_Vision software performs object recognition using a novel multi-scale characterization and matching algorithm. To understand the multi-scale characterization and matching software, it is first necessary to understand some details of the Computer Vision (CV) Project. This project has focused on providing algorithms and software that provide an end-to-end toolset for image processing applications. At a high level, this end-to-end toolset focuses on 7 steps. The first steps are geometric transformations. 1) Image Segmentation. This step essentially classifies pixels in the input image as either being of interest or not of interest. We have also used GENIE segmentation output for this Image Segmentation step. 2) Contour Extraction (patent submitted). This takes the output of Step 1 and extracts contours for the blobs consisting of pixels of interest. 3) Constrained Delaunay Triangulation. This is a well-known geometric transformation that creates triangles inside the contours. 4) Chordal Axis Transform (CAT). This patented geometric transformation takes the triangulation output from Step 3 and creates a concise and accurate structural representation of a contour. From the CAT, we create a linguistic string, with associated metrical information, that provides a detailed structural representation of a contour. 5) Normalization. This takes an attributed linguistic string output from Step 4 and balances it. This ensures that the linguistic representation accurately represents the major sections of the contour. Steps 6 and 7 are implemented by the multi-scale characterization and matching software. 6) Multi-scale Characterization. This takes as input the attributed linguistic string output from Normalization. Rules from a context-free grammar are applied in reverse to create a tree-like representation for each contour. For example, one of the grammar's rules is L -> (LL). When an (LL) is seen in a string, a parent node is created that points to the four child symbols '(', 'L', 'L', and ')'. Levels in the tree can then be thought of as coarser (towards the root) or finer (towards the leaves) representations of the same contours. 7) Multi-scale Matching. Having a multi-scale characterization allows us to compare objects at a coarser level before matching at finer levels of detail. Matching at a coarser level not only increases the speed of the matching process (you're comparing fewer symbols), but also increases accuracy since small variations along contours do not significantly detract from two objects' similarity.
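      The reverse rule application in step 6 can be sketched with the single example rule quoted above, L -> (LL). The token handling below is a simplified assumption (real strings carry attributes and the grammar has many more rules), but it shows how repeated collapsing yields the coarser levels of the tree.

      def coarsen_once(symbols):
          """Collapse every non-overlapping occurrence of ['(', 'L', 'L', ')'] into 'L'."""
          out, i = [], 0
          while i < len(symbols):
              if symbols[i:i + 4] == ['(', 'L', 'L', ')']:
                  out.append('L')        # a parent node replaces its four children
                  i += 4
              else:
                  out.append(symbols[i])
                  i += 1
          return out

      def coarsen_to_root(symbols):
          """Repeatedly coarsen; each pass is one level of the multi-scale representation."""
          levels = [symbols]
          while True:
              nxt = coarsen_once(levels[-1])
              if nxt == levels[-1]:
                  break
              levels.append(nxt)
          return levels

      contour = ['(', '(', 'L', 'L', ')', '(', 'L', 'L', ')', ')']
      for depth, level in enumerate(coarsen_to_root(contour)):
          print(f"level {depth}: {' '.join(level)}")

      Matching can then start by comparing the short strings at the coarsest level and descend only where they agree, which is the speed and accuracy argument made in the abstract.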

    14. Chart of communications requirements | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Chart of communications requirements for BGE (PDF). More Documents ...

    15. Cluster computing software for GATE simulations

      SciTech Connect (OSTI)

      Beenhouwer, Jan de; Staelens, Steven; Kruecker, Dirk; Ferrer, Ludovic; D'Asseler, Yves; Lemahieu, Ignace; Rannou, Fernando R.

      2007-06-15

      Geometry and tracking (GEANT4) is a Monte Carlo package designed for high energy physics experiments. It is used as the basis layer for Monte Carlo simulations of nuclear medicine acquisition systems in GEANT4 Application for Tomographic Emission (GATE). GATE allows the user to realistically model experiments using accurate physics models and time synchronization for detector movement through a script language contained in a macro file. The downside of this high accuracy is long computation time. This paper describes a platform independent computing approach for running GATE simulations on a cluster of computers in order to reduce the overall simulation time. Our software automatically creates fully resolved, nonparametrized macros accompanied with an on-the-fly generated cluster specific submit file used to launch the simulations. The scalability of GATE simulations on a cluster is investigated for two imaging modalities, positron emission tomography (PET) and single photon emission computed tomography (SPECT). Due to a higher sensitivity, PET simulations are characterized by relatively high data output rates that create rather large output files. SPECT simulations, on the other hand, have lower data output rates but require a long collimator setup time. Both of these characteristics hamper scalability as a function of the number of CPUs. The scalability of PET simulations is improved here by the development of a fast output merger. The scalability of SPECT simulations is improved by greatly reducing the collimator setup time. Accordingly, these two new developments result in higher scalability for both PET and SPECT simulations and reduce the computation time to more practical values.
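      The splitting approach amounts to "one fully resolved macro per job plus a submit script". Everything below (file names, the placeholder macro body, the even split of primaries, the plain shell submit file) is an assumption for illustration; the actual software generates real GATE macro commands and scheduler-specific submit files.

      from pathlib import Path

      def write_job_macros(total_primaries, n_jobs, outdir="jobs"):
          out = Path(outdir)
          out.mkdir(exist_ok=True)
          per_job = total_primaries // n_jobs
          submit_lines = []
          for j in range(n_jobs):
              macro = out / f"job_{j:03d}.mac"
              # Placeholder body: a real macro would contain the experiment's fully resolved
              # GATE commands with this job's seed, primary count, and output file name.
              macro.write_text(
                  f"# job {j}: seed={1000 + j}, primaries={per_job}, output=result_{j:03d}\n"
              )
              submit_lines.append(f"Gate {macro.name}\n")
          (out / "submit.sh").write_text("#!/bin/sh\n" + "".join(submit_lines))
          return out

      print("wrote job files under", write_job_macros(total_primaries=10_000_000, n_jobs=8))

      Giving each job its own seed and its own output file is what makes the later merge step (the fast output merger mentioned above) straightforward.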

    16. THE ROCHE LIMIT FOR CLOSE-ORBITING PLANETS: MINIMUM DENSITY, COMPOSITION CONSTRAINTS, AND APPLICATION TO THE 4.2 hr PLANET KOI 1843.03

      SciTech Connect (OSTI)

      Rappaport, Saul; Sanchis-Ojeda, Roberto; Winn, Joshua N.; Rogers, Leslie A.; Levine, Alan E-mail: sar@mit.edu E-mail: larogers@caltech.edu

      2013-08-10

      The requirement that a planet must orbit outside of its Roche limit gives a lower limit on the planet's mean density. The minimum density depends almost entirely on the orbital period and is immune to systematic errors in the stellar properties. We consider the implications of this density constraint for the newly identified class of small planets with periods shorter than half a day. When the planet's radius is accurately known, this lower limit to the density can be used to restrict the possible combinations of iron and rock within the planet. Applied to KOI 1843.03, a 0.6 R⊕ planet with the shortest known orbital period of 4.245 hr, the planet's mean density must be ≳ 7 g cm^-3. By modeling the planetary interior subject to this constraint, we find that the composition of the planet must be mostly iron, with at most a modest fraction of silicates (≲ 30% by mass)
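      The period-only scaling quoted above follows from setting the orbital radius equal to the Roche limit and applying Kepler's third law. The sketch below uses the classical incompressible-fluid coefficient 2.44, which gives roughly 9 g/cm^3 for a 4.245 hr period; the somewhat lower ~7 g/cm^3 quoted in the abstract comes from the paper's more detailed interior models, so the coefficient here is only an approximation.

      import math

      G_CGS = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2

      def rho_min_cgs(period_hours, roche_coeff=2.44):
          # rho_min = 3 * pi * roche_coeff**3 / (G * P**2), depending only on the period
          period_s = period_hours * 3600.0
          return 3.0 * math.pi * roche_coeff**3 / (G_CGS * period_s**2)

      print(f"KOI 1843.03 (P = 4.245 hr): rho_min ~ {rho_min_cgs(4.245):.1f} g/cm^3")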

    17. Optimizing minimum free-energy crossing points in solution: Linear-response free energy/spin-flip density functional theory approach

      SciTech Connect (OSTI)

      Minezawa, Noriyuki

      2014-10-28

      Examining photochemical processes in solution requires understanding the solvent effects on the potential energy profiles near conical intersections (CIs). For that purpose, the CI point in solution is determined as the crossing between nonequilibrium free energy surfaces. In this work, the nonequilibrium free energy is described using the combined method of linear-response free energy and collinear spin-flip time-dependent density functional theory. The proposed approach reveals the solvent effects on the CI geometries of stilbene in an acetonitrile solution and those of thymine in water. Polar acetonitrile decreases the energy difference between the twisted minimum and twisted-pyramidalized CI of stilbene. For thymine in water, the hydrogen bond formation stabilizes significantly the CI puckered at the carbonyl carbon atom. The result is consistent with the recent simulation showing that the reaction path via this geometry is open in water. Therefore, the present method is a promising way of identifying the free-energy crossing points that play an essential role in photochemistry of solvated molecules.

    18. Python and computer vision

      SciTech Connect (OSTI)

      Doak, J. E.; Prasad, Lakshman

      2002-01-01

      This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project including existing C and C++ modules. For (l), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. And finally, (3) discusses the use of Boost Python to interact with the pre-existing C and C++ code that creates the CDTs and CATS, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.
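      For item (1), one way to picture "objects in the database as tree structures" is a parent-pointer table. The schema and data below are invented for illustration, and sqlite3 stands in for the project's MySQL database so the sketch is self-contained.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE node (
          id      INTEGER PRIMARY KEY,
          object  INTEGER NOT NULL,     -- which extracted blob this node belongs to
          parent  INTEGER,              -- NULL for the root of the object's tree
          symbol  TEXT NOT NULL         -- grammar symbol at this level, e.g. 'L' or '('
      )""")

      rows = [
          (1, 42, None, "L"),   # root of object 42's characterization tree
          (2, 42, 1, "("), (3, 42, 1, "L"), (4, 42, 1, "L"), (5, 42, 1, ")"),
      ]
      conn.executemany("INSERT INTO node VALUES (?, ?, ?, ?)", rows)

      # Recursive query: walk object 42's tree from the root downward.
      for node_id, parent, symbol in conn.execute(
          "WITH RECURSIVE t(id, parent, symbol) AS ("
          "  SELECT id, parent, symbol FROM node WHERE object = 42 AND parent IS NULL"
          "  UNION ALL"
          "  SELECT n.id, n.parent, n.symbol FROM node n JOIN t ON n.parent = t.id)"
          " SELECT id, parent, symbol FROM t"):
          print(node_id, parent, symbol)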

    19. Update of Acquisition Guide Chapter 6.1, "Competition Requirements"

      Broader source: Energy.gov [DOE]

      Policy Flash 2010-05, issued on October 15, 2009, provided an overview of Federal Acquisition Circular (FAC) 2005-37. In reference to Policy Flash 2010-05, the subject Acquisition Guide Chapter has been updated to prescribe Department of Energy implementing procedures pursuant to changes to Federal Acquisition Regulation 6.302-2, which now limits the length of contracts awarded noncompetitively under unusual and compelling urgency circumstances to the minimum contract period necessary to meet the requirements, and no longer than one year, unless the head of the agency determines that exceptional circumstances apply. The chapter is revised to state that the Senior Procurement Executive can make the determination that exceptional circumstances apply. It also includes additional guidance for when these exceptional circumstances apply.

    20. ESPC ENABLE Final Proposal Requirements

      Broader source: Energy.gov [DOE]

      Document describes the final proposal requirements for consideration by an energy service company (ESCO) for an agency’s Request for Quote/Notice of Opportunity or final proposal. If selected to perform a site investment grade audit, the ESCO will be required to present the findings from the IGA and a project price to the agency in the form of a final proposal.

    1. Bioinformatics Computing Consultant Position Available

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Bioinformatics Computing Consultant Position Available Bioinformatics Computing Consultant Position Available October 31, 2011 by Katie Antypas NERSC and the Joint Genome Institute (JGI) are searching for two individuals who can help biologists exploit advanced computing platforms. JGI provides production sequencing and genomics for the Department of Energy. These activities are critical to the DOE missions in areas related to clean energy generation and environmental characterization and

    2. computational-fluid-dynamics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Table of Contents (course, date, location): Advanced Hydraulic and Aerodynamic Analysis Using CFD (March 27-28, 2013; Argonne TRACC, Argonne, IL); Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis (March 21-22, 2012; Argonne TRACC, Argonne, IL); Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis (March 30-31, 2011; Argonne TRACC, Argonne, IL); Computational Hydraulics for Transportation Workshop (September 23-24, 2009; Argonne TRACC, West Chicago, IL)

    3. Efficient parallel global garbage collection on massively parallel computers

      SciTech Connect (OSTI)

      Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori

      1994-12-31

      On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient Garbage Collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) admits minimum pause time of ongoing computations, and (4) has been shown to scale up to 1024 node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. The two methods for confirming the arrival of pending messages are used: one counts numbers of messages and the other uses network `bulldozing.` Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, Fujitsu AP1000, reveals various favorable properties of the algorithm.
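      The "global weight counting scheme" belongs to the family of weighted reference counting techniques. The toy sketch below shows the general idea only, not the paper's exact protocol: copying a reference halves its weight locally, so no message to the owning node is needed, and only deleting a reference sends its weight home, which keeps message traffic low.

      class RemoteObject:
          def __init__(self):
              self.total_weight = 0       # sum of weights of all outstanding references

      class Ref:
          def __init__(self, obj, weight=64):
              self.obj, self.weight = obj, weight
              obj.total_weight += weight  # "creation message" bundled with the first reference

          def copy(self):                 # no message to the owner node on copy
              half = self.weight // 2
              self.weight -= half
              new = Ref.__new__(Ref)
              new.obj, new.weight = self.obj, half
              return new

          def delete(self):               # one "decrement" message carrying this weight
              self.obj.total_weight -= self.weight
              self.weight = 0

      obj = RemoteObject()
      a = Ref(obj)
      b = a.copy(); c = b.copy()
      for r in (a, b, c):
          r.delete()
      print("collectable:", obj.total_weight == 0)   # True once every weight has been returned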

    4. computation | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      Livermore National Laboratory (LLNL), announced her retirement last week after 15 years of leading Livermore's Computation Directorate. "Dona has successfully led a ...

    5. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      should have basic experience with a scientific computing language, such as C, C++, Fortran and with the LINUX operating system. Duration & Location The program will last ten...

    6. History | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      dedicated to enabling leading-edge computational capabilities to advance fundamental ... (ASCR) program within DOE's Office of Science, the ALCF is one half of the DOE ...

    7. Bioinformatics Computing Consultant Position Available

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      You can read more about the positions and apply at jobs.lbl.gov: Bioinformatics High Performance Computing Consultant (job number: 73194) and Software Developer for High...

    8. Towards a Real-Time Cluster Computing Infrastructure

      SciTech Connect (OSTI)

      Hui, Peter SY; Chikkagoudar, Satish; Chavarría-Miranda, Daniel; Johnston, Mark R.

      2011-11-01

      Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored, largely due to the fact that until now, there has not been a need for such an environment. In this paper, we motivate the need for a cluster computing infrastructure capable of supporting computation over large datasets in real-time. Our motivating example is an analytical framework to support the next generation North American power grid, which is growing both in size and complexity. With streaming sensor data in the future power grid potentially reaching rates on the order of terabytes per day, the task of analyzing this data subject to real-time guarantees becomes a daunting task which will require the power of high-performance cluster computing capable of functioning under real-time constraints. One specific challenge that such an environment presents is the need for real-time networked communication between cluster nodes. In this paper, we discuss the need for real-time high-performance cluster computation, along with our work-in-progress towards an infrastructure which will ultimately enable such an environment.
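      A quick rate check helps put "terabytes per day" in perspective; the ingest volume, sensor count, and sample size below are illustrative assumptions, not figures from the paper.

      TB_PER_DAY = 2.0                      # assumed aggregate ingest from grid sensors
      bytes_per_s = TB_PER_DAY * 1e12 / 86_400
      print(f"{TB_PER_DAY} TB/day  ~ {bytes_per_s / 1e6:.1f} MB/s sustained")

      sensors = 100_000                     # hypothetical meter/PMU count
      sample_bytes = 64                     # hypothetical size of one measurement record
      rate_hz = bytes_per_s / (sensors * sample_bytes)
      print(f"supports ~{rate_hz:.0f} samples/s per sensor at {sample_bytes} B/sample")

      Even at these modest assumed rates the stream must be partitioned across cluster nodes with bounded-latency communication between them, which is the real-time networking challenge the paper identifies.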

    9. Lawrence Livermore National Laboratory Emergency Response Capability Baseline Needs Assessment Requirement Document

      SciTech Connect (OSTI)

      Sharry, J A

      2009-12-30

      This revision of the LLNL Fire Protection Baseline Needs Assessment (BNA) was prepared by John A. Sharry, LLNL Fire Marshal and LLNL Division Leader for Fire Protection and reviewed by Martin Gresho, Sandia/CA Fire Marshal. The document follows and expands upon the format and contents of the DOE Model Fire Protection Baseline Capabilities Assessment document contained on the DOE Fire Protection Web Site, but only address emergency response. The original LLNL BNA was created on April 23, 1997 as a means of collecting all requirements concerning emergency response capabilities at LLNL (including response to emergencies at Sandia/CA) into one BNA document. The original BNA documented the basis for emergency response, emergency personnel staffing, and emergency response equipment over the years. The BNA has been updated and reissued five times since in 1998, 1999, 2000, 2002, and 2004. A significant format change was performed in the 2004 update of the BNA in that it was 'zero based.' Starting with the requirement documents, the 2004 BNA evaluated the requirements, and determined minimum needs without regard to previous evaluations. This 2010 update maintains the same basic format and requirements as the 2004 BNA. In this 2010 BNA, as in the previous BNA, the document has been intentionally divided into two separate documents - the needs assessment (1) and the compliance assessment (2). The needs assessment will be referred to as the BNA and the compliance assessment will be referred to as the BNA Compliance Assessment. The primary driver for separation is that the needs assessment identifies the detailed applicable regulations (primarily NFPA Standards) for emergency response capabilities based on the hazards present at LLNL and Sandia/CA and the geographical location of the facilities. The needs assessment also identifies areas where the modification of the requirements in the applicable NFPA standards is appropriate, due to the improved fire protection provided, the remote location and low population density of some the facilities. As such, the needs assessment contains equivalencies to the applicable requirements. The compliance assessment contains no such equivalencies and simply assesses the existing emergency response resources to the requirements of the BNA and can be updated as compliance changes independent of the BNA update schedule. There are numerous NFPA codes and standards and other requirements and guidance documents that address the subject of emergency response. These requirements documents are not always well coordinated and may contain duplicative or conflicting requirements or even coverage gaps. Left unaddressed, this regulatory situation results in frequent interpretation of requirements documents. Different interpretations can then lead to inconsistent implementation. This BNA addresses this situation by compiling applicable requirements from all identified sources (see Section 5) and analyzing them collectively to address conflict and overlap as applicable to the hazards presented by the LLNL and Sandia/CA sites (see Section 7). The BNA also generates requirements when needed to fill any identified gaps in regulatory coverage. 
Finally, the BNA produces a customized simple set of requirements, appropriate for the DOE protection goals, such as those defined in DOE O 420.1B, the hazard level, the population density, the topography, and the site layout at LLNL and Sandia/CA that will be used as the baseline requirements set - the 'baseline needs' - for emergency response at LLNL and Sandia/CA. A template approach is utilized to accomplish this evaluation for each of the nine topical areas that comprise the baseline needs for emergency response. The basis for conclusions reached in determining the baseline needs for each of the topical areas is presented in Sections 7.1 through 7.9. This BNA identifies only mandatory requirements and establishes the minimum performance criteria. The minimum performance criteria may not be the level of performance desired by Lawrence Livermore National Laboratory or Sandia/CA. Performance at levels greater than those established by this document will provide a higher level of fire safety, fire protection, or loss control and is encouraged. In Section 7, Determination of Baseline Needs, a standard template was used to describe the process, which involves separating basic emergency response needs into nine separate services. Each service being evaluated contains a determination of minimum requirements, an analysis of the requirements, a statement of minimum performance, and finally a summary of the minimum performance. The requirement documents, listed in Section 5, are those laws, regulations, DOE Directives, contractual obligations, or LLNL policies that establish service levels. The determination of minimum requirements section explains the rationale or method used to determine the minimum requirements.

    10. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      More Documents & Publications Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project Integrated Computational Materials Engineering (ICME) for ...

    11. Automotive Turbocharging: Industrial Requirements and Technology...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Turbocharging: Industrial Requirements and Technology Developments Automotive Turbocharging: Industrial Requirements and Technology Developments Significant improvements in ...

    12. Computation and control with neural nets

      SciTech Connect (OSTI)

      Corneliusen, A.; Terdal, P.; Knight, T.; Spencer, J.

      1989-10-04

      As energies have increased exponentially with time so have the size and complexity of accelerators and control systems. NN may offer the kinds of improvements in computation and control that are needed to maintain acceptable functionality. For control their associative characteristics could provide signal conversion or data translation. Because they can do any computation such as least squares, they can close feedback loops autonomously to provide intelligent control at the point of action rather than at a central location that requires transfers, conversions, hand-shaking and other costly repetitions like input protection. Both computation and control can be integrated on a single chip, printed circuit or an optical equivalent that is also inherently faster through full parallel operation. For such reasons one expects lower costs and better results. Such systems could be optimized by integrating sensor and signal processing functions. Distributed nets of such hardware could communicate and provide global monitoring and multiprocessing in various ways e.g. via token, slotted or parallel rings (or Steiner trees) for compatibility with existing systems. Problems and advantages of this approach such as an optimal, real-time Turing machine are discussed. Simple examples are simulated and hardware implemented using discrete elements that demonstrate some basic characteristics of learning and parallelism. Future microprocessors' are predicted and requested on this basis. 19 refs., 18 figs.

    13. Computer modeling of the global warming effect

      SciTech Connect (OSTI)

      Washington, W.M.

      1993-12-31

      The state of knowledge of global warming will be presented and two aspects examined: observational evidence and a review of the state of computer modeling of climate change due to anthropogenic increases in greenhouse gases. Observational evidence, indeed, shows global warming, but it is difficult to prove that the changes are unequivocally due to the greenhouse-gas effect. Although observational measurements of global warming are subject to ``correction,`` researchers are showing consistent patterns in their interpretation of the data. Since the 1960s, climate scientists have been making their computer models of the climate system more realistic. Models started as atmospheric models and, through the addition of oceans, surface hydrology, and sea-ice components, they then became climate-system models. Because of computer limitations and the limited understanding of the degree of interaction of the various components, present models require substantial simplification. Nevertheless, in their present state of development climate models can reproduce most of the observed large-scale features of the real system, such as wind, temperature, precipitation, ocean current, and sea-ice distribution. The use of supercomputers to advance the spatial resolution and realism of earth-system models will also be discussed.

    14. Radiant energy required for infrared neural stimulation

      SciTech Connect (OSTI)

      Tan, Xiaodong; Rajguru, Suhrud; Young, Hunter; Xia, Nan; Stock, Stuart R.; Xiao, Xianghui; Richter, Claus-Peter

      2015-08-25

      Infrared neural stimulation (INS) has been proposed as an alternative method to electrical stimulation because of its spatial selective stimulation. Independent of the mechanism for INS, to translate the method into a device it is important to determine the energy for stimulation required at the target structure. Custom-designed, flat and angle polished fibers, were used to deliver the photons. By rotating the angle polished fibers, the orientation of the radiation beam in the cochlea could be changed. INS-evoked compound action potentials and single unit responses in the central nucleus of the inferior colliculus (ICC) were recorded. X-ray computed tomography was used to determine the orientation of the optical fiber. Maximum responses were observed when the radiation beam was directed towards the spiral ganglion neurons (SGNs), whereas little responses were seen when the beam was directed towards the basilar membrane. The radiant exposure required at the SGNs to evoke compound action potentials (CAPs) or ICC responses was on average 18.9 ± 12.2 or 10.3 ± 4.9 mJ/cm2, respectively. For cochlear INS it has been debated whether the radiation directly stimulates the SGNs or evokes a photoacoustic effect. The results support the view that a direct interaction between neurons and radiation dominates the response to INS.
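      To relate the reported radiant exposures to per-pulse energies, one only needs an assumed spot size; the 200 µm spot diameter below is an illustrative assumption, not a value from the study.

      import math

      def pulse_energy_uJ(radiant_exposure_mj_cm2, spot_diameter_um):
          # energy = radiant exposure * illuminated area
          area_cm2 = math.pi * (spot_diameter_um * 1e-4 / 2.0) ** 2
          return radiant_exposure_mj_cm2 * area_cm2 * 1000.0    # mJ -> uJ

      for H in (10.3, 18.9):    # mJ/cm^2 values reported for ICC and CAP responses
          print(f"{H:5.1f} mJ/cm^2 over a 200 um spot ~ {pulse_energy_uJ(H, 200.0):.1f} uJ/pulse")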

    15. Radiant energy required for infrared neural stimulation

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Tan, Xiaodong; Rajguru, Suhrud; Young, Hunter; Xia, Nan; Stock, Stuart R.; Xiao, Xianghui; Richter, Claus-Peter

      2015-08-25

      Infrared neural stimulation (INS) has been proposed as an alternative method to electrical stimulation because of its spatially selective stimulation. Independent of the mechanism for INS, to translate the method into a device it is important to determine the energy required for stimulation at the target structure. Custom-designed, flat and angle-polished fibers were used to deliver the photons. By rotating the angle-polished fibers, the orientation of the radiation beam in the cochlea could be changed. INS-evoked compound action potentials and single-unit responses in the central nucleus of the inferior colliculus (ICC) were recorded. X-ray computed tomography was used to determine the orientation of the optical fiber. Maximum responses were observed when the radiation beam was directed towards the spiral ganglion neurons (SGNs), whereas little response was seen when the beam was directed towards the basilar membrane. The radiant exposure required at the SGNs to evoke compound action potentials (CAPs) or ICC responses was on average 18.9 ± 12.2 or 10.3 ± 4.9 mJ/cm², respectively. For cochlear INS it has been debated whether the radiation directly stimulates the SGNs or evokes a photoacoustic effect. The results support the view that a direct interaction between neurons and radiation dominates the response to INS.

    16. Computation Directorate 2008 Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2009-03-25

      Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

    17. High-Performance Computing for Advanced Smart Grid Applications

      SciTech Connect (OSTI)

      Huang, Zhenyu; Chen, Yousu

      2012-07-06

      The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed, and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce amounts of data several orders of magnitude larger than before. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high-performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in the smart grid and the need for high-performance computing, and presents examples of how high-performance computing might be used for future smart grid operation and planning.
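
      As a toy illustration of the "pull data in, analyze, put information out" loop described above, the sketch below maintains a sliding-window average over simulated smart-meter readings. The window length, sampling rate, and load figures are arbitrary assumptions, not values from the chapter.

        from collections import deque
        import random

        window = deque(maxlen=60)          # assume one reading per second, 60 s window

        def ingest(reading_kw: float) -> float:
            """Add one meter reading and return the current windowed average load."""
            window.append(reading_kw)
            return sum(window) / len(window)

        random.seed(1)
        for t in range(300):                            # five minutes of simulated data
            avg = ingest(5.0 + random.gauss(0.0, 0.5))  # ~5 kW base load with noise
            if t % 60 == 59:
                print(f"t={t+1:3d}s  windowed average = {avg:.2f} kW")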

    18. Meeting Federal Energy Security Requirements

      Broader source: Energy.gov [DOE]

      Presentation—given at the Fall 2012 Federal Utility Partnership Working Group (FUPWG) meeting—discusses the opportunity to increase the scope of federal-utility partnerships for meeting energy security requirements.

    19. Requirements for GNEP Transmutation Fuels

      SciTech Connect (OSTI)

      D. C. Crawford; M. K. Meyer; S. L. Hayes

      2007-03-01

      The purpose of this document is to provide a baseline set of requirements to guide fuel fabrication development and irradiation testing performed as part of the AFCRD Transmutation Fuel Development Program. This document can be considered a supplement to the GNEP TRU Fuel Development and Qualification Plan, and will be revised as necessary to maintain a documented set of fuel testing objectives and requirements consistent with programmatic decisions and advances in technical knowledge.

    20. Scalable Computation of Streamlines on Very Large Datasets

      SciTech Connect (OSTI)

      Pugmire, David; Childs, Hank; Garth, Christoph; Ahern, Sean; Weber, Gunther H.

      2009-09-01

      Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We perform performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme is able to perform well in different settings.
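
      The paper's subject, tracing a curve that stays tangent to a vector field, can be sketched with a fixed-step fourth-order Runge-Kutta integrator as below. The circular test field and step size are illustrative assumptions; the algorithms in the paper add the parallel decomposition and I/O handling that are omitted here.

        import numpy as np

        def velocity(p):
            """Analytic test field: solid-body rotation about the origin."""
            x, y = p
            return np.array([-y, x])

        def streamline(seed, h=0.05, steps=200):
            """Integrate a streamline from 'seed' with classical RK4."""
            pts = [np.asarray(seed, dtype=float)]
            for _ in range(steps):
                p = pts[-1]
                k1 = velocity(p)
                k2 = velocity(p + 0.5 * h * k1)
                k3 = velocity(p + 0.5 * h * k2)
                k4 = velocity(p + h * k3)
                pts.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
            return np.array(pts)

        curve = streamline([1.0, 0.0])
        print(curve[-1])   # remains close to the unit circle, as expected for this field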

    1. Multicore Challenges and Benefits for High Performance Scientific Computing

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Nielsen, Ida M.B.; Janssen, Curtis L.

      2008-01-01

      Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction-level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
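
      A minimal sketch of the multi-threaded half of the hybrid model described above: a blocked matrix multiply whose row blocks are dispatched to a thread pool. The block size and worker count are arbitrary assumptions, and the message-passing layer that would distribute blocks across nodes is omitted.

        from concurrent.futures import ThreadPoolExecutor
        import numpy as np

        def blocked_matmul(a, b, block=64, workers=4):
            """Multiply a @ b by computing independent row blocks in a thread pool."""
            n = a.shape[0]
            c = np.zeros((n, b.shape[1]))

            def do_block(start):
                stop = min(start + block, n)
                c[start:stop] = a[start:stop] @ b   # NumPy releases the GIL here

            with ThreadPoolExecutor(max_workers=workers) as pool:
                list(pool.map(do_block, range(0, n, block)))
            return c

        a = np.random.rand(256, 256)
        b = np.random.rand(256, 256)
        assert np.allclose(blocked_matmul(a, b), a @ b)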

    2. Federal and State Ethanol and Biodiesel Requirements (released in AEO2007)

      Reports and Publications (EIA)

      2007-01-01

      The Energy Policy Act of 2005 requires that the use of renewable motor fuels be increased from the 2004 level of just over 4 billion gallons to a minimum of 7.5 billion gallons in 2012, after which the requirement grows at a rate equal to the growth of the gasoline pool. The law does not require that every gallon of gasoline or diesel fuel be blended with renewable fuels. Refiners are free to use renewable fuels, such as ethanol and biodiesel, in geographic regions and fuel formulations that make the most sense, as long as they meet the overall standard. Conventional gasoline and diesel can be blended with renewables without any change to the petroleum components, although fuels used in areas with air quality problems are likely to require adjustment to the base gasoline or diesel fuel if they are to be blended with renewables.

    3. Katrin Heitmann DOE HEP/ASCR Exascale Requirements Review

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Katrin Heitmann DOE HEP/ASCR Exascale Requirements Review June 10, 2015 Computational Cosmology Katrin Heitmann, Los Alamos National Laboratory Benasque Cosmology Workshop, August 2010 Roles of Cosmological Simulations in DE Survey Science * First part of end-to-end simulation * Control of systematics (1) Cosmology simulations and the survey (2) Solving the Inverse Problem from the LSST Science Book Cosmology Mock catalogs Atmosphere Optics Detector Images * Exploring fundamental physics

    4. DOE SC Exascale Requirements Review: High Energy Physics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SC Exascale Requirements Review: High Energy Physics Bethesda Hyatt, June 10, 2015 Jim Siegrist Associate Director for High Energy Physics Office of Science, U.S. Department of Energy HEP Computing and Data Challenges * What's new? * In May 2014, the U.S. particle physics community updated its vision for the future - The P5 (Particle Physics Project Prioritization Panel) report presents a strategy for the next decade and beyond that enables discovery and maintains our position as a global leader

    5. Introduction to computers: Reference guide

      SciTech Connect (OSTI)

      Ligon, F.V.

      1995-04-01

      The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical careers (SET). Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation for this introduction to computer course.

    6. GPU Computing - Dirac.pptx

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GPU Computing with Dirac, Hemant Shukla. Architectural differences (ALU, cache, DRAM, control logic; CPU vs. GPU): GPU: 512 cores, 10s to 100s of threads per core, latency is hidden by fast context switching. CPU: fewer than 20 cores, 1-2 threads per core, latency is hidden by a large cache. Programming models: CUDA (Compute Unified Device Architecture), OpenCL, Microsoft's DirectCompute. Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, MATLAB, IDL, and

    7. HEP/NP Requirements Review 2013

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    8. Power throttling of collections of computing elements

      DOE Patents [OSTI]

      Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding; Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

      2011-08-16

      An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
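
      In the spirit of the patented arrangement, though not taken from it, the sketch below shows a trivial control loop in which a controller reads simulated per-node power sensors and lowers a throttle setting whenever the aggregate draw exceeds a cap. All of the numbers are illustrative assumptions.

        import random

        POWER_CAP_W = 900.0          # assumed cap for the collection of nodes
        throttle = 1.0               # 1.0 = full speed; lower values reduce power draw

        def read_sensors(n_nodes=8):
            """Simulated per-node power draw, roughly proportional to the throttle."""
            return [throttle * random.uniform(100.0, 130.0) for _ in range(n_nodes)]

        random.seed(2)
        for step in range(10):
            total = sum(read_sensors())
            if total > POWER_CAP_W:
                throttle = max(0.5, throttle * 0.95)   # back off when over the cap
            else:
                throttle = min(1.0, throttle * 1.02)   # recover headroom otherwise
            print(f"step {step}: total={total:6.1f} W  throttle={throttle:.2f}")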

    9. Authorization basis requirements comparison report

      SciTech Connect (OSTI)

      Brantley, W.M.

      1997-08-18

      The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.

    10. Directives Requiring Additional Documentation - DOE Directives,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Delegations, and Requirements Requiring Additional Documentation by Website Administrator. PDF document: DirectivesRequiringAdditionalDocumentation (1).pdf, 35 KB.

    11. A network security case study; The Los Alamos National Laboratory integrated computer network

      SciTech Connect (OSTI)

      Dreicer, J.S.; Stoltz, L.

      1991-01-01

      This paper reports on a study to validate the Graphical Network Representation (GRPHREP) model, which is being conducted on the Los Alamos National Laboratory Integrated Computer Network (ICN). The GRPHREP model is a software system application based on graph theory and object-oriented programming methodologies. It codified the Department of Energy (DOE) Order 5637.1, which is concerned with classified computer security policy, restrictions, and requirements. The Los Alamos ICN is required to control access to and support large-scale scientific and administrative computing. Thus, we felt that this large, complex, and dynamic network would provide a good test for the graphical and functional capabilities of the model. Furthermore, the ICN is composed of multiple partitions that reflect the sensitivity and classification of the computation (data) and designate the required clearance level for the user.

    12. Johnson Noise Thermometry System Requirements

      SciTech Connect (OSTI)

      Britton Jr, Charles L; Roberts, Michael; Ezell, N Dianne Bull; Qualls, A L; Holcomb, David Eugene

      2013-01-01

      This document is intended to capture the requirements for the architecture of the developmental electronics for the ORNL-led drift-free Johnson Noise Thermometry (JNT) project conducted under the Instrumentation, Controls, and Human-Machine Interface (ICHMI) research pathway of the U.S. Department of Energy (DOE) Advanced Small Modular Reactor (SMR) Research and Development (R&D) program. The requirements include not only the performance of the system but also the allowable measurement environment of the probe and the allowable physical environment of the associated electronics. A more extensive project background including the project rationale is available in the initial project report [1].

    13. Quality Assurance Requirements and Description

      Energy Savers [EERE]

      Office of Civilian Radioactive Waste Management (OCRWM), Quality Assurance Requirements and Description, DOE/RW-0333P, Revision 20. Effective Date: 10-01-2008. Larry Newman, Director, Office of Quality Assurance; Edward F. Sproat III, Director, Office of Civilian Radioactive Waste Management. Quality Assurance Policy.

    14. Buddy Tag CONOPS and Requirements.

      SciTech Connect (OSTI)

      Brotz, Jay Kristoffer; Deland, Sharon M.

      2015-12-01

      This document defines the concept of operations (CONOPS) and the requirements for the Buddy Tag, which is conceived and designed in collaboration between Sandia National Laboratories and Princeton University under the Department of State Key Verification Assets Fund. The CONOPS describes how the tags are used to support verification of treaty limitations and is only defined to the extent necessary to support a tag design. The requirements define the necessary functions and desired non-functional features of the Buddy Tag at a high level.

    15. Project X functional requirements specification

      SciTech Connect (OSTI)

      Holmes, S.D.; Henderson, S.D.; Kephart, R.; Kerby, J.; Kourbanis, I.; Lebedev, V.; Mishra, S.; Nagaitsev, S.; Solyak, N.; Tschirhart, R.; /Fermilab

      2012-05-01

      Project X is a multi-megawatt proton facility being developed to support a world-leading program in Intensity Frontier physics at Fermilab. The facility is designed to support programs in elementary particle and nuclear physics, with possible applications to nuclear energy research. A Functional Requirements Specification has been developed in order to establish performance criteria for the Project X complex in support of these multiple missions, and to assure that the facility is designed with sufficient upgrade capability to provide U.S. leadership for many decades to come. This paper will briefly review the previously described Functional Requirements, and then discuss their recent evolution.

    16. SSRL Computer Account Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SSRL/LCLS Computer Account Request Form, August 2009. Fill in this form and sign the security statement mentioned at the bottom of this page to obtain an account. Fields: Your Name, Institution, Mailing Address, Email Address, Telephone.

    17. Quantum Computing: Solving Complex Problems

      ScienceCinema (OSTI)

      DiVincenzo, David [IBM Watson Research Center

      2009-09-01

      One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

    18. Filtration theory using computer simulations

      SciTech Connect (OSTI)

      Bergman, W.; Corey, I.

      1997-01-01

      We have used commercially available fluid dynamics codes based on Navier-Stokes theory and the Langevin particle equation of motion to compute the particle capture efficiency and pressure drop through selected two- and three-dimensional fiber arrays. The approach we used was to first compute the air velocity vector field throughout a defined region containing the fiber matrix. The particle capture in the fiber matrix is then computed by superimposing the Langevin particle equation of motion over the flow velocity field. Using the Langevin equation combines the particle Brownian motion, inertia and interception mechanisms in a single equation. In contrast, most previous investigations treat the different capture mechanisms separately. We have computed the particle capture efficiency and the pressure drop through one 2-D and two 3-D fiber matrix elements.
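
      A bare-bones version of the superposition the abstract describes: a particle advected through a precomputed velocity field with a drag term plus a Brownian kick, per a discretized Langevin step. The uniform test field, time step, and coefficients are illustrative assumptions, and no fiber-capture test is included.

        import numpy as np

        rng = np.random.default_rng(3)
        dt, drag, diffusion = 1e-3, 50.0, 1e-3   # assumed step size and coefficients

        def flow(p):
            """Assumed background air velocity: uniform flow in +x."""
            return np.array([1.0, 0.0])

        pos = np.zeros(2)
        vel = np.zeros(2)
        for _ in range(1000):
            # Langevin step: relax toward the local fluid velocity, then add Brownian noise.
            vel += drag * (flow(pos) - vel) * dt
            vel += np.sqrt(2.0 * diffusion * dt) * rng.normal(size=2)
            pos += vel * dt

        print("final particle position:", pos)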

    19. Events | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2:00 PM: Finding Multiple Local Minima of Computationally Expensive Simulations. Jeffery Larson, Postdoctoral Appointee, MCS. Building 240, Room 4301.

    20. SSRL Computer Account Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SSRL/LCLS Computer Account Request Form August 2009 Fill in this form and sign the security statement mentioned at the bottom of this page to obtain an account. Your Name:...

    1. Computing at SSRL Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      contents you are looking for have moved. You will be redirected to the new location automatically in 5 seconds. Please bookmark the correct page at http://www-ssrl.slac.stanford.edu/content/staff-resources/computer-networking-group

    2. Tukey | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Tukey The primary purpose of Tukey is to analyze and visualize data produced on Mira. Equipped with state-of-the-art graphics processing units (GPUs), Tukey converts computational data from Mira into high-resolution visual representations. The resulting images, videos, and animations help users to better analyze and understand the data generated by

    3. Vesta | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Vesta Vesta is the ALCF's test and development platform, serving as a launching pad for researchers planning to use Mira. Vesta has the same architecture as Mira, but on a much smaller scale (two computer racks compared to Mira's 48 racks). This system enables researchers to debug and scale up codes for the Blue Gene/Q architecture in

    4. Automatic computation of transfer functions

      DOE Patents [OSTI]

      Atcitty, Stanley; Watson, Luke Dale

      2015-04-14

      Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
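
      To make the idea concrete in a way the patent abstract only gestures at, the sketch below evaluates the transfer function of a single RC low-pass stage, H(s) = 1 / (1 + sRC), directly from component values. The component values are assumptions, and this is a generic textbook calculation rather than the patented netlist-matrix method.

        import numpy as np

        R, C = 1.0e3, 1.0e-6            # assumed 1 kilo-ohm, 1 microfarad

        def transfer(freq_hz):
            """Magnitude and phase of H(s) = 1 / (1 + sRC) at s = j*2*pi*f."""
            s = 1j * 2 * np.pi * freq_hz
            h = 1.0 / (1.0 + s * R * C)
            return abs(h), np.degrees(np.angle(h))

        for f in (10, 159, 1000, 10000):          # 159 Hz is the -3 dB corner, 1/(2*pi*RC)
            mag, phase = transfer(f)
            print(f"{f:6d} Hz: |H| = {mag:.3f}, phase = {phase:6.1f} deg")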

    5. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    6. computational-hydraulics-for-transportation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Transportation Workshop, Sept. 23-24, 2009, Argonne TRACC, Dr. Steven Lottes. Announcement: The Transportation Research and Analysis Computing Center at Argonne National Laboratory will hold a workshop on the use of computational hydraulics for transportation applications. The goals of the workshop are: Bring together people who are using or would benefit from the use of high performance cluster

    7. Computer Assisted Virtual Environment - CAVE

      ScienceCinema (OSTI)

      Erickson, Phillip; Podgorney, Robert; Weingartner, Shawn; Whiting, Eric

      2014-06-09

      Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

    8. Secure computing for the 'Everyman'

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Secure computing for the 'Everyman' Secure computing for the 'Everyman' If implemented on a wide scale, quantum key distribution technology could ensure truly secure commerce, banking, communications and data transfer. September 2, 2014 This small device developed at Los Alamos National Laboratory uses the truly random spin of light particles as defined by laws of quantum mechanics to generate a random number for use in a cryptographic key that can be used to securely transmit information

    9. Computational Sciences and Engineering Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Sciences and Engineering Division is a major research division at the Department of Energy's Oak Ridge National Laboratory. CSED develops and applies creative information technology and modeling and simulation research solutions for National Security and National Energy Infrastructure needs. The mission of the Computational Sciences and Engineering Division is to enhance the country's capabilities in achieving important objectives in the areas of national defense, homeland

    10. National Ignition Facility sub-system design requirements integrated timing system SSDR 1.5.3

      SciTech Connect (OSTI)

      Wiedwald, J.; Van Aersau, P.; Bliss, E.

      1996-08-26

      This System Design Requirement document establishes the performance, design, development, and test requirements for the Integrated Timing System, WBS 1.5.3 which is part of the NIF Integrated Computer Control System (ICCS). The Integrated Timing System provides all temporally-critical hardware triggers to components and equipment in other NIF systems.

    11. DOE Directives, Delegations, and Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    12. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

      SciTech Connect (OSTI)

      Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

      2010-08-01

      Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

    13. UCNI Review Requirement | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      UCNI Review Requirement UCNI Review Requirement What are the requirements for reviewing a document that may contain UCNI? 1017.15(a) requires you to have a document that may ...

    14. Advanced Simulation and Computing and Institutional R&D Programs | National

      National Nuclear Security Administration (NNSA)

      Nuclear Security Administration Programs Advanced Simulation and Computing and Institutional R&D Programs The Advanced Simulation and Computing (ASC) Program supports the Department of Energy's National Nuclear Security Administration (DOE/NNSA) Defense Programs' use of simulation-based evaluation of the nation's nuclear weapons stockpile. The ASC Program is responsible for providing the simulation tools and computing environments required to qualify and certify the nation's nuclear

    15. A directory service for configuring high-performance distributed computations

      SciTech Connect (OSTI)

      Fitzgerald, S.; Kesselman, C.; Foster, I.

      1997-08-01

      High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.
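
      As a loose sketch of the kind of lookup such a directory enables, not of the Metacomputing Directory Service itself, the snippet below stores resource records as dictionaries and filters them by attribute, roughly the way an LDAP-style query narrows a search. All record fields and values are invented for illustration.

        # Illustrative resource records; the attribute names are assumptions.
        resources = [
            {"name": "cluster-a", "cpus": 512, "network": "10GbE", "state": "up"},
            {"name": "cluster-b", "cpus": 2048, "network": "InfiniBand", "state": "up"},
            {"name": "viz-node", "cpus": 64, "network": "10GbE", "state": "down"},
        ]

        def query(**criteria):
            """Return records whose attributes satisfy every (key, predicate) pair."""
            return [r for r in resources
                    if all(pred(r.get(k)) for k, pred in criteria.items())]

        big_and_up = query(cpus=lambda c: c >= 512, state=lambda s: s == "up")
        print([r["name"] for r in big_and_up])     # ['cluster-a', 'cluster-b']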

    16. Surveillance & Maintenance: The Requirements Based Surveillance...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Surveillance & Maintenance: The Requirements Based Surveillance and Maintenance Review Process (RBSM) Surveillance & Maintenance: The Requirements Based Surveillance and ...

    17. Updated Reporting Requirement Checklists and Research Performance...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Reporting Requirement Checklists and Research Performance Progress Report (RPPR) Updated Reporting Requirement Checklists and Research Performance Progress Report (RPPR) Policy ...

    18. Impact of Rate Design Alternatives on Residential Solar Customer Bills. Increased Fixed Charges, Minimum Bills and Demand-based Rates

      SciTech Connect (OSTI)

      Bird, Lori; Davidson, Carolyn; McLaren, Joyce; Miller, John

      2015-09-01

      With rapid growth in energy efficiency and distributed generation, electric utilities are anticipating stagnant or decreasing electricity sales, particularly in the residential sector. Utilities are increasingly considering alternative rates structures that are designed to recover fixed costs from residential solar photovoltaic (PV) customers with low net electricity consumption. Proposed structures have included fixed charge increases, minimum bills, and increasingly, demand rates - for net metered customers and all customers. This study examines the electricity bill implications of various residential rate alternatives for multiple locations within the United States. For the locations analyzed, the results suggest that residential PV customers offset, on average, between 60% and 99% of their annual load. However, roughly 65% of a typical customer's electricity demand is non-coincidental with PV generation, so the typical PV customer is generally highly reliant on the grid for pooling services.
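
      To show how the rate structures compared in the study change a bill arithmetically, the sketch below prices one month of net consumption under a plain volumetric rate, an added fixed charge, and a minimum bill. Every number is an illustrative assumption rather than a figure from the report.

        net_kwh = 120.0          # assumed monthly net consumption after PV generation
        rate = 0.13              # $/kWh, assumed volumetric rate
        fixed_charge = 15.0      # $, assumed monthly fixed charge alternative
        minimum_bill = 25.0      # $, assumed minimum bill alternative

        volumetric_only = net_kwh * rate
        with_fixed = fixed_charge + net_kwh * rate
        with_minimum = max(minimum_bill, net_kwh * rate)

        print(f"volumetric only : ${volumetric_only:6.2f}")
        print(f"fixed charge    : ${with_fixed:6.2f}")
        print(f"minimum bill    : ${with_minimum:6.2f}")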

    19. Standard for Communicating Waste Characterization and DOT Hazard Classification Requirements for Low Specific Activity Materials and Surface Contaminated Objects

      Energy Savers [EERE]

      Work Specifications for Single-Family Home Energy Upgrades Summary The U.S. Department of Energy (DOE), the National Renewable Energy Laboratory (NREL) and numer- ous industry stakeholders developed the Standard Work Specifications for Single-Family Home Energy Upgrades to define the minimum requirements for high- quality residential energy upgrades. The Standard Work Specifications for Single-Family Home Energy Upgrades is the first of three documents that will be published in 2012 and 2013 as

    20. Computer-based and web-based radiation safety training

      SciTech Connect (OSTI)

      Owen, C., LLNL

      1998-03-01

      The traditional approach to delivering radiation safety training has been to provide a stand-up lecture on the topic, with the possible aid of video, and to repeat the same material periodically. New approaches to meeting training requirements are needed to address the advent of flexible work hours and telecommuting, and to better accommodate individuals learning at their own pace. Computer-based and web-based radiation safety training can provide this alternative. Computer-based and web-based training is an interactive form of learning that the student controls, resulting in enhanced and focused learning at a time most often chosen by the student.

    1. Computational Quantum Chemistry at the RCC | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Facility Computational Quantum Chemistry at the RCC Start Date: May 12 2016 - 2:00pm to 3:30pm Building/Room: Kathleen A. Zar Room, John Crerar Library Location: University of Chicago Speaker(s): Jonathan Skone Speaker(s) Title: Scientific Programming Consultant, Research Computing Center Event Website: https://training.uchicago.edu/course_detail.cfm?course_id=1652 This workshop is meant to guide those less familiar with quantum chemistry software in setting themselves up quickly to begin

    2. Travel Requirements - ITER (June 2014

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Travel Requirements - ITER (June 2014) Prior to any travel under this subcontract, the Seller shall submit their request to travel with the following information to the Technical Project Officer (TPO) for approval, with a copy to the identified US ITER Project Office Travel Administrative Coordinator (TAC), via email: name of traveler as it appears on passport; e-mail address of traveler; dates of travel; purpose of travel; business city; date business begins; and date business ends. The TAC for

    3. Computer Accounts | Stanford Synchrotron Radiation Lightsource

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts Each user group must have a computer account. Additionally, all persons using these accounts are responsible for understanding and complying with the terms outlined in the "Use of SLAC Information Resources". Links are provided below for computer account forms and the computer security agreement which must be completed and sent to the appropriate contact person. SSRL does not charge for use of its computer systems. Forms X-ray/VUV Computer Account Request Form

    4. Energy and crude oil input requirements for the production of reformulated gasolines

      SciTech Connect (OSTI)

      Singh, M.; McNutt, B.

      1993-11-01

      The energy and crude oil requirements for the production of reformulated gasolines (RFG) are estimated. Both the energy and crude oil embodied in the final product and the process energy required to manufacture the RFG and its components are included. The effects on energy and crude oil use of employing various oxygenates to meet the minimum oxygen content level required by the Clean Air Act Amendments are evaluated. The analysis illustrates that production of RFG requires more total energy than that of conventional gasoline but uses less crude oil. The energy and crude oil use requirements of the different RFGs vary considerably. For the same emissions performance level, RFG with ethanol requires substantially more total energy and crude oil than RFG with MTBE or ETBE. A specific proposal by the EPA designed to allow the use of ethanol in RFG would increase the total energy required to produce RFG by 2% and the total crude oil required by 2.0 to 2.5% over that for the base RFG with MTBE.

    5. Energy and crude oil input requirements for the production of reformulated gasolines

      SciTech Connect (OSTI)

      Singh, M.; McNutt, B.

      1993-10-01

      The energy and crude oil requirements for the production of reformulated gasoline (RFG) are estimated. The scope of the study includes both the energy and crude oil embodied in the final product and the process energy required to manufacture the RFG and its components. The effects on energy and crude oil use of employing various oxygenates to meet the minimum oxygen-content level required by the Clean Air Act Amendments are evaluated. The analysis shows that production of RFG requires more total energy, but uses less crude oil, than that of conventional gasoline. The energy and crude oil use requirements of the different RFGs vary considerably. For the same emissions performance level, RFG with ethanol requires substantially more total energy and crude oil than does RFG with methyl tertiary butyl ether (MTBE) or ethyl tertiary butyl ether. A specific proposal by the US Environmental Protection Agency, designed to allow the use of ethanol in RFG, would increase the total energy required to produce RFG by 2% and the total crude oil required by 2.0 to 2.5% over the corresponding values for the base RFG with MTBE.

    6. Hybrid Rotaxanes: Interlocked Structures for Quantum Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      based on molecular magnets that may make them suitable as qubits for quantum computers. Chemistry Aids Quantum Computing Quantum bits or qubits are the fundamental...

    7. Parallel Programming with MPI | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Parallel Programming with MPI Event Sponsor: Mathematics and Computer Science Division ...permalinksargonne16mpi.php The Mathematics and Computer Science division of ...

    8. Mathematics and Computer Science Division | Argonne National...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics and Computer Science Division To help solve some of the nation's most critical scientific problems, the Mathematics and Computer Science (MCS) Division at Argonne ...

    9. Thermoelectric Materials by Design, Computational Theory and...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      by Design, Computational Theory and Structure Thermoelectric Materials by Design, Computational Theory and Structure 2009 DOE Hydrogen Program and Vehicle Technologies Program...

    10. OCIO Technology Summit: High Performance Computing | Department...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      The summit explored how Energy is using high performance computing to address a number of ... Oak Ridge National Laboratory, National Energy Research Scientific Computing Center ...

    11. Predictive Capability Maturity Model for computational modeling...

      Office of Scientific and Technical Information (OSTI)

      Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS AND COMPUTING; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, ...

    12. Predictive Capability Maturity Model for computational modeling...

      Office of Scientific and Technical Information (OSTI)

      ... Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS AND COMPUTING; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, ...

    13. Computer Science and Information Technology Student Pipeline

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Divisions recruit and hire promising undergraduate and graduate students in the areas of Computer Science, Information Technology, Management Information Systems, Computer...

    14. Energy Storage Computational Tool | Open Energy Information

      Open Energy Info (EERE)

      Energy Storage Computational Tool. Tool Summary: Name: Energy Storage Computational Tool; Agency/Company/Organization: Navigant Consulting...

    15. Hybrid Rotaxanes: Interlocked Structures for Quantum Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Hybrid Rotaxanes: Interlocked Structures for Quantum Computing? Hybrid Rotaxanes: Interlocked Structures for Quantum Computing? Print Wednesday, 26 August 2009 00:00 Rotaxanes are...

    16. Compare Activities by Number of Computers

      U.S. Energy Information Administration (EIA) Indexed Site

      of Computers Office buildings contained the most computers per square foot, followed by education and outpatient health care buildings. Education buildings were the only type...

    17. Marta Garcia Martinez | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Marta Garcia Martinez Assistant Computational Scientist Marta Garcia Martinez Argonne ... Marta García is an Assistant Computational Scientist at the ALCF. She is part of the ...

    18. Accelerate Your Vision | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Our Catalysts are computational scientists with domain expertise in areas such as chemistry, materials science, fusion, nuclear physics, plasma physics, computer science, ...

    19. About ALCF | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community. ...

    20. ALCF Acknowledgment Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Impact on Theory and Experiment (INCITE) program. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User ...