National Library of Energy BETA

Sample records for guido bartels ibm

  1. Electricity Advisory Committee

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    VICE CHAIR William Ball Southern Company Guido Bartels IBM Rick Bowen Alcoa Merwin Brown California Institute for Energy and Environment Ralph Cavanagh Natural Resources ...

  2. Guido DeHoratiis | Department of Energy

    Energy Savers [EERE]

    Guido DeHoratiis About Us Guido DeHoratiis - Associate Deputy Assistant Secretary Guido DeHoratiis is the Associate Deputy Assistant Secretary, Office of Oil and Natural Gas, in the Department of Energy's Office of Fossil Energy. In this position, he is responsible for administering oil and gas programs including research and development, analysis, and natural gas regulation. A major responsibility is developing research programs focused on the reduction of technical, environmental, and safety

  3. IBM era: 1960-64

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    To meet the growing computing needs of the nuclear weapons program, the Laboratory jointly developed with IBM the Stretch, IBM's first transistorized computer. July 10, 2015. "Highly accurate 3D computing is a Holy Grail of the Stockpile Stewardship Program's supercomputing efforts. As the weapons age, 3D features tend to be introduced that require highly accurate 3D modeling to

  4. IBM Presentation Template Full Version

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Corporation Smart Grid: Impacts on Electric Power Supply and Demand. 2010 Energy Conference: Short-Term Stresses, Long-Term Change. Michael Valocchi, Global Energy and Utilities Industry Leader, IBM Global Business Services, April 2010. Discussion Topics: The Business Model will Evolve; The Consumer Value Model will Transform; A New Energy Consumer will Emerge; Customer Segmentation will be Done in a Different Manner; Information and Data Sources will

  5. V-132: IBM Tivoli System Automation Application Manager Multiple...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities April 12, ... T-694: IBM Tivoli Federated Identity Manager Products Multiple Vulnerabilities V-145: IBM ...

  6. T-686: IBM Tivoli Integrated Portal Java Double Literal Denial...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    this November 2011 IBM Downloads Related Articles V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities T-694: IBM Tivoli Federated Identity...

  7. V-161: IBM Maximo Asset Management Products Java Multiple Vulnerabilit...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Related Articles: U-179: IBM Java 7 Multiple Vulnerabilities V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities V-094: IBM Multiple Products Multiple...

  8. V-178: IBM Data Studio Web Console Java Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    IBM Data Studio Web Console uses the IBM Java Runtime Environment (JRE) and might be affected by vulnerabilities in the IBM JRE

  9. International Business Machines Corp IBM | Open Energy Information

    Open Energy Info (EERE)

    Name: International Business Machines Corp (IBM) Place: Armonk, New York Zip: 10504 Sector: Services Product: IBM is a...

  10. Electricity Advisory Committee

    Office of Environmental Management (EM)

    June 5, 2012 Electricity Advisory Committee 2012 Membership Roster Richard Cowart Regulatory Assistance Project CHAIR Irwin Popowsky Pennsylvania Consumer Advocate VICE CHAIR William Ball Southern Company Guido Bartels IBM Rick Bowen Alcoa Merwin Brown California Institute for Energy and Environment Ralph Cavanagh Natural Resources Defense Council The Honorable Paul Centolella Public Utilities Commission of Ohio David Crane NRG Energy, Inc. The Honorable Robert Curry New York State Public

  11. Electricity Advisory Committee

    Office of Environmental Management (EM)

    08 Membership Roster Linda Stuntz, Esquire Chair of the Electricity Advisory Committee Stuntz, Davis & Staffier, P.C. Paul J. Allen Constellation Energy Guido Bartels IBM Gerry Cauley SERC Reliability Corporation Ralph Cavanagh Natural Resources Defense Council Jose Delgado American Transmission Company The Honorable Jeanne Fox New Jersey Board of Public Utilities Joseph Garcia National Congress of American Indians Robert Gramlich American Wind Energy Association The Honorable Dian Grueneich

  12. IBM's New Flat Panel Displays

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    by J. Stöhr (SSRL), M. Samant (IBM), J. Lüning (SSRL) Today's laptop computers utilize flat panel displays where the light transmission from the back to the front of the display is modulated by orientation changes in liquid crystal (LC) molecules. Details are discussed in Ref. 2 below. One of the key steps in the manufacture of the displays is the alignment of the LC molecules in the display. Today this is done by mechanical rubbing of two polymer surfaces and then sandwiching the LC between

  13. IBM Probes Material Capabilities at the ALS

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Probes Material Capabilities at the ALS. Wednesday, 12 February 2014 11:05. Vanadium dioxide, one of the few known materials that acts like an insulator at low temperatures but like a metal at warmer temperatures, is a somewhat futuristic material that could yield faster and much more energy-efficient electronic devices. Researchers from IBM's forward-thinking Spintronic Science and Applications Center (SpinAps) recently used the ALS to gain

  14. IBM References | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM References Contents: IBM Redbooks; A2 Processor Manual; QPX Vector Instruction Set Architecture; XL Compiler Documentation

  17. Niek Lopes Cardozo Guido Lange Gert-Jan Kramer (Shell Global Solutions).

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    we have solar panels but not yet fusion power. Niek Lopes Cardozo, Guido Lange, Gert-Jan Kramer (Shell Global Solutions). Lopes Cardozo, Lange, Kramer: Why we have solar cells but not yet nuclear fusion. What is the fastest development path? [Chart: installed capacity on a log scale (1.00E+06 to 1.00E+14), 1960-2080, for fission, wind, and solar PV -- When will we have fusion power?] World

  18. Integrated Building Management System (IBMS)

    SciTech Connect (OSTI)

    Anita Lewis

    2012-07-01

    This project provides a combination of software and services that more easily and cost-effectively help to achieve optimized building performance and energy efficiency. Featuring an open-platform, cloud-hosted application suite and an intuitive user experience, this solution simplifies a traditionally very complex process by collecting data from disparate building systems and creating a single, integrated view of building and system performance. The Fault Detection and Diagnostics algorithms developed within the IBMS have been designed and tested as an integrated component of the control algorithms running the equipment being monitored. The algorithms identify the normal control behaviors of the equipment without interfering with the equipment control sequences. The algorithms also work without interfering with any cooperative control sequences operating between different pieces of equipment or building systems. In this manner the FDD algorithms create an integrated building management system.

  19. V-119: IBM Security AppScan Enterprise Multiple Vulnerabilities...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-119: IBM Security AppScan Enterprise Multiple Vulnerabilities March 26, 2013 - 12:56am PROBLEM: IBM Security...

  20. V-145: IBM Tivoli Federated Identity Manager Products Java Multiple...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities April ... Related Articles V-178: IBM Data Studio Web Console Java Multiple Vulnerabilities ...

  1. U-116: IBM Tivoli Provisioning Manager Express for Software Distributi...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    for the affected ActiveX control Related Articles V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities V-094: IBM Multiple Products Multiple...

  2. V-122: IBM Tivoli Application Dependency Discovery Manager Java...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Automation Application Manager Multiple Vulnerabilities V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities T-694: IBM Tivoli Federated Identity...

  3. V-205: IBM Tivoli System Automation for Multiplatforms Java Multiple...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Automation Application Manager Multiple Vulnerabilities V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities V-122: IBM Tivoli Application...

  4. V-094: IBM Multiple Products Multiple Vulnerabilities | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-094: IBM Multiple Products Multiple Vulnerabilities February 19, 2013 - 1:41am PROBLEM: IBM Multiple Products Multiple Vulnerabilities PLATFORM: IBM Maximo Asset Management versions 7.5, 7.1, and 6.2; IBM Maximo Asset Management Essentials versions 7.5, 7.1, and 6.2; IBM SmartCloud Control Desk version 7.5; IBM Tivoli Asset Management for IT versions 7.2, 7.1, and 6.2; IBM Tivoli Change and Configuration Management Database

  5. August 15, 2001: IBM ASCI White | Department of Energy

    Energy Savers [EERE]

    August 15, 2001: Lawrence Livermore National Laboratory dedicates the "world's fastest supercomputer," the IBM ASCI White supercomputer with 8,192 processors that perform 12.3 trillion operations per second.

  6. V-074: IBM Informix Genero libpng Integer Overflow Vulnerability | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-074: IBM Informix Genero libpng Integer Overflow Vulnerability January 22, 2013 - 12:11am PROBLEM: IBM Informix Genero libpng Integer Overflow Vulnerability PLATFORM: IBM Informix Genero releases prior to 2.41 - all platforms ABSTRACT: A vulnerability has been reported in libpng. REFERENCE LINKS: IBM Security Bulletin 1620982 Secunia Advisory SA51905 Secunia Advisory SA48026 CVE-2011-3026 IMPACT

  7. V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities April 12, 2013 - 6:00am PROBLEM: IBM has acknowledged multiple vulnerabilities in IBM Tivoli System Automation Application Manager PLATFORM: The vulnerabilities are reported in IBM Tivoli System Automation Application Manager versions 3.1, 3.2, 3.2.1, and 3.2.2 ABSTRACT: Multiple security

  8. V-180: IBM Application Manager For Smart Business Multiple Vulnerabilities

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-180: IBM Application Manager For Smart Business Multiple Vulnerabilities June 18, 2013 - 12:38am PROBLEM: IBM Application Manager For Smart Business Multiple Vulnerabilities PLATFORM: IBM Application Manager For Smart Business 1.x ABSTRACT: A security issue and multiple vulnerabilities have been reported in IBM Application Manager For Smart Business REFERENCE LINKS: Security Bulletin

  9. U-181: IBM WebSphere Application Server Information Disclosure...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    or 8.0.0.4. Related Articles V-054: IBM WebSphere Application Server for zOS Arbitrary Command Execution Vulnerability U-272: IBM WebSphere Commerce User...

  10. V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities April 30, 2013 - 12:09am PROBLEM: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities PLATFORM: IBM Tivoli Federated Identity Manager versions 6.1, 6.2.0, 6.2.1, and 6.2.2; IBM Tivoli Federated Identity Manager Business Gateway versions 6.1.1, 6.2.0, 6.2.1

  11. The Easy Way of Finding Parameters in IBM (EWofFP-IBM)

    SciTech Connect (OSTI)

    Turkan, Nureddin [Bozok University, Faculty of Arts and Science, Department of Physics, Divanh Yolu, 66200 Yozgat (Turkey)

    2008-11-11

    E2/M1 multipole mixing ratios of even-even nuclei in the transitional region can be calculated as soon as the B(E2) and B(M1) values are obtained with the PHINT and/or NP-BOS codes. Correct energies must first be obtained to produce such calculations, and correct parameter values are needed to calculate the energies. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), one of the models of nuclear structure physics. Here, the main problem is finding the best-fitted parameter values of the model. Using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for the 102-110Pd and 102-110Ru isotopes were first obtained, and the energies were then calculated. The calculated results are in good agreement with the experimental ones. In addition, the energy values obtained with the EWofFP-IBM are clearly better than the previous theoretical data.

  12. T-681:IBM Lotus Symphony Multiple Unspecified Vulnerabilities

    Broader source: Energy.gov [DOE]

    Multiple unspecified vulnerabilities in IBM Lotus Symphony 3 before FP3 have unknown impact and attack vectors, related to "critical security vulnerability issues."

  13. V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site Scripting Attacks August ... Related Articles V-211: IBM iNotes Multiple Vulnerabilities U-198: IBM Lotus ...

  14. V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting Vulnerabilities August 29, ...

  15. V-054: IBM WebSphere Application Server for z/OS Arbitrary Command Execution Vulnerability

    Office of Energy Efficiency and Renewable Energy (EERE)

    A vulnerability was reported in the IBM HTTP Server component 5.3 in IBM WebSphere Application Server (WAS) for z/OS

  16. New ALS Technique Guides IBM in Next-Generation Semiconductor...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    chip, which then form transistors," says Jed Pitera, a research staff member in science and technology at IBM Research-Almaden. "But it's also really hard to do the...

  17. Generalized Information Architecture for Managing Requirements in IBM's Rational DOORS® Application.

    SciTech Connect (OSTI)

    Aragon, Kathryn M.; Eaton, Shelley M.; McCornack, Marjorie T.; Shannon, Sharon A.

    2014-12-01

    When a requirements engineering effort fails to meet expectations, often the requirements management tool is blamed. Working with numerous project teams at Sandia National Laboratories over the last fifteen years has shown us that the tool is rarely the culprit; usually it is the lack of a viable information architecture with well-designed processes to support requirements engineering. This document illustrates design concepts with rationale, as well as a proven information architecture to structure and manage information in support of requirements engineering activities for any size or type of project. This generalized information architecture is specific to IBM's Rational DOORS (Dynamic Object Oriented Requirements System) software application, which is the requirements management tool in Sandia's CEE (Common Engineering Environment). This generalized information architecture can be used as presented or as a foundation for designing a tailored information architecture for project-specific needs. It may also be tailored for another software tool. Version 1.0 4 November 201

  18. V-211: IBM iNotes Multiple Vulnerabilities | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-211: IBM iNotes Multiple Vulnerabilities August 5, 2013 - 6:00am PROBLEM: Multiple vulnerabilities have been reported in IBM Lotus iNotes PLATFORM: IBM iNotes 9.x ABSTRACT: IBM iNotes has two cross-site scripting vulnerabilities and an ActiveX integer overflow vulnerability REFERENCE LINKS: Secunia Advisory SA54436 IBM Security Bulletin 1645503 CVE-2013-3027 CVE-2013-3032 CVE-2013-3990 IMPACT ASSESSMENT: High DISCUSSION: 1) Certain input related

  19. V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site Scripting Attacks

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-229: IBM Lotus iNotes Input Validation Flaws Permit Cross-Site Scripting Attacks August 28, 2013 - 6:00am PROBLEM: Several vulnerabilities were reported in IBM Lotus iNotes PLATFORM: IBM Lotus iNotes 8.5.x ABSTRACT: IBM Lotus iNotes 8.5.x contains four cross-site scripting vulnerabilities REFERENCE LINKS: Security Tracker Alert ID 1028954 IBM Security Bulletin 1647740

  20. U-114: IBM Personal Communications WS File Processing Buffer Overflow Vulnerability

    Broader source: Energy.gov [DOE]

    A vulnerability in WorkStation files (.ws) by IBM Personal Communications could allow a remote attacker to cause a denial of service (application crash) or potentially execute arbitrary code on vulnerable installations of IBM Personal Communications.

  1. U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System...

  2. V-147: IBM Lotus Notes Mail Client Lets Remote Users Execute...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-147: IBM Lotus Notes Mail Client Lets Remote Users Execute Java Applets May 2, 2013 - 6:00am...

  3. U-111: IBM AIX ICMP Processing Flaw Lets Remote Users Deny Service...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    aixefixessecurityicmpfix.tar Related Articles U-096: IBM AIX TCP Large Send Offload Bug Lets Remote Users Deny Service V-031: IBM WebSphere DataPower...

  4. U.S. Department of Energy and IBM to Collaborate in Advancing Supercomputing Technology

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    U.S. Department of Energy and IBM to Collaborate in Advancing Supercomputing Technology November 15, 2006 - 9:25am Lawrence Livermore and Argonne National Lab Scientists to Work with IBM Designers WASHINGTON, DC -- The U.S. Department of Energy (DOE) announced today that its Office of Science, the National Nuclear Security Administration (NNSA) and IBM will share the cost of a

  5. V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting Vulnerabilities

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    V-230: IBM TRIRIGA Application Platform Multiple Cross-Site Scripting Vulnerabilities August 29, 2013 - 4:10am PROBLEM: Multiple vulnerabilities have been reported in IBM TRIRIGA Application Platform, which can be exploited by malicious people to conduct cross-site scripting attacks. PLATFORM: IBM TRIRIGA Application Platform 2.x ABSTRACT: The vulnerabilities are

  6. U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System December 1, 2011 - 9:00am PROBLEM: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System. PLATFORM: IBM Tivoli Netcool Reporter prior to 2.2.0.8 ABSTRACT: A vulnerability was reported in IBM Tivoli Netcool

  7. International Border Management Systems (IBMS) Program : visions and strategies.

    SciTech Connect (OSTI)

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  8. T-594: IBM solidDB Password Hash Authentication Bypass Vulnerability

    Broader source: Energy.gov [DOE]

    This vulnerability could allow remote attackers to execute arbitrary code on vulnerable installations of IBM solidDB. Authentication is not required to exploit this vulnerability.

  9. V-122: IBM Tivoli Application Dependency Discovery Manager Java Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Multiple security vulnerabilities exist in the Java Runtime Environments (JREs) that can affect the security of IBM Tivoli Application Dependency Discovery Manager

  10. U-154: IBM Rational ClearQuest ActiveX Control Buffer Overflow Vulnerability

    Broader source: Energy.gov [DOE]

    A vulnerability was reported in IBM Rational ClearQuest. A remote user can cause arbitrary code to be executed on the target user's system.

  11. WA_01_018_IBM_Waiver_of_Governement_US_and_Foreign_Patent_Ri.pdf | Department of Energy

    Broader source: Energy.gov (indexed) [DOE]

    More Documents & Publications: WA_04_053_IBM_CORP_Waiver_of_the_Government_U.S._and_Foreign.pdf WA_00_015_COMPAQ_FEDERAL_LLC_Waiver_Domestic_and_Foreign_Pat.pdf Advance Patent Waiver W(A)2002-023

  12. T-722: IBM WebSphere Commerce Edition Input Validation Holes Permit Cross-Site Scripting Attacks

    Energy Savers [EERE]

    T-722: IBM WebSphere Commerce Edition Input Validation Holes Permit Cross-Site Scripting Attacks September 21, 2011 - 8:15am PROBLEM: IBM WebSphere Commerce Edition Input Validation Holes Permit Cross-Site Scripting Attacks. PLATFORM: WebSphere Commerce Edition V7.0 ABSTRACT: A remote user can access the target user's cookies (including

  13. T-561: IBM and Oracle Java Binary Floating-Point Number Conversion Denial of Service Vulnerability

    Broader source: Energy.gov [DOE]

    IBM and Oracle Java products contain a vulnerability that could allow an unauthenticated, remote attacker to cause a denial of service (DoS) condition on a targeted system.

  14. U-116: IBM Tivoli Provisioning Manager Express for Software Distribution Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Multiple vulnerabilities have been reported in IBM Tivoli Provisioning Manager Express for Software Distribution, which can be exploited by malicious people to conduct SQL injection attacks and compromise a user's system

  15. New ALS Technique Guides IBM in Next-Generation Semiconductor Development

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    New ALS Technique Guides IBM in Next-Generation Semiconductor Development. Wednesday, 21 January 2015 09:37. A new measurement technique developed at the ALS is helping guide the semiconductor industry in next-generation nanopatterning techniques. Directed self assembly (DSA) of block copolymers is an extremely promising strategy for high-volume, cost-effective semiconductor manufacturing at the nanoscale. Materials

  16. U-007: IBM Rational AppScan Import/Load Function Flaws Let Remote Users Execute Arbitrary Code

    Broader source: Energy.gov [DOE]

    Two vulnerabilities were reported in IBM Rational AppScan. A remote user can cause arbitrary code to be executed on the target user's system.

  17. T-615: IBM Rational System Architect ActiveBar ActiveX Control Lets Remote Users Execute Arbitrary Code

    Broader source: Energy.gov [DOE]

    There is a high-risk security vulnerability in the ActiveBar ActiveX controls used by IBM Rational System Architect.

  18. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    SciTech Connect (OSTI)

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J; Sayre, Kirk D; Ankrum, Scott

    2013-01-01

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.

  19. New ALS Technique Guides IBM in Next-Generation Semiconductor Development

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    New ALS Technique Guides IBM in Next-Generation Semiconductor Development. A new measurement technique developed at the ALS is helping guide the semiconductor industry in next-generation nanopatterning techniques. Directed self assembly (DSA) of block copolymers is an extremely promising strategy for high-volume, cost-effective semiconductor manufacturing at the nanoscale. Materials that self-assemble spontaneously form nanostructures down to the molecular scale, which would revolutionize

  20. Area schools get new computers through Los Alamos National Laboratory, IBM partnership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Northern New Mexico schools are recipients of fully loaded desktop and laptop computers. May 8, 2009. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multiple disciplines, from bioscience and sustainable energy sources to plasma physics and new materials.

  1. Studies of phase transitions and quantum chaos relationships in extended Casten triangle of IBM-1

    SciTech Connect (OSTI)

    Proskurins, J.; Andrejevs, A.; Krasta, T.; Tambergs, J. [University of Latvia, Institute of Solid State Physics (Latvia)], E-mail: juris_tambergs@yahoo.com

    2006-07-15

    A precise solution of the classical energy functional E(N, {eta}, {chi}; {beta}) minimum problem with respect to deformation parameter {beta} is obtained for the simplified Casten version of the standard interacting boson model (IBM-1) Hamiltonian. The first-order phase transition lines as well as the critical points of X(5), -X(5), and E(5) symmetries are considered. The dynamical criteria of quantum chaos-the basis state fragmentation width and the wave function entropy - are studied for the ({eta}, {chi}) parameter space of the extended Casten triangle, and the possible relationships between these criteria and phase transition lines are discussed.

  2. Statement by Guido DeHoratiis

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    will only address a subset of unconventional resources: shale gas, tight gas, shale oil, and tight oil, and a robust Federal research and development (R&D) plan is...

Additive synthesis with DIASS-M4C on Argonne National Laboratory's IBM POWERparallel System (SP)

    SciTech Connect (OSTI)

    Kaper, H.; Ralley, D.; Restrepo, J.; Tiepei, S.

    1995-12-31

DIASS-M4C, a digital additive instrument, was implemented on Argonne National Laboratory's IBM POWERparallel System (SP). This paper discusses the need for a massively parallel supercomputer and shows how the code was parallelized. The resulting sounds and the degree of control the user can have justify the effort and the use of such a large computer.

  4. Intelligent Bioreactor Management Information System (IBM-IS) for Mitigation of Greenhouse Gas Emissions

    SciTech Connect (OSTI)

    Paul Imhoff; Ramin Yazdani; Don Augenstein; Harold Bentley; Pei Chiu

    2010-04-30

Methane is an important contributor to global warming with a total climate forcing estimated to be close to 20% of that of carbon dioxide (CO2) over the past two decades. The largest anthropogenic source of methane in the US is 'conventional' landfills, which account for over 30% of anthropogenic emissions. While controlling greenhouse gas emissions must necessarily focus on large CO2 sources, attention to reducing CH4 emissions from landfills can result in significant reductions in greenhouse gas emissions at low cost. For example, the use of 'controlled' or bioreactor landfilling has been estimated to reduce annual US greenhouse gas emissions by about 15-30 million tons of CO2 carbon (equivalent) at costs between $3-13/ton carbon. In this project we developed or advanced new management approaches, landfill designs, and landfill operating procedures for bioreactor landfills. These advances are needed to address lingering concerns about bioreactor landfills (e.g., efficient collection of increased CH4 generation) in the waste management industry, concerns that hamper bioreactor implementation and the consequent reductions in CH4 emissions. Collectively, the advances described in this report should result in better control of bioreactor landfills and reductions in CH4 emissions. Several advances are important components of an Intelligent Bioreactor Management Information System (IBM-IS).

  5. ISTUM PC: industrial sector technology use model for the IBM-PC

    SciTech Connect (OSTI)

    Roop, J.M.; Kaplan, D.T.

    1984-09-01

A project to improve and enhance the Industrial Sector Technology Use Model (ISTUM) was originated in the summer of 1983. The project had six identifiable objectives: update the data base; improve run-time efficiency; revise the reference base case; conduct case studies; provide technical and promotional seminars; and organize a service bureau. This interim report describes which of these objectives have been met and which tasks remain to be completed. The most dramatic achievement has been in the area of run-time efficiency. From a model that required a large proportion of the total resources of a mainframe computer and a great deal of effort to operate, the current version of the model (ISTUM-PC) runs on an IBM Personal Computer. The reorganization required for the model to run on a PC has additional advantages: the modular programs are somewhat easier to understand and the data base is more accessible and easier to use. A simple description of the logic of the model is given in this report. To generate the necessary funds for completion of the model, a multiclient project is proposed. This project will extend the industry coverage to all the industrial sectors, including the construction of process flow models for chemicals and petroleum refining. The project will also calibrate this model to historical data and construct a base case and alternative scenarios. The model will be delivered to clients and training provided. 2 references, 4 figures, 3 tables.

  6. T-559: Stack-based buffer overflow in oninit in IBM Informix Dynamic Server (IDS) 11.50 allows remote execution

    Broader source: Energy.gov [DOE]

Stack-based buffer overflow in oninit in IBM Informix Dynamic Server (IDS) 11.50 allows remote attackers to execute arbitrary code via crafted arguments in the USELASTCOMMITTED session environment option in a SQL SET ENVIRONMENT statement.
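The bug class named in this advisory is easy to sketch in isolation. The following is a generic illustration, not the actual oninit code: the function names, buffer size, and option string are hypothetical, and only the pattern (an unbounded copy of remote input into a fixed stack buffer, versus a length-checked copy) is the point.

```c
#include <stdio.h>
#include <string.h>

/* Vulnerable pattern: a fixed-size stack buffer receives a remotely
 * supplied option string with no length check. An overlong argument
 * writes past the buffer and can overwrite the saved return address. */
void parse_option_vulnerable(const char *arg)
{
    char option[32];
    strcpy(option, arg);            /* no bound: classic stack smash */
    printf("option=%s\n", option);
}

/* Hardened pattern: validate the length up front and refuse oversized
 * input outright, rather than copying (or silently truncating). */
int parse_option_safe(char *dst, size_t dstsz, const char *arg)
{
    size_t n = strlen(arg);
    if (n >= dstsz)
        return -1;                  /* would not fit with its NUL byte */
    memcpy(dst, arg, n + 1);
    return 0;
}
```

In a server that parses attacker-controlled SQL session options, every copy into a fixed buffer needs the second shape; the advisory describes an instance of the first.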

  7. Study of Even-Even/Odd-Even/Odd-Odd Nuclei in Zn-Ga-Ge Region in the Proton-Neutron IBM/IBFM/IBFFM

    SciTech Connect (OSTI)

    Yoshida, N.; Brant, S.; Zuffi, L.

    2009-08-26

    We study the even-even, odd-even and odd-odd nuclei in the region including Zn-Ga-Ge in the proton-neutron IBM and the models derived from it: IBM2, IBFM2, IBFFM2. We describe {sup 67}Ga, {sup 65}Zn, and {sup 68}Ga by coupling odd particles to a boson core {sup 66}Zn. We also calculate the beta{sup +}-decay rates among {sup 68}Ge, {sup 68}Ga and {sup 68}Zn.

  8. Lawrence Livermore and IBM Collaborate to Build New Brain-Inspired Supercomputer: Chip-architecture breakthrough accelerates path to exascale computing; helps computers tackle complex, cognitive tasks such as pattern recognition sensory processing

    Broader source: Energy.gov [DOE]

    Lawrence Livermore National Laboratory (LLNL) today announced it will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power.

  9. IBM Blue Gene Architecture

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

How to use Open|SpeedShop to Analyze the Performance of Parallel Codes. Donald Frederick, LLNL (LLNL-PRES-508651). Performance analysis is becoming more important: complex architectures, complex applications, and the mapping of applications onto architectures. It is often hard to know where to start: Which experiments to run first? How to plan follow-on experiments? What kind of problems can be explored? How to interpret the data?

  10. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

    SciTech Connect (OSTI)

    Zhou, Shujia; Duffy, Daniel; Clune, Thomas; Suarez, Max; Williams, Samuel; Halem, Milton

    2009-01-10

The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
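The "manually SIMDize four independent columns" strategy from this abstract can be sketched as follows. The per-level update below is a hypothetical stand-in for the radiative-transfer physics (a simple downward attenuation), not the GEOS-5 kernel; what the sketch shows is the restructuring itself: four embarrassingly parallel columns advance in lockstep, so each group of four identical statements maps directly onto one 4-wide SIMD operation on the Cell's SPEs.

```c
/* Baseline: process one atmospheric column at a time.
 * tau[k] is a per-level extinction; flux[0] is the incoming flux. */
void attenuate_one(const float *tau, float *flux, int nlev)
{
    for (int k = 1; k < nlev; ++k)
        flux[k] = flux[k - 1] * (1.0f - tau[k]);   /* hypothetical physics */
}

/* 4-way unroll across independent columns: the four statements in the
 * loop body perform the same operation on four independent data streams,
 * the shape a vectorizing compiler (or hand-written SIMD intrinsics)
 * turns into single 4-lane instructions. */
void attenuate_four(const float *tau[4], float *flux[4], int nlev)
{
    for (int k = 1; k < nlev; ++k) {
        flux[0][k] = flux[0][k - 1] * (1.0f - tau[0][k]);
        flux[1][k] = flux[1][k - 1] * (1.0f - tau[1][k]);
        flux[2][k] = flux[2][k - 1] * (1.0f - tau[2][k]);
        flux[3][k] = flux[3][k - 1] * (1.0f - tau[3][k]);
    }
}
```

The key enabling property is the one the abstract names: the column computations are independent, so interleaving four of them changes neither results nor iteration count, only the width of each arithmetic step.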

  11. IBM Probes Material Capabilities at the ALS

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and temperature-dependent x-ray absorption spectroscopy experiments, in conjunction with x-ray diffraction and electrical transport measurements. The researchers were able to...

  12. U-198: IBM Lotus Expeditor Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    The vulnerabilities can be exploited by malicious people to conduct cross-site scripting attacks, disclose potentially sensitive information, bypass certain security restrictions, and compromise a user's system..

  13. U-179: IBM Java 7 Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Vulnerabilities can be exploited by malicious users to disclose certain information and by malicious people to disclose potentially sensitive information, hijack a user's session, conduct DNS cache poisoning attacks, manipulate certain data, cause a DoS (Denial of Service), and compromise a vulnerable system.

  14. T-694: IBM Tivoli Federated Identity Manager Products Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

This Security Alert addresses a serious security issue, CVE-2010-4476 (the Java Runtime Environment hangs when converting "2.2250738585072012e-308" to a binary floating-point number). This vulnerability might cause the Java Runtime Environment to hang, enter an infinite loop, and/or crash, resulting in a denial-of-service exposure. The same hang might occur if the number is written without scientific notation (324 decimal places). In addition to the Application Server being exposed to this attack, any Java program using the Double.parseDouble method is also at risk, including any customer-written or third-party application.

  15. U-139: IBM Tivoli Directory Server Input Validation Flaw

    Broader source: Energy.gov [DOE]

    The Web Admin Tool does not properly filter HTML code from user-supplied input before displaying the input.

  16. V-161: IBM Maximo Asset Management Products Java Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Asset and Service Mgmt Products - Potential security exposure when using JavaTM based applications due to vulnerabilities in Java Software Developer Kits.

  17. U-186: IBM WebSphere Sensor Events Multiple Vulnerabilities

    Broader source: Energy.gov [DOE]

    Some vulnerabilities have unknown impacts and others can be exploited by malicious people to conduct cross-site scripting attacks.

  18. V-118: IBM Lotus Domino Multiple Vulnerabilities | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    to version 9.0 or update to version 8.5.3 Fix Pack 4 when available Addthis Related Articles T-534: Vulnerability in the PDF distiller of the BlackBerry Attachment Service...

  19. U-154: IBM Rational ClearQuest ActiveX Control Buffer Overflow...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Architect ActiveBar ActiveX Control Lets Remote Users Execute Arbitrary Code V-020: Apple QuickTime Multiple Flaws Let Remote Users Execute Arbitrary Code U-126: Cisco Adaptive...

  20. U-096: IBM AIX TCP Large Send Offload Bug Lets Remote Users Deny Service

    Broader source: Energy.gov [DOE]

    A remote user can send a series of specially crafted TCP packets to trigger a kernel panic on the target system.

  1. T-722: IBM WebSphere Commerce Edition Input Validation Holes...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    with unspecified impact CVE-2010-2276. The vulnerabilities reside in the included 'Dojo' component. Impact: A remote user can access the target user's cookies (including ...

  2. Survivability enhancement study for C/sup 3/I/BM (communications...

    Office of Scientific and Technical Information (OSTI)

    RELIABILITY; ELECTROMAGNETIC PULSES; COMMUNICATIONS; FEASIBILITY STUDIES; FIBER OPTICS; HARDENING; MILITARY EQUIPMENT; POWER SUPPLIES; PROGRESS REPORT; SURVIVAL TIME;...

  3. U.S. Department of Energy and IBM to Collaborate in Advancing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    (R&D) effort to further enhance the capabilities of the fastest computer in existence. Under the agreement, scientists from two of the DOE's national laboratories are...

  4. U-181: IBM WebSphere Application Server Information Disclosure Vulnerability

    Broader source: Energy.gov [DOE]

    The vulnerability is caused due to missing access controls in the Application Snoop Servlet when handling requests and can be exploited to disclose request and client information.

  5. T-559: Stack-based buffer overflow in oninit in IBM Informix...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    exploit this vulnerability. The specific flaw exists within the oninit process bound to TCP port 9088 when processing the arguments to the USELASTCOMMITTED option in a SQL query....

  6. U.S. Department of Energy and IBM to Collaborate in Advancing...

    Office of Science (SC) Website

    "Supercomputers are crucial to the continued success of the NNSA's science-based efforts to keep the U.S. nuclear weapons stockpile safe, secure and reliable without underground ...

  7. A Core Hole in the Southwestern Moat of the Long Valley Caldera...

    Open Energy Info (EERE)

    in water level, temperatures, and fluid chemistry. Authors Harold A. Wollenberg, Michael L. Sorey, Christopher D. Farrar, Art F. White, S. Flexser and L.C. Bartel Published...

  8. A Large Hadron Electron Collider at CERN (Journal Article) |...

    Office of Scientific and Technical Information (OSTI)

    M. ; Brookhaven ; Barber, D. ; Daresbury DESY Liverpool U. ; Bartels, J. ; Hamburg, Tech. U. ; Behnke, O. ; DESY ; Behr, J. ; DESY ; Belyaev, A.S. ; Rutherford...

  9. Mesoscale Modeling of Fuel Swelling and Restructuring: Coupling...

    Office of Scientific and Technical Information (OSTI)

    Abstract not provided. Authors: Dingreville, Remi Philippe Michel ; Robbins, Joshua ; Bartel, Timothy James Publication Date: 2013-08-01 OSTI Identifier: 1106425 Report Number(s): ...

  10. Local Imaging of High Mobility Two-Dimensional Electron Systems...

    Office of Scientific and Technical Information (OSTI)

    at www.ntis.gov. Authors: Pelliccione, M. ; Stanford U., Appl. Phys. Dept. SLAC UC, Santa Barbara ; Bartel, J. ; SLAC Stanford U., Phys. Dept. ; Sciambi, A. ; Stanford U.,...

  11. Stochastic formation of magnetic vortex structures in asymmetric...

    Office of Scientific and Technical Information (OSTI)

    structures in asymmetric disks triggered by chaotic dynamics Authors: Im, Mi-Young ; Lee, Ki-Suk ; Vogel, Andreas ; Hong, Jung-Il ; Meier, Guido ; Fischer, Peter Publication...

  12. Tuning of the nucleation field in nanowires with perpendicular...

    Office of Scientific and Technical Information (OSTI)

    Theo ; Kobs, Andr ; Vogel, Andreas ; Wintz, Sebastian ; Im, Mi-Young ; Fischer, Peter ; Oepen, Hans Peter ; Merkt, Ulrich ; Meier, Guido Publication Date: 2013-02-28 OSTI...

  13. Blog Feed: Vehicles | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    and Renewable Energy Postdoctoral Research Awards. | Photo courtesy of Dr. Guido Bender, NREL. 10 Questions for a Materials Scientist: Brian Larsen Meet Brian Larsen, who is...

  14. Before the Subcommittees on Energy and Environment - House Committee...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Technology Testimony of Guido DeHoratiis, Acting Deputy Assistant Secretary for Oil and Gas, Office of Fossil Energy Before the Subcommittees on Energy and Environment - House...

  15. Tuning of the nucleation field in nanowires with perpendicular...

    Office of Scientific and Technical Information (OSTI)

    Wintz, Sebastian ; Im, Mi-Young ; Fischer, Peter ; Oepen, Hans Peter ; Merkt, Ulrich ; Meier, Guido Publication Date: 2013-02-28 OSTI Identifier: 1165080 Report Number(s):...

  16. Simulation of High-Resolution Magnetic Resonance Images on the IBM Blue Gene/L Supercomputer Using SIMRI

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Baum, K. G.; Menezes, G.; Helguera, M.

    2011-01-01

Medical imaging system simulators are tools that provide a means to evaluate system architecture and create artificial image sets that are appropriate for specific applications. We have modified SIMRI, a Bloch equation-based magnetic resonance image simulator, in order to successfully generate high-resolution 3D MR images of the Montreal brain phantom using Blue Gene/L systems. Results show that redistribution of the workload allows an anatomically accurate 256{sup 3} voxel spin-echo simulation in less than 5 hours when executed on an 8192-node partition of a Blue Gene/L system.
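The "redistribution of the workload" in this abstract comes down to dividing the simulation's voxels evenly across the partition's nodes. A minimal sketch of that decomposition, with hypothetical names (this is not SIMRI's actual API; only the 256{sup 3}-voxel and 8192-node figures come from the abstract):

```c
/* Contiguous 1-D block decomposition: split nvoxels as evenly as
 * possible across nranks, so no rank carries more than one extra voxel. */
typedef struct {
    long start;   /* first voxel index owned by this rank */
    long count;   /* number of voxels owned by this rank  */
} slab_t;

slab_t voxel_slab(long nvoxels, int nranks, int rank)
{
    long base  = nvoxels / nranks;
    long extra = nvoxels % nranks;   /* first `extra` ranks take one more */
    slab_t s;
    s.count = base + (rank < extra ? 1 : 0);
    s.start = (long)rank * base + (rank < extra ? (long)rank : extra);
    return s;
}
```

With 256 x 256 x 256 = 16,777,216 voxels on 8,192 nodes this divides exactly (2,048 voxels per node); the `extra` handling matters for partition sizes that do not divide the volume evenly.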

Nuclear matrix elements for 0{nu}{beta}{sup -}{beta}{sup -} decays: Comparative analysis of the QRPA, shell model and IBM predictions

    SciTech Connect (OSTI)

    Civitarese, Osvaldo; Suhonen, Jouni

    2013-12-30

In this work we report on general properties of the nuclear matrix elements involved in the neutrinoless double {beta}{sup -} decays (0{nu}{beta}{sup -}{beta}{sup -} decays) of several nuclei. A summary of the values of the NMEs calculated over the years by the Jyväskylä-La Plata collaboration is presented. These NMEs, calculated in the framework of the quasiparticle random phase approximation (QRPA), are compared with those of the other available calculations, like the interacting shell model (ISM) and the interacting boson model (IBA-2).

  18. Design and Implementation of a Scalable Membership Service for...

    Office of Scientific and Technical Information (OSTI)

    Corporation, Haifa Research Center IBM Corporation, Haifa Research Center IBM T. J. Watson Research Center IBM T. J. Watson Research Center ORNL ORNL Publication Date:...

  19. Stochastic formation of magnetic vortex structures in asymmetric...

    Office of Scientific and Technical Information (OSTI)

    Technical Information Service, Springfield, VA at www.ntis.gov. Authors: Im, Mi-Young ; Lee, Ki-Suk ; Vogel, Andreas ; Hong, Jung-Il ; Meier, Guido ; Fischer, Peter Publication...

  20. DOE's Shale Gas and Hydraulic Fracturing Research

    Broader source: Energy.gov [DOE]

    Statement of Guido DeHoratiis Acting Deputy Assistant Secretary for Oil and Natural Gas before the House Committee on Science, Space, and Technology Subcommittees on Energy and Environment

  1. Before the Subcommittees on Energy and Environment- House Committee on Science, Space, and Technology

    Broader source: Energy.gov [DOE]

    Subject: Interagency Working Group to Support Safe and Responsible Development of Unconventional Domestic Natural Gas Resources By: Guido DeHoratiis, Acting Deputy Assistant Secretary for Oil and Gas, Office of Fossil Energy

  2. Magnetic soft x-ray microscopy of the domain wall depinning process...

    Office of Scientific and Technical Information (OSTI)

    Service, Springfield, VA at www.ntis.gov. Authors: Im, Mi-Young ; Bocklage, Lars ; Meier, Guido ; Fischer, Peter Publication Date: 2011-10-27 OSTI Identifier: 1172969 Report...

  3. 10 Questions for a Materials Scientist: Brian Larsen | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Brian Larsen 10 Questions for a Materials Scientist: Brian Larsen January 24, 2013 - 10:50am Addthis Brian Larsen is developing the next generation of fuel cell catalysts thanks to the Energy Efficiency and Renewable Energy Postdoctoral Research Awards. | Photo courtesy of Dr. Guido Bender, NREL. Brian Larsen is developing the next generation of fuel cell catalysts thanks to the Energy Efficiency and Renewable Energy Postdoctoral Research Awards. | Photo courtesy of Dr. Guido Bender, NREL.

  4. Browse by Discipline -- E-print Network Subject Pathways: Plasma Physics

    Office of Scientific and Technical Information (OSTI)

    and Fusion -- Energy, science, and technology for the research community -- hosted by the Office of Scientific and Technical Information, U.S. Department of Energy B C D E F G H I J K L M N O P Q R S T U V W X Y Z Barlaz, Morton A. (Morton A. Barlaz) - Department of Civil, Construction, and Environmental Engineering, North Carolina State University Bartels, Soeren (Soeren Bartels) - Abteilung für Angewandte Mathematik, Albert-Ludwigs-Universität Freiburg Brown, Sally (Sally Brown) -

  5. Nicholas Donofrio | Department of Energy

    Office of Environmental Management (EM)

    Nicholas Donofrio About Us Nicholas Donofrio - Former EVP of Innovation and Technology, IBM Photo of Nicholas Donofrio Nicholas M. Donofrio is a 44-year IBM veteran who led IBM's technology and innovation strategies from 1997 until his retirement in October 2008. He also was vice chairman of the IBM International Foundation and chairman of the Board of Governors for the IBM Academy of Technology. Mr. Donofrio's most recent responsibilities included IBM Research, Governmental Programs, Technical

  6. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    were reported in HP Service Manager April 30, 2013 V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities IBM Tivoli Federated Identity Manager...

  7. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    were reported in HP Service Manager April 30, 2013 V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities IBM Tivoli Federated Identity Manager...

  8. The Cell Processor

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

© 2005 IBM Corporation. The Cell Processor: Architecture & Issues. Agenda: Cell Processor Overview; Programming the Cell Processor; Concluding Remarks.

  9. Beyond "Partly Sunny": A Better Solar Forecast | Department of...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... The IBM Thomas J. Watson Research Center and its partners will integrate big data ... Similar to the recently demonstrated IBM Watson computer system, the proposed Watt-sun ...

  10. Mesoscale Modeling of Fuel Swelling and Restructuring: Coupling

    Office of Scientific and Technical Information (OSTI)

    Microstructure evolution and Mechanical Localization. (Conference) | SciTech Connect Conference: Mesoscale Modeling of Fuel Swelling and Restructuring: Coupling Microstructure evolution and Mechanical Localization. Citation Details In-Document Search Title: Mesoscale Modeling of Fuel Swelling and Restructuring: Coupling Microstructure evolution and Mechanical Localization. Abstract not provided. Authors: Dingreville, Remi Philippe Michel ; Robbins, Joshua ; Bartel, Timothy James Publication

  11. Computing and Computational Sciences Directorate - Contacts

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Home › About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and Computational Sciences Directorate Michael Bartell Chief Information Officer Information Technologies Services Division Jim Hack Director, Climate Science Institute National Center for Computational Sciences Shaun Gleason Division Director Computational Sciences and Engineering Barney Maccabe Division Director Computer Science

  12. Browse by Discipline -- E-print Network Subject Pathways: Energy Storage,

    Office of Scientific and Technical Information (OSTI)

    Conversion and Utilization -- Energy, science, and technology for the research community -- hosted by the Office of Scientific and Technical Information, U.S. Department of Energy D E F G H I J K L M N O P Q R S T U V W X Y Z Cacciani, Patrice (Patrice Cacciani) - Laboratoire de Physique des Lasers Atomes et Molécules & Centre de Recherches Lasers et Applications, Cahill, Kevin (Kevin Cahill) - Department of Physics and Astronomy, University of New Mexico Caldarelli, Guido (Guido

  13. Local Imaging of High Mobility Two-Dimensional Electron Systems with

    Office of Scientific and Technical Information (OSTI)

    Virtual Scanning Tunneling Microscopy (Journal Article) | SciTech Connect Local Imaging of High Mobility Two-Dimensional Electron Systems with Virtual Scanning Tunneling Microscopy Citation Details In-Document Search Title: Local Imaging of High Mobility Two-Dimensional Electron Systems with Virtual Scanning Tunneling Microscopy Authors: Pelliccione, M. ; /Stanford U., Appl. Phys. Dept. /SLAC /UC, Santa Barbara ; Bartel, J. ; /SLAC /Stanford U., Phys. Dept. ; Sciambi, A. ; /Stanford U.,

  14. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

IBM Compiler Optimization Options June 4, 2002 | Author(s): M. Stewart | Download File: optarg.ppt | ppt | 53 KB All of the IBM supplied compilers produce...

  15. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Compiler Optimization Options June 4, 2002 | Author(s): M. Stewart | Download File: optarg.ppt | ppt | 53 KB All of the IBM supplied compilers produce unoptimized code by...

  16. Secretary Chu Announces $47 Million to Improve Efficiency in...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... Equipment & Software Projects IBM T.J. Watson Research Center (1.6 million) SeaMicro Inc. ... Cooling IBM T.J. Watson Research Center (2.3 million) Federspiel Controls, Inc. ...

  17. News Item

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM's Brain-Inspired Computing Systems and Ecosystem Location: 67-3111 Chemla Room Abstract: Over the past 6 years as part of the DARPA SyNAPSE program, IBM's Brain Inspired ...

  18. Motor Current Data Collection System

    Energy Science and Technology Software Center (OSTI)

    1992-12-01

The Motor Current Data Collection System (MCDCS) uses IBM-compatible PCs to collect, process, and store Motor Current Signature information.

  19. Driving Operational Changes through an Energy Monitoring System

    SciTech Connect (OSTI)

    2012-08-01

    Institutional change case study details IBM's corporate efficiency program focused on basic operation improvements in its diverse real estate operations.

  20. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

Author search results for "Urciuoli, Guido" across SciTech Connect records.

  1. untitled

    Broader source: Energy.gov (indexed) [DOE]

    Unconventional Resources Technology Advisory Committee (URTAC) Meeting January 29, 2008 Meeting Minutes July 2, 2008 A Federal Advisory Committee to the U.S. Secretary of Energy 2 A Federal Advisory Committee to the U.S. Secretary of Energy 3 Unconventional Resources Technology Advisory Committee January 29, 2008 Meeting Minutes Crowne Plaza Houston North Greenspoint, Houston, Texas Introduction and DOE Oil and Natural Gas Programs At 8:00 a.m., Mr. Guido DeHoratiis called the Unconventional

  2. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM Compiler Optimization Options June 4, 2002 | Author(s): M. Stewart | Download File: optarg.ppt | ppt | 53 KB All of the IBM supplied compilers produce unoptimized code by default. Specific optimization command line options must be supplied to the compilers in order for them to produce optimized code. In this talk, several of the more useful optimization options for the IBM Fortran, C, and C++ compilers are described and recommendations will be given on which of them are most useful.

  3. Software and Libraries | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost CPMD CodeSaturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM...

  4. ASC eNews Quarterly Newsletter December 2012 | National Nuclear...

    National Nuclear Security Administration (NNSA)

    ... Early unclassified work on the machine allows Livermore researchers and IBM computer ... Cardioid, researchers are modeling the electrical signals moving throughout the heart. ...

  5. Watt-Sun: A Multi-Scale, Multi-Model, Machine-Learning Solar...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    learning, and cloud modeling integrated in a universal platform with an open architecture. ... INNOVATIONS IBM sky cameras.jpg The goal of the project is the development and ...

  6. Pete Beckman on Mira and Exascale

    ScienceCinema (OSTI)

    Pete Beckman

    2013-06-06

Argonne's Pete Beckman, director of the Exascale Technology Computing Institute (ETCi), talks about the IBM Blue Gene/Q supercomputer and the future of computing and exascale technology.

  7. SunShot Rooftop Challenge Awardees | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    enable multiple financing options for community solar programs. City University of New York City University of New York, NYC Department of Buildings, Procemx, CUNY Ventures, IBM,...

  8. V-215: NetworkMiner Directory Traversal and Insecure Library...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Addthis Related Articles U-198: IBM Lotus Expeditor Multiple Vulnerabilities U-146: Adobe ReaderAcrobat Multiple Vulnerabilities T-542: SAP Crystal Reports Server Multiple...

  9. News Item

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    California, Santa Barbara Catherine Murphy, University of Illinois at Urbana-Champaign Frances Ross, IBM Ned Seeman, New York University Donald Tennant, Cornell Nanoscale Science...

  10. Armonk, New York: Energy Resources | Open Energy Information

    Open Energy Info (EERE)

    place in Westchester County, New York.1 Registered Energy Companies in Armonk, New York International Business Machines Corp IBM Windfarm Finance LLC References US Census...

  11. ALSNews Vol. 350

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IBM researchers from the company's forward-thinking Spintronic Science and Applications Center recently used the ALS to gain greater insight into vanadium dioxide's unusual phase ...

  12. Blue Gene/Q Network Performance Counters Monitoring Library

    Energy Science and Technology Software Center (OSTI)

    2015-03-12

    BGQNCL is a library to monitor and record network performance counters on the 5D torus interconnection network of IBM's Blue Gene/Q platform.

  13. Advance Patent Waiver W(A)2005-014

    Broader source: Energy.gov [DOE]

    This is a request by IBM for a DOE waiver of domestic and foreign patent rights under agreement W-7405-ENG-48.

  14. U.S. Department of Energy Interim E-QIP Procedures | Department...

    Broader source: Energy.gov (indexed) [DOE]

    Energy Security Symposium OE Releases Second Issue of Energy Emergency Preparedness Quarterly (April 2012) V-147: IBM Lotus Notes Mail Client Lets Remote Users Execute Java Applets...

  15. Bassi_Experiences-NUG-2006.ppt

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    we started experimenting with various compiler and runtime settings. * With NERSC & IBM playing about equal roles, most of the benchmark requirements were easily exceeded in what...

  16. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    have been reported in Apache HTTP Server July 29, 2013 V-205: IBM Tivoli System Automation for Multiplatforms Java Multiple Vulnerabilities The weakness and the...

  17. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    "blue screen of death" after installation. April 12, 2013 V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities Multiple security vulnerabilities exist...

  18. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    have been reported in Apache HTTP Server July 29, 2013 V-205: IBM Tivoli System Automation for Multiplatforms Java Multiple Vulnerabilities The weakness and the...

  19. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    "blue screen of death" after installation. April 12, 2013 V-132: IBM Tivoli System Automation Application Manager Multiple Vulnerabilities Multiple security vulnerabilities exist...

  20. BG/Q File Systems | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BGQ File Systems Disk Quota Using HPSS Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM...

  1. JC3 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ESX and ESXi March 29, 2013 V-122: IBM Tivoli Application Dependency Discovery Manager Java Multiple Vulnerabilities Multiple security vulnerabilities exist in the Java Runtime...

  2. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ESX and ESXi March 29, 2013 V-122: IBM Tivoli Application Dependency Discovery Manager Java Multiple Vulnerabilities Multiple security vulnerabilities exist in the Java Runtime...

  3. Advance Patent Waiver W(A)2005-048

    Broader source: Energy.gov [DOE]

    This is a request by IBM BLUEGENE/P DESIGN, PHASE III for a DOE waiver of domestic and foreign patent rights under agreement W-7405-ENG-48.

  4. Project Final Report: HPC-Colony II (Technical Report) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Authors: Jones, Terry R (ORNL); Kale, Laxmikant V (University of Illinois, Urbana-Champaign); Moreira, Jose (IBM T. J. Watson Research Center) ...

  5. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Knuepfer, Andreas Technische Universitat Dresden (2) Moreira, Jose IBM T. J. Watson ... multiprocessing nodes Jones, Terry R. ; Watson, Pythagoras C. ; Tuel, William ; Brenner, ...

  6. Visiting Speaker Program – July 16, 2008

    Broader source: Energy.gov [DOE]

    Jonathan Breul (IBM Center for the Business of Government) and Frank DiGiammarino (National Academy of Public Administration, NAPA). Companion Book, Event Slideshow and Presentation

  7. Dr. Robinson E. Pino | U.S. DOE Office of Science (SC)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    (AFRL) where he was a program manager and principal scientist for the computational intelligence and neuromorphic computing research efforts. He also worked at IBM as an...

  8. Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30% |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy - Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30%. October 27, 2015 - 11:48am. IBM YouTube video | Courtesy of IBM. Remember when IBM's supercomputer Watson defeated Jeopardy! champions Ken Jennings and Brad Rutter? With funding from the U.S. Department of Energy SunShot Initiative, IBM researchers are using Watson-like technology to improve solar forecasting accuracy by as much

  9. EERE Success Story-Solar Forecasting Gets a Boost from Watson, Accuracy

    Office of Environmental Management (EM)

    Improved by 30% | Department of Energy. EERE Success Story - Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30%. October 27, 2015 - 11:48am. IBM YouTube video | Courtesy of IBM. Remember when IBM's supercomputer Watson defeated Jeopardy! champions Ken Jennings and Brad Rutter? With funding from the U.S. Department of Energy SunShot Initiative, IBM researchers are using Watson-like technology to improve solar

  10. Multi Platform Graphics Subroutine Library

    Energy Science and Technology Software Center (OSTI)

    1992-02-21

    DIGLIB is a collection of general graphics subroutines. It was designed to be small, reasonably fast, device-independent, and compatible with DEC-supplied operating systems for VAXes, PDP-11s, and LSI-11s, and the DOS operating system for IBM PCs and IBM-compatible machines. The software is readily usable by casual programmers for two-dimensional plotting.

  11. Bassi_intro_NUG06.ppt

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Bassi: IBM POWER 5 p575. Richard Gerber, NERSC User Services Group, RAGerber@lbl.gov. NUG, June 13, 2006, Princeton Plasma Physics Lab. About Bassi: Bassi is an IBM p575 POWER 5 cluster * It is a distributed memory computer, with 111 single-core 8-way SMP compute nodes. * 888 processors are available to run scientific computing applications. * Each node has 32 GB of memory. * The nodes are connected by IBM's proprietary HPS network. * It is named in honor of

  12. A2 Processor User's Manual for Blue Gene/Q

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A2 Processor User's Manual for Blue Gene/Q. Note: This document and the information it contains are provided on an as-is basis. There is no plan for providing future updates and corrections to this document. October 23, 2012, Version 1.3. © Copyright International Business Machines Corporation 2010, 2012. Printed in the United States of America, October 2012. IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business

  13. PII: S0368-2048(98)00286-2

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Liquid crystal alignment by rubbed polymer surfaces: a microscopic bond orientation model. J. Stöhr*, M.G. Samant, IBM Research Division, Almaden Research Center, 650 Harry Road, San Jose, CA 95120 USA. Dedication by J. Stöhr - This paper is dedicated to Dick Brundle, who for many years was my colleague at the IBM Almaden Research Center. Dick was responsible for my hiring by IBM, and over the years we interacted with each other in many roles - as each other's boss or simply as colleagues.

  14. ASC_machines_cielo_2

    National Nuclear Security Administration (NNSA)

    Leading High-Performance Computing [timeline chart: ASC #1 TOP500 winners, ASC supercomputers, and ASC future supercomputers, 1996-2012, 10^12 to 10^15 FLOPS]. ASCI Red * First sustained teraFLOPS machine * 1 teraFLOPS * #1 TOP500 6/97-11/00 * Intel. ASCI White * First routinely shared tri-lab resource * 12 teraFLOPS * #1 TOP500 11/00-6/02 * IBM. ASCI Blue Mountain * 3 teraFLOPS * SGI. ASC Purple * 100 teraFLOPS * IBM. * 3 teraFLOPS * IBM. ASC Red Storm * 40 teraFLOPS * Cray. ASCI Q * 20 teraFLOPS * Compaq

  15. Timeline of Events: 2001 | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Timeline of Events: 2001. August 15, 2001: IBM's ASCI White - Lawrence Livermore National Laboratory dedicates the "world's fastest supercomputer," the IBM ASCI White. June 28, 2001: President Bush announces $85.7 million in Federal grants - President Bush speaks to employees at DOE's Forrestal building in Washington, D.C., announcing $85.7 million in Federal grants.

  16. Helicity evolution at small-x

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Kovchegov, Yuri V.; Pitonyak, Daniel; Sievert, Matthew D.

    2016-01-13

    We construct small-x evolution equations which can be used to calculate quark and anti-quark helicity TMDs and PDFs, along with the g1 structure function. These evolution equations resum powers of αs ln²(1/x) in the polarization-dependent evolution along with the powers of αs ln(1/x) in the unpolarized evolution, which includes saturation effects. The equations are written in an operator form in terms of polarization-dependent Wilson line-like operators. While the equations do not close in general, they become closed and self-contained systems of non-linear equations in the large-Nc and large-Nc & Nf limits. As a cross-check, in the ladder approximation, our equations map onto the same ladder limit of the infrared evolution equations for the g1 structure function derived previously by Bartels, Ermolaev and Ryskin.

  17. 2011 NERSC User Survey (Read Only)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sun Solaris IBM AIX HP HPUX SGI IRIX Other PC Systems Windows 7 Windows Vista Windows XP Windows 2000 Other Windows Mac Systems MacOS X MacOS 9 or earlier Other Mac Other...

  18. Solar Forecasting Gets a Boost from Watson, Accuracy Improved...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Solar Forecasting Gets a Boost from Watson, Accuracy Improved by 30%. October 27, 2015 - 11:48am. IBM ...

  19. Blog Feed: Vehicles | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    from the latest Clean Energy Jobs Roundup. August 7, 2012 Principal Deputy Director Eric Toone, former ARPA-E Director Arun Majumdar, the Honorable Bart Gordon and IBM Research...

  20. U-048: HP LaserJet Printers Unspecified Flaw Lets Remote Users...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    T-699: EMC AutoStart Buffer Overflows Let Remote Users Execute Arbitrary Code U-049: IBM Tivoli Netcool Reporter CGI Bug Lets Remote Users Inject Commands on the Target System...

  1. 1950s | OSTI, US Dept of Energy, Office of Scientific and Technical...

    Office of Scientific and Technical Information (OSTI)

    Photo 1950: IBM Punch Cards 1950: Maintenance of Kodak Film Processor 1950: Atoms for Peace Program Material 1950: Troops Train 1950: Manager 1951-1955 Armen Gregory Abdian 1950: ...

  2. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    code. In this talk, several of the more useful optimization options for the IBM Fortran, C, and C++ compilers are described, and recommendations are given on which of...

  3. Item Management Control System

    Energy Science and Technology Software Center (OSTI)

    1993-08-06

    The Item Management Control System (IMCS) has been developed at Idaho National Engineering Laboratory to assist in organizing collections of documents using an IBM-PC or similar DOS system platform.

  4. Cosmological Simulations for Large-Scale Sky Surveys | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    on all HPC systems. In particular, on the IBM BG/Q system, HACC has reached very high levels of performance: almost 14 petaflops (the highest ever recorded by a science code)...

  5. Introducing Mira, Argonne's Next-Generation Supercomputer

    SciTech Connect (OSTI)

    2013-03-19

    Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.

  6. JC3 Bulletin Archive | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    was reported in IBM Tivoli Federated Identity Manager. January 18, 2013 V-072: Red Hat update for java-1.7.0-openjdk Red Hat has issued an update for java-1.7.0-openjdk....

  7. 2000 User Survey Results

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    "NERSC has been the most stable supercomputer center in the country particularly with the migration from the T3E to the IBM SP". "Makes supercomputing easy." Below are the survey...

  8. NUG Meeting February 22, 2001

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    8:45 - 9:30 Walt Polansky Perspectives from Washington 9:30 - 10:30 Bill Kramer Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning 10:30 - ... ...

  9. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning February 22, 2001 | Author(s): Bill Kramer | Download File: Kramer.Status.Plans.Feb2001.ppt | ppt | 6.8 ...

  10. Presentations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning February 22, 2001 | Author(s): Bill Kramer | Download File: Kramer.Status.Plans.Feb2001.ppt | ppt | 6.8

  11. Agenda

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Perspectives from Washington 9:30 - 10:30 Bill Kramer Status reports: IBM SP Phase 2 plans, NERSC-4 plans, NERSC-2 decommissioning 10:30 - 10:45 Break 10:45 - 11:15 Bill Kramer ...

  12. STRIPESPDSlidesApril_2010.pdf | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    STRIPESPDSlidesApril2010.pdf (PDF). More Documents & Publications: The document title is Arial, 32-point bold. IBM...

  13. DOE/SF/15929-1 FSC-ESD-86-368-11 SURVIVABILITY ENHANCEMENT STUDY

    Office of Scientific and Technical Information (OSTI)

    ... STATION (e.g., AT&T) ... C3 intelligence/battle management (C3I/BM) network to the ... Conventional thrust and journal bearings can be used with ...

  14. Businesses Seek Fuel Cells to Meet Sustainability Goals, Provide...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    New customers include Home Depot, Dietz & Watson, IBM, Panasonic Avionics, Johnson & Johnson, FreezPak, and Uline. Almost one quarter of the top 100 companies on the Fortune 500 ...

  15. Energy Department Announces New SunShot Investment in Solar Energy...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... In Armonk, New York, the IBM Thomas J. Watson Research Center will lead a new project based on the Watson computer system that uses big data processing and self-adjusting ...

  16. Cetus and Vesta | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    jobs in order to debug problems that occurred on Mira. Cetus System Configuration: Architecture: IBM BG/Q; Processor: 16 1600 MHz PowerPC A2 cores; Cabinets: 4; Nodes: 4,096; Cores...

  17. through Los Alamos National

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Area schools get new computers through Los Alamos National Laboratory, IBM partnership. May 8, 2009. LOS ALAMOS, New Mexico - Thanks to a partnership between Los Alamos National Laboratory and IBM, Northern New Mexico schools are recipients of fully loaded desktop and laptop computers. Officials from the Laboratory's Community Programs Office, the Española School Board, and elected officials including Española Mayor Joseph Maestas recently dedicated the technology center at Española

  18. Mira/Cetus/Vesta | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mira, an IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility, is equipped with

  19. Landscape NERSC template

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NATIONAL ENERGY RESEARCH SCIENTIFIC COMPUTING CENTER - A Comparison of Performance Analysis Tools on the NERSC SP. Jonathan Carter, NERSC User Services. Performance Tools on the IBM SP: * PE Benchmarker - IBM PSSP - Trace and visualize hardware counter values or MPI and user-defined events * Paraver - European Center for Parallelism at Barcelona (CEPBA) - Trace and visualize program states, hardware counter values, and MPI and user-defined

  20. Microsoft PowerPoint - 2009 04 Salishan prog models CLEAN-Stunkel [Compatibility Mode]

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Salishan conference, April 2009. Impacts of Energy Efficiency on Supercomputer Programming Models. Craig Stunkel, IBM Research. What is a programming model? A programming model is a story - a common conceptual framework - used by application developers, algorithm designers, ... It may be realized through one or more of: * Libraries * Language/compiler extensions - pragmas, ...

  1. Case Study: Driving Operational Changes Through an Energy Monitoring System

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    reporting, checklists, energy targets, and feedback leads to effective organizational change. Driving Operational Changes Through an Energy Monitoring System: In 2006, IBM launched a corporate efficiency program focused on basic operation improvements in its diverse and far-flung real estate operations. The efficiency program had behavior change as a major focus. Examples of changes include the following: * IBM implemented a monthly energy reporting system for its various facilities where

  2. INCITE Program Doles Out Hours on Supercomputers | Department of Energy

    Office of Environmental Management (EM)

    INCITE Program Doles Out Hours on Supercomputers. November 5, 2012 - 1:30pm. Mira, the 10-petaflop IBM Blue Gene/Q system at Argonne National Laboratory, is capable of carrying out 10 quadrillion calculations per second. Each year researchers apply to the INCITE program to get to use this machine's incredible computing power. | Photo courtesy of Argonne National Lab.

  3. Driving Operational Changes Through an Energy Monitoring System |

    Office of Environmental Management (EM)

    Department of Energy - Driving Operational Changes Through an Energy Monitoring System. Fact sheet describes a case study of IBM's corporate energy efficiency monitoring program that focuses on basic improvements in its real estate operations. PDF: ic_ibm.pdf. More Documents & Publications: Driving Operational Changes Through an Energy Monitoring System; Data, Feedback, and Awareness Lead to Big Energy Savings; Connecting

  4. heat_ghc02

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    [Figure: IBM SP performance versus expansion level for L=1 and L=2 runs on 16 and 64 PEs: (a) calculation time and calculation slowdown; (b) communication time.]

  5. PII: S0304-8853(99)00407-2

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Tel.: +1-408-927-2461; fax: +1-408-927-2100. E-mail address: stohr@almaden.ibm.com (J. Stöhr). Journal of Magnetism and Magnetic Materials 200 (1999) 470-497. Exploring the microscopic origin of magnetic anisotropies with X-ray magnetic circular dichroism (XMCD) spectroscopy. J. Stöhr*, IBM Research Division, Almaden Research Center, 650 Harry Road, San Jose, CA 95120-6099, USA. Received 11 February 1999; received in revised form 13 April 1999. Abstract: Symmetry breaking and bonding at

  6. Weaving New York's Solar Industry Web | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Weaving New York's Solar Industry Web. June 29, 2010 - 11:00am. Solar films are manufactured at Precision Flow Technologies' Kingston, N.Y., facility. The factory once served as an IBM plant. | Photo Courtesy of Kevin Brady. Stephen Graff, Former Writer & Editor for Energy Empowers, EERE

  7. Interacting boson model from energy density functionals: {gamma}-softness and the related topics

    SciTech Connect (OSTI)

    Nomura, K.

    2012-10-20

    A comprehensive way of deriving the Hamiltonian of the interacting boson model (IBM) is described. Based on the fact that the multi-nucleon induced surface deformation in finite nucleus is simulated by effective boson degrees of freedom, the potential energy surface calculated with self-consistent mean-field method employing a given energy density functional (EDF) is mapped onto the IBM analog, and thereby the excitation spectra and transition rates with good symmetry quantum numbers are calculated. Recent applications of the proposed approach are reported: (i) an alternative robust interpretation of the {gamma}-soft nuclei and (ii) shape coexistence in lead isotopes.

  8. Buildings Energy Data Book: 5.7 Appliances

    Buildings Energy Data Book [EERE]

    2007 Personal Computer Manufacturer Market Shares (Percent of Products Produced)

    Company            Desktop Market Share (%)   Portable Market Share (%)
    Dell               32                         25
    Hewlett-Packard    24                         26
    Gateway            5                          4
    Apple              4                          9
    Acer America       3                          N/A
    IBM                1                          N/A
    Micron             0                          N/A
    Toshiba            N/A                        12
    Lenovo (IBM)       N/A                        6
    Sony               N/A                        5
    Fujitsu Siemens    N/A                        1
    Others             30                         13
    Total              100                        100

    Source(s): Total Desktop Computer Units Shipped: 34,211,601; Total Portable Computer Units Shipped: 30,023,844

  9. The quest data mining system

    SciTech Connect (OSTI)

    Agrawal, R.; Mehta, M.; Shafer, J.; Srikant, R.

    1996-12-31

    The goal of the Quest project at the IBM Almaden Research Center is to develop technology to enable a new breed of data-intensive decision-support applications. This paper is a capsule summary of the current functionality and architecture of the Quest data mining system.

  10. EIA directory of electronic products. First quarter 1994

    SciTech Connect (OSTI)

    Not Available

    1994-04-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers.

  11. EIA directory of electronic products, Third quarter 1995

    SciTech Connect (OSTI)

    1996-02-01

    EIA makes available for public use a series of machine-readable data files and computer models on magnetic tapes. Selected data files/models are also available on diskette for IBM-compatible personal computers. For each product listed in this directory, a detailed abstract is provided which describes the data published. Ordering information is given in the preface. Indexes are included.

  12. EIA directory of electronic products. Fourth quarter 1995

    SciTech Connect (OSTI)

    1996-08-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers.

  13. MS FORTRAN Extended Libraries

    Energy Science and Technology Software Center (OSTI)

    1986-09-01

    DISPPAK is a set of routines for use with Microsoft FORTRAN programs that allows the flexible display of information on the screen of an IBM PC in both text and graphics modes. The text mode routines allow the cursor to be placed at an arbitrary point on the screen and text to be displayed at the cursor location, making it possible to create menus and other structured displays. A routine to set the color of the characters that these routines display is also provided. A set of line drawing routines is included for use with IBM's Color Graphics Adapter or an equivalent board (such as the Enhanced Graphics Adapter in CGA emulation mode). These routines support both pixel coordinates and a user-specified set of real number coordinates. SUBPAK is a function library which allows Microsoft FORTRAN programs to calculate random numbers, issue calls to the operating system, read individual characters from the keyboard, perform Boolean and shift operations, and communicate with the I/O ports of the IBM PC. In addition, peek and poke routines, a routine that returns the address of any variable, and routines that can access the system time and date are included.

  14. New Advances in Neutrinoless Double Beta Decay Matrix Elements

    SciTech Connect (OSTI)

    Munoz, Jose Barea [Instituto de Estructura de la Materia, C.S.I.C. Unidad Asociada al Departamento de Fisica Atomica, Molecular y Nuclear, Facultad de Fisica, Universidad de Sevilla, Apartado 1065, 41080 Sevilla (Spain)

    2010-08-04

    We present the matrix elements necessary to evaluate the half-life of some neutrinoless double beta decay candidates in the framework of the microscopic interacting boson model (IBM). We compare our results with those from other models and extract some simple features of the calculations.

  15. Neutrinoless double beta decay in the microscopic interacting boson model

    SciTech Connect (OSTI)

    Iachello, F. [Center for Theoretical Physics, Sloane Physics Laboratory Yale University New Haven, CT 06520-8120 (United States)

    2009-11-09

    The results of a calculation of the nuclear matrix elements for neutrinoless double beta decay in the closure approximation in several nuclei within the framework of the microscopic interacting boson model (IBM-2) are presented and compared with those calculated in the shell model (SM) and quasiparticle random phase approximation (QRPA)

  16. History of Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    [Flattened systems table: entries include a 3,328-processor system with IBM Colony interconnect, and the 1999 Cray Y-MP J90 machines JWatson and FCrick (Cray CMOS, 100 MHz, 32 processors, 250 MB SMP).]

  17. Opportunities for high aspect ratio micro-electro-magnetic-mechanical systems (HAR-MEMMS) at Lawrence Berkeley Laboratory

    SciTech Connect (OSTI)

    Hunter, S.

    1993-10-01

    This report contains viewgraphs on the following topics: Opportunities for HAR-MEMMS at LBL; Industrial Needs and Opportunities; Deep Etch X-ray Lithography; MEMS Activities at BSAC; DNA Amplification with Microfabricated Reaction Chamber; Electrochemistry Research at LBL; MEMS Activities at LLNL; Space Microsensors and Microinstruments; The Advanced Light Source; Institute for Micromachining; IBM MEMS Interests; and Technology Transfer Opportunities at LBL.

  18. UR principles2.indd | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    UR principles2.indd (PDF). More Documents & Publications: Advance Patent Waiver W(A)2006-028 International Business Machines Corporation; WA_05_056_IBM_WATSON_RESEARCH_CENTER_Waiver_of_Domestic_and_.pdf

  19. Intellectual Property (IP) Service Providers for Acquisition and Assistance

    Office of Environmental Management (EM)

    Transactions | Department of Energy DOE_IP_Counsel_for_DOE_Laboratories 2015 More Documents & Publications Intellectual Property (IP) Service Providers for Acquisition and Assistance Transactions WA_05_056_IBM_WATSON_RESEARCH_CENTER_Waiver_of_Domestic_and_.pdf Need to Consider Intentional Destructive Acts in NEPA Documents

  20. July 28, 2010, Partnerships of academia, industry, and government labs

    Office of Environmental Management (EM)

    UNCLASSIFIED. * Interdisciplinary nature of research * Rapid transition from research to products * One size does not fit all. Partnerships of academia, industry, and government labs. Network Science Collaborative Technology Alliance: an Interdisciplinary Collaboration Model. Social/Cognitive Network ARC * Principal Member - Rensselaer Polytechnic Institute * General Members - CUNY, Northeastern Univ, IBM. Communication Networks ARC * Principal Member - Penn State

  1. WA_00_015_COMPAQ_FEDERAL_LLC_Waiver_Domestic_and_Foreign_Pat.pdf |

    Broader source: Energy.gov (indexed) [DOE]

    Department of Energy 0_015_COMPAQ_FEDERAL_LLC_Waiver_Domestic_and_Foreign_Pat.pdf More Documents & Publications WA_01_018_IBM_Waiver_of_Governement_US_and_Foreign_Patent_Ri.pdf Advance Patent Waiver W(A)2002-023 WC_1997_004_CLASS_ADVANCE_WAIVER_Under_Domestic_First_and_Se.pdf

  2. Electromagnetic Reciprocity.

    SciTech Connect (OSTI)

    Aldridge, David F.

    2014-11-01

    A reciprocity theorem is an explicit mathematical relationship between two different wavefields that can exist within the same space-time configuration. Reciprocity theorems provide the theoretical underpinning for modern full waveform inversion solutions, and also suggest practical strategies for speeding up large-scale numerical modeling of geophysical datasets. In the present work, several previously developed electromagnetic reciprocity theorems are generalized to accommodate a broader range of medium, source, and receiver types. Reciprocity relations enabling the interchange of various types of point sources and point receivers within a three-dimensional electromagnetic model are derived. Two numerical modeling algorithms in current use are successfully tested for adherence to reciprocity. Finally, the reciprocity theorem forms the point of departure for a lengthy derivation of electromagnetic Frechet derivatives. These mathematical objects quantify the sensitivity of geophysical electromagnetic data to variations in medium parameters, and thus constitute indispensable tools for solution of the full waveform inverse problem. ACKNOWLEDGEMENTS Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Significant portions of the work reported herein were conducted under a Cooperative Research and Development Agreement (CRADA) between Sandia National Laboratories (SNL) and CARBO Ceramics Incorporated. The author acknowledges Mr. Chad Cannan and Mr. Terry Palisch of CARBO Ceramics, and Ms. Amy Halloran, manager of SNL's Geophysics and Atmospheric Sciences Department, for their interest in and encouragement of this work. Special thanks are due to Dr. Lewis C. Bartel (recently retired from Sandia National Laboratories and now a geophysical consultant) and Dr. Chester J. Weiss (recently rejoined Sandia National Laboratories) for many stimulating (and reciprocal!) discussions regarding the topic at hand.
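
The interchange of point sources and point receivers described above has a simple discrete analogue that can be checked numerically. The sketch below is an illustration of that idea, not the paper's algorithm: for any symmetric system matrix (here a small, invented 4-node network), the response at one node to a unit source at another is unchanged when source and receiver are swapped.

```python
# Discrete reciprocity check: for symmetric A, the field at node j due to a
# unit source at node i equals the field at node i due to a unit source at j.
# Pure Python, no external dependencies; the 4x4 matrix is illustrative only.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def response(A, src, rcv):
    """Field at node `rcv` due to a unit point source at node `src`."""
    b = [0.0] * len(A)
    b[src] = 1.0
    return solve(A, b)[rcv]

# A symmetric, diagonally dominant system matrix (e.g. a grounded network).
A = [[ 3.0, -1.0, -1.0,  0.0],
     [-1.0,  4.0, -1.0, -1.0],
     [-1.0, -1.0,  4.0, -1.0],
     [ 0.0, -1.0, -1.0,  3.0]]

u_ab = response(A, 0, 3)   # source at node 0, receiver at node 3
u_ba = response(A, 3, 0)   # source and receiver interchanged
assert abs(u_ab - u_ba) < 1e-12
```

This is exactly the kind of adherence-to-reciprocity test the abstract says was applied to the production modeling codes, just at toy scale.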

  3. A study of electromagnetic characteristics of {sup 124,126,128,130,132,134,136}Ba isotopes performed in the framework of IBA

    SciTech Connect (OSTI)

    Turkan, N.

    2010-01-15

    The level schemes of the transitional nuclei {sup 124,126,128,130,132,134,136}Ba can be studied with both versions of the interacting boson model (IBM-1 and IBM-2), and the adequacy of the model in describing E2 transitions is thereby confirmed. Many of the {delta}(E2/M1) mixing ratios not known so far are predicted, and the parameter set used in these calculations is the best approximation carried out to date. The interacting boson approximation turns out to be fairly reliable for the calculation of spectra across the entire set of {sup 124,126,128,130,132,134,136}Ba isotopes.

  4. ICP-MS Data Analysis Software

    Energy Science and Technology Software Center (OSTI)

    1999-01-14

    VG2XL - this program reads binary data files generated by VG Instruments inductively coupled plasma mass spectrometers using PlasmaQuad software versions 4.2.1 and 4.2.2 running under IBM OS/2. ICPCalc - this module is a macro for Microsoft Excel written in VBA (Visual Basic for Applications) that performs data analysis for ICP-MS data required for nuclear materials that cannot readily be done with the vendor's software. VG2GRAMS - this program reads binary data files generated by VG Instruments inductively coupled plasma mass spectrometers using PlasmaQuad software versions 4.2.1 and 4.2.2 running under IBM OS/2.

  5. A valiant little terminal: A VLT user's manual

    SciTech Connect (OSTI)

    Weinstein, A.

    1992-08-01

    VLT came to be used at SLAC (Stanford Linear Accelerator Center), because SLAC wanted to assess the Amiga's usefulness as a color graphics terminal and T{sub E}X workstation. Before the project could really begin, the people at SLAC needed a terminal emulator which could successfully talk to the IBM 3081 (now the IBM ES9000-580) and all the VAXes on the site. Moreover, it had to compete in quality with the Ann Arbor Ambassador GXL terminals which were already in use at the laboratory. Unfortunately, at the time there was no commercial program which fit the bill. Luckily, Willy Langeveld had been independently hacking up a public domain VT100 emulator written by Dave Wecker et al. and the result, VLT, suited SLAC's purpose. Over the years, as the program was debugged and rewritten, the original code disappeared, so that now, in the present version of VLT, none of the original VT100 code remains.

  6. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP: HIGH PERFORMANCE COMPUTING WITH QCDOC AND BLUEGENE.

    SciTech Connect (OSTI)

    CHRIST,N.; DAVENPORT,J.; DENG,Y.; GARA,A.; GLIMM,J.; MAWHINNEY,R.; MCFADDEN,E.; PESKIN,A.; PULLEYBLANK,W.

    2003-03-11

    Staff of Brookhaven National Laboratory, Columbia University, IBM and the RIKEN BNL Research Center organized a one-day workshop held on February 28, 2003 at Brookhaven to promote the following goals: (1) To explore areas other than QCD applications where the QCDOC and BlueGene/L machines can be applied to good advantage, (2) To identify areas where collaboration among the sponsoring institutions can be fruitful, and (3) To expose scientists to the emerging software architecture. This workshop grew out of an informal visit last fall by BNL staff to the IBM Thomas J. Watson Research Center that resulted in a continuing dialog among participants on issues common to these two related supercomputers. The workshop was divided into three sessions, addressing the hardware and software status of each system, prospective applications, and future directions.

  7. 3081/E processor

    SciTech Connect (OSTI)

    Kunz, P.F.; Gravina, M.; Oxoby, G.; Rankin, P.; Trang, Q.; Ferran, P.M.; Fucci, A.; Hinton, R.; Jacobs, D.; Martin, B.

    1984-04-01

    The 3081/E project was formed to prepare a much improved IBM mainframe emulator for the future. Its design is based on a large amount of experience in using the 168/E processor to increase available CPU power in both online and offline environments. The processor will be at least equal to the execution speed of a 370/168 and up to 1.5 times faster for heavy floating point code. A single processor will thus be at least four times more powerful than the VAX 11/780, and five processors on a system would equal at least the performance of the IBM 3081K. With its large memory space and simple but flexible high speed interface, the 3081/E is well suited for the online and offline needs of high energy physics in the future.

  8. Recent developments in the theory of double beta decay

    SciTech Connect (OSTI)

    Iachello, F.; Kotila, J.; Barea, J.

    2013-12-30

    We report results of a novel calculation of phase space factors for 2νβ{sup +}β{sup +}, 2νβ{sup +}EC, 2νECEC, 0νβ{sup +}β{sup +}, and 0νβ{sup +}EC decays using exact Dirac wave functions and including finite nuclear size and electron screening corrections. We present results of expected half-lives for 0νβ{sup +}β{sup +} and 0νβ{sup +}EC decays obtained by combining the calculation of phase space factors with IBM-2 nuclear matrix elements.
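
The half-life estimates mentioned in the abstract combine these two ingredients multiplicatively. In the standard factorization used in the double-beta-decay literature (the notation below is conventional and assumed here, not taken from this abstract), the inverse half-life is the product of a phase space factor G, a nuclear matrix element M, and, for the neutrinoless modes, the effective neutrino mass:

```latex
% Standard factorization of double-beta-decay half-lives:
% a phase space factor G times a nuclear matrix element M
% (times the effective neutrino mass for the neutrinoless mode).
\begin{align}
\left[ t_{1/2}^{2\nu} \right]^{-1} &= G_{2\nu}\,\left| M_{2\nu} \right|^{2}, \\
\left[ t_{1/2}^{0\nu} \right]^{-1} &= G_{0\nu}\,\left| M_{0\nu} \right|^{2}
  \left| \frac{\langle m_{\nu} \rangle}{m_{e}} \right|^{2}.
\end{align}
```

In this factorization the paper's contribution is the G factors (from exact Dirac wave functions with finite-size and screening corrections), which are then paired with IBM-2 values for M.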

  9. The bgclang Compiler | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mira/Cetus/Vesta System Overview Data Storage & File Systems Compiling & Linking Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new

  10. ULTRA-DEEPWATER ADVISORY COMMITTEE

    Broader source: Energy.gov (indexed) [DOE]

    ULTRA-DEEPWATER ADVISORY COMMITTEE 25TH MEETING; DECEMBER 16, 2013; TELECONFERENCE 3 ATTENDEES: UDAC Members Mary Jane Wilson, Chair Doug Foster, Vice Chair George Cooper Quenton Dokken Hartley Downs James Litton Stephen Pye Lesli Wood U.S. Department of Energy Elena Melchert, Acting Designated Federal Officer Erica Folio, Committee Manager Michelle Rathbun, Meeting Recorder, IBM Members of the Public Jennifer Thompson, Shell DISCUSSION: The meeting was called to order by the Committee

  11. Salishan_Talk 4-14

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Yes Virginia, There is an HPSS in Your Future Dick Watson Lawrence Livermore National Laboratory 925-422-9216 dwatson@llnl.gov Development Partners - Lawrence Livermore National Laboratory - Oak Ridge National Laboratory - Los Alamos National Laboratory - Sandia National Laboratories - National Energy Research Scientific Computing Center - IBM HPSS Web Site URL: www.hpss-collaboration.org Prepared for Salishan Conference on High Speed Computing, Salishan Oregon, 4/24-27/2006 UCRL-PRES-220462 2

  12. EIA directory of electronic products fourth quarter 1993

    SciTech Connect (OSTI)

    Not Available

    1994-02-23

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers. For each product listed in this directory, a detailed abstract is provided which describes the data published.

  13. ALSNews Vol. 350

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ALSNews Vol. 350 High-Pressure MOF Research Yields Structural Insights Metal-organic frameworks have shown promise in a variety of applications ranging from gas storage to ion exchange. Accurate structural knowledge is key to the understanding of the applicability of these materials; to learn more, researchers used ALS Beamline 11.3.1 to perform in situ, high-pressure, single-crystal x-ray diffraction. Contact: Kevin Gagnon Industry @ ALS: IBM Probes Unique

  14. Recent HRIBF Research - Deviations from U(5) Symmetry in 116Cd

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Recent HRIBF Research - Deviations from U(5) Symmetry in 116Cd [J. C. Batchelder (UNIRIB), spokesperson] The cadmium isotopes near their mid-neutron shell, i.e., N=66, exhibit one of the well-known examples of shape coexistence [1]. In addition to the spherical quadrupole vibrational levels (hereafter referred to as normal phonon states), which are described within the Interacting Boson Model (IBM) by the U(5) limit, they possess intruding proton particle-hole configurations that give rise to

  15. 2009 CNM Users Meeting | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    9 CNM Users Meeting October 5-7, 2009 Full Information Available Here Meeting Summary Plenary Session Views from DOE and Washington Keynote Presentations Stephen Chou (Princeton University), "Nanostructure Engineering: A Path to Discovery and Innovation" Andreas Heinrich (IBM Almaden Research Center), "The Quantum Properties of Magnetic Nanostructures on Surfaces" User Science Highlights Focus Sessions Nanostructured Materials for Solar Energy Utilization Materials and

  16. 2014 INCITE Call for Proposals - Due June 28

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    INCITE Call for Proposals 2014 INCITE Call for Proposals - Due June 28 April 30, 2013 by Francesca Verdier The 2014 INCITE Call for Proposals is now open. Open to researchers from academia, government labs, and industry, the INCITE Program is the major means by which the scientific community gains access to the Leadership Computing Facilities' resources. INCITE is currently soliciting proposals for research on the 27-petaflops Cray XK7 "Titan" and the 10-petaflops IBM Blue Gene/Q

  17. Performance Application Programming Interface

    Energy Science and Technology Software Center (OSTI)

    2005-10-31

    PAPI is a programming interface designed to provide the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. This release covers the hardware dependent implementation of PAPI version 3 for the IBM BlueGene/L (BG/L) system.

  18. Watt-Sun: A Multi-Scale, Multi-Model, Machine-Learning Solar Forecasting

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Technology | Department of Energy Watt-Sun: A Multi-Scale, Multi-Model, Machine-Learning Solar Forecasting Technology As part of this project, new solar forecasting technology will be developed that leverages big data processing, deep machine learning, and cloud modeling integrated in a universal platform with an open architecture. Similar to the Watson computer system, this proposed technology

  19. Molecular Foundry

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Marissa Libbee Scientific Engineering Associate, NCEM mlibbee@lbl.gov 510.495.2308 Biography Marissa Libbee transitioned from the liberal arts world in 2005 and spent the next two years at the Center for Mathematics and Applied Sciences at San Joaquin Delta College where she completed her studies on electron microscopy with an emphasis on crystalline materials and biological ultra-structure. Before joining NCEM, Marissa worked for IBM Almaden on multi-layer magnetic thin films, for SanDisk with

  20. Speakers: Eric M. Lightner, U.S. Department of Energy William M. Gausman, Pepco Holdings, Inc.

    Gasoline and Diesel Fuel Update (EIA)

    8: "Smart Grid: Impacts on Electric Power Supply and Demand" Speakers: Eric M. Lightner, U.S. Department of Energy William M. Gausman, Pepco Holdings, Inc. Christian Grant, Booz & Company, Inc. F. Michael Valocchi, IBM Global Business Services [Note: Recorders did not pick up introduction of panel (see biographies for details on the panelists) or introduction of session.] Eric Lightner: Well, good morning, everybody. My name is Eric Lightner. I work at the U.S. Department of

  1. Scalable computations in penetration mechanics

    SciTech Connect (OSTI)

    Kimsey, K.D.; Schraml, S.J.; Hertel, E.S.

    1998-01-01

    This paper presents an overview of an explicit message passing paradigm for an Eulerian finite volume method for modeling solid dynamics problems involving shock wave propagation, multiple materials, and large deformations. Three-dimensional simulations of high-velocity impact were conducted on the IBM SP2, the SGI Power challenge Array, and the SGI Origin 2000. The scalability of the message-passing code on distributed-memory and symmetric multiprocessor architectures is presented and compared to the ideal linear performance.

  2. EIA - Energy Conferences & Presentations.

    Gasoline and Diesel Fuel Update (EIA)

    8 EIA Conference 2010 Session 8: Smart Grid: Impacts on Electric Power Supply and Demand Moderator: Eric M. Lightner, DOE Speakers: William M. Gausman, Pepco Holdings Christian Grant, Booz & Company, Inc. Michael Valocchi, IBM Global Business Services Moderator and Speaker Biographies Eric M. Lightner, DOE Eric M. Lightner has worked as a program manager for advanced technology development at the U.S. Department of Energy for the last 20 years. Currently, Mr. Lightner is the Director of the

  3. Gordon receives INCITE grant | The Ames Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Gordon receives INCITE grant Ames Laboratory scientist Mark Gordon has been awarded a 2016 INCITE grant from the U.S. Department of Energy's (DOE) Office of Science. The Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program was created as a primary means for scientists to access the DOE's supercomputers at Argonne and Oak Ridge national laboratories. The award made to Gordon includes 200 million processor hours of computing time on the IBM Blue Gene/Q

  4. Project Final Report: HPC-Colony II (Technical Report) | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Project Final Report: HPC-Colony II Citation Details In-Document Search Title: Project Final Report: HPC-Colony II This report recounts the HPC Colony II Project which was a computer science effort funded by DOE's Advanced Scientific Computing Research office. The project included researchers from ORNL, IBM, and the University of Illinois at Urbana-Champaign. The topic of the effort was adaptive system software for extreme scale parallel machines. A description of findings is included. Authors:

  5. Solar Forecast Improvement Project | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Solar Forecast Improvement Project For the Solar Forecast Improvement Project (SFIP), the Earth System Research Laboratory (ESRL) is partnering with the National Center for Atmospheric Research (NCAR) and IBM to develop more accurate methods for solar forecasts using their state-of-the-art weather models. APPROACH SFIP has three main goals: 1) to develop solar forecasting metrics tailored to the utility sector; 2) to improve solar

  6. Performing three-dimensional neutral particle transport calculations on tera scale computers

    SciTech Connect (OSTI)

    Woodward, C S; Brown, P N; Chang, B; Dorr, M R; Hanebutte, U R

    1999-01-12

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging terascale computers, the parallel code successfully combines the MPI message-passing and threading paradigms. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP 'ASCI Blue-Pacific' computer located at Lawrence Livermore National Laboratory (LLNL).
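
The sweep structure underlying such transport calculations can be illustrated with a toy model. The sketch below is a pure-Python illustration, not the code system described in the abstract: it splits a 1-D attenuation sweep across hypothetical ranks and checks that handing the boundary flux from one subdomain to the next reproduces the global result, the same pipelining idea message-passing transport codes rely on.

```python
# Toy 1-D transport "sweep" with the domain split across ranks. Pure
# attenuation only (no scattering or sources), so the result can be checked
# against exp(-sigma * L). The rank loop stands in for MPI point-to-point
# sends of the subdomain boundary flux from one rank to the next.
import math

def sweep(psi_in, sigma, dx, ncells):
    """Attenuate the incoming flux through `ncells` cells of width dx."""
    psi = psi_in
    for _ in range(ncells):
        psi *= math.exp(-sigma * dx)
    return psi

def parallel_sweep(psi_in, sigma, dx, ncells, nranks):
    """Split cells across `nranks` subdomains; pass the boundary flux on."""
    base, extra = divmod(ncells, nranks)
    psi = psi_in
    for rank in range(nranks):
        local = base + (1 if rank < extra else 0)
        psi = sweep(psi, sigma, dx, local)  # "send" psi to rank + 1
    return psi

sigma, dx, n = 0.5, 0.1, 100          # cross section, cell width, cell count
serial = sweep(1.0, sigma, dx, n)
parallel = parallel_sweep(1.0, sigma, dx, n, nranks=8)
assert abs(serial - parallel) < 1e-15
assert abs(serial - math.exp(-sigma * dx * n)) < 1e-9
```

Because each subdomain only needs the flux leaving its upstream neighbor, sweeps of this form scale to the thousands of processors and billions of unknowns quoted above.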

  7. Overview

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Vermont Regional Test Center (RTC) is situated in the town of Williston, Vermont (outside of Burlington), adjacent to IBM's semiconductor facility. Located on flat, unshaded land, the VT RTC adds a unique climate to the RTC portfolio, providing validation studies for photovoltaic (PV) components and systems in a northern location, where harsh winters, abrupt variability in weather, and moderate to high precipitation prevail. The VT RTC reflects a collaborative effort between Sandia National

  8. Queueing & Running Jobs | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System Overview Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Reservations Cobalt Job Control How to Queue a Job Running Jobs FAQs Queuing and Running on BG/Q Systems Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Queueing &

  9. gdb | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] gdb Using gdb Preliminaries You should prepare a debug version of your code: Compile using -O0 -g If you are using the XL

  10. gprof Profiling Tools | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Automatic Performance Collection (AutoPerf) Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] gprof Profiling Tools Contents Introduction Profiling on the

  11. predictive-models | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    predictive-models DOE/BC-88/1/SP. EOR Predictive Models: Handbook for Personal Computer Versions of Enhanced Oil Recovery Predictive Models. BPO Staff. February 1988. 76 pp. NTIS Order No. DE89001204. FORTRAN source code and executable programs for the five EOR Predictive Models shown below are available. The five recovery processes modeled are Steamflood, In-Situ Combustion, Polymer, Chemical Flooding, and CO2 Miscible Flooding. The models are available individually. Min Req.: IBM PC/XT, PS-2,

  12. Data Storage & File Systems | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BG/Q File Systems Disk Quota Using HPSS Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Data Storage & File Systems BG/Q File Systems BG/Q File Systems: An overview of the BG/Q file systems available at ALCF. Disk

  13. Data Transfer | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Using Globus Using GridFTP Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Data Transfer The Blue Gene/Q will connect to other research institutions using a total of 100 Gbit/s of public network connectivity. This allows scientists to transfer datasets to and from other institutions

  14. Debugging & Profiling | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Debugging & Profiling Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Debugging & Profiling Initial setups Core file settings - this page contains some environment

  15. Darshan | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Performance Tools & APIs Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Automatic Performance Collection (AutoPerf) Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Darshan References Darshan

  16. Determining Memory Use | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor Runjob termination TotalView Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Determining Memory Use Determining the amount of memory available during the execution of the program requires the use of

  17. Disk Quota | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BG/Q File Systems Disk Quota Using HPSS Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Disk Quota Mira, Cetus and Vesta Disk quotas are enabled on Mira, Cetus and Vesta home and project directories. The details of those

  18. Compiling and Linking FAQ | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Compiling and Linking FAQ Contents Where do I find

  19. Blue Gene/Q Versus Blue Gene/P | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System Overview Blue Gene/Q Versus Blue Gene/P BG/Q Drivers Status Machine Overview Machine Partitions Torus Network Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Blue Gene/Q Versus Blue

  20. Bradbury Museum's supercomputing exhibit gets updated

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Bradbury Museum's supercomputing exhibit gets updated The updated exhibit includes interactive displays, artifacts from early computers, vacuum tubes from the MANIAC computer, and unique IBM cell blades from Roadrunner. May 19, 2011 Bradbury Science Museum Contact: Communications Office (505) 667-7000 LOS ALAMOS, New Mexico, May 19, 2011. For decades, Los Alamos National Laboratory has been synonymous with supercomputing, achieving a

  1. GAMESS | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Performance Tools & APIs Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] GAMESS What Is GAMESS? The General Atomic and Molecular Electronic Structure System (GAMESS) is a general ab initio quantum chemistry package. For more information on GAMESS, see the Gordon research

  2. GROMACS | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] GROMACS Building and Running GROMACS on Vesta/Mira The Gromacs Molecular Dynamics package has a large number of executables. Some of them, such as luck, are just utilities that do not need to be built for the back end. Begin by

  3. Example Program and Makefile for BG/Q | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Facility Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Example Program and Makefile for BG/Q

  4. About

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    HPSS Mass Storage HPSS tape library The High Performance Storage System (HPSS) is a modern, flexible, performance-oriented mass storage system. It has been used at NERSC for archival storage since 1998. HPSS is Hierarchical Storage Management (HSM) software developed by a collaboration of DOE labs, of which NERSC is a participant, and IBM. The HSM software enables all user data to be ingested onto high performance disk arrays and automatically migrated to a very large
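
The disk-to-tape migration idea can be sketched as a simple age-based policy. The example below is a toy illustration only: the directory names and the single age threshold are invented, and real HPSS migration policies are far richer, but it captures the core HSM behavior of new data landing on a fast tier and cold data moving to an archival tier.

```python
# Minimal HSM-style migration sketch: files older than a threshold move
# from the "disk" tier to the "tape" tier. Illustrative only.
import os
import shutil
import tempfile
import time

def migrate(disk_dir, tape_dir, max_age_s):
    """Move files older than max_age_s from the disk tier to the tape tier."""
    moved = []
    now = time.time()
    for name in os.listdir(disk_dir):
        path = os.path.join(disk_dir, name)
        if now - os.path.getmtime(path) > max_age_s:
            shutil.move(path, os.path.join(tape_dir, name))
            moved.append(name)
    return moved

# Demo: one fresh file, one hour-old file (mtime backdated with os.utime).
disk = tempfile.mkdtemp(prefix="hsm_disk_")
tape = tempfile.mkdtemp(prefix="hsm_tape_")
for name, age in [("fresh.dat", 0), ("cold.dat", 3600)]:
    path = os.path.join(disk, name)
    with open(path, "w") as f:
        f.write("data")
    os.utime(path, (time.time() - age, time.time() - age))

assert migrate(disk, tape, max_age_s=60) == ["cold.dat"]
assert os.listdir(disk) == ["fresh.dat"]
```

The key property, as in HPSS, is that migration is transparent to the namespace: users keep addressing files by name while the policy decides which tier holds the bytes.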

  5. Coreprocessor | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Coreprocessor Coreprocessor is a basic parallel debugging tool that can be used to debug problems at all levels (hardware, kernel, and application). It is particularly useful when working with a large set of core files since it reveals where processors aborted, grouping them together automatically (for example, 9 died here, 500 were here, etc.). See the instructions below for using the Coreprocessor tool. References The Coreprocessor tool (IBM System Blue Gene Solution: Blue Gene/Q System
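
The grouping step can be sketched in a few lines. The example below is illustrative only (the stack signatures are invented, and Coreprocessor itself operates on Blue Gene core files): it counts how many ranks stopped at each location, producing exactly the "N died here" summary described above.

```python
# Group many ranks' failure locations so thousands of core files condense
# into a few summary lines. Stack signatures here are made up.
from collections import Counter

rank_stacks = (
    ["main > solver_iterate > exchange_halo"] * 500
    + ["main > solver_iterate > apply_bc"] * 9
    + ["main > checkpoint_write"] * 3
)

groups = Counter(rank_stacks)
for stack, count in groups.most_common():
    print(f"{count:5d} ranks stopped at: {stack}")
```

With 512 ranks this prints three lines instead of 512, which is what makes the approach useful at large scale.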

  6. Browse by Discipline -- E-print Network Subject Pathways: Materials Science

    Office of Scientific and Technical Information (OSTI)

    -- Energy, science, and technology for the research community -- hosted by the Office of Scientific and Technical Information, U.S. Department of Energy Rabaey, Jan M. (Jan M. Rabaey) - Department of Electrical Engineering and Computer Sciences, University of California at Berkeley Rabbah, Rodric (Rodric Rabbah) - Dynamic Optimization Group, IBM T.J. Watson Research Center Rabbat, Michael (Michael Rabbat) - Department of Electrical and Computer Engineering, McGill University

  7. Compiling & Linking | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System Overview Data Storage & File Systems Compiling & Linking Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource.

  8. Carver

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Carver Current Status: Retired. Carver was decommissioned on September 30, 2015. Please see Retirement Plans for more information. Carver, named in honor of American scientist George Washington Carver, is an IBM iDataPlex cluster. Its unique features are nodes with relatively high memory per core, a batch system that supports long-running jobs, and a rich set of third-party software applications. Carver also serves as the interactive host for various experimental

  9. Mira | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Resources Mira Cetus and Vesta Visualization Cluster Data and Networking Software JLSE Featured Videos Mira: Argonne's 10-Petaflop Supercomputer Mira's Dedication Ceremony Introducing Mira: Our Next-Generation Supercomputer Mira Mira Ushers in a New Era of Scientific Supercomputing As one of the fastest supercomputers, Mira, our 10-petaflops IBM Blue Gene/Q system, is capable of 10 quadrillion calculations per second. With this computing power, Mira can do in one day what it would take

  10. Oak Ridge National Laboratory - Computing and Computational Sciences

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Directorate Oak Ridge to acquire next generation supercomputer Oak Ridge to acquire next generation supercomputer The U.S. Department of Energy's (DOE) Oak Ridge Leadership Computing Facility (OLCF) has signed a contract with IBM to bring a next-generation supercomputer to Oak Ridge National Laboratory (ORNL). The OLCF's new hybrid CPU/GPU computing system, Summit, will be delivered in 2017. (more) Links Department of Energy Consortium for Advanced Simulation of Light Water Reactors Extreme

  11. Nanoscale Imaging of Strain using X-Ray Bragg Projection Ptychography |

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne National Laboratory Nanoscale Imaging of Strain using X-Ray Bragg Projection Ptychography October 1, 2012 Tweet EmailPrint Users of the Center for Nanoscale Materials (CNM) from IBM exploited nanofocused X-ray Bragg projection ptychography to determine the lattice strain profile in an epitaxial SiGe stressor layer of a silicon prototype device. The theoretical and experimental framework of this new coherent diffraction strain imaging approach was developed by Argonne's Materials

  12. LAMMPS | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] LAMMPS Overview LAMMPS is a general-purpose molecular dynamics software package for massively parallel computers. It is written in an exceptionally clean style that makes it one of the most popular codes for users to extend and

  13. Richard Luis Martin | Center for Gas SeparationsRelevant to Clean Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Technologies | Blandine Jerome Richard Luis Martin Previous Next List Martin Richard Luis Martin Formerly: Postdoctoral Research Fellow, Lawrence Berkeley National Laboratory Presently: Staff scientist, IBM PhD in Chemoinformatics, Sheffield University, UK BSc in Computer Science & Mathematics, Sheffield University, UK EFRC research: Porous materials, such as zeolites and metal-organic frameworks, hold great promise for potential application to many pressing energy-related challenges

  14. How to Manage Threading | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Overview of How to Compile and Link Example Program and Makefile for BG/Q How to Manage Threading bgclang Compiler Compiling and Linking FAQ Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] How to Manage Threading Contents Performance

  15. How to Queue a Job | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Reservations Cobalt Job Control How to Queue a Job Running Jobs FAQs Queuing and Running on BG/Q Systems Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] How to Queue a Job Using the Job Resource Manager on BG/Q: Commands, Options and Examples This document provides

  16. MADNESS | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Software & Libraries Boost CPMD Code_Saturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Cooley Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] MADNESS Overview MADNESS is a numerical tool kit used to solve integral differential equations using multi-resolution analysis and a low-rank separation representation. MADNESS can solve multi-dimensional equations, currently up

  17. NNSA NEWS DRAFT October final edits 19 2009

    National Nuclear Security Administration (NNSA)

    2009 National Nuclear Security Administration Monthly News (continued on page 2) This month President Obama presented Dr. Berni Alder, a retired physicist from Lawrence Livermore National Laboratory, with the National Medal of Science and awarded the National Medal of Technology and Innovation to IBM for its Blue Gene series of supercomputers, developed in partnership with NNSA. The awards are the nation's most prestigious honors in the fields of science and technology innovation. Alder is

  18. Audit Report: OAS-L-04-22 | Department of Energy

    Energy Savers [EERE]

    2 Audit Report: OAS-L-04-22 September 22, 2004 Completion of the Terascale Simulation Facility Project PDF icon OAS-L-04-22.pdf More Documents & Publications Audit Report: OAS-M-10-02 WA_01_018_IBM_Waiver_of_Governement_US_and_Foreign_Patent_Ri.pdf WA_00_015_COMPAQ_FEDERAL_LLC_Waiver_Domestic_and_Foreign_Pat.pdf

  19. Design and Fabrication of a Radiation-Hard 500-MHz Digitizer Using Deep Submicron Technology

    SciTech Connect (OSTI)

    K.K. Gan; M.O. Johnson; R.D. Kass; J. Moore

    2008-09-12

The proposed International Linear Collider (ILC) will use tens of thousands of beam position monitors (BPMs) for precise beam alignment. The signal from each BPM is digitized and processed for feedback control. We proposed the development of an 11-bit (effective) digitizer with 500 MHz bandwidth and 2 Gsamples/s. The digitizer was somewhat beyond the state of the art. Moreover, we planned to design the digitizer chip using deep-submicron technology with custom transistors that had proven to be very radiation hard (up to at least 60 Mrad). The design mitigated the need for costly shielding and long cables while providing ready access to the electronics for testing and maintenance. In FY06, as we prepared to submit a chip with test circuits and a partial ADC circuit, we found that IBM had changed the availability of our chosen IC fabrication process (IBM 6HP SiGe BiCMOS), making it unaffordable for us at roughly 3 times the previous price. This prompted us to change our design to the IBM 5HPE process with 0.35 μm feature size. We requested funding for FY07 to continue the design work and submit the first prototype chip. Unfortunately, the funding was not continued; we summarize below the work accomplished so far.
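The "11-bit (effective)" figure quoted above ties directly to the standard ideal-quantization relation SNR ≈ 6.02·N + 1.76 dB, and the 2 Gsamples/s rate sets the usable (Nyquist) bandwidth. A minimal sketch of that arithmetic (standard textbook relations, not calculations from the report itself):

```python
def ideal_snr_db(bits: float) -> float:
    """Ideal SNR (dB) of a full-scale sine wave through an N-bit quantizer."""
    return 6.02 * bits + 1.76

def enob(sinad_db: float) -> float:
    """Effective number of bits recovered from a measured SINAD (dB)."""
    return (sinad_db - 1.76) / 6.02

sample_rate_hz = 2e9
nyquist_hz = sample_rate_hz / 2  # max representable signal frequency: 1 GHz

print(ideal_snr_db(11))          # dynamic range implied by 11 effective bits
print(enob(ideal_snr_db(11)))    # inverting recovers the bit count
print(nyquist_hz)
```

So an 11-bit effective converter corresponds to roughly 68 dB of dynamic range at the digitizer output.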

  20. Development of an Immersed Boundary Method to Resolve Complex Terrain in the Weather Research and Forecasting Model

    SciTech Connect (OSTI)

Lundquist, K A; Chow, F K; Lundquist, J K; Mirocha, J D

    2007-09-04

Flow and dispersion processes in urban areas are profoundly influenced by the presence of buildings, which divert mean flow, affect surface heating and cooling, and alter the structure of turbulence in the lower atmosphere. Accurate prediction of velocity, temperature, and turbulent kinetic energy fields is necessary for determining the transport and dispersion of scalars. Correct predictions of scalar concentrations are vital in densely populated urban areas, where they are used to aid in emergency response planning for accidental or intentional releases of hazardous substances. Traditionally, urban flow simulations have been performed by computational fluid dynamics (CFD) codes, which can accommodate the geometric complexity inherent to urban landscapes. In these types of models the grid is aligned with the solid boundaries, and the boundary conditions are applied to the computational nodes coincident with the surface. If the CFD code uses a structured curvilinear mesh, then time-consuming manual manipulation is needed to ensure that the mesh conforms to the solid boundaries while minimizing skewness. If the CFD code uses an unstructured grid, then the solver cannot be optimized for the underlying data structure, which takes an irregular form. Unstructured solvers are therefore often slower and more memory-intensive than their structured counterparts. Additionally, urban-scale CFD models are often forced at lateral boundaries with idealized flow, neglecting dynamic forcing due to synoptic-scale weather patterns. These CFD codes solve the incompressible Navier-Stokes equations and include limited options for representing atmospheric processes such as surface fluxes and moisture. Traditional CFD codes therefore possess several drawbacks, due to the expense of either creating the grid or solving the resulting algebraic system of equations, and due to the idealized boundary conditions and the lack of full atmospheric physics.
Meso-scale atmospheric boundary layer simulations, on the other hand, are performed by numerical weather prediction (NWP) codes, which cannot handle the geometry of the urban landscape but do provide a more complete representation of atmospheric physics. NWP codes typically use structured grids with terrain-following vertical coordinates, include a full suite of atmospheric physics parameterizations, and allow for dynamic synoptic-scale lateral forcing through grid nesting. Terrain-following grids are unsuitable for urban terrain, as steep terrain gradients cause extreme distortion of the computational cells. In this work, we introduce and develop an immersed boundary method (IBM) to allow the favorable properties of a numerical weather prediction code to be combined with the ability to handle complex terrain. IBM uses a non-conforming structured grid, and allows solid boundaries to pass through the computational cells. As the terrain passes through the mesh in an arbitrary manner, the main goal of the IBM is to apply the boundary condition on the interior of the domain as accurately as possible. With the implementation of the IBM, numerical weather prediction codes can be used to explicitly resolve urban terrain. Heterogeneous urban domains using the IBM can be nested into larger mesoscale domains using a terrain-following coordinate. The larger mesoscale domain provides lateral boundary conditions to the urban domain with the correct forcing, allowing seamless integration between mesoscale and urban-scale models. Further discussion of the scope of this project is given by Lundquist et al. [2007]. The current paper describes the implementation of an IBM into the Weather Research and Forecasting (WRF) model, which is an open source numerical weather prediction code. The WRF model solves the non-hydrostatic compressible Navier-Stokes equations, and employs a pressure-based terrain-following vertical coordinate.
Many types of IB methods have been developed by researchers; a comprehensive review can be found in Mittal and Iaccarino [2005]. To the authors' knowledge, this is the first IBM approach that is able to
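The core idea the abstract describes — applying the boundary condition inside the domain when the terrain does not coincide with grid nodes — can be illustrated with a hypothetical one-dimensional ghost-cell sketch (my own toy construction, not code from the WRF implementation): the node just inside the solid gets a value extrapolated so that the discrete solution interpolates to the wall value exactly at the immersed boundary.

```python
def ghost_value(x_g, x_f, u_f, x_wall, u_wall):
    """Ghost-node value chosen so the line through (x_g, u_g) and (x_f, u_f)
    passes through (x_wall, u_wall), i.e. the immersed boundary condition
    is satisfied at the wall even though no grid node sits there."""
    a = x_wall - x_g   # distance from ghost node (inside the solid) to the wall
    b = x_f - x_wall   # distance from the wall to the first fluid node
    return u_wall + (u_wall - u_f) * a / b

# Toy check: fluid value 2.0 one node above a wall held at 1.0, with the
# wall at x = 0.28 cutting the cell between nodes at 0.2 and 0.3.
x_g, x_f, x_wall = 0.2, 0.3, 0.28
u_f, u_wall = 2.0, 1.0
u_g = ghost_value(x_g, x_f, u_f, x_wall, u_wall)

# Interpolating the discrete solution back to the wall recovers u_wall.
u_at_wall = u_g + (u_f - u_g) * (x_wall - x_g) / (x_f - x_g)
print(round(u_at_wall, 12))  # → 1.0
```

Note the ghost value can be far outside the physical range (here it is negative); it is a numerical device to enforce the boundary condition, not a physical field value.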

  1. Summary Slides of ALS Industry Highlights

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Industry Highlights Print No. Slide Beamline Full Web Highlight ALSNews Volume 18 Collaboration Produces World's Best Metrology Tool 6.1.2 01.27.2016 Vol. 369 17 Takeda Advances Diabetes Research at ALS 5.0.2, 5.0.3 06.02.2015 Vol. 364 16 Metrology for Next-Generation Nanopatterning 7.3.3, 11.0.1 01.28.2015 Vol. 360 15 Caribou Biosciences Has Roots at ALS - 09.24.2014 Vol. 357 13 Lithium-Battery Dendrite Growth: A New View 8.3.2 04.30.2014 Vol. 352 12 IBM Probes Material Capabilities at the ALS

  2. Experiences from the Roadrunner petascale hybrid systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J; Pakin, Scott; Lang, Mike; Sancho Pitarch, Jose C; Davis, Kei; Barker, Kevin J; Peraza, Josh

    2010-01-01

    The combination of flexible microprocessors (AMD Opterons) with high-performing accelerators (IBM PowerXCell 8i) resulted in the extremely powerful Roadrunner system. Many challenges in both hardware and software were overcome to achieve its goals. In this talk we detail some of the experiences in achieving performance on the Roadrunner system. In particular we examine several implementations of the kernel application, Sweep3D, using a work-queue approach, a more portable Thread-building-blocks approach, and an MPI on the accelerator approach.

  3. Performance assessment of OTEC power systems and thermal power plants. Final report. Volume I

    SciTech Connect (OSTI)

    Leidenfrost, W.; Liley, P.E.; McDonald, A.T.; Mudawwar, I.; Pearson, J.T.

    1985-05-01

    The focus of this report is on closed-cycle ocean thermal energy conversion (OTEC) power systems under research at Purdue University. The working operations of an OTEC power plant are briefly discussed. Methods of improving the performance of OTEC power systems are presented. Brief discussions on the methods of heat exchanger analysis and design are provided, as are the thermophysical properties of the working fluids and seawater. An interactive code capable of analyzing OTEC power system performance is included for use with an IBM personal computer.
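The performance constraint behind closed-cycle OTEC is the small temperature difference between warm surface water and cold deep water, which caps the thermodynamic efficiency at the Carnot limit; this is why the report's emphasis falls on heat-exchanger analysis. A quick sketch with illustrative temperatures (my own typical values, not figures from the report):

```python
def carnot_efficiency(t_warm_c: float, t_cold_c: float) -> float:
    """Carnot limit 1 - Tc/Th, with temperatures converted to kelvin."""
    t_warm_k = t_warm_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_warm_k

# Typical tropical surface water vs. water drawn from ~1 km depth.
eta = carnot_efficiency(26.0, 5.0)
print(f"Carnot limit: {eta:.1%}")  # prints "Carnot limit: 7.0%"
```

Real OTEC cycles achieve only a fraction of this already-small limit, which is why working-fluid properties and exchanger effectiveness dominate the system analysis.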

  4. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema (OSTI)

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2014-06-05

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
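The "billions of hours of computing time per year" claim checks out with simple arithmetic. Assuming Mira's public Blue Gene/Q configuration of 49,152 nodes with 16 cores each (these counts are from published specs, not from the record above):

```python
# Back-of-envelope: total core-hours Mira can deliver in a year.
nodes = 49_152
cores_per_node = 16
cores = nodes * cores_per_node           # 786,432 cores

hours_per_year = 24 * 365                # 8,760 hours
core_hours_per_year = cores * hours_per_year

print(cores)                # 786432
print(core_hours_per_year)  # ~6.9 billion core-hours
```

At that scale, an INCITE-style award of tens of millions of core-hours is only a few percent of the machine-year.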

  5. EIA directory of electronic products, first quarter 1995

    SciTech Connect (OSTI)

    1995-06-01

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers. EIA, as the independent statistical and analytical branch of the Department of Energy, provides assistance to the general public through the National Energy Information Center (NEIC). For each product listed in this directory, a detailed abstract is provided which describes the data published. Specific technical questions may be referred to the appropriate contact person.

  6. EIA directory of electronic products. Third quarter 1994

    SciTech Connect (OSTI)

    Not Available

    1994-09-01

The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. The data files and models are made available to the public on magnetic tapes. In addition, selected data files/models are available on diskette for IBM-compatible personal computers. EIA, as the independent statistical and analytical branch of the Department of Energy, provides assistance to the general public through the National Energy Information Center (NEIC). Inquirers may telephone NEIC's information specialists at (202) 586-8800 with any data questions relating to the content of EIA Directory of Electronic Products.

  7. Tomographic

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

study of atomic-scale redistribution of platinum during the silicidation of Ni0.95Pt0.05/Si(100) thin films. Praneet Adusumilli (1), Lincoln J. Lauhon (1), David N. Seidman (1), Conal E. Murray (2), Ori Avayu (3), and Yossi Rosenwaks (3). (1) Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, Illinois 60208-3108, USA; (2) IBM Thomas J. Watson Research Center, Yorktown Heights, New York 10598, USA; (3) School of Electrical Engineering, Tel-Aviv

Simulation analysis of within-day flow fluctuation effects on trout below Flaming Gorge Dam.

    SciTech Connect (OSTI)

    Railsback, S. F.; Hayse, J. W.; LaGory, K. E.; Environmental Science Division; EPRI

    2006-01-01

    In addition to being renewable, hydropower has the advantage of allowing rapid load-following, in that the generation rate can easily be varied within a day to match the demand for power. However, the flow fluctuations that result from load-following can be controversial, in part because they may affect downstream fish populations. At Flaming Gorge Dam, located on the Green River in northeastern Utah, concern has been raised about whether flow fluctuations caused by the dam disrupt feeding at a tailwater trout fishery, as fish move in response to flow changes and as the flow changes alter the amount or timing of the invertebrate drift that trout feed on. Western Area Power Administration (Western), which controls power production on submonthly time scales, has made several operational changes to address concerns about flow fluctuation effects on fisheries. These changes include reducing the number of daily flow peaks from two to one and operating within a restricted range of flows. These changes significantly reduce the value of the power produced at Flaming Gorge Dam and put higher load-following pressure on other power plants. Consequently, Western has great interest in understanding what benefits these restrictions provide to the fishery and whether adjusting the restrictions could provide a better tradeoff between power and non-power concerns. Directly evaluating the effects of flow fluctuations on fish populations is unfortunately difficult. Effects are expected to be relatively small, so tightly controlled experiments with large sample sizes and long study durations would be needed to evaluate them. Such experiments would be extremely expensive and would be subject to the confounding effects of uncontrollable variations in factors such as runoff and weather. Computer simulation using individual-based models (IBMs) is an alternative study approach for ecological problems that are not amenable to analysis using field studies alone. 
An IBM simulates how a population responds to environmental changes by representing how the population's individuals interact with their environment and each other. IBMs represent key characteristics of both individual organisms (trout, in this case) and the environment, thus allowing controlled simulation experiments to analyze the effects of changes in the key variables. For the flow fluctuation problem at Flaming Gorge Dam, the key environmental variables are flow rates and invertebrate drift concentrations, and the most important processes involve how trout adapt to changes (over space and time) in growth potential and mortality risk. This report documents simulation analyses of flow fluctuation effects on trout populations. The analyses were conducted in a highly controlled fashion: an IBM was used to predict production (survival and growth) of trout populations under a variety of scenarios that differ only in the level or type of flow fluctuation.

  9. News Item

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    4, 2015 Time: 11:00 am Speaker: Michael A. Guillorn, IBM T. J. Watson Research Center Title: Self-assembled, self-aligned and self healing: CMOS scaling enabled by stochastic suppression at the nanoscale Location: 67-3111 Chemla Room Abstract: The end of CMOS density scaling has been erroneously predicted by a number of authors for several decades. A review of some of this work was presented by Haensch, et al[1]. Many of these predictions arose from a belief that the only possible solutions to

  10. Mira: Our 10-PetaFLOPS Supercomputer | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Facility Our 10-PetaFLOPS Supercomputer Our new computer, named "Mira", is an IBM Blue Gene/Q, the third generation in a line of supercomputers that has topped the performance charts. Mira has been ranked the fourth fastest supercomputer in the world as of November 2012. Beyond providing hours of computing time, Mira itself is a stepping stone toward the next great goal of supercomputing: exascale speed, where computers will calculate quintillions of floating point operations per

  11. BlueGene/L Applications: Parallelism on a Massive Scale (Journal Article) |

    Office of Scientific and Technical Information (OSTI)

    SciTech Connect Journal Article: BlueGene/L Applications: Parallelism on a Massive Scale Citation Details In-Document Search Title: BlueGene/L Applications: Parallelism on a Massive Scale BlueGene/L (BG/L), developed through a partnership between IBM and Lawrence Livermore National Laboratory (LLNL), is currently the world's largest system both in terms of scale with 131,072 processors and absolute performance with a peak rate of 367 TFlop/s. BG/L has led the Top500 list the last four times

  12. simulators | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    simulators DOE/BC-89/3/SP. Handbook for Personal Computer Version of BOAST II: A Three- Dimensional, Three-Phase Black Oil Applied Simulation Tool. Bartlesville Project Office. January 1989. 82 pp. NTIS Order No. DE89000725. FORTRAN source code and executable program. Min. Req.: IBM PC/AT, PS-2, or compatible computer with 640 Kbytes of memory. Download 464 KB Manual 75 KB Manual 404 KB Reference paper (1033-3,v1) by Fanchi, et al. Manual 83 KB Reference paper (1033-3,v2) by Fanchi, et al. BOAST

  13. Design Forward and Fast Forward Semi-Annual Review

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    DesignForward » Design Forward / Fast Forward Semi-Annual Review Design Forward and Fast Forward Semi-Annual Review A semi-annual meeting for the Design Forward and Fast Forward initiatives will be held at the new Computational Research and Theory (CRT) Building at Lawrence Berkeley National Laboratory on September 21-25, 2015. Under the initiatives, AMD, Cray, IBM, Intel Federal and NVIDIA will work to advance extreme-scale computing technology on the path to exascale. AGENDA MONDAY, SEPTEMBER

  14. Featured Announcements

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    April 2013 2014 INCITE Call for Proposals - Due June 28 April 30, 2013 by Francesca Verdier The 2014 INCITE Call for Proposals is now open. Open to researchers from academia, government labs, and industry, the INCITE Program is the major means by which the scientific community gains access to the Leadership Computing Facilities' resources. INCITE is currently soliciting proposals for research on the 27-petaflops Cray XK7 "Titan" and the 10-petaflops IBM Blue Gene/Q "Mira"

  15. Chippewa Falls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

years... & When did it all begin? 1974? 1978? 1963? CDC 6600 - 1974: NERSC started service with the first Supercomputer... A well-used system - Serial Number 1 ● On its last legs... ● Designed and built in Chippewa Falls ● Launch Date: 1963 ● Load/Store Architecture ● First RISC Computer! ● First CRT Monitor ● Freon Cooled ● State-of-the-Art Remote Access at NERSC ● Via 4 acoustic modems, manually answered, capable of 10 characters/sec ● 50th Anniversary of the IBM/Cray Rivalry... 2/6/14

  16. Hopper:Improving I/O performance to GSCRATCH and PROJECT

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    GSCRATCH/PROJECT Performance Tuning on Hopper Hopper:Improving I/O performance to GSCRATCH and PROJECT What are GSCRATCH/PROJECT? GSCRATCH and PROJECT are two file systems at NERSC that one can access on most computational systems. They are both based on the IBM GPFS file system and have multiple racks of dedicated servers and disk arrays. How are GSCRATCH/PROJECT connected to Hopper? As shown in the figure below, GSCRATCH and PROJECT are each connected to several Private NSD Servers (PNSD; for

  17. Agenda

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Agenda Agenda NUG Meeting: June 5-6, 2000 Garden Plaza Hotel, Oak Ridge, TN The next NERSC User Group meeting will be held in Oak Ridge, TN, June 5-7 and will be hosted by Oak Ridge National Laboratory (ORNL). See the agenda, below. The meeting will be all day Monday, June 5, and is expected to finish Tuesday, June 6, at lunchtime. Following this business meeting will be a training class on the new IBM SP in conjunction with Users Helping Users (UHU) talks and discussions with the consultants.

  18. Computers for artificial intelligence a technology assessment and forecast

    SciTech Connect (OSTI)

    Miller, R.K.

    1986-01-01

This study reviews the development and current state of the art in computers for artificial intelligence, including LISP machines, AI workstations, professional and engineering workstations, minicomputers, mainframes, and supercomputers. Major computer systems for AI applications are reviewed. The use of personal computers for expert system development is discussed, and AI software for the IBM PC, Texas Instruments Professional Computer, and Apple Macintosh is presented. Current research aimed at developing a new computer for artificial intelligence is described, and future technological developments are discussed.

  19. bgclang Compiler | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Projects bgclang Compiler Cobalt Scheduler GLEAN Petrel Swift bgclang Compiler bgclang, a compiler toolchain based on the LLVM/Clang compiler infrastructure, but customized for the IBM Blue Gene/Q (BG/Q) supercomputer, is a successful experiment in creating an alternative, high-quality compiler toolchain for non-commodity HPC hardware. By enhancing LLVM (http://llvm.org/) with support for the BG/Q's QPX vector instruction set, bgclang inherits from LLVM/Clang a high-quality auto-vectorizing

  20. 2016 INCITE Proposal Writing Webinar | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Facility 2016 INCITE Proposal Writing Webinar The 2016 INCITE Call for Proposals is open now through June 26th, 2015. Through an annual call for proposals, the 2016 INCITE program will deliver more than five billion core hours of compute time on two of the nation's most powerful high-performance supercomputers for fundamental research: the IBM Blue Gene/Q (Mira) systems at the Argonne Leadership Computing Facility (ALCF), and the Cray XK7 (Titan) system at the Oak Ridge Leadership Computing

  1. Configuration

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Configuration Configuration Overview Carver, a liquid-cooled IBM iDataPlex system, has 1202 compute nodes (9,984 processor cores). This represents a theoretical peak performance of 106.5 Teraflops/sec. Note that the above node count includes hardware that is dedicated to various strategic projects and experimental testbeds (e.g., Hadoop). As such, not all 1202 nodes will be available to all users at all times. All nodes are interconnected by 4X QDR InfiniBand technology, providing 32 Gb/s of
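The quoted figures are internally consistent: dividing the 106.5 Tflops/s peak by the 9,984 cores gives the per-core rate of a Nehalem-era Xeon (4 double-precision flops/cycle at about 2.67 GHz). A quick sanity check — the per-core breakdown is inferred from the quoted totals, not stated in the text:

```python
# Carver's quoted totals from the configuration overview above.
cores = 9_984
peak_tflops = 106.5

gflops_per_core = peak_tflops * 1000 / cores
print(round(gflops_per_core, 2))  # ~10.67 Gflops/s per core

# Consistent with 4 flops/cycle x ~2.67 GHz (Nehalem-era assumption).
print(round(4 * 2.667, 2))
```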

  2. Early Site Permit Demonstration Program: Nuclear Power Plant Siting Database

    Energy Science and Technology Software Center (OSTI)

    1994-01-28

This database is a repository of comprehensive licensing and technical reviews of siting regulatory processes and acceptance criteria for advanced light water reactor (ALWR) nuclear power plants. The program is designed to be used by applicants for an early site permit or combined construction permit/operating license (10 CFR 52, Subparts A and C) as input for the development of the application. The database is a complete, menu-driven, self-contained package that can search and sort the supplied data by topic, keyword, or other input. The software is designed for operation on IBM compatible computers with DOS.

  3. Machine Overview | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Overview Blue Gene/Q systems are composed of login nodes, I/O nodes, and compute nodes. Login Nodes Login and compile nodes are IBM Power 7-based systems running Red Hat Linux and are the user's interface to a Blue Gene/Q system. This is where users log in, edit files, compile, and submit jobs. These are shared resources with multiple users. I/O Nodes The I/O node and compute environments are based around a very simple 1.6 GHz 16 core PowerPC A2 system with 16 GB of RAM. I/O node environments are

  4. ARPA-E's 19 New Projects Focus on Battery Management and Storage |

    Energy Savers [EERE]

    Department of Energy E's 19 New Projects Focus on Battery Management and Storage ARPA-E's 19 New Projects Focus on Battery Management and Storage August 7, 2012 - 1:17pm Addthis Principal Deputy Director Eric Toone, former ARPA-E Director Arun Majumdar, the Honorable Bart Gordon and IBM Research Senior Director Kathleen Kingscott discuss the future of energy innovation at an ITIF event on August 2. | Energy Department photo. Principal Deputy Director Eric Toone, former ARPA-E Director Arun

  5. A valiant little terminal: A VLT user's manual. Revision 4

    SciTech Connect (OSTI)

    Weinstein, A.

    1992-08-01

VLT came to be used at SLAC (Stanford Linear Accelerator Center) because SLAC wanted to assess the Amiga's usefulness as a color graphics terminal and TeX workstation. Before the project could really begin, the people at SLAC needed a terminal emulator which could successfully talk to the IBM 3081 (now the IBM ES9000-580) and all the VAXes on the site. Moreover, it had to compete in quality with the Ann Arbor Ambassador GXL terminals which were already in use at the laboratory. Unfortunately, at the time there was no commercial program which fit the bill. Luckily, Willy Langeveld had been independently hacking up a public domain VT100 emulator written by Dave Wecker et al., and the result, VLT, suited SLAC's purpose. Over the years, as the program was debugged and rewritten, the original code disappeared, so that now, in the present version of VLT, none of the original VT100 code remains.

  6. Comparing the Performance of Blue Gene/Q with Leading Cray XE6 and InfiniBand Systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav; Hoisie, Adolfy

    2013-01-21

Abstract: Three types of systems dominate the current High Performance Computing landscape: the Cray XE6, the IBM Blue Gene, and commodity clusters using InfiniBand. These systems have quite different characteristics, making the choice for a particular deployment difficult. The XE6 uses Cray's proprietary Gemini 3-D torus interconnect with two nodes at each network endpoint. The latest IBM Blue Gene/Q uses a single socket integrating processor and communication in a 5-D torus network. InfiniBand provides the flexibility of using nodes from many vendors connected in many possible topologies. The performance characteristics of each vary vastly, along with their utilization model. In this work we compare the performance of these three systems using a combination of micro-benchmarks and a set of production applications. In particular, we discuss the causes of variability in performance across the systems and also quantify where performance is lost using a combination of measurements and models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.
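Interconnect comparisons of the kind this abstract describes are commonly grounded in a latency-bandwidth ("alpha-beta") model: predicted transfer time is a fixed startup latency plus bytes divided by bandwidth. A minimal sketch — the network names and parameter values below are illustrative placeholders, not measurements from the paper:

```python
def transfer_time_us(msg_bytes: int, alpha_us: float, beta_us_per_byte: float) -> float:
    """Alpha-beta model: startup latency + serialization time."""
    return alpha_us + beta_us_per_byte * msg_bytes

# Illustrative parameters: (latency in us, inverse bandwidth in us/byte).
networks = {
    "torus-like":      (1.5, 1 / 2000.0),   # ~2 GB/s links
    "infiniband-like": (1.0, 1 / 4000.0),   # ~4 GB/s links
}

for name, (alpha, beta) in networks.items():
    small = transfer_time_us(8, alpha, beta)        # latency-dominated
    large = transfer_time_us(1 << 20, alpha, beta)  # bandwidth-dominated
    print(f"{name}: 8 B -> {small:.2f} us, 1 MiB -> {large:.1f} us")
```

The model makes the paper's methodological point concrete: small messages expose latency differences between fabrics, large messages expose bandwidth differences, and deviations of measured times from the model quantify "where performance is lost."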

  7. Tracking the Performance Evolution of Blue Gene Systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J.; Barker, Kevin J.; Gallo, Diego S.; Chen, Dong; Brunheroto, Jose R.; Ryu, Kyung D.; Chiu, George L.; Hoisie, Adolfy

    2013-06-17

    IBM's Blue Gene supercomputer has evolved through three generations, from the original Blue Gene/L to P to Q. A higher level of integration has enabled greater single-core performance and larger concurrency per compute node. Although these changes have brought a higher overall system peak performance, no study has examined in detail the evolution of performance across system generations. In this work we make two significant contributions: a comparative performance analysis across Blue Gene generations using a consistent set of tests, and a validated performance model of the NEK-Bone proxy application. The combination of empirical analysis and the predictive performance model enables us not only to directly compare measured performance but also to compare system configurations that cannot currently be measured. We provide insights into how the changing characteristics of Blue Gene have impacted application performance, as well as what future systems may be able to achieve.

  8. Quadrupole collective dynamics from energy density functionals: Collective Hamiltonian and the interacting boson model

    SciTech Connect (OSTI)

    Nomura, K.; Vretenar, D.; Niksic, T.; Otsuka, T.; Shimizu, N.

    2011-07-15

    Microscopic energy density functionals have become a standard tool for nuclear structure calculations, providing an accurate global description of nuclear ground states and collective excitations. For spectroscopic applications, this framework has to be extended to account for collective correlations related to restoration of symmetries broken by the static mean field, and for fluctuations of collective variables. In this paper, we compare two approaches to five-dimensional quadrupole dynamics: the collective Hamiltonian for quadrupole vibrations and rotations and the interacting boson model (IBM). The two models are compared in a study of the evolution of nonaxial shapes in Pt isotopes. Starting from the binding energy surfaces of {sup 192,194,196}Pt, calculated with a microscopic energy density functional, we analyze the resulting low-energy collective spectra obtained from the collective Hamiltonian, and the corresponding IBM Hamiltonian. The calculated excitation spectra and transition probabilities for the ground-state bands and the {gamma}-vibration bands are compared to the corresponding sequences of experimental states.

  9. Automation and optimization of the design parameters in tactical military pipeline systems. Master's thesis

    SciTech Connect (OSTI)

    Frick, R.M.

    1988-12-01

    Tactical military petroleum pipeline systems will play a vital role in any future conflict due to increased consumption of petroleum products by our combined Armed Forces. The tactical pipeline must be rapidly constructed and highly mobile to keep pace with the constantly changing battle zone. Currently, the design of these pipeline systems is time consuming and inefficient, which may cause shortages of fuel and pipeline components at the front lines. Therefore, the need for a computer program that will both automate and optimize the pipeline design process is quite apparent. These design needs are satisfied by a software package developed in the Advanced BASIC (IBM DOS) programming language and made to run on an IBM-compatible personal computer. The program affords the user the options of either finding the optimum pump station locations for a proposed pipeline or calculating the maximum operating pressures for an existing pipeline. By automating the design procedure, a field engineer can vary the pipeline length, diameter, roughness, viscosity, gravity, flow rate, pump station pressure, or terrain profile and see how it affects the other parameters in just a few seconds. The design process was optimized by implementing a weighting scheme based on the volume percent of each fuel in the pipeline at any given time.
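    The hydraulic core of such a tool can be sketched in a few lines; the Swamee-Jain friction-factor correlation and every parameter value below are illustrative assumptions, not the thesis's actual method or data:

```python
import math

# Illustrative hydraulic sketch of what such a design program computes.
# The Swamee-Jain correlation and all parameter values are assumptions
# for illustration, not the thesis's actual method.

def pressure_drop_pa(length_m, diameter_m, flow_m3s, density, viscosity, roughness):
    """Darcy-Weisbach friction loss over a pipe segment (Pa)."""
    area = math.pi * diameter_m ** 2 / 4.0
    velocity = flow_m3s / area
    reynolds = density * velocity * diameter_m / viscosity
    # Swamee-Jain explicit approximation to the Colebrook friction factor
    f = 0.25 / math.log10(roughness / (3.7 * diameter_m)
                          + 5.74 / reynolds ** 0.9) ** 2
    return f * (length_m / diameter_m) * density * velocity ** 2 / 2.0

def pump_stations_needed(total_length_m, station_pressure_pa, **pipe):
    """Minimum number of equally spaced pump stations such that each
    station's discharge pressure covers its segment's friction loss."""
    drop_per_m = pressure_drop_pa(length_m=1.0, **pipe)
    max_segment_m = station_pressure_pa / drop_per_m
    return math.ceil(total_length_m / max_segment_m)
```

    Re-running this with a different diameter or flow rate takes microseconds, which is precisely the benefit of automating the design loop that the abstract describes.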

  10. Polymer Hybrid Photovoltaics for Inexpensive Electricity Generation: Final Technical Report, 1 September 2001--30 April 2006

    SciTech Connect (OSTI)

    Carter, S. A.

    2006-07-01

    The project goal is to understand the operating mechanisms underlying the performance of polymer hybrid photovoltaics, to enable development of a photovoltaic whose ratio of maximum power conversion efficiency to cost is significantly greater than that of current PV technologies. Plastic or polymer-based photovoltaics can have significant cost advantages over conventional technologies in that they are compatible with liquid-based plastic processing and can be assembled onto plastic under atmospheric conditions (ambient temperature and pressure) using standard printing technologies, such as reel-to-reel and screen printing. Moreover, polymer-based PVs are lightweight, flexible, and largely unbreakable, which makes shipping, installation, and maintenance simpler. Furthermore, a numerical simulation program was developed (in collaboration with IBM) to fully simulate the performance of multicomponent polymer photovoltaic devices, and a manufacturing method was developed (in collaboration with Add-vision) to inexpensively manufacture larger-area devices.

  11. A mobile computed tomographic unit for inspecting reinforced concrete columns

    SciTech Connect (OSTI)

    Sumitra, T.; Srisatit, S.; Pattarasumunt, A.

    1994-12-31

    A mobile computed tomographic unit applicable to the inspection of reinforced concrete columns was designed, constructed and tested. A CT image reconstruction programme written in Quick Basic was first developed for use on an IBM PC/AT microcomputer. It provided user-friendly menus for processing data and displaying CT images. The prototype of a gamma-ray scanning system using a 1.11 GBq Cs-137 source and a NaI(Tl) scintillation detector was also designed and constructed. The system was a microcomputer-controlled, single-beam rotate-translate scanner used for collecting transmitted gamma-ray data at different angles. The CT unit was finally tested with a standard column and a column of an existing building. The cross-sectional images of the columns could be clearly seen, and the positions and sizes of the reinforcing bars could be estimated.

  12. A BLAS-3 version of the QR factorization with column pivoting

    SciTech Connect (OSTI)

    Quintana-Orti, G.; Sun, X.; Bischof, C.H.

    1998-09-01

    The QR factorization with column pivoting (QRP), originally suggested by Golub, is a popular approach to computing rank-revealing factorizations. Using Level 1 BLAS, it was implemented in LINPACK, and, using Level 2 BLAS, in LAPACK. While the Level 2 BLAS version delivers superior performance in general, it may result in worse performance for large matrix sizes due to cache effects. The authors introduce a modification of the QRP algorithm which allows the use of Level 3 BLAS kernels while maintaining the numerical behavior of the LINPACK and LAPACK implementations. Experimental comparisons of this approach with the LINPACK and LAPACK implementations on IBM RS/6000, SGI R8000, and DEC AXP platforms show considerable performance improvements.
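    The rank-revealing mechanism QRP provides can be illustrated with a textbook Householder implementation in NumPy (a sketch for exposition, not the LINPACK/LAPACK code or the authors' Level 3 BLAS variant):

```python
import numpy as np

def qrp(A):
    """Householder QR with column pivoting (Golub): A[:, piv] = Q @ R.
    At each step the remaining column of largest 2-norm is swapped to the
    front, so |R[k,k]| is non-increasing and reveals the numerical rank."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    piv = np.arange(n)
    for k in range(min(m, n)):
        # Pivot: bring the remaining column with the largest norm to position k
        norms = np.linalg.norm(A[k:, k:], axis=0)
        j = k + int(np.argmax(norms))
        A[:, [k, j]] = A[:, [j, k]]
        piv[[k, j]] = piv[[j, k]]
        # Householder reflector zeroing A[k+1:, k]
        x = A[k:, k].copy()
        normx = np.linalg.norm(x)
        if normx == 0.0:
            continue
        alpha = -normx if x[0] >= 0 else normx
        v = x
        v[0] -= alpha
        v /= np.linalg.norm(v)
        A[k:, :] -= 2.0 * np.outer(v, v @ A[k:, :])   # apply H from the left
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)   # accumulate Q = Q @ H
    return Q, np.triu(A), piv
```

    On a rank-deficient matrix the diagonal of R collapses to roundoff after `rank` entries, which is the property the LINPACK/LAPACK implementations preserve.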

  13. Application of bar codes to the automation of analytical sample data collection

    SciTech Connect (OSTI)

    Jurgensen, H A

    1986-01-01

    The Health Protection Department at the Savannah River Plant collects 500 urine samples per day for tritium analyses. Prior to automation, all sample information was compiled manually. Bar code technology was chosen for automating this program because it provides a more accurate, efficient, and inexpensive method for data entry. The system has three major functions. Sample labeling is accomplished at remote bar code label stations composed of an Intermec 8220 (Intermec Corp.) interfaced to an IBM-PC. Data collection is done on a central VAX 11/730 (Digital Equipment Corp.): bar code readers are used to log in samples to be analyzed on liquid scintillation counters, and the VAX 11/730 processes the data and generates reports. Data storage is on the VAX 11/730, backed up on the plant's central computer. A brief description of several other bar code applications at the Savannah River Plant is also presented.

  14. EIA directory of electronic products. Second quarter 1995

    SciTech Connect (OSTI)

    1995-10-04

    The Energy Information Administration (EIA) makes available for public use a series of machine-readable data files and computer models. They are available to the public on magnetic tapes; selected data files/models are available on diskette for IBM-compatible personal computers. This directory first presents the on-line files and compact discs. This is followed by descriptions and technical contacts and ordering and other information on the data files and models. An index by energy source is included. Additional ordering information is in the preface. The data files cover petroleum, natural gas, electricity, coal, integrated statistics, and consumption; the models cover petroleum, natural gas, electricity, coal, nuclear, and multifuel.

  15. ALCF Future Systems Tim Williams, Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Future Systems. Tim Williams, Argonne Leadership Computing Facility. DOE Exascale Requirements Review: High Energy Physics, June 11, 2015. Production Systems (ALCF-2): Mira, an IBM Blue Gene/Q with 49,152 nodes, each a PowerPC A2 CPU (16 cores, 4 hardware threads per core) with 16 GB RAM; in aggregate, 768 TB RAM and 768K cores; peak 10 PetaFLOPS; 5D torus interconnect. Cooley, a viz/analysis cluster with 126 nodes, each having two 2.4 GHz Intel Haswell 6-core processors, 384 GB RAM, and an NVIDIA Tesla K80 (two

  16. P

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Petascale Computing. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Donald Frederick, Livermore Computing, Lawrence Livermore National Laboratory. LLNL-PRES-508651. IBM Blue Gene Architecture. Outline: overview of Blue Gene; BG philosophy; the BG family; BG hardware (system overview, CPU, node, interconnect); BG system software; BG S

  17. Simple Electric Vehicle Simulation

    Energy Science and Technology Software Center (OSTI)

    1993-07-29

    SIMPLEV2.0 is an electric vehicle simulation code which can be used with any IBM-compatible personal computer. This general-purpose simulation program is useful for performing parametric studies of electric and series hybrid electric vehicle performance on user-input driving cycles. The program is run interactively and guides the user through all of the necessary inputs. Driveline components and the traction battery are described and defined by ASCII files which may be customized by the user. Scaling of these components is also possible. Detailed simulation results are plotted on the PC monitor and may also be printed on a printer attached to the PC.

  18. TOP500 Supercomputers for June 2002

    SciTech Connect (OSTI)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than that of the now-No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). Such leapfrogging to the top by a system so much faster than the previous leader is unparalleled in the history of the TOP500.

  19. Quantum Monte Carlo by message passing

    SciTech Connect (OSTI)

    Bonca, J.; Gubernatis, J.E.

    1993-01-01

    We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.

  20. Quantum Monte Carlo by message passing

    SciTech Connect (OSTI)

    Bonca, J.; Gubernatis, J.E.

    1993-05-01

    We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.

  1. Computer Algebra System

    Energy Science and Technology Software Center (OSTI)

    1992-05-04

    DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC versions under UNIX and the Data General version under AOS/VS.

  2. PC Basic Linear Algebra Subroutines

    Energy Science and Technology Software Center (OSTI)

    1992-03-09

    PC-BLAS is a highly optimized version of the Basic Linear Algebra Subprograms (BLAS), a standardized set of thirty-eight routines that perform low-level operations on vectors of numbers in single- and double-precision real and complex arithmetic. Routines are included to find the index of the largest component of a vector, apply a Givens or modified Givens rotation, multiply a vector by a constant, determine the Euclidean length, perform a dot product, swap and copy vectors, and find the norm of a vector. The BLAS have been carefully written to minimize numerical problems such as loss of precision and underflow, and are designed so that the computation is independent of the interface with the calling program. This independence is achieved through judicious use of Assembly language macros. Interfaces are provided for Lahey Fortran 77, Microsoft Fortran 77, and Ryan-McFarland IBM Professional Fortran.
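    The Level 1 operations named above can be sketched in plain Python; these are illustrative re-implementations of the standard BLAS semantics, not the PC-BLAS routines or their Fortran interfaces:

```python
import math

def isamax(x):
    """Index of the component with largest absolute value (BLAS i_amax)."""
    return max(range(len(x)), key=lambda i: abs(x[i]))

def sscal(alpha, x):
    """Multiply a vector by a constant (BLAS _scal)."""
    return [alpha * xi for xi in x]

def sdot(x, y):
    """Dot product (BLAS _dot)."""
    return sum(xi * yi for xi, yi in zip(x, y))

def snrm2(x):
    """Euclidean length, computed with scaling so intermediate squares
    cannot overflow or underflow, in the spirit of the reference BLAS."""
    scale = max((abs(xi) for xi in x), default=0.0)
    if scale == 0.0:
        return 0.0
    return scale * math.sqrt(sum((xi / scale) ** 2 for xi in x))

def srot(x, y, c, s):
    """Apply a Givens plane rotation to a pair of vectors (BLAS _rot)."""
    xr = [c * xi + s * yi for xi, yi in zip(x, y)]
    yr = [c * yi - s * xi for xi, yi in zip(x, y)]
    return xr, yr
```

    The scaling in `snrm2` is the kind of "loss of precision and underflow" safeguard the abstract refers to: naively squaring components of magnitude 1e200 would overflow, while the scaled form does not.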

  3. Performance Tuning of Fock Matrix and Two-Electron Integral Calculations for NWChem on Leading HPC Platforms

    SciTech Connect (OSTI)

    Shan, Hongzhan; Austin, Brian M.; De Jong, Wibe A.; Oliker, Leonid; Wright, Nicholas J.; Apra, Edoardo

    2014-10-01

    Attaining high performance in the evaluation of two-electron repulsion integrals and the construction of the Fock matrix is of considerable importance to the computational chemistry community. Because of their numerical complexity, improving the performance of these methods across a variety of leading supercomputing platforms is an increasing challenge, given the significant diversity in high-performance computing architectures. In this paper, we present our successful tuning methodology for these important numerical methods on the Cray XE6, the Cray XC30, the IBM BG/Q, and the Intel Xeon Phi. Our optimization schemes leverage key architectural features including vectorization and simultaneous multithreading, resulting in speedups of up to 2.5x compared with the original implementation.

  4. Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR): Programmer's guide

    SciTech Connect (OSTI)

    Call, O. J.; Jacobson, J. A.

    1988-09-01

    The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) is an automated data base management system for processing and storing human error probability and hardware component failure data. The NUCLARR system software resides on an IBM (or compatible) personal micro-computer and can be used to furnish data inputs for both human and hardware reliability analysis in support of a variety of risk assessment activities. The NUCLARR system is documented in a five-volume series of reports. Volume 2 of this series is the Programmer's Guide for maintaining the NUCLARR system software. This Programmer's Guide provides, for the software engineer, an orientation to the software elements involved, discusses maintenance methods, and presents useful aids and examples. 4 refs., 75 figs., 1 tab.

  5. LAMMPS strong scaling performance optimization on Blue Gene/Q

    SciTech Connect (OSTI)

    Coffman, Paul; Jiang, Wei; Romero, Nichols A.

    2014-11-12

    LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using an 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.

  6. A brief summary on formalizing parallel tensor distributions redistributions and algorithm derivations.

    SciTech Connect (OSTI)

    Schatz, Martin D.; Kolda, Tamara G.; van de Geijn, Robert

    2015-09-01

    Large-scale datasets in computational chemistry typically require distributed-memory parallel methods to perform a special operation known as tensor contraction. Tensors are multidimensional arrays, and a tensor contraction is akin to matrix multiplication with special types of permutations. Creating an efficient algorithm and optimized implementation in this domain is complex, tedious, and error-prone. To address this, we develop a notation to express data distributions so that we can use automated methods to find optimized implementations for tensor contractions. We consider the spin-adapted coupled cluster singles and doubles method from computational chemistry and use our methodology to produce an efficient implementation. Experiments performed on the IBM Blue Gene/Q and Cray XC30 demonstrate both improved performance and reduced memory consumption.
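    What a tensor contraction is, and its kinship to matrix multiplication, can be shown with a small NumPy example (illustrative only; the index names and shapes are arbitrary, not the paper's notation or its distributed implementation):

```python
import numpy as np

# Contract a 4-way tensor T[a,b,i,j] with a matrix W[j,k] over the
# shared index j. Shapes are arbitrary illustration values.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5, 6))
W = rng.standard_normal((6, 7))

# Declarative form of the contraction
C = np.einsum('abij,jk->abik', T, W)

# Equivalent "matricized" form: flatten the free indices, perform an
# ordinary matrix multiply (the kinship to matmul mentioned above),
# then restore the multidimensional shape.
C_ref = (T.reshape(3 * 4 * 5, 6) @ W).reshape(3, 4, 5, 7)
assert np.allclose(C, C_ref)
```

    Distributed implementations exploit exactly this reshaping: choosing which indices to flatten and how to permute them determines the communication pattern, which is why a notation for data distributions pays off.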

  7. Modeling and simulation of Red Teaming. Part 1, Why Red Team M&S?

    SciTech Connect (OSTI)

    Skroch, Michael J.

    2009-11-01

    Red teams that address complex systems have rarely taken advantage of Modeling and Simulation (M&S) in a way that reproduces most or all of a red-blue team exchange within a computer. Chess programs, starting with IBM's Deep Blue, outperform humans in that red-blue interaction, so why shouldn't we think computers can outperform traditional red teams now or in the future? This and future position papers will explore possible ways to use M&S to augment or replace traditional red teams in some situations, the features Red Team M&S should possess, how one might connect live and simulated red teams, and existing tools in this domain.

  8. GSW2015_MASTER-current-2.pptx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Getting Started at the ALCF. Agenda, Part I: Blue Gene/Q hardware overview; building your code; considerations before you run; hands-on session. Part II: queuing and running; after your job is submitted; potential problems; hands-on session. Section: Blue Gene/Q hardware overview. ALCF resources: Mira (production), an IBM Blue Gene/Q with 49,152 nodes / 786,432 cores, 768 TB of memory, and a peak flop rate of 10 PF. Linpack

  9. Integrated Air Pollution Control System (IAPCS), Executable Model and Source Model (version 4. 0) (for microcomputers). Model-Simulation

    SciTech Connect (OSTI)

    Not Available

    1990-10-29

    The Integrated Air Pollution Control System (IAPCS) Cost Model is an IBM PC cost model that can be used to estimate the cost of installing SO2, NOx, and particulate matter control systems at coal-fired utility electric generating facilities. The model integrates various combinations of the following technologies: physical coal cleaning, coal switching, overfire air/low NOx burners, natural gas reburning, LIMB, ADVACATE, electrostatic precipitator, fabric filter, gas conditioning, wet lime or limestone FGD, lime spray drying/duct spray drying, dry sorbent injection, pressurized fluidized bed combustion, integrated gasification combined cycle, and pulverized coal burning boiler. The model generates capital, annualized, and unitized pollutant removal costs in either constant or current dollars for any year.
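    The "annualized" costs such models report are conventionally obtained from capital costs via a capital recovery factor; a generic sketch, where the discount rate, lifetime, capital cost, and removal tonnage are all illustrative assumptions rather than IAPCS defaults:

```python
def capital_recovery_factor(rate, years):
    """Convert a capital cost to an equivalent uniform annual cost:
    annualized = capital * CRF(rate, years)."""
    g = (1.0 + rate) ** years
    return rate * g / (g - 1.0)

# Illustrative values only: 7% discount rate, 20-year plant life,
# $100M capital cost, 50,000 tons of pollutant removed per year.
crf = capital_recovery_factor(0.07, 20)
annualized = 100e6 * crf          # equivalent annual cost, $/yr
unitized = annualized / 50_000.0  # pollutant removal cost, $/ton
```

    Reporting in constant versus current dollars then amounts to applying (or omitting) an inflation escalation on top of this annualization.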

  10. Integrated Air Pollution Control System (IAPCS), Executable Model (Version 4. 0) (for microcomputers). Model-Simulation

    SciTech Connect (OSTI)

    Not Available

    1990-10-29

    The Integrated Air Pollution Control System (IAPCS) Cost Model is an IBM PC cost model that can be used to estimate the cost of installing SO2, NOx, and particulate matter control systems at coal-fired utility electric generating facilities. The model integrates various combinations of the following technologies: physical coal cleaning, coal switching, overfire air/low NOx burners, natural gas reburning, LIMB, ADVACATE, electrostatic precipitator, fabric filter, gas conditioning, wet lime or limestone FGD, lime spray drying/duct spray drying, dry sorbent injection, pressurized fluidized bed combustion, integrated gasification combined cycle, and pulverized coal burning boiler. The model generates capital, annualized, and unitized pollutant removal costs in either constant or current dollars for any year.

  11. Parallel Scaling Characteristics of Selected NERSC User ProjectCodes

    SciTech Connect (OSTI)

    Skinner, David; Verdier, Francesca; Anand, Harsh; Carter,Jonathan; Durst, Mark; Gerber, Richard

    2005-03-05

    This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6080 CPU IBM SP and the largest parallel computer at NERSC. The scale in terms of concurrency and problem size of the workload is analyzed. Drawing on batch queue logs, performance data and feedback from researchers we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.

  12. SIMPLEV: A simple electric vehicle simulation program, Version 1.0

    SciTech Connect (OSTI)

    Cole, G.H.

    1991-06-01

    An electric vehicle simulation code was written that can be used with any IBM-compatible personal computer. This general-purpose simulation program is useful for performing parametric studies of electric vehicle performance on user-input driving cycles. The program is run interactively and guides the user through all of the necessary inputs. Driveline components and the traction battery are described and defined by ASCII files which may be customized by the user. Scaling of these components is also possible. Detailed simulation results are plotted on the PC monitor and may also be printed on a printer attached to the PC. This report serves as a user's manual and documents the mathematical relationships used in the simulation.
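    The kind of parametric study described can be sketched with a minimal road-load model; the vehicle parameters, drivetrain efficiency, and toy driving cycle below are illustrative assumptions, not SIMPLEV's models or data:

```python
# Minimal sketch of a driving-cycle energy calculation: integrate
# road-load power over speed samples. All parameters are illustrative.

def cycle_energy_kwh(speeds_mps, dt=1.0, mass=1400.0, crr=0.009,
                     cd=0.33, area=2.2, rho=1.2, eff=0.85):
    """Battery energy (kWh) to drive a cycle given as speed samples (m/s).

    Road load = rolling resistance + aerodynamic drag + inertia;
    braking energy is not recovered (no regen in this sketch).
    """
    g = 9.81
    energy_j = 0.0
    for v0, v1 in zip(speeds_mps, speeds_mps[1:]):
        v = 0.5 * (v0 + v1)                       # mean speed over the step
        accel = (v1 - v0) / dt
        force = (mass * g * crr                   # rolling resistance
                 + 0.5 * rho * cd * area * v * v  # aerodynamic drag
                 + mass * accel)                  # inertia
        power = force * v
        if power > 0:                             # draw from battery only
            energy_j += power / eff * dt
    return energy_j / 3.6e6

# Toy cycle: accelerate to 15 m/s, cruise, decelerate to a stop
cycle = [0, 3, 6, 9, 12, 15, 15, 15, 15, 12, 8, 4, 0]
```

    A parametric study then amounts to sweeping one argument (mass, drag coefficient, driveline efficiency) and re-running the same cycle.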

  13. The NUCLARR databank: Human reliability and hardware failure data for the nuclear power industry

    SciTech Connect (OSTI)

    Reece, W.J.

    1993-05-01

    Under the sponsorship of the US Nuclear Regulatory Commission (NRC), the Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) was developed to provide human reliability and hardware failure data to analysts in the nuclear power industry. This IBM-compatible databank is contained on a set of floppy diskettes which include data files and a menu-driven system for locating, reviewing, sorting, and retrieving the data. NUCLARR contains over 2500 individual data records, drawn from more than 60 sources. The system is upgraded annually to include additional human error and hardware component failure data and programming enhancements (i.e., increased user-friendliness). NUCLARR is available from the NRC through project staff at the INEL.

  14. User's guide to a data base of current environmental monitoring projects in the US-Canadian transboundary region

    SciTech Connect (OSTI)

    Ballinger, M.Y.; Defferding, J.; Chapman, E.G.; Bettinson, M.D.; Glantz, C.S.

    1987-11-01

    This document describes how to use a data base of current transboundary region environmental monitoring projects. The data base was prepared from data provided by Glantz et al. (1986) and Concord Scientific Corporation (1985), and contains information on 226 projects with monitoring stations located within 400 km (250 mi) of the US-Canadian border. The data base is designed for use with the dBASE III PLUS data management system on IBM-compatible personal computers. Data-base searches are best accomplished using an accompanying command file called RETRIEVE or the dBASE command LIST. The user must carefully select the substrings on which the search is to be based. Example search requests and subsequent output are presented to illustrate substring selections and applications of the data base. 4 refs., 15 figs., 4 tabs.

  15. Simulation of oil-slick transport in Great Lakes connecting channels. User's manual for the River Spill Simulation Model (ROSS). Special report

    SciTech Connect (OSTI)

    Shen, H.T.; Yapa, P.D.; Petroski, M.E.

    1991-12-01

    Two computer models, named ROSS and LROSS, have been developed for simulating oil slick transport in rivers and lakes, respectively. The oil slick transformation processes considered in these models include advection, spreading, evaporation and dissolution. These models can be used for slicks of any shape originating from instantaneous or continuous spills in rivers and lakes with or without ice covers. Although developed for the connecting channels in the upper Great Lakes, including the Detroit River, Lake St. Clair, the St. Clair River and the St. Marys River, these models are site independent and can be used for other rivers and lakes. The programs are written in the FORTRAN programming language and are compatible with the FORTRAN77 compiler. In addition, a user-friendly, menu-driven program with graphics capability was developed for the IBM PC/AT computer, so that these models can easily be used to assist cleanup action in the connecting channels should an oil spill occur.
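    Two of the named transformation processes (advection and evaporation) can be sketched with a minimal mass-balance step; the wind-drift factor and rate constant are illustrative assumptions, not ROSS's formulation:

```python
import math

# Toy one-step slick update, illustrating advection plus first-order
# evaporation. Coefficients are illustrative, not from ROSS/LROSS.

def step_slick(x, mass, u_water, u_wind, k_evap, dt):
    """Advance slick centroid position and mass by one time step.

    The centroid is advected by the surface current plus a wind-drift
    term (a few percent of wind speed is a common rule of thumb);
    mass decays by first-order evaporation.
    """
    x_new = x + (u_water + 0.03 * u_wind) * dt
    mass_new = mass * math.exp(-k_evap * dt)
    return x_new, mass_new
```

    Spreading and dissolution would add further terms to the same balance; a full model steps many such parcels on a river grid.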

  16. Automated system for handling tritiated mixed waste

    SciTech Connect (OSTI)

    Dennison, D.K.; Merrill, R.D.; Reitz, T.C.

    1995-03-01

    Lawrence Livermore National Laboratory (LLNL) is developing a semi-automated system for handling, characterizing, processing, sorting, and repackaging hazardous wastes containing tritium. The system combines an IBM-developed gantry robot with a special glove box enclosure designed to protect operators and minimize the potential release of tritium to the atmosphere. All hazardous waste handling and processing will be performed remotely, using the robot in a teleoperational mode for one-of-a-kind functions and in an autonomous mode for repetitive operations. Initially, this system will be used in conjunction with a portable gas system designed to capture any gaseous-phase tritium released into the glove box. This paper presents the objectives of this development program, provides background related to LLNL's robotics and waste handling program, describes the major system components, outlines system operation, and discusses current status and plans.

  17. Testing of the Eberline PCM-2

    SciTech Connect (OSTI)

    Howe, K.L.

    1994-12-23

    The PCM-2, manufactured by Eberline Instruments, is a whole body monitor that detects both alpha and beta contamination. The PCM-2 uses an IBM-compatible personal computer for all software functions. The PCM-2 has 34 large-area detectors which can cover approximately 40% of the body at a time; two counting cycles are therefore required to cover approximately 80% of the body. With the normal background seen at Rocky Flats, each count takes approximately 15-20 seconds. A number of beta and gamma whole body monitors are available from different manufacturers, but an alpha whole body monitor is a rarity. Because of the need for alpha whole body monitors at the Rocky Flats Environmental Technology Site, it was decided to test the PCM-2 thoroughly. A three-month test was run in a uranium building and a three-month test in a plutonium building to verify the alpha capabilities of the PCM-2.

  18. Trailblazing with Roadrunner

    SciTech Connect (OSTI)

    Henning, Paul J; White, Andrew B

    2009-01-01

    In June 2008, a new supercomputer broke the petaflop/s performance barrier, more than doubling the computational performance of the next fastest machine on the Top500 Supercomputing Sites list (http://top500.org). This computer, named Roadrunner, is the result of an intensive collaboration between IBM and Los Alamos National Laboratory, where it is now located. Aside from its performance, Roadrunner has two distinguishing characteristics: a very good power/performance ratio and a 'hybrid' computer architecture that mixes several types of processors. By November 2008, the traditionally architected Jaguar computer at Oak Ridge National Laboratory was neck-and-neck with Roadrunner in the performance race, but it requires almost 2.8 times the electric power of Roadrunner. This difference translates into millions of dollars per year in operating costs.

  19. PCDAS Version 2. 2: Remote network control and data acquisition

    SciTech Connect (OSTI)

    Fishbaugher, M.J.

    1987-09-01

    This manual is intended for both technical and non-technical people who want to use the PCDAS remote network control and data acquisition software. If you are unfamiliar with remote data collection hardware systems designed at Pacific Northwest Laboratory (PNL), this introduction should answer your basic questions. Even if you have some experience with the PNL-designed Field Data Acquisition Systems (FDAS), it would be wise to review this material before attempting to set up a network. This manual was written based on the assumption that you have a rudimentary understanding of personal computer (PC) operations using Disk Operating System (DOS) version 2.0 or greater (IBM 1984). You should know how to create subdirectories and get around the subdirectory tree.

  20. Petascale Parallelization of the Gyrokinetic Toroidal Code

    SciTech Connect (OSTI)

    Ethier, Stephane; Adams, Mark; Carter, Jonathan; Oliker, Leonid

    2010-05-01

    The Gyrokinetic Toroidal Code (GTC) is a global, three-dimensional particle-in-cell application developed to study microturbulence in tokamak fusion devices. The global capability of GTC is unique, allowing researchers to systematically analyze important dynamics such as turbulence spreading. In this work we examine a new radial domain decomposition approach to allow scalability onto the latest generation of petascale systems. Extensive performance evaluation is conducted on three high performance computing systems: the IBM BG/P, the Cray XT4, and an Intel Xeon Cluster. Overall results show that the radial decomposition approach dramatically increases scalability, while reducing the memory footprint - allowing for fusion device simulations at an unprecedented scale. After a decade where high-end computing (HEC) was dominated by the rapid pace of improvements to processor frequencies, the performance of next-generation supercomputers is increasingly differentiated by varying interconnect designs and levels of integration. Understanding the tradeoffs of these system designs is a key step towards making effective petascale computing a reality. In this work, we examine a new parallelization scheme for the Gyrokinetic Toroidal Code (GTC) micro-turbulence fusion application. Extensive scalability results and analysis are presented on three HEC systems: the IBM BlueGene/P (BG/P) at Argonne National Laboratory, the Cray XT4 at Lawrence Berkeley National Laboratory, and an Intel Xeon cluster at Lawrence Livermore National Laboratory. Overall results indicate that the new radial decomposition approach successfully attains unprecedented scalability to 131,072 BG/P cores by overcoming the memory limitations of the previous approach. The new version is well suited to utilize emerging petascale resources to access new regimes of physical phenomena.
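
    The radial domain decomposition described above can be sketched in miniature: split the minor radius into contiguous annuli, one per process, and assign particles to owners by radial coordinate. The sketch below is an illustrative toy under assumed uniform annulus widths and invented names, not the actual GTC scheme:

```python
# Hypothetical sketch of a 1-D radial domain decomposition: each rank
# owns one contiguous annulus of the minor radius, and particles are
# assigned to owners by radial position. Uniform widths and all names
# here are illustrative assumptions, not the actual GTC implementation.

def radial_partition(r_min, r_max, n_domains):
    """Split [r_min, r_max] into n_domains contiguous annuli."""
    width = (r_max - r_min) / n_domains
    return [(r_min + i * width, r_min + (i + 1) * width)
            for i in range(n_domains)]

def owner_of(r, r_min, r_max, n_domains):
    """Rank that owns radial coordinate r (boundary values clamped)."""
    width = (r_max - r_min) / n_domains
    i = int((r - r_min) / width)
    return min(max(i, 0), n_domains - 1)

# Four ranks covering the unit radius; a mid-domain particle lands on rank 2.
print(radial_partition(0.0, 1.0, 4))   # [(0.0, 0.25), (0.25, 0.5), ...]
print(owner_of(0.5, 0.0, 1.0, 4))      # -> 2
```

In a real code each rank would also keep ghost cells at its annulus boundaries for field exchange; that bookkeeping is omitted here.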

  1. Data Foundry: Data Warehousing and Integration for Scientific Data Management

    SciTech Connect (OSTI)

    Musick, R.; Critchlow, T.; Ganesh, M.; Fidelis, Z.; Zemla, A.; Slezak, T.

    2000-02-29

    Data warehousing is an approach for managing data from multiple sources by representing them with a single, coherent point of view. Commercial data warehousing products have been produced by companies such as Red Brick, IBM, Brio, Andyne, Ardent, NCR, Information Advantage, Informatica, and others. Other companies have chosen to develop their own in-house data warehousing solution using relational databases, such as those sold by Oracle, IBM, Informix and Sybase. The typical approaches include federated systems and mediated data warehouses, each of which, to some extent, makes use of a series of source-specific wrapper and mediator layers to integrate the data into a consistent format which is then presented to users as a single virtual data store. These approaches are successful when applied to traditional business data because the data format used by the individual data sources tends to be rather static. Therefore, once a data source has been integrated into a data warehouse, there is relatively little work required to maintain that connection. However, that is not the case for all data sources. Data sources from scientific domains tend to regularly change their data model, format and interface. This is problematic because each change requires the warehouse administrator to update the wrapper, mediator, and warehouse interfaces to properly read, interpret, and represent the modified data source. Furthermore, the data that scientists require to carry out research is continuously changing as their understanding of a research question develops, or as their research objectives evolve. The difficulty and cost of these updates effectively limits the number of sources that can be integrated into a single data warehouse, or makes an approach based on warehousing too expensive to consider.
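
    The wrapper/mediator layering described above can be illustrated with a toy sketch: each source-specific wrapper normalizes its native format into a shared schema, and a mediator presents the union as a single virtual store. All class and field names below are invented for the example:

```python
# Illustrative-only sketch of wrapper/mediator data integration: each
# wrapper converts its source's native records into a shared schema,
# and the mediator exposes all sources as one virtual data store.

class CsvSourceWrapper:
    def __init__(self, rows):
        self.rows = rows                      # e.g. "id,name" strings

    def records(self):
        for row in self.rows:
            ident, name = row.split(",")
            yield {"id": int(ident), "name": name}

class DictSourceWrapper:
    def __init__(self, items):
        self.items = items                    # e.g. {"key": 2, "label": "b"}

    def records(self):
        for item in self.items:
            yield {"id": item["key"], "name": item["label"]}

class Mediator:
    """Presents all wrapped sources as a single virtual data store."""
    def __init__(self, wrappers):
        self.wrappers = wrappers

    def query(self):
        for w in self.wrappers:
            yield from w.records()

store = Mediator([CsvSourceWrapper(["1,a"]),
                  DictSourceWrapper([{"key": 2, "label": "b"}])])
print(list(store.query()))  # [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
```

The maintenance problem the abstract describes corresponds to the wrapper classes: every time a source changes its format, its wrapper's `records` method must be rewritten.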

  2. Project Report on DOE Young Investigator Grant (Contract No. DE-FG02-02ER25525) Dynamic Scheduling and Fusion of Irregular Computation (August 15, 2002 to August 14, 2005)

    SciTech Connect (OSTI)

    Chen Ding

    2005-08-16

    Computer simulation has become increasingly important in many scientific disciplines, but its performance and scalability are severely limited by the memory throughput on today's computer systems. With the support of this grant, we first designed training-based prediction, which accurately predicts the memory performance of large applications before their execution. Then we developed optimization techniques using dynamic computation fusion and large-scale data transformation. The research work has three major components. The first is modeling and prediction of cache behavior. We have developed a new technique, which uses reuse distance information from training inputs and then extracts a parameterized model of the program's cache miss rates for any input size and for any size of fully associative cache. Using the model we have built a web-based tool using three-dimensional visualization. The new model can help to build cost-effective computer systems, design better benchmark suites, and improve task scheduling on heterogeneous systems. The second component is global computation for improving cache performance. We have developed an algorithm for dynamic data partitioning using sampling theory and probability distribution. Recent work from a number of groups shows that manual or semi-manual computation fusion has significant benefits in physical, mechanical, and biological simulations as well as information retrieval and machine verification. We have developed an automatic tool that measures the potential of computation fusion. The new system can be used by high-performance application programmers to estimate the potential of locality improvement for a program before trying complex transformations for a specific cache system. The last component studies models of spatial locality and the problem of data layout. In scientific programs, most data are stored in arrays. 
Grand challenge problems such as hydrodynamics simulation and data mining may use an enormous number of data elements. To optimize the layout across multiple arrays, we have developed a formal model called reference affinity. We collaborated with the IBM production compiler group and designed an efficient compiler analysis that performs as well as data or code profiling does. Based on these results, the IBM group has filed a patent and is including this technique in their product compiler. A major part of the project is the development of software tools. We have developed web-based visualization for program locality. In addition, we have implemented a prototype of array regrouping in the IBM compiler. The full implementation is expected to come out of IBM in the near future and to benefit scientific applications running on IBM supercomputers. We have also developed a test environment for studying the limit of computation fusion. Finally, our work has directly influenced the design of the Intel Itanium compiler. The project has strengthened the research relation between the PI's group and groups in DoE labs. The PI was an invited speaker at the Center for Applied Scientific Computing Seminar Series at the early stage of the project. The question the audience was most curious about was the limit of computation fusion, which has been studied in depth in this research. In addition, the seminar directly helped a group at Lawrence Livermore to achieve a four times speedup on an important DoE code. The PI helped to organize a number of high-performance computing forums, including the founding of a workshop on memory system performance (MSP). In the past two years, one fourth of the papers in the workshop came from researchers in the Lawrence Livermore, Argonne, Los Alamos, and Lawrence Berkeley national laboratories. The PI lectured frequently on DoE-funded research. 
In a broader context, high-performance computing is central to America's scientific and economic stature in the world,
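
    The reuse-distance analysis at the heart of the cache model described above can be illustrated with a naive sketch: the reuse distance of an access is the number of distinct addresses touched since the previous access to the same address. Production tools use tree-based algorithms; this quadratic version is for illustration only:

```python
# Naive sketch of reuse-distance measurement (illustration only): the
# reuse distance of an access is the number of distinct addresses
# touched since the previous access to the same address. Real tools
# use tree-based structures; this version is quadratic in trace length.

def reuse_distances(trace):
    last_pos = {}            # address -> index of its previous access
    distances = []
    for i, addr in enumerate(trace):
        if addr in last_pos:
            # distinct addresses between the two accesses to addr
            distances.append(len(set(trace[last_pos[addr] + 1:i])))
        else:
            distances.append(None)   # cold (first) access
        last_pos[addr] = i
    return distances

# In the trace a b c a, the reuse of 'a' skips over {b, c}.
print(reuse_distances(list("abca")))   # -> [None, None, None, 2]
```

The histogram of these distances, collected on small training inputs, is the raw material from which a parameterized miss-rate model of the kind described above can be fit.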

  3. Study of Particle Rotation Effect in Gas-Solid Flows using Direct Numerical Simulation with a Lattice Boltzmann Method

    SciTech Connect (OSTI)

    Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang; Yang, Hui

    2014-09-30

    A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles by a fraction of the Eulerian grid spacing helps increase the convergence accuracy of the method. An over-relaxation technique in the procedure of the multi-direct forcing method and the classical fourth-order Runge-Kutta scheme in the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The preexisting code, which had a first-order convergence rate, was updated so that it now resolves the translational and rotational motion of particles with a second-order convergence rate. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles through a new formula for the number of Lagrangian markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict correctly the angular velocity of a particle. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code, along with the new formula for the number of Lagrangian markers, was further validated by solving several theoretical problems. 
Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotational and rotational spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, and both low and moderate particle Reynolds numbers to compare the simulated results with the literature results and develop a new drag force formula, a new lift force formula, and a new torque formula. Random arrays of solid particles in fluids are generated with Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles over high solid volume fractions. A new drag force formula was developed with extensive simulated results to be closely applicable to real processes over the entire range of packing fractions and both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers. Drag force is basically unchanged as the angle of the rotating axis varies.
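
    One core ingredient of immersed boundary methods like the one above is interpolation between the Eulerian grid and the Lagrangian markers through a regularized delta function. The 1-D sketch below uses Peskin's classical 4-point delta as a generic illustration; it is not the authors' implementation:

```python
# Generic 1-D illustration of immersed-boundary interpolation: grid
# values are gathered to a Lagrangian marker through Peskin's 4-point
# regularized delta function. This sketches the IBM ingredient only.

import math

def peskin_delta(r):
    """Peskin's 4-point regularized delta (support |r| < 2 cells)."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def interpolate(u, x_marker, h=1.0):
    """Interpolate grid field u (node i at x = i*h) to a marker position."""
    return sum(u[i] * peskin_delta((i * h - x_marker) / h)
               for i in range(len(u)))

# The delta weights form a partition of unity, so a constant field is
# reproduced exactly at any marker position.
print(round(interpolate([1.0] * 12, 5.3), 12))  # -> 1.0
```

The multi-direct forcing described above iterates a spread/interpolate cycle built from exactly this kind of kernel until the no-slip condition at the markers is satisfied to tolerance.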

  4. Simulating atmosphere flow for wind energy applications with WRF-LES

    SciTech Connect (OSTI)

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-01-14

    Forecasts of available wind energy resources at high spatial resolution enable users to site wind turbines in optimal locations, to forecast available resources for integration into power grids, to schedule maintenance on wind energy facilities, and to define design criteria for next-generation turbines. This array of research needs implies that an appropriate forecasting tool must be able to account for mesoscale processes like frontal passages, surface-atmosphere interactions inducing local-scale circulations, and the microscale effects of atmospheric stability such as breaking Kelvin-Helmholtz billows. This range of scales and processes demands a mesoscale model with large-eddy simulation (LES) capabilities which can also account for varying atmospheric stability. Numerical weather prediction models, such as the Weather Research and Forecasting model (WRF), excel at predicting synoptic and mesoscale phenomena. With grid spacings of less than 1 km (as is often required for wind energy applications), however, the limits of WRF's subfilter scale (SFS) turbulence parameterizations are exposed, and fundamental problems arise, associated with modeling the scales of motion between those which LES can represent and those for which large-scale PBL parameterizations apply. To address these issues, we have implemented significant modifications to the ARW core of the Weather Research and Forecasting model, including the Nonlinear Backscatter model with Anisotropy (NBA) SFS model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We are also modifying WRF's terrain-following coordinate system by implementing an immersed boundary method (IBM) approach to account for the effects of complex terrain. Companion papers presenting idealized simulations with NBA-RSFS-WRF (Mirocha et al.) and IBM-WRF (K. A. Lundquist et al.) are also presented. 
Observations of flow through the Altamont Pass (Northern California) wind farm are available for validation of the WRF modeling tool for wind energy applications. In this presentation, we use these data to evaluate simulations using the NBA-RSFS-WRF tool in multiple configurations. We vary nesting capabilities, multiple levels of RSFS reconstruction, SFS turbulence models (the new NBA turbulence model versus existing WRF SFS turbulence models) to illustrate the capabilities of the modeling tool and to prioritize recommendations for operational uses. Nested simulations which capture both significant mesoscale processes as well as local-scale stable boundary layer effects are required to effectively predict available wind resources at turbine height.

  5. 2008 ALCF annual report.

    SciTech Connect (OSTI)

    Drugan, C.

    2009-12-07

    The word 'breakthrough' aptly describes the transformational science and milestones achieved at the Argonne Leadership Computing Facility (ALCF) throughout 2008. The number of research endeavors undertaken at the ALCF through the U.S. Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program grew from 9 in 2007 to 20 in 2008. The allocation of computer time awarded to researchers on the Blue Gene/P also spiked significantly - from nearly 10 million processor hours in 2007 to 111 million in 2008. To support this research, we expanded the capabilities of Intrepid, an IBM Blue Gene/P system at the ALCF, to 557 teraflops (TF) for production use. Furthermore, we enabled breakthrough levels of productivity and capability in visualization and data analysis with Eureka, a powerful installation of NVIDIA Quadro Plex S4 external graphics processing units. Eureka delivered a quantum leap in visual compute density, providing more than 111 TF and more than 3.2 terabytes of RAM. On April 21, 2008, the dedication of the ALCF realized DOE's vision to bring the power of the Department's high performance computing to open scientific research. In June, the IBM Blue Gene/P supercomputer at the ALCF debuted as the world's fastest for open science and third fastest overall. No question that the science benefited from this growth and system improvement. Four research projects spearheaded by Argonne National Laboratory computer scientists and ALCF users were named to the list of top ten scientific accomplishments supported by DOE's Advanced Scientific Computing Research (ASCR) program. Three of the top ten projects used extensive grants of computing time on the ALCF's Blue Gene/P to model the molecular basis of Parkinson's disease, design proteins at atomic scale, and create enzymes. As the year came to a close, the ALCF was recognized with several prestigious awards at SC08 in November. 
We provided resources for Linear Scaling Divide-and-Conquer Electronic Structure Calculations for Thousand Atom Nanostructures, a collaborative effort between Argonne, Lawrence Berkeley National Laboratory, and Oak Ridge National Laboratory that received the ACM Gordon Bell Prize Special Award for Algorithmic Innovation. The ALCF also was named a winner in two of the four categories in the HPC Challenge best performance benchmark competition.

  6. HARE: Final Report

    SciTech Connect (OSTI)

    Mckie, Jim

    2012-01-09

    This report documents the results of work done over a 6-year period under the FAST-OS programs. The first effort, called Right-Weight Kernels (RWK), was concerned with improving measurements of OS noise so it could be treated quantitatively, and with evaluating the use of two operating systems, Linux and Plan 9, on HPC systems and determining how these operating systems needed to be extended or changed for HPC while still retaining their general-purpose nature. The second program, HARE, explored the creation of alternative runtime models, building on RWK. All of the HARE work was done on Plan 9. The HARE researchers were mindful of the very good Linux and LWK work being done at other labs and saw no need to recreate it. Even given this limited funding, the two efforts had outsized impact:

    - Helped Cray decide to use Linux, instead of a custom kernel, and provided the tools needed to make Linux perform well
    - Created a successor operating system to Plan 9, NIX, which has been taken in by Bell Labs for further development
    - Created a standard system measurement tool, Fixed Time Quantum (FTQ), which is widely used for measuring operating system impact on applications
    - Spurred the use of the 9p protocol in several organizations, including IBM
    - Built software in use at many companies, including IBM, Cray, and Google
    - Spurred the creation of alternative runtimes for use on HPC systems
    - Demonstrated that, with proper modifications, a general-purpose operating system can provide communications up to 3 times as effective as user-level libraries

    Open source was a key part of this work. The code developed for this project is in wide use and available at many places. The core Blue Gene code is available at https://bitbucket.org/ericvh/hare. We describe details of these impacts in the following sections. 
The rest of this report is organized as follows: First, we describe commercial impact; next, we describe the FTQ benchmark and its impact in more detail; operating systems and runtime research follows; we discuss infrastructure software; and close with a description of the new NIX operating system, future work, and conclusions.
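
    The Fixed Time Quantum idea mentioned above can be sketched simply: count how much unit work completes in each fixed-length time slice, and read dips in the per-quantum count as OS interference. This is a simplified illustration, not the actual FTQ tool:

```python
# Simplified illustration of the Fixed Time Quantum (FTQ) measurement,
# not the actual tool: count trivial work units per fixed-length time
# slice; dips in the per-quantum count indicate OS "noise".

import time

def ftq_samples(n_quanta, quantum_s=0.001):
    """Return work completed in each of n_quanta fixed time slices."""
    samples = []
    for _ in range(n_quanta):
        deadline = time.perf_counter() + quantum_s
        work = 0
        while time.perf_counter() < deadline:
            work += 1                 # one unit of trivial work
        samples.append(work)
    return samples

samples = ftq_samples(20)
# A crude noise indicator: how far the worst quantum falls below the mean.
print(min(samples) / (sum(samples) / len(samples)) <= 1.0)  # -> True
```

On a quiet machine the per-quantum counts are nearly constant; periodic OS activity shows up as periodic dips, which is what makes FTQ data amenable to frequency-domain analysis.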

  7. Petascale algorithms for reactor hydrodynamics.

    SciTech Connect (OSTI)

    Fischer, P.; Lottes, J.; Pointer, W. D.; Siegel, A.

    2008-01-01

    We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.

  8. On-line test of signal validation software on the LOBI-MOD2 facility in Ispra, Italy

    SciTech Connect (OSTI)

    Prock, J.; Labeit, M.; Ohlmer, E. (Joint Research Centre)

    1992-01-01

    A computer program for the detection of abrupt changes in nonhardware redundant measurement signals that uses different methods of analytical redundancy is developed by the Gesellschaft fur Reaktorsicherheit, Garching, Federal Republic of Germany. The program, instrumental fault detection and identification (IFDI) module, validates in real time output signals of power plant components that are scanned at a fixed rate. The IFDI module, implemented on an IBM-compatible personal computer (PC) with an 80386 processor, is tested on-line at the light water reactor off-normal behavior investigations (LOBI-MOD2) facility in the Joint Research Centre, Ispra, Italy, during the loss-of-feedwater experiment BT-15/BT-16 on November 22, 1990. The measurement signals validated by the IFDI module originate from one of the two LOBI-MOD2 facility's steam generators. During the experiment, sensor faults are simulated by falsifying the measurement signals through electrical resistances arranged in series. In this paper questions about the signal validation software and the steam generator's model are dealt with briefly, while the experimental environment and the results obtained are discussed in detail.

  9. Department of Defense (DOD) renewables and energy efficiency planning (REEP) program manual

    SciTech Connect (OSTI)

    Nemeth, R.J.; Fournier, D.; Debaillie, L.; Edgar, L.; Stroot, P.; Beasley, R.; Edgar, D.; McMillen, L.; Marren, M.

    1995-08-01

    The Renewables and Energy Efficiency Planning (REEP) program was developed at the US Army Construction Engineering Research Laboratories (USACERL). This program allows for the analysis of 78 energy and water conservation opportunities at 239 major DOD installations. REEP uses a series of algorithms in conjunction with installation-specific data to estimate the energy and water conservation potential for entire installations. The program provides the energy, financial, pollution, and social benefits of conservation initiatives. The open architecture of the program allows for simple modification of energy and water conservation variables and installation database values to allow for individualized analysis. The program is essentially a high-level screening tool that can be used to help identify and focus preliminary conservation studies. The REEP program requires an IBM PC or compatible with an 80386 or 80486 microprocessor. It also requires approximately 4 megabytes of disk space and at least 8 megabytes of RAM. The system was developed for a Windows environment and requires Microsoft Windows 3.1 or higher to run properly.

  10. Code manual for MACCS2: Volume 1, user's guide

    SciTech Connect (OSTI)

    Chanin, D.I.; Young, M.L.

    1997-03-01

    This report describes the use of the MACCS2 code. The document is primarily a user's guide, though some model description information is included. MACCS2 represents a major enhancement of its predecessor MACCS, the MELCOR Accident Consequence Code System. MACCS, distributed by government code centers since 1990, was developed to evaluate the impacts of severe accidents at nuclear power plants on the surrounding public. The principal phenomena considered are atmospheric transport and deposition under time-variant meteorology, short- and long-term mitigative actions and exposure pathways, deterministic and stochastic health effects, and economic costs. No other U.S. code that is publicly available at present offers all these capabilities. MACCS2 was developed as a general-purpose tool applicable to diverse reactor and nonreactor facilities licensed by the Nuclear Regulatory Commission or operated by the Department of Energy or the Department of Defense. The MACCS2 package includes three primary enhancements: (1) a more flexible emergency-response model, (2) an expanded library of radionuclides, and (3) a semidynamic food-chain model. Other improvements are in the areas of phenomenological modeling and new output options. Initial installation of the code, written in FORTRAN 77, requires a 486 or higher IBM-compatible PC with 8 MB of RAM.

  11. Global-Address Space Networking (GASNet) Library

    Energy Science and Technology Software Center (OSTI)

    2011-04-06

    GASNet (Global-Address Space Networking) is a language-independent, low-level networking layer that provides network-independent, high-performance communication primitives tailored for implementing parallel global address space SPMD languages such as UPC and Titanium. The interface is primarily intended as a compilation target and for use by runtime library writers (as opposed to end users), and the primary goals are high performance, interface portability, and expressiveness. GASNet is designed specifically to support high-performance, portable implementations of global address space languages on modern high-end communication networks. The interface provides the flexibility and extensibility required to express a wide variety of communication patterns without sacrificing performance by imposing large computational overheads in the interface. The design of the GASNet interface is partitioned into two layers to maximize porting ease without sacrificing performance: the lower level is a narrow but very general interface called the GASNet core API; its design is based heavily on Active Messages, and it is implemented directly on top of each individual network architecture. The upper level is a wider and more expressive interface called the GASNet extended API, which provides high-level operations such as remote memory access and various collective operations. This release implements GASNet over MPI, the Quadrics "elan" API, the Myrinet "GM" API, and the "LAPI" interface to the IBM SP switch. A template is provided for adding support for additional network interfaces.
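
    The two-layer design described above rests on Active Messages: a short message names a handler index at the target, and the target runs that handler on the payload when the message arrives. The toy sketch below shows a "remote put" built from one active message; the in-process Node class, handler table, and delivery call are illustrative stand-ins, not the real GASNet C API:

```python
# Toy model of the Active Message pattern underlying the GASNet core
# API: a message carries a handler index plus payload, and the target
# runs the registered handler on arrival. Everything here is an
# in-process stand-in for illustration only.

class Node:
    def __init__(self):
        self.handlers = {}   # handler index -> callable
        self.memory = {}     # simulated addressable memory

    def register(self, idx, fn):
        self.handlers[idx] = fn

    def deliver(self, idx, *payload):
        # Runs at the target when the active message arrives.
        return self.handlers[idx](self, *payload)

# A "remote put" of the kind the extended API layers over the core API.
PUT = 1
target = Node()
target.register(PUT, lambda node, addr, val: node.memory.__setitem__(addr, val))

target.deliver(PUT, 0x100, 42)   # initiator sends; handler stores at target
print(target.memory[0x100])      # -> 42
```

The point of the layering is visible even in the toy: the narrow core (register/deliver) is all that must be ported per network, while richer operations are composed on top of it.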

  12. Common Geometry Module

    Energy Science and Technology Software Center (OSTI)

    2005-01-01

    The Common Geometry Module (CGM) is a code library which provides geometry functionality used for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, like geometry creation, query and modification; CGM also includes capabilities not commonly found in solid modeling engines, like geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed beside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers, using MPI-based communication. Future plans for CGM are to port it to different solid modeling engines, including Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.

  13. What then do we do about computer security?

    SciTech Connect (OSTI)

    Suppona, Roger A.; Mayo, Jackson R.; Davis, Christopher Edward; Berg, Michael J.; Wyss, Gregory Dane

    2012-01-01

    This report presents the answers that an informal and unfunded group at SNL provided for questions concerning computer security posed by Jim Gosler, Sandia Fellow (00002). The primary purpose of this report is to record our current answers; hopefully those answers will turn out to be answers indeed. The group was formed in November 2010. In November 2010 Jim Gosler, Sandia Fellow, asked several of us several pointed questions about computer security metrics. Never mind that some of the best minds in the field have been trying to crack this nut without success for decades. Jim asked Campbell to lead an informal and unfunded group to answer the questions. With time Jim invited several more Sandians to join in. We met a number of times both with Jim and without him. At Jim's direction we contacted a number of people outside Sandia who Jim thought could help. For example, we interacted with IBM's T.J. Watson Research Center and held a one-day, videoconference workshop with them on the questions.

  14. Self-propelled in-tube shuttle and control system for automated measurements of magnetic field alignment

    SciTech Connect (OSTI)

    Boroski, W.N.; Nicol, T.H.; Pidcoe, S.V. (Space Systems Div.); Zink, R.A.

    1990-03-01

    A magnetic field alignment gauge is used to measure the field angle as a function of axial position in each of the magnets for the Superconducting Super Collider (SSC). Present measurements are made by manually pushing the gauge through the magnet bore tube and stopping at intervals to record field measurements. Gauge location is controlled through graduation marks and alignment pins on the push rods. Field measurements are recorded on a logging multimeter with tape output. Described is a computerized control system being developed to replace the manual procedure for field alignment measurements. The automated system employs a pneumatic walking device to move the measurement gauge through the bore tube. Movement of the device, called the Self-Propelled In-Tube Shuttle (SPITS), is accomplished through an integral, gas-driven, double-acting cylinder. The motion of the SPITS is transferred to the bore tube by means of a pair of controlled, retractable support feet. Control of the SPITS is accomplished through an RS-422 interface from an IBM-compatible computer to a series of solenoid-actuated air valves. Direction of SPITS travel is determined by the air-valve sequence, and is managed through the control software. Precise axial position of the gauge within the magnet is returned to the control system through an optically-encoded digital position transducer attached to the shuttle. Discussed is the performance of the transport device and control system during preliminary testing of the first prototype shuttle. 1 ref., 7 figs.

  15. Prototype prosperity-diversity game for the Laboratory Development Division of Sandia National Laboratories

    SciTech Connect (OSTI)

    VanDevender, P.; Berman, M.; Savage, K.

    1996-02-01

    The Prosperity Game conducted for the Laboratory Development Division of Sandia National Laboratories on May 24-25, 1995, focused on the individual and organizational autonomy plaguing the Department of Energy (DOE)-Congress-Laboratories' ability to manage the wrenching change of declining budgets. Prosperity Games are an outgrowth and adaptation of move/countermove and seminar War Games. Each Prosperity Game is unique in that both the game format and the player contributions vary from game to game. This particular Prosperity Game was played by volunteers from Sandia National Laboratories, Eastman Kodak, IBM, and AT&T. Since the participants fully control the content of the games, the specific outcomes will be different when the team for each laboratory, Congress, DOE, and the Laboratory Operating Board (now Laboratory Operations Board) is composed of executives from those respective organizations. Nevertheless, the strategies and implementing agreements suggest that the Prosperity Games stimulate cooperative behaviors and may permit the executives of the institutions to safely explore the consequences of a family of DOE concert.

  16. Software Roadmap to Plug and Play Petaflop/s

    SciTech Connect (OSTI)

    Kramer, Bill; Carter, Jonathan; Skinner, David; Oliker, Lenny; Husbands, Parry; Hargrove, Paul; Shalf, John; Marques, Osni; Ng, Esmond; Drummond, Tony; Yelick, Kathy

    2006-07-31

    In the next five years, the DOE expects to build systems that approach a petaflop in scale. In the near term (two years), DOE will have several near-petaflops systems that are 10 percent to 25 percent of a petaflop-scale system. A common feature of these precursors to petaflop systems (such as the Cray XT3 or the IBM BlueGene/L) is that they rely on an unprecedented degree of concurrency, which puts stress on every aspect of HPC system design. Such complex systems will likely break current best practices for fault resilience, I/O scaling, and debugging, and even raise fundamental questions about languages and application programming models. It is important that potential problems are anticipated far enough in advance that they can be addressed in time to prepare the way for petaflop-scale systems. This report considers the following four questions: (1) What software is on a critical path to make the systems work? (2) What are the strengths/weaknesses of the vendors and of existing vendor solutions? (3) What are the local strengths at the labs? (4) Who are other key players who will play a role and can help?

  17. Diagnosing the Causes and Severity of One-sided Message Contention

    SciTech Connect (OSTI)

    Tallent, Nathan R.; Vishnu, Abhinav; van Dam, Hubertus; Daily, Jeffrey A.; Kerbyson, Darren J.; Hoisie, Adolfy

    2015-02-11

    Two trends suggest network contention for one-sided messages is poised to become a performance problem that concerns application developers: an increased interest in one-sided programming models and a rising ratio of hardware threads to network injection bandwidth. Unfortunately, it is difficult to reason about network contention and one-sided messages because one-sided tasks can either decrease or increase contention. We present effective and portable techniques for diagnosing the causes and severity of one-sided message contention. To detect that a message is affected by contention, we maintain statistics representing instantaneous (non-local) network resource demand. Using lightweight measurement and modeling, we identify the portion of a message's latency that is due to contention and whether contention occurs at the initiator or target. We attribute these metrics to program statements in their full static and dynamic context. We characterize contention for an important computational chemistry benchmark on InfiniBand, Cray Aries, and IBM Blue Gene/Q interconnects. We pinpoint the sources of contention, estimate their severity, and show that when message delivery time deviates from an ideal model, there are other messages contending for the same network links. With a small change to the benchmark, we reduce contention up to 50% and improve total runtime as much as 20%.
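The detection idea above can be illustrated with a minimal sketch: compare a message's measured latency against an ideal, contention-free cost model, and attribute the excess to contention. This is not the paper's implementation; the alpha-beta model parameters and the `tolerance` threshold below are made-up illustrative values.

```python
# Hypothetical sketch of contention detection: a message whose measured
# latency deviates from an ideal (contention-free) model is flagged, and
# the deviation is the contention metric. Parameter values are invented.
def ideal_latency(nbytes, alpha=1.5e-6, inv_beta=1.0e-10):
    """Simple alpha-beta cost model: startup latency plus bytes / bandwidth."""
    return alpha + nbytes * inv_beta

def contention_excess(nbytes, measured_s, tolerance=1.2):
    """Return the latency attributable to contention (0.0 if within model)."""
    expected = ideal_latency(nbytes)
    if measured_s > tolerance * expected:
        return measured_s - expected
    return 0.0

# An 8 KiB message that took 4x the model prediction: the excess is flagged.
excess = contention_excess(8192, measured_s=4 * ideal_latency(8192))
```

In a real tool these metrics would then be attributed to program statements in their calling context, as the abstract describes.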

  18. One-Dimensional Heat Conduction

    Energy Science and Technology Software Center (OSTI)

    1992-03-09

    ICARUS-LLNL was developed to solve one-dimensional planar, cylindrical, or spherical conduction heat transfer problems. The IBM PC version is a family of programs including ICARUSB, an interactive BASIC heat conduction program; ICARUSF, a FORTRAN heat conduction program; PREICAR, a BASIC preprocessor for ICARUSF; and PLOTIC and CPLOTIC, interpretive BASIC and compiler BASIC plot postprocessor programs. Both ICARUSB and ICARUSF account for multiple material regions and complex boundary conditions, such as convection or radiation. In addition, ICARUSF accounts for temperature-dependent material properties and time or temperature-dependent boundary conditions. PREICAR is a user-friendly preprocessor used to generate or modify ICARUSF input data. PLOTIC and CPLOTIC generate plots of the temperature or heat flux profile at specified times, plots of the variation of temperature or heat flux with time at selected nodes, or plots of the solution grid. First developed in 1974 to allow easy modeling of complex one-dimensional systems, its original application was in the nuclear explosive testing program. Since then it has undergone extensive revision and been applied to problems dealing with laser fusion target fabrication, heat loads on underground tests, magnetic fusion switching tube anodes, and nuclear waste isolation canisters.
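The class of problem ICARUSB and ICARUSF solve can be sketched in a few lines. The following is a minimal illustration, not the ICARUS source: an explicit (FTCS) finite-difference march of the 1D planar heat equation with fixed-temperature boundaries; all values are made-up.

```python
# Hypothetical sketch of 1D planar conduction, the class of problem the
# ICARUS family solves. Explicit FTCS scheme; stable for r <= 0.5.
import numpy as np

def solve_heat_1d(T0, alpha, dx, dt, steps, t_left, t_right):
    """March dT/dt = alpha * d2T/dx2 forward in time."""
    T = np.asarray(T0, dtype=float).copy()
    r = alpha * dt / dx**2
    assert r <= 0.5, "FTCS is unstable for r > 0.5"
    for _ in range(steps):
        T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[0], T[-1] = t_left, t_right   # fixed-temperature boundaries
    return T

# A bar initially at 0 with both ends held at 100 relaxes toward uniform 100.
T = solve_heat_1d(np.zeros(21), alpha=1e-4, dx=0.01, dt=0.2, steps=5000,
                  t_left=100.0, t_right=100.0)
```

Convection or radiation boundaries, multiple material regions, and temperature-dependent properties (which ICARUSF handles) would replace the fixed-boundary assignment and constant `r` above.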

  19. Engineering Design Information System (EDIS)

    SciTech Connect (OSTI)

    Smith, P.S.; Short, R.D.; Schwarz, R.K.

    1990-11-01

    This manual is a guide to the use of the Engineering Design Information System (EDIS) Phase I. The system runs on the Martin Marietta Energy Systems, Inc., IBM 3081 unclassified computer. This is the first phase in the implementation of EDIS, which is an index, storage, and retrieval system for engineering documents produced at various plants and laboratories operated by Energy Systems for the Department of Energy. This manual presents an overview of EDIS, describing the system's purpose; the functions it performs; hardware, software, and security requirements; and help and error functions. This manual describes how to access EDIS and how to operate system functions using Database 2 (DB2), Time Sharing Option (TSO), Interactive System Productivity Facility (ISPF), and Soft Master viewing features employed by this system. Appendix A contains a description of the Soft Master viewing capabilities provided through the EDIS View function. Appendix B provides examples of the system error screens and help screens for valid codes used for screen entry. Appendix C contains a dictionary of data elements and descriptions.

  20. MPH: A Library for Distributed Multi-Component Environment

    Energy Science and Technology Software Center (OSTI)

    2001-05-01

    A growing trend in developing large and complex applications on today's teraflops computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. We develop MPH, a multi-component handshaking library that allows component models to recognize and talk to each other in a convenient and consistent way, and thus to run multi-component, multi-executable applications effectively on distributed memory architectures. MPH provides the following capabilities: component name registration, resource allocation, inter-component communication, inquiry on the multi-component environment, and standard in/out redirect. It supports the following four integration mechanisms: Multi-Component Single-Executable (MCSE); Single-Component Multi-Executable (SCME); Multi-Component Multi-Executable (MCME); and Multi-Instance Multi-Executable (MIME). MPH currently works on the IBM SP, SGI Origin, Compaq AlphaSC, Cray T3E, and PC clusters. It is being adopted in NCAR's CCSM and Colorado State University's icosahedral-grid coupled model. A joint communicator between any two components can be created. MPI communication between local and remote processors is invoked through component names and the local id. More functions are available to inquire the global id, local id, number of executables, etc.

  1. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

    SciTech Connect (OSTI)

    Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2007-06-01

    Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
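The cache-blocking idea studied in the paper can be sketched abstractly: traverse the grid in tiles small enough that the working set stays in cache, without changing the arithmetic. A minimal, made-up illustration with a 2D 5-point stencil (not the paper's code; tile width `tj` is an assumed tuning parameter):

```python
# Hypothetical sketch of cache blocking for a 2D 5-point stencil sweep.
# Blocking changes only the traversal order, so results must be identical.
import numpy as np

def sweep_naive(src):
    dst = src.copy()
    dst[1:-1, 1:-1] = 0.25 * (src[:-2, 1:-1] + src[2:, 1:-1]
                              + src[1:-1, :-2] + src[1:-1, 2:])
    return dst

def sweep_blocked(src, tj=64):
    """Visit the grid in column blocks of width tj, so roughly tj + 2
    columns form the working set between successive row accesses."""
    dst = src.copy()
    n = src.shape[1]
    for j0 in range(1, n - 1, tj):
        j1 = min(j0 + tj, n - 1)
        dst[1:-1, j0:j1] = 0.25 * (src[:-2, j0:j1] + src[2:, j0:j1]
                                   + src[1:-1, j0 - 1:j1 - 1]
                                   + src[1:-1, j0 + 1:j1 + 1])
    return dst

a = np.random.rand(256, 256)
```

The paper's finding is that on modern memory systems the payoff of such blocking is smaller than it once was; the sketch only shows the transformation itself.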

  2. A performance comparison of current HPC systems: Blue Gene/Q, Cray XE6 and InfiniBand systems

    SciTech Connect (OSTI)

    Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav; Hoisie, Adolfy

    2014-01-01

    We present here a performance analysis of three current architectures that have become commonplace in the High Performance Computing world. Blue Gene/Q is the third generation of systems from IBM that use modestly performing cores, but at large scale, in order to achieve high performance. The XE6 is the latest in a long line of Cray systems that use a 3-D topology, but the first to use the Gemini interconnection network. InfiniBand provides the flexibility of using compute nodes from many vendors that can be connected in many possible topologies. The performance characteristics of each vary vastly, and the way in which nodes are allocated in each type of system can significantly impact achieved performance. In this work we compare these three systems using a combination of micro-benchmarks and a set of production applications. In addition we also examine the differences in performance variability observed on each system and quantify the lost performance using a combination of both empirical measurements and performance models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.

  3. PDS SHRINK

    SciTech Connect (OSTI)

    Phillion, D.

    1991-12-15

    This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16 bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory. These formats are all explained by the application code. The image may be zoomed infinitely and the gray scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same change of lambda width. It is not necessary to do this, however. Lineouts may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic with nice values and proper scientific notation. Typically, spectral lines are curved.
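The spectral-calibration step described above (identify lines, fit a polynomial from pixel position to wavelength, then remap to uniform wavelength bins) can be sketched with NumPy. This is an illustration only; the line list below is invented, and PDS SHRINK itself is a Macintosh/PC application, not Python.

```python
# Hypothetical sketch of the position-to-wavelength calibration described
# above. The (pixel, wavelength-in-nm) line identifications are made up.
import numpy as np

pixels = np.array([120.0, 410.0, 760.0, 990.0])   # identified line positions
lams = np.array([253.7, 365.0, 546.1, 656.3])     # known wavelengths (nm)

coeffs = np.polyfit(pixels, lams, deg=2)          # quadratic dispersion model
wavelength = np.poly1d(coeffs)                    # callable: pixel -> nm

# The image columns can then be resampled onto a uniform wavelength grid
# (the "same change of lambda width" remapping the abstract mentions).
uniform_lam = np.linspace(wavelength(0), wavelength(1023), 1024)
```

A quadratic is an assumption here; the degree would be chosen to match the spectrograph's dispersion curvature.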

  4. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    SciTech Connect (OSTI)

    O'Brien, M J; Brantley, P S

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
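Point (2) above, deciding that particle streaming has finished, amounts to a global agreement that every particle has been accounted for and every send has been matched by a receive. A toy, single-process sketch of that bookkeeping (not the paper's MPI algorithm; domains and hop counts are simulated in one process):

```python
# Hypothetical sketch of streaming-termination bookkeeping: streaming is
# done when all queues drain, all sends are matched by receives, and the
# count of finished particles equals the count of particles created.
import random

def stream_until_done(n_domains, particles):
    """particles: dict mapping domain -> list of remaining hop counts."""
    queues = {d: list(p) for d, p in particles.items()}
    sent = received = finished = 0
    while any(queues.values()):
        for d in range(n_domains):
            pending, queues[d] = queues[d], []
            for hops in pending:
                if hops == 0:
                    finished += 1                    # censused in this domain
                else:
                    dest = random.randrange(n_domains)
                    sent += 1                        # "send" to dest domain
                    queues[dest].append(hops - 1)
                    received += 1                    # "receive" at dest
    total = sum(len(p) for p in particles.values())
    return sent == received and finished == total
```

In the real distributed setting these counters are global quantities, so checking them scalably (without all-to-all communication) is exactly the challenge the paper addresses.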

  5. Scalable Equation of State Capability

    SciTech Connect (OSTI)

    Epperly, T W; Fritsch, F N; Norquist, P D; Sanford, L A

    2007-12-03

    The purpose of this techbase project was to investigate the use of parallel array data types to reduce the memory footprint of the Livermore Equation Of State (LEOS) library. Addressing the memory scalability of LEOS is necessary to run large scientific simulations on IBM BG/L and future architectures with low memory per processing core. We considered using normal MPI, one-sided MPI, and Global Arrays to manage the distributed array and ended up choosing Global Arrays because it was the only communication library that provided the level of asynchronous access required. To reduce the runtime overhead of using a parallel array data structure, a least recently used (LRU) caching algorithm was used to provide a local cache of commonly used parts of the parallel array. The approach was initially implemented in an isolated copy of LEOS and was later integrated into the main trunk of the LEOS Subversion repository. The approach was tested using a simple test. Testing indicated that the approach was feasible, and the simple LRU caching had an 86% hit rate.
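The LRU layer described above can be sketched compactly: a bounded local cache sits in front of an expensive remote fetch (in LEOS's case, a Global Arrays access), evicting the least recently used entry when full. This is an illustration, not LEOS code; the `fetch` callback and capacity are made up.

```python
# Hypothetical sketch of an LRU cache in front of a remote parallel-array
# fetch. OrderedDict preserves insertion order; move_to_end marks recency.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity, fetch):
        self.capacity, self.fetch = capacity, fetch
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)           # most recently used
        else:
            self.misses += 1
            self.data[key] = self.fetch(key)     # e.g. a remote array get
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)    # evict least recently used
        return self.data[key]

# Stand-in fetch function; a real one would pull a block of the table.
cache = LRUCache(capacity=2, fetch=lambda k: k * k)
for k in [1, 2, 1, 3, 1]:
    cache.get(k)
```

With this access pattern the cache of size 2 serves 2 of 5 requests locally; the 86% hit rate reported above is the analogous figure for the real workload.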

  6. Coal Preparation Plant Simulation

    Energy Science and Technology Software Center (OSTI)

    1992-02-25

    COALPREP assesses the degree of cleaning obtained with different coal feeds for a given plant configuration and mode of operation. It allows the user to simulate coal preparation plants to determine an optimum plant configuration for a given degree of cleaning. The user can compare the performance of alternative plant configurations as well as determine the impact of various modes of operation for a proposed configuration. The devices that can be modelled include froth flotation devices, washers, dewatering equipment, thermal dryers, rotary breakers, roll crushers, classifiers, screens, blenders and splitters, and gravity thickeners. The user must specify the plant configuration and operating conditions and a description of the coal feed. COALPREP then determines the flowrates within the plant and a description of each flow stream (i.e. the weight distribution, percent ash, pyritic sulfur and total sulfur, moisture, BTU content, recoveries, and specific gravity of separation). COALPREP also includes a capability for calculating the cleaning cost per ton of coal. The IBM PC version contains two auxiliary programs, DATAPREP and FORLIST. DATAPREP is an interactive preprocessor for creating and editing COALPREP input data. FORLIST converts carriage-control characters in FORTRAN output data to ASCII line-feed (X'0A') characters.

  7. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    SciTech Connect (OSTI)

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
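The search-based strategy described above can be sketched in miniature: generate several parameterized variants of a kernel, time each on the target machine, and keep the fastest. This is an illustration of the loop structure only (the paper's generators emit C for SpMV, stencil, and LBMHD kernels); the blocked mat-vec and candidate sizes below are made up.

```python
# Hypothetical sketch of an auto-tuning search: benchmark each candidate
# variant on the actual machine and select the best, rather than hand-tune.
import timeit
import numpy as np

def make_variant(block):
    """Return a row-blocked mat-vec kernel parameterized by block size."""
    def kernel(A, x):
        y = np.zeros(A.shape[0])
        for i0 in range(0, A.shape[0], block):
            y[i0:i0 + block] = A[i0:i0 + block] @ x
        return y
    return kernel

def autotune(A, x, candidates):
    best, best_t = None, float("inf")
    for block in candidates:
        kern = make_variant(block)
        t = min(timeit.repeat(lambda: kern(A, x), number=3, repeat=3))
        if t < best_t:
            best, best_t = block, t
    return best

A = np.random.rand(512, 512)
x = np.random.rand(512)
block = autotune(A, x, candidates=[32, 128, 512])
```

Every variant must compute the same answer; only the winner's parameters are kept, which is what amortizes the human tuning effort across platforms.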

  8. Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms

    SciTech Connect (OSTI)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2009-04-10

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon E5345 (Clovertown), AMD Opteron 2214 (Santa Rosa), AMD Opteron 2356 (Barcelona), Sun T5140 T2+ (Victoria Falls), as well as a QS20 IBM Cell Blade. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 15x improvement compared with the original code at a given concurrency. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.

  9. A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; Shephard, Mark S.

    2013-01-01

    Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process, thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n ghost layers, up to the point where the whole partitioned mesh is ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.
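Characteristic (2) above, neighborhood rather than all-to-all communication, is the key structural point: each part exchanges boundary entities only with the parts adjacent to it. A toy sketch on a 1D-partitioned mesh (not the Flexible Distributed Mesh Database implementation; parts are plain lists here):

```python
# Hypothetical sketch of one-layer ghost creation on a 1D-partitioned mesh.
# Each part receives copies only of its neighbors' boundary entities, so
# communication is purely neighborhood-local (no all-to-all).
def create_ghost_layer(parts):
    """parts: list of entity-id lists, one per 'process'.
    Returns the ghost list each part would hold after the exchange."""
    ghosts = [[] for _ in parts]
    for p in range(len(parts)):
        if p > 0:                          # copy of left neighbor's last entity
            ghosts[p].append(parts[p - 1][-1])
        if p < len(parts) - 1:             # copy of right neighbor's first entity
            ghosts[p].append(parts[p + 1][0])
    return ghosts

ghosts = create_ghost_layer([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
```

Repeating the exchange with the previously ghosted entities included yields the n-layer generalization the abstract mentions; in 2D/3D the "neighbors" are determined by the selected mesh adjacencies.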

  10. Users manual for the Chameleon parallel programming tools

    SciTech Connect (OSTI)

    Gropp, W.; Smith, B.

    1993-06-01

    Message passing is a common method for writing programs for distributed-memory parallel computers. Unfortunately, the lack of a standard for message passing has hampered the construction of portable and efficient parallel programs. In an attempt to remedy this problem, a number of groups have developed their own message-passing systems, each with its own strengths and weaknesses. Chameleon is a second-generation system of this type. Rather than replacing these existing systems, Chameleon is meant to supplement them by providing a uniform way to access many of these systems. Chameleon's goals are to (a) be very lightweight (low over-head), (b) be highly portable, and (c) help standardize program startup and the use of emerging message-passing operations such as collective operations on subsets of processors. Chameleon also provides a way to port programs written using PICL or Intel NX message passing to other systems, including collections of workstations. Chameleon is tracking the Message-Passing Interface (MPI) draft standard and will provide both an MPI implementation and an MPI transport layer. Chameleon provides support for heterogeneous computing by using p4 and PVM. Chameleon's support for homogeneous computing includes the portable libraries p4, PICL, and PVM and vendor-specific implementation for Intel NX, IBM EUI (SP-1), and Thinking Machines CMMD (CM-5). Support for Ncube and PVM 3.x is also under development.

  11. User's manual for ONEDANT: a code package for one-dimensional, diffusion-accelerated, neutral-particle transport

    SciTech Connect (OSTI)

    O'Dell, R.D.; Brinkley, F.W. Jr.; Marr, D.R.

    1982-02-01

    ONEDANT is designed for the CDC-7600, but the program has been implemented and run on the IBM-370/190 and CRAY-1 computers. ONEDANT solves the one-dimensional multigroup transport equation in plane, cylindrical, spherical, and two-angle plane geometries. Both regular and adjoint, inhomogeneous and homogeneous (k_eff and eigenvalue search) problems subject to vacuum, reflective, periodic, white, albedo, or inhomogeneous boundary flux conditions are solved. General anisotropic scattering is allowed and anisotropic inhomogeneous sources are permitted. ONEDANT numerically solves the one-dimensional, multigroup form of the neutral-particle, steady-state form of the Boltzmann transport equation. The discrete-ordinates approximation is used for treating the angular variation of the particle distribution and the diamond-difference scheme is used for phase space discretization. Negative fluxes are eliminated by a local set-to-zero-and-correct algorithm. A standard inner (within-group) iteration, outer (energy-group-dependent source) iteration technique is used. Both inner and outer iterations are accelerated using the diffusion synthetic acceleration method. (WHK)

  12. High-speed imaging of blood splatter patterns

    SciTech Connect (OSTI)

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J.; Levine, G.F.

    1993-05-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  13. High-speed imaging of blood splatter patterns

    SciTech Connect (OSTI)

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J. ); Levine, G.F. . Bureau of Forensic Services)

    1993-01-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  14. An update on modeling land-ice/ocean interactions in CESM

    SciTech Connect (OSTI)

    Asay-davis, Xylar

    2011-01-24

    This talk is an update on ongoing land-ice/ocean coupling work within the Community Earth System Model (CESM). The coupling method is designed to allow simulation of a fully dynamic ice/ocean interface, while requiring minimal modification to the existing ocean model (the Parallel Ocean Program, POP). The method makes use of an immersed boundary method (IBM) to represent the geometry of the ice-ocean interface without requiring that the computational grid be modified in time. We show many of the remaining development challenges that need to be addressed in order to perform global, century-long climate runs with fully coupled ocean and ice sheet models. These challenges include moving to a new grid where the computational pole is no longer at the true south pole and several changes to the coupler (the software tool used to communicate between model components) to allow the boundary between land and ocean to vary in time. We discuss benefits for ice/ocean coupling that would be gained from longer-term ocean model development to allow for natural salt fluxes (which conserve both water and salt mass, rather than water volume).

  15. Karlsruhe Database for Radioactive Wastes (KADABRA) - Accounting and Management System for Radioactive Waste Treatment - 12275

    SciTech Connect (OSTI)

    Himmerkus, Felix; Rittmeyer, Cornelia [WAK Rueckbau- und Entsorgungs- GmbH, 76339 Eggenstein-Leopoldshafen (Germany)

    2012-07-01

    The data management system KADABRA was designed according to the purposes of the Central Decontamination Department (HDB) of the Wiederaufarbeitungsanlage Karlsruhe Rueckbau- und Entsorgungs-GmbH (WAK GmbH), which is specialized in the treatment and conditioning of radioactive waste. The layout considers the major treatment processes of the HDB as well as regulatory and legal requirements. KADABRA is designed as an SAG ADABAS application on an IBM System z mainframe. The main function of the system is the data management of all processes related to treatment, transfer, and storage of radioactive material within HDB. KADABRA records the relevant data concerning radioactive residues, interim products, and waste products, as well as the production parameters relevant for final disposal. Analytical data from the laboratory and non-destructive assay systems, which describe the chemical and radiological properties of residues, production batches, interim products, and final waste products, can be linked to the respective dataset for documentation and declaration. The system enables the operator to trace the radioactive material through processing and storage. Information on the actual status of the material as well as radiological data and storage position can be obtained immediately on request. A variety of programs with access to the database allow the generation of individual reports on periodic or special request. KADABRA offers a high security standard and is constantly adapted to the current requirements of the organization. (authors)

  16. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    SciTech Connect (OSTI)

    Nomura, K; Seymour, R; Wang, W; Kalia, R; Nakano, A; Vashishta, P; Shimojo, F; Yang, L H

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops·day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microsecond).

  17. Second update The Gordon Bell Competition entry gb110s2

    SciTech Connect (OSTI)

    Vranas, P; Soltz, R

    2006-11-12

    Since the update to our entry of October 20th we have just made a significant improvement. We understand that this is past the deadline for updates and very close to the conference date. However, Lawrence Livermore National Laboratory has just updated the BG/L system software on their full 64-rack BG/L supercomputer to IBM-BGL Release 3. As we discussed in our update of October 20, this release includes our custom L1 and SRAM access functions that allow us to achieve higher sustained performance. Just a few hours ago we got access to the full system and obtained the fastest sustained performance point. In the full 131,072-CPU-core system, QCD sustains 70.9 teraflops for the Dirac operator and 67.9 teraflops for the full Conjugate Gradient inverter. This is about 20% faster than our last update. We attach the corresponding speedup figure. As you can tell, the speedup is perfect. This figure is the same as Figure 1 of our October 20th update except that it now includes the 131,072-CPU-core point.

  18. WINDOW 4.0: Program description. A PC program for analyzing the thermal performance of fenestration products

    SciTech Connect (OSTI)

    Not Available

    1992-03-01

    WINDOW 4.0 is a publicly available, IBM PC-compatible computer program developed by the Windows and Daylighting Group at Lawrence Berkeley Laboratory for calculating total window thermal performance indices (e.g. U-values, solar heat gain coefficients, shading coefficients, and visible transmittances). WINDOW 4.0 provides a versatile heat transfer analysis method consistent with the rating procedure developed by the National Fenestration Rating Council (NFRC). The program can be used to design and develop new products, to rate and compare performance characteristics of all types of window products, to assist educators in teaching heat transfer through windows, and to help public officials in developing building energy codes. WINDOW 4.0 is a major revision to WINDOW 3.1, and we strongly urge all users to read this manual before using the program. Users who need professional assistance with the WINDOW 4.0 program or other window performance simulation issues are encouraged to contact one or more of the NFRC-accredited Simulation Laboratories. A list of these accredited simulation professionals is available from the NFRC.
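    As a rough illustration of what a total thermal performance index involves, a center-of-glass U-value can be estimated from series thermal resistances. This is a back-of-envelope sketch with assumed film coefficients and gap resistance; WINDOW's and the NFRC's actual procedures model radiative and convective exchange in far more detail:

```python
def center_of_glass_u(layer_resistances, r_inside=0.12, r_outside=0.03):
    """Center-of-glass U-value (W/m2K) from series resistances (m2K/W).
    The interior/exterior film resistances are illustrative defaults,
    not NFRC standard values."""
    r_total = r_inside + r_outside + sum(layer_resistances)
    return 1.0 / r_total

# Example: double glazing -- two 6 mm glass panes (k ~ 1.0 W/mK)
# separated by an air gap with an assumed resistance of 0.16 m2K/W.
glass = 0.006 / 1.0
u = center_of_glass_u([glass, 0.16, glass])
```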

  20. xdamp Version 6: an IDL-based data and image manipulation program.

    SciTech Connect (OSTI)

    Ballard, William Parker

    2012-04-01

    The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ graphics package (available from Computer Associates International, Inc., Garden City, NY) as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM® workstations, Hewlett Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers, and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. We have verified operation, albeit with some minor IDL bugs, on personal computers using Windows 7 and Windows Vista; Unix platforms; and Macintosh computers. Version 6 is an update that uses the IDL Virtual Machine to resolve the need for licensing IDL.

  1. Comparison of open-source linear programming solvers.

    SciTech Connect (OSTI)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin D.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph

    2013-10-01

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
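    All of the solvers compared here implement variants of the simplex and/or interior-point methods. A minimal tableau simplex conveys the core algorithm (a teaching sketch for problems already in standard form with b >= 0; production solvers such as CLP and CPLEX add presolve, sparse factorizations, and anti-cycling safeguards):

```python
def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0 so the
    all-slack basis is feasible. Returns (x, optimal value)."""
    m, n = len(A), len(c)
    # Tableau: constraint rows then objective row; columns are the
    # original variables, slack variables, and the right-hand side.
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))
    while True:
        # Entering column: most negative reduced cost.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-12:
            break  # optimal
        # Ratio test picks the leaving row.
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-12]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, row = min(ratios)
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [a - f * p for a, p in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]
```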

  2. Quantitative genetic activity graphical profiles for use in chemical evaluation

    SciTech Connect (OSTI)

    Waters, M.D.; Stack, H.F.; Garrett, N.E.; Jackson, M.A.

    1990-12-31

    A graphic approach, termed a Genetic Activity Profile (GAP), was developed to display a matrix of data on the genetic and related effects of selected chemical agents. The profiles provide a visual overview of the quantitative (doses) and qualitative (test results) data for each chemical. Either the lowest effective dose or the highest ineffective dose is recorded for each agent and bioassay. Up to 200 different test systems are represented across the GAP. Bioassay systems are organized according to the phylogeny of the test organisms and the end points of genetic activity. The methodology for producing and evaluating genetic activity profiles was developed in collaboration with the International Agency for Research on Cancer (IARC). Data on individual chemicals were compiled by IARC and by the US Environmental Protection Agency (EPA). Data are available on 343 compounds selected from volumes 1-53 of the IARC Monographs and on 115 compounds identified as Superfund Priority Substances. Software to display the GAPs on an IBM-compatible personal computer is available from the authors. Structurally similar compounds frequently display qualitatively and quantitatively similar profiles of genetic activity. By examining the patterns of GAPs for pairs and groups of chemicals, it is possible to make more informed decisions regarding the selection of test batteries to be used in evaluating chemical analogs. GAPs have provided useful data for the development of weight-of-evidence hazard-ranking schemes. Also, some knowledge of the potential genetic activity of complex environmental mixtures may be gained from an assessment of the genetic activity profiles of component chemicals. The fundamental techniques and computer programs devised for the GAP database may be used to develop similar databases in other disciplines. 36 refs., 2 figs.
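    The per-cell convention described above (record the lowest effective dose when any result is positive, otherwise the highest ineffective dose) can be sketched as follows (a hypothetical helper of our own, not the authors' software):

```python
def gap_entry(results):
    """Given (dose, positive?) results for one agent/bioassay pair,
    return the lowest effective dose (LED) if any test was positive,
    else the highest ineffective dose (HID) -- the value a GAP plots."""
    positives = [dose for dose, positive in results if positive]
    if positives:
        return ("LED", min(positives))
    return ("HID", max(dose for dose, _ in results))
```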

  3. Adversary Sequence Interruption Model

    Energy Science and Technology Software Center (OSTI)

    1985-11-15

    PC EASI is an IBM personal computer or PC-compatible version of an analytical technique for measuring the effectiveness of physical protection systems. PC EASI utilizes a methodology called Estimate of Adversary Sequence Interruption (EASI), which evaluates the probability of interruption (PI) for a given sequence of adversary tasks. Probability of interruption is defined as the probability that the response force will arrive before the adversary force has completed its task. The EASI methodology is a probabilistic approach that analytically evaluates the basic functions of the physical security system (detection, assessment, communications, and delay) with respect to response time along a single adversary path. It is important that the most critical scenarios for each target be identified to ensure that vulnerabilities have not been overlooked. If the facility is not overly complex, this can be accomplished by examining all paths. If the facility is complex, a global model such as Safeguards Automated Facility Evaluation (SAFE) may be used to identify the most vulnerable paths. PC EASI is menu-driven, with screen forms for entering and editing the basic scenarios. In addition to evaluating PI for the basic scenario, the sensitivities of many of the parameters chosen in the scenario can be analyzed. These sensitivities provide information to aid the analyst in determining the tradeoffs for reducing the probability of interruption. PC EASI runs under Micro Data Base Systems' proprietary database management system Knowledgeman. KMAN provides the user environment and file management for the specified basic scenarios, and KGRAPH the graphical output of the sensitivity calculations. This software is not included. Due to errors in release 2 of KMAN, PC EASI will not execute properly; release 1.07 of KMAN is required.
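    The EASI calculation can be sketched as a running sum over detection opportunities along the path: interruption at step i requires no earlier detection, detection at i, successful communication, and a response that beats the adversary's remaining delay. This is an illustrative sketch with normally distributed timing assumptions, not the PC EASI code:

```python
from math import erf, sqrt

def p_response_in_time(delay_mean, delay_sd, response_mean, response_sd):
    """P(response time < remaining adversary delay), modeling both as
    independent normal variables (an assumption of this sketch)."""
    mu = delay_mean - response_mean
    sd = sqrt(delay_sd ** 2 + response_sd ** 2)
    return 0.5 * (1.0 + erf(mu / (sd * sqrt(2.0))))

def easi_pi(steps, p_comm, response_mean, response_sd):
    """steps: (p_detect, remaining_delay_mean, remaining_delay_sd) for
    each detection opportunity along one adversary path, in order.
    Returns the probability of interruption."""
    pi, p_no_detect_yet = 0.0, 1.0
    for p_d, d_mean, d_sd in steps:
        p_timely = p_response_in_time(d_mean, d_sd, response_mean, response_sd)
        pi += p_no_detect_yet * p_d * p_comm * p_timely
        p_no_detect_yet *= (1.0 - p_d)  # adversary slipped past this sensor
    return pi
```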

  4. CPS and the Fermilab farms

    SciTech Connect (OSTI)

    Fausey, M.R.

    1992-06-01

    Cooperative Processes Software (CPS) is a parallel programming toolkit developed at the Fermi National Accelerator Laboratory. It is the most recent product in an evolution of systems aimed at finding a cost-effective solution to the enormous computing requirements in experimental high energy physics. Parallel programs written with CPS are large-grained, which means that the parallelism occurs at the subroutine level rather than at the traditional single-line-of-code level. This fits the requirements of high energy physics applications, such as event reconstruction or detector simulations, quite well. It also satisfies the requirements of applications in many other fields. One example is in the pharmaceutical industry: in the field of computational chemistry, the process of drug design may be accelerated with this approach. CPS programs run as a collection of processes distributed over many computers. CPS currently supports a mixture of heterogeneous UNIX-based workstations which communicate over networks with TCP/IP. CPS is best suited for jobs with relatively low I/O requirements compared to CPU use. The CPS toolkit supports message passing, remote subroutine calls, process synchronization, bulk data transfers, and a mechanism called process queues, by which one process can find another that has reached a particular state. The CPS software supports both batch processing and computer center operations. The system is currently running in production mode on two farms of processors at Fermilab. One farm consists of approximately 90 IBM RS/6000 model 320 workstations, and the other has 85 Silicon Graphics 4D/35 workstations. This paper first briefly describes the history of parallel processing at Fermilab which led to the development of CPS. Then the CPS software and the CPS Batch queueing system are described. Finally, the experiences of using CPS in production on the Fermilab processor farms are described.
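    The large-grained pattern described above amounts to farming whole subroutine-level work units out to a pool of workers. A toy sketch (threads stand in for CPS's distributed processes, and the names are ours; this is not the CPS API):

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct_event(event):
    """Stand-in for one large-grained unit of work, e.g. reconstructing
    a single event. CPS parallelism lives at this subroutine level,
    not at the single-line-of-code level."""
    return sum(x * x for x in event)

def run_farm(events, n_workers=4):
    """Dispatch whole events to a pool of workers, the way a CPS job
    farms events out to workstations on the network."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(reconstruct_event, events))
```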

  5. DYNA3D, INGRID, and TAURUS: an integrated, interactive software system for crashworthiness engineering

    SciTech Connect (OSTI)

    Benson, D.J.; Hallquist, J.O.; Stillman, D.W.

    1985-04-01

    Crashworthiness engineering has always been a high priority at Lawrence Livermore National Laboratory because of its role in the safe transport of radioactive material for the nuclear power industry and military. As a result, the authors have developed an integrated, interactive set of finite element programs for crashworthiness analysis. The heart of the system is DYNA3D, an explicit, fully vectorized, large-deformation structural dynamics code. DYNA3D has the following four capabilities that are critical for the efficient and accurate analysis of crashes: (1) fully nonlinear solid, shell, and beam elements for representing a structure; (2) a broad range of constitutive models for representing the materials; (3) sophisticated contact algorithms for the impact interactions; and (4) a rigid body capability to represent the bodies away from the impact zones at a greatly reduced cost without sacrificing any accuracy in the momentum calculations. To generate the large and complex data files for DYNA3D, INGRID, a general purpose mesh generator, is used. It runs on everything from IBM PCs to Crays, and can generate 1000 nodes/minute on a PC. With its efficient hidden-line algorithms and many options for specifying geometry, INGRID also doubles as a geometric modeller. TAURUS, an interactive post-processor, is used to display DYNA3D output. In addition to the standard monochrome hidden-line display, time history plotting, and contouring, TAURUS generates interactive color displays on 8-color video screens by plotting color bands, superimposed on the mesh, which indicate the value of the state variables. For higher quality color output, graphic output files may be sent to the DICOMED film recorders. We have found that color is every bit as important as hidden-line removal in aiding the analyst in understanding the results. In this paper the basic methodologies of the programs are presented along with several crashworthiness calculations.

  6. Computer simulation of coal preparation plants. Part 2. User's manual. Final report

    SciTech Connect (OSTI)

    Gottfried, B.S.; Tierney, J.W.

    1985-12-01

    This report describes a comprehensive computer program that allows the user to simulate the performance of realistic coal preparation plants. The program is very flexible in the sense that it can accommodate any particular plant configuration that may be of interest. This allows the user to compare the performance of different plant configurations and to determine the impact of various modes of operation with the same configuration. In addition, the program can be used to assess the degree of cleaning obtained with different coal feeds for a given plant configuration and a given mode of operation. Use of the simulator requires that the user specify the appearance of the plant configuration, the plant operating conditions, and a description of the coal feed. The simulator will then determine the flowrates within the plant, and a description of each flowrate (i.e., the weight distribution, percent ash, pyritic sulfur and total sulfur, moisture, and Btu content). The simulation program has been written in modular form using the Fortran language. It can be implemented on a great many different types of computers, ranging from large scientific mainframes to IBM-type personal computers with a fixed disk. Some customization may be required, however, to ensure compatibility with the features of Fortran available on a particular computer. Part I of this report contains a general description of the methods used to carry out the simulation. Each of the major types of units is described separately, in addition to a description of the overall system analysis. Part II is intended as a user's manual. It contains a listing of the mainframe version of the program, instructions for its use (on both a mainframe and a microcomputer), and output for a representative sample problem.

  7. Nesting large-eddy simulations within mesoscale simulations for wind energy applications

    SciTech Connect (OSTI)

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-09-08

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolutions of tens of meters or finer are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved WRF's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions and to allow adequate spin-up of turbulence in the LES domain.

  8. THE LOS ALAMOS NATIONAL LABORATORY ATMOSPHERIC TRANSPORT AND DIFFUSION MODELS

    SciTech Connect (OSTI)

    M. WILLIAMS

    1999-08-01

    The LANL atmospheric transport and diffusion models are composed of two state-of-the-art computer codes. The first is an atmospheric wind model called HOTMAC, Higher Order Turbulence Model for Atmospheric Circulations. HOTMAC generates wind and turbulence fields by solving a set of atmospheric dynamic equations. The second is an atmospheric diffusion model called RAPTAD, Random Particle Transport And Diffusion. RAPTAD uses the wind and turbulence output from HOTMAC to compute particle trajectories and the concentration at any location downwind from a source. Both of these models, originally developed as research codes on supercomputers, have been modified to run on microcomputers. Because the capability of microcomputers is advancing so rapidly, the expectation is that they will eventually become as good as today's supercomputers. Both models now run on desktop or deskside computers, such as an IBM PC/AT with an Opus PM 350 32-bit coprocessor board and a SUN workstation. The codes have also been modified so that high-level graphics (NCAR Graphics) of the output from both models are displayed on the desktop computer monitors and plotted on a laser printer. Two programs, HOTPLT and RAPLOT, produce wind vector plots of the output from HOTMAC and particle trajectory plots of the output from RAPTAD, respectively. A third, CONPLT, provides concentration contour plots. Section II describes step-by-step operational procedures, specifically for a SUN-4 deskside computer, for running the main programs HOTMAC and RAPTAD and the graphics programs that display the results. Governing equations, boundary conditions, and initial values of HOTMAC and RAPTAD are discussed in Section III. Finite-difference representations of the governing equations, numerical solution procedures, and the grid system are given in Section IV.
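    RAPTAD's random-particle approach advects each particle with the mean wind and adds a stochastic turbulent displacement. A toy one-step analogue of that update (illustrative only, not RAPTAD's actual formulation; the parameters are placeholders):

```python
import random

def random_particle_step(pos, wind, sigma, dt):
    """One Lagrangian step of a random-particle dispersion model:
    mean advection by the local wind plus a Gaussian turbulent kick
    whose spread grows like sqrt(dt)."""
    return tuple(p + u * dt + random.gauss(0.0, sigma) * (dt ** 0.5)
                 for p, u in zip(pos, wind))
```

Concentrations at a receptor would then be estimated by counting many such particles per sampling volume.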

  9. RTAP evaluation

    SciTech Connect (OSTI)

    Cupps, K.; Elko, S.; Folta, P.

    1995-01-23

    An in-depth analysis of the RTAP product was undertaken within the CNC associate program to determine the feasibility of utilizing it to replace the current Supervisory Control System that supports the AVLIS program. This document contains the results of that evaluation. With some fundamental redesign, the current Supervisory Control System could meet the needs described above. The redesign would require a large amount of software rewriting and would be very time consuming, and the higher-level functionality (alarming, automation, etc.) would have to wait until its completion. Our current understanding and preliminary testing indicate that using commercial software is the best way to get these new features at the minimum cost to the program. Additional savings will be obtained by moving the maintenance costs of the basic control system from in-house to commercial industry, allowing our developers to concentrate on the unique control areas that require customization. Our current operating system, VMS, has become a hindrance; the UNIX operating system has become the choice for most scientific and engineering systems, and we should follow suit. As a result of the commercial system survey referenced above, we selected RTAP, a SCADA product developed by Hewlett Packard (HP), as the most favorable product to replace the current supervisory system in AVLIS. It is an extremely open system, with a large, well-defined Application Programming Interface (API). This will allow the seamless integration of unique front-end devices in the laser area (e.g. the Optical Device Controller). RTAP also possesses functionality that is lacking in our current system: integrated alarming, a real-time configurable database, system scalability, and a Sequence Control Language (developed by CPU, an RTAP Channel Partner) that will facilitate the automation necessary to bring the AVLIS process to plant-line operation. It runs on HP-9000, DEC-Alpha, IBM-RS6000, and Sun workstations.

  10. Guide to verification and validation of the SCALE-4 criticality safety software

    SciTech Connect (OSTI)

    Emmett, M.B.; Jordan, W.C.

    1996-12-01

    Whenever a decision is made to newly install the SCALE nuclear criticality safety software on a computer system, the user should run a set of verification and validation (V&V) test cases to demonstrate that the software is properly installed and functioning correctly. This report is intended to serve as a guide for this V&V in that it specifies test cases to run and gives expected results. The report describes the V&V that has been performed for the nuclear criticality safety software in a version of SCALE-4. The verification problems specified by the code developers have been run, and the results compare favorably with those in the SCALE 4.2 baseline. The results reported in this document are from the SCALE 4.2P version, which was run on an IBM RS/6000 workstation. These results verify that the SCALE-4 nuclear criticality safety software has been correctly installed and is functioning properly. A validation has been performed for KENO V.a utilizing the CSAS25 criticality sequence and the SCALE 27-group cross-section library for ²³³U, ²³⁵U, and ²³⁹Pu fissile systems in a broad range of geometries and fissile fuel forms. The experimental models used for the validation were taken from three previous validations of KENO V.a. A statistical analysis of the calculated results was used to determine the average calculational bias and a subcritical k-eff criterion for each class of systems validated. Included in the statistical analysis is a means of estimating the margin of subcriticality in k-eff. This validation demonstrates that KENO V.a and the 27-group library may be used for nuclear criticality safety computations provided the system being analyzed falls within the range of the experiments used in the validation.
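    In outline, the statistical treatment estimates the mean calculational bias from benchmarks of known-critical experiments (true k-eff = 1) and subtracts uncertainty and administrative margins from it. A simplified sketch (the formula, coverage factor, and margin here are illustrative assumptions, not the report's values):

```python
from statistics import mean, stdev

def subcritical_limit(keff_results, margin=0.05, k=2.0):
    """From calculated k-eff values for critical benchmark experiments,
    estimate the average bias and a subcritical k-eff criterion of the
    form 1 + bias - k*sigma - administrative margin."""
    bias = mean(keff_results) - 1.0          # average calculational bias
    sigma = stdev(keff_results)              # spread of the benchmark results
    return bias, 1.0 + bias - k * sigma - margin
```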

  11. Scalable Performance Measurement and Analysis

    SciTech Connect (OSTI)

    Gamblin, T

    2009-10-27

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
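    The wavelet approach mentioned above exploits the smoothness of load-balance traces: after a transform, most detail coefficients are near zero and can be dropped. A one-level Haar sketch of the core idea (Libra's multi-scale, systemwide scheme is far more elaborate; this code is ours):

```python
def haar_step(signal):
    """One level of the Haar transform: pairwise averages (the coarse
    approximation) and pairwise half-differences (the details)."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

def compress_load_trace(trace, threshold):
    """Transform a per-timestep load trace, then zero small detail
    coefficients -- lossy compression that keeps the large features."""
    avgs, dets = haar_step(trace)
    dets = [d if abs(d) >= threshold else 0.0 for d in dets]
    return avgs, dets

def reconstruct(avgs, dets):
    """Invert the (thresholded) one-level Haar transform."""
    out = []
    for a, d in zip(avgs, dets):
        out += [a + d, a - d]
    return out
```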

  12. Argonne National Laboratory Physics Division annual report, January--December 1996

    SciTech Connect (OSTI)

    Thayer, K.J.

    1997-08-01

    The past year has seen several of the Physics Division's new research projects reach major milestones with first successful experiments and results: the atomic physics station in the Basic Energy Sciences Research Center at the Argonne Advanced Photon Source was used in first high-energy, high-brilliance x-ray studies in atomic and molecular physics; the Short Orbit Spectrometer in Hall C at the Thomas Jefferson National Accelerator Facility (TJNAF), for which the Argonne medium energy nuclear physics group was responsible, was used extensively in the first round of experiments at TJNAF; at ATLAS, several new beams of radioactive isotopes were developed and used in studies of nuclear physics and nuclear astrophysics; the new ECR ion source at ATLAS was completed, and first commissioning tests indicate excellent performance characteristics; Quantum Monte Carlo calculations of mass-8 nuclei were performed for the first time with realistic nucleon-nucleon interactions using state-of-the-art computers, including Argonne's massively parallel IBM SP. At the same time, other future projects are well under way: preparations for the move of Gammasphere to ATLAS in September 1997 have progressed as planned. These new efforts are imbedded in, or flowing from, the vibrant ongoing research program described in some detail in this report: nuclear structure and reactions with heavy ions; measurements of reactions of astrophysical interest; studies of nucleon and sub-nucleon structures using leptonic probes at intermediate and high energies; and atomic and molecular structure with high-energy x-rays. The experimental efforts are complemented by efforts in theory, from QCD to nucleon-meson systems to the structure and reactions of nuclei. Finally, the operation of ATLAS as a national users facility has achieved a new milestone, with 5,800 hours of beam on target for experiments during the past fiscal year.

  13. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    SciTech Connect (OSTI)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: first, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and second, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance the portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate the provisioning and aggregation of multifaceted resources from the resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems, and computing clouds.

  14. TEDANN: Turbine engine diagnostic artificial neural network

    SciTech Connect (OSTI)

    Kangas, L.J.; Greitzer, F.L.; Illi, O.J. Jr.

    1994-03-17

    The initial focus of TEDANN is on AGT-1500 fuel flow dynamics: that is, fuel flow faults detectable in the signals from the Electronic Control Unit's (ECU) diagnostic connector. These voltage signals represent the status of the Electro-Mechanical Fuel System (EMFS) in response to ECU commands. The EMFS is a fuel metering device that delivers fuel to the turbine engine under the management of the ECU. The ECU is an analog computer whose fuel flow algorithm is dependent upon throttle position, ambient air and turbine inlet temperatures, and compressor and turbine speeds. Each of these variables has a representative voltage signal available at the ECU's J1 diagnostic connector, which is accessed via the Automatic Breakout Box (ABOB). The ABOB is a firmware program capable of converting 128 separate analog data signals into digital format. The ECU's J1 diagnostic connector provides 32 analog signals to the ABOB. The ABOB contains a 128-to-1 multiplexer and an analog-to-digital converter, both operated by an 8-bit embedded controller. The Army Research Laboratory (ARL) developed and published the hardware specifications as well as the micro-code for the ABOB Intel EPROM processor and the internal code for the multiplexer driver subroutine. Once the ECU analog readings are converted into a digital format, the data stream will be input directly into TEDANN via the serial RS-232 port of the Contact Test Set (CTS) computer. The CTS computer is an IBM-compatible personal computer designed and constructed for tactical use on the battlefield. The CTS has a 50 MHz 32-bit Intel 80486DX processor, a 200 MB hard drive, and 8 MB of RAM. The CTS also has serial, parallel, and SCSI interface ports. The CTS will also host a frame-based expert system for diagnosing turbine engine faults (referred to as TED; not shown in Figure 1).
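    TEDANN maps digitized ECU signal vectors to fault indications with a neural network. A minimal forward pass shows the flavor of such a feedforward network (placeholder weights and layer sizes of our own choosing; this is not TEDANN's trained model):

```python
import math

def forward(x, w_hidden, w_out):
    """Forward pass of a tiny fully connected network with sigmoid
    units: x is a vector of digitized sensor voltages, each weight
    matrix is a list of per-unit weight rows."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    hidden = [sig(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    return [sig(sum(wi * hi for wi, hi in zip(row, hidden))) for row in w_out]
```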

  15. RAMONA-4B development for SBWR safety studies

    SciTech Connect (OSTI)

    Rohatgi, U.S.; Aronson, A.L.; Cheng, H.S.; Khan, H.J.; Mallen, A.N.

    1993-12-31

    The Simplified Boiling Water Reactor (SBWR) is a revolutionary design of boiling-water reactor. The design is based on passive safety systems such as natural circulation, gravity flow, pressurized gas, and condensation. The SBWR has no active safety systems, and the flow in the vessel is by natural circulation; a large chimney section above the core provides the buoyancy head for natural circulation. The reactor can be shut down by any of four systems: scram, the Fine Motion Control Rod Drive (FMCRD), Alternate Rod Insertion (ARI), and the Standby Liquid Control System (SLCS). Safety injection is by gravity drain from the Gravity Driven Cooling System (GDCS) and the Suppression Pool (SP). The heat sink is provided by two types of heat exchangers submerged in a tank of water: the Isolation Condenser (IC) and the Passive Containment Cooling System (PCCS). The RAMONA-4B code has been developed to simulate normal operation and reactivity transients, and to address the instability issues for the SBWR. The code has three-dimensional neutron kinetics coupled to multiple parallel-channel thermal-hydraulics. The two-phase thermal-hydraulics is based on a nonhomogeneous, nonequilibrium drift-flux formulation. It employs explicit integration to solve all state equations (except for neutron kinetics) in order to predict instability without numerical damping. The objective of this project is to develop a Sun SPARC- and IBM RISC 6000-based RAMONA-4B code for application to SBWR safety analyses, in particular for stability and ATWS studies.

  16. A Big Data Approach to Analyzing Market Volatility

    SciTech Connect (OSTI)

    Wu, Kesheng; Bethel, E. Wes; Gu, Ming; Leinweber, David; Ruebel, Oliver

    2013-06-05

    Understanding the microstructure of the financial market requires the processing of a vast amount of data related to individual trades, and sometimes even multiple levels of quotes. Analyzing such a large volume of data requires tremendous computing power that is not easily available to financial academics and regulators. Fortunately, publicly funded High Performance Computing (HPC) power is widely available at the National Laboratories in the US. In this paper we demonstrate that HPC resources and techniques from data-intensive science can greatly accelerate the computation of an early-warning indicator called Volume-synchronized Probability of Informed Trading (VPIN). The test data used in this study contains five and a half years' worth of trading data for about 100 of the most liquid futures contracts, includes about 3 billion trades, and occupies 140 GB as text files. By using (1) a more efficient file format for storing the trading records, (2) more effective data structures and algorithms, and (3) parallelized computations, we are able to explore 16,000 different ways of computing VPIN in less than 20 hours on a 32-core IBM DataPlex machine. Our test demonstrates that a modest computer is sufficient to monitor a vast number of trading activities in real time, an ability that could be valuable to regulators. Our test results also confirm that VPIN is a strong predictor of liquidity-induced volatility. With appropriate parameter choices, the false positive rates are about 7% averaged over all the futures contracts in the test data set. More specifically, when VPIN values rise above a threshold (CDF > 0.99), the volatility in the subsequent time windows is higher than the average in 93% of the cases.
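
    The core of the VPIN metric is simple to sketch: trades are packed into consecutive equal-volume buckets, and VPIN is the rolling average of the absolute buy/sell imbalance per bucket. The version below assumes pre-signed volumes and does not split trades across bucket boundaries; both are simplifications of the published procedure, and the bucket size and window length are illustrative.

```python
# Minimal VPIN sketch: signed trade volumes (+buy / -sell) are packed into
# consecutive equal-volume buckets; VPIN at bucket k is the mean of
# |buy - sell| / bucket_volume over the trailing window of buckets.

def vpin(signed_volumes, bucket_volume=1000, window=50):
    imbalances = []
    buy = sell = filled = 0
    for v in signed_volumes:
        if v > 0:
            buy += v
        else:
            sell += -v
        filled += abs(v)
        if filled >= bucket_volume:        # close the current bucket
            imbalances.append(abs(buy - sell) / bucket_volume)
            buy = sell = filled = 0
    # Rolling mean of the per-bucket imbalances.
    return [sum(imbalances[k - window + 1:k + 1]) / window
            for k in range(window - 1, len(imbalances))]


# Five all-buy trades then five all-sell trades, 500-share buckets:
series = vpin([100] * 5 + [-100] * 5, bucket_volume=500, window=1)
```

    One-sided order flow drives the imbalance to 1.0, while balanced flow drives it toward 0, which is why sustained high VPIN flags likely informed trading.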

  17. Studies of acute and chronic radiation injury at the Biological and Medical Research Division, Argonne National Laboratory, 1953-1970: Description of individual studies, data files, codes, and summaries of significant findings

    SciTech Connect (OSTI)

    Grahn, D.; Fox, C.; Wright, B.J.; Carnes, B.A.

    1994-05-01

    Between 1953 and 1970, studies on the long-term effects of external x-ray and {gamma} irradiation on inbred and hybrid mouse stocks were carried out at the Biological and Medical Research Division, Argonne National Laboratory. The results of these studies, plus the mating, litter, and pre-experimental stock records, were routinely coded on IBM cards for statistical analysis and record maintenance. Also retained were the survival data from studies performed in the period 1943-1953 at the National Cancer Institute, National Institutes of Health, Bethesda, Maryland. The card-image data files have been corrected where necessary and refiled on hard disks for long-term storage and ease of accessibility. In this report, the individual studies and data files are described, and pertinent factors regarding caging, husbandry, radiation procedures, choice of animals, and other logistical details are summarized. Some of the findings are also presented. Descriptions of the different mouse stocks and hybrids are included in an appendix; more than three dozen stocks were involved in these studies. Two other appendices detail the data files in their original card-image format and the numerical codes used to describe the animal's exit from an experiment and, for some studies, any associated pathologic findings. Tabular summaries of sample sizes, dose levels, and other variables are also given to assist investigators in their selection of data for analysis. The archive is open to any investigator with legitimate interests and a willingness to collaborate and acknowledge the source of the data and to recognize appropriate conditions or caveats.
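
    Card-image records of the kind described here are fixed-width 80-column lines. The field layout below is purely hypothetical (the actual column assignments and exit codes are defined in the report's appendices); it only illustrates the parsing pattern such archives require.

```python
# Hypothetical fixed-width layout for one 80-column card image. The real
# column assignments are defined in the report's appendices; these names
# and offsets exist only to illustrate the parsing pattern.

FIELDS = {
    "animal_id": (0, 6),     # assumed: 6-column animal identifier
    "dose_cgy": (6, 11),     # assumed: 5-column dose field
    "exit_code": (11, 13),   # assumed: 2-column exit/pathology code
}


def parse_card(line):
    """Split one fixed-width card image into named, whitespace-stripped fields."""
    return {name: line[a:b].strip() for name, (a, b) in FIELDS.items()}


record = parse_card("000123 1500D7".ljust(80))
```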

  18. Towards Energy-Centric Computing and Computer Architecture

    SciTech Connect (OSTI)

    2011-02-09

    Technology forecasts indicate that device scaling will continue well into the next decade. Unfortunately, it is becoming extremely difficult to harness this increase in the number of transistors into performance due to a number of technological, circuit, architectural, methodological and programming challenges. In this talk, I will argue that the key emerging showstopper is power. Voltage scaling as a means to maintain a constant power envelope with an increase in transistor numbers is hitting diminishing returns. As such, to continue riding Moore's law we need to look for drastic measures to cut power. This is definitely the case for server chips in future datacenters, where abundant server parallelism, redundancy and 3D chip integration are likely to remove programming, reliability and bandwidth hurdles, leaving power as the only true limiter. I will present results backing this argument based on validated models for future server chips and parameters extracted from real commercial workloads. Then I use these results to project future research directions for datacenter hardware and software.
    About the speaker: Babak Falsafi is a Professor in the School of Computer and Communication Sciences at EPFL, and an Adjunct Professor of Electrical and Computer Engineering and Computer Science at Carnegie Mellon. He is the founder and the director of the Parallel Systems Architecture Laboratory (PARSA) at EPFL, where he conducts research on architectural support for parallel programming, resilient systems, architectures to break the memory wall, and analytic and simulation tools for computer system performance evaluation. In 1999, in collaboration with T. N. Vijaykumar, he showed for the first time that, contrary to conventional wisdom, multiprocessors do not need relaxed memory consistency models (and the resulting convoluted programming interfaces found and used in modern systems) to achieve high performance. He is a recipient of an NSF CAREER award in 2000, IBM Faculty Partnership Awards between 2001 and 2004, and an Alfred P. Sloan Research Fellowship in 2004. He is a senior member of IEEE and ACM.

  19. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    SciTech Connect (OSTI)

    Shimojo, Fuyuki; Hattori, Shinnosuke [Collaboratory for Advanced Computing and Simulations, Department of Physics and Astronomy, Department of Computer Science, and Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, California 90089-0242 (United States); Department of Physics, Kumamoto University, Kumamoto 860-8555 (Japan)]; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya [Collaboratory for Advanced Computing and Simulations, University of Southern California, Los Angeles, California 90089-0242 (United States)]; Kunaseth, Manaschai [Collaboratory for Advanced Computing and Simulations, University of Southern California, Los Angeles, California 90089-0242 (United States); National Nanotechnology Center, Pathumthani 12120 (Thailand)]; Ohmura, Satoshi [Collaboratory for Advanced Computing and Simulations, University of Southern California, Los Angeles, California 90089-0242 (United States); Department of Physics, Kumamoto University, Kumamoto 860-8555 (Japan); Department of Physics, Kyoto University, Kyoto 606-8502 (Japan)]; Shimamura, Kohei [Collaboratory for Advanced Computing and Simulations, University of Southern California, Los Angeles, California 90089-0242 (United States); Department of Physics, Kumamoto University, Kumamoto 860-8555 (Japan); Department of Applied Quantum Physics and Nuclear Engineering, Kyushu University, Fukuoka 819-0395 (Japan)]

    2014-05-14

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786,432 cores for a 50.3×10{sup 6}-atom SiC system. As a test of production runs, an LDC-DFT-based QMD simulation involving 16,661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on linear-response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques are employed to efficiently calculate the long-range exact exchange correction and excited-state forces. The NAQMD trajectories are analyzed to extract the rates of various excitonic processes, which are then used in KMC simulation to study the dynamics of the global exciton flow network. This has allowed the study of large-scale photoexcitation dynamics in a 6400-atom amorphous molecular solid, reaching experimental time scales.
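
    The divide-conquer-recombine flow lends itself to a generic sketch: partition the problem into overlapping local domains, solve each independently, then synthesize a global solution from the overlaps. The toy below "solves" a 1D array in overlapping chunks and recombines by averaging; it is only a structural analogy, not the LDC-DFT solver.

```python
# Structural analogy to divide-conquer-recombine: split a 1D problem into
# overlapping domains, "solve" each locally, then recombine by averaging the
# overlap regions. The local solve here is just doubling each value.

def divide(data, size, overlap):
    """Divide phase: overlapping local domains as (start, chunk) pairs."""
    domains, start = [], 0
    while start < len(data):
        domains.append((start, data[start:start + size + overlap]))
        start += size
    return domains


def recombine(solved, n):
    """Recombine phase: average every local solution that covers each point."""
    acc, cnt = [0.0] * n, [0] * n
    for start, chunk in solved:
        for i, v in enumerate(chunk):
            acc[start + i] += v
            cnt[start + i] += 1
    return [a / c for a, c in zip(acc, cnt)]


data = list(range(10))
solved = [(s, [2 * v for v in chunk]) for s, chunk in divide(data, 4, 2)]
result = recombine(solved, len(data))
```

    Because the local solves agree on the overlaps here, averaging reproduces the exact global answer; in the real method the overlaps are where the density-adaptive boundary conditions reconcile neighboring domains.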

  20. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    SciTech Connect (OSTI)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C.

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architectures. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains.
The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms. The Data Analytics and Visualization Team lends expertise in tools and methods for high-performance, post-processing of large datasets, interactive data exploration, batch visualization, and production visualization. The Operations Team ensures that system hardware and software work reliably and optimally; system tools are matched to the unique system architectures and scale of ALCF resources; the entire system software stack works smoothly together; and I/O performance issues, bug fixes, and requests for system software are addressed. The User Services and Outreach Team offers frontline services and support to existing and potential ALCF users. The team also provides marketing and outreach to users, DOE, and the broader community.

  1. A Fault Oblivious Extreme-Scale Execution Environment

    SciTech Connect (OSTI)

    McKie, Jim

    2014-11-20

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to the data and work distribution for massively parallel, fault oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next generation exascale operating systems. Our OS work focused on adaptive, application-tailored OS services optimized for multi- and many-core processors. We developed a new operating system, NIX, that supports role-based allocation of cores to processes, which was released to open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on a distributed, fault-tolerant key-value store and identified scaling issues. A second fault-tolerant task parallel library was developed, based on the Linda tuple space model, that used low-level interconnect primitives for optimized communication.
We designed fault tolerance mechanisms for task parallel computations employing work stealing for load balancing that scaled to the largest existing supercomputers. Finally, we implemented the Elastic Building Blocks runtime, a library to manage object-oriented distributed software components. To support the research, we won two INCITE awards for time on Intrepid (BG/P) and Mira (BG/Q). Much of our work has had impact in the OS and runtime community through the ASCR Exascale OS/R workshop and report, leading to the research agenda of the Exascale OS/R program. Our project was, however, also affected by attrition of multiple PIs. While the PIs continued to participate and offer guidance as time permitted, losing these key individuals was unfortunate both for the project and for the DOE HPC community.
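
    The Linda-model task library mentioned above coordinates workers purely through a shared tuple space: producers publish tuples, and consumers withdraw matching ones, with no direct communication. A minimal in-process sketch (the class and method names are this sketch's own, not the FOX project's API):

```python
import threading

# Linda-style tuple space: workers coordinate only through out() (publish a
# tuple) and in_() (withdraw a matching tuple, blocking until one exists).

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Publish a tuple into the space and wake any blocked readers."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def in_(self, pattern):
        """Withdraw the first tuple matching pattern (None acts as a wildcard)."""
        with self._cond:
            while True:
                for t in self._tuples:
                    if len(t) == len(pattern) and all(
                        p is None or p == v for p, v in zip(pattern, t)
                    ):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()


# A producer posts work items; a worker withdraws them by tag.
space = TupleSpace()
space.out(("task", 1))
space.out(("task", 2))
first = space.in_(("task", None))
```

    Decoupling producers from consumers this way is what makes the model attractive for fault tolerance: an undrained tuple simply remains in the space if its intended worker dies.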

  2. PROCEEDINGS OF THE RIKEN BNL RESEARCH CENTER WORKSHOP ON LARGE SCALE COMPUTATIONS IN NUCLEAR PHYSICS USING THE QCDOC, SEPTEMBER 26 - 28, 2002.

    SciTech Connect (OSTI)

    AOKI,Y.; BALTZ,A.; CREUTZ,M.; GYULASSY,M.; OHTA,S.

    2002-09-26

    The massively parallel computer QCDOC (QCD On a Chip) of the RIKEN BNL Research Center (RBRC) will provide ten-teraflop peak performance for lattice gauge calculations. Lattice groups from both Columbia University and RBRC, along with assistance from IBM, jointly handled the design of the QCDOC. RIKEN has provided $5 million in funding to complete the machine in 2003. Some fraction of this computer (perhaps as much as 10%) might be made available for large-scale computations in areas of theoretical nuclear physics other than lattice gauge theory. The purpose of this workshop was to investigate the feasibility and possibility of using a supercomputer such as the QCDOC for lattice, general nuclear theory, and other calculations. The lattice applications to nuclear physics that can be investigated with the QCDOC are varied: for example, the light hadron spectrum, finite temperature QCD, and kaon ({Delta}I = 1/2 and CP violation) and nucleon (the structure of the proton) matrix elements, to name a few. There are also other topics in theoretical nuclear physics that are currently limited by computer resources. Among these are ab initio calculations of nuclear structure for light nuclei (e.g. up to {approx}A = 8 nuclei), nuclear shell model calculations, nuclear hydrodynamics, heavy ion cascade and other transport calculations for RHIC, and nuclear astrophysics topics such as exploding supernovae. The physics topics were quite varied, ranging from simulations of stellar collapse by Douglas Swesty to detailed shell model calculations by David Dean, Takaharu Otsuka, and Noritaka Shimizu. Going outside traditional nuclear physics, James Davenport discussed molecular dynamics simulations and Shailesh Chandrasekharan presented a class of algorithms for simulating a wide variety of fermionic problems. Four speakers addressed various aspects of theory and computational modeling for relativistic heavy ion reactions at RHIC.
Scott Pratt and Steffen Bass gave general overviews of how qualitatively different types of physical processes evolve temporally in heavy ion reactions. Denes Molnar concentrated on the application of hydrodynamics, and Alex Krasnitz on a classical Yang-Mills field theory for the initial phase. We were pleasantly surprised by the excellence of the talks and the substantial interest from all parties. The diversity of the audience forced the speakers to give their talks at an understandable level, which was highly appreciated. One particular bonus of the discussions could be the application of highly developed three-dimensional astrophysics hydrodynamics codes to heavy ion reactions.

  3. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (OSTI)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration, support for a small number of parallel architectures, and basic Fortran integration. Future versions will extend the functionality substantially, provide a number of core parallel tools, and provide support across a wide range of parallel architectures and languages.

  4. Fundamental Mechanisms Driving the Amorphous to Crystalline Phase Transformation

    SciTech Connect (OSTI)

    Reed, B W; Browning, N D; Santala, M K; LaGrange, T; Gilmer, G H; Masiel, D J; Campbell, G H; Raoux, S; Topuria, T; Meister, S; Cui, Y

    2011-01-04

    Phase transformations are ubiquitous, fundamental phenomena that lie at the heart of many structural, optical and electronic properties in condensed matter physics and materials science. Many transformations, especially those occurring under extreme conditions such as rapid changes in the thermodynamic state, are controlled by poorly understood processes involving the nucleation and quenching of metastable phases. Typically these processes occur on time and length scales invisible to most experimental techniques ({micro}s and faster, nm and smaller), so our understanding of the dynamics tends to be very limited and indirect, often relying on simulations combined with experimental study of the ''time infinity'' end state. Experimental techniques that can directly probe phase transformations on their proper time and length scales are therefore key to providing fundamental insights into the whole area of transformation physics and materials science. LLNL possesses a unique dynamic transmission electron microscope (DTEM) capable of taking images and diffraction patterns of laser-driven material processes with resolution measured in nanometers and nanoseconds. The DTEM has previously used time-resolved diffraction patterns to quantitatively study phase transformations that are orders of magnitude too fast for conventional in situ TEM. More recently the microscope has demonstrated the ability to directly image a reaction front moving at {approx}13 nm/ns and the nucleation of a new phase behind that front. Certain compound semiconductor phase change materials, such as Ge{sub 2}Sb{sub 2}Te{sub 5} (GST), Sb{sub 2}Te and GeSb, exhibit a technologically important series of transformations on scales that fall neatly into the performance specifications of the DTEM. If a small portion of such material is heated above its melting point and then rapidly cooled, it quenches into an amorphous state. 
Heating again with a less intense pulse leads to recrystallization into a vacancy-stabilized metastable rock salt structure. Each transformation takes {approx}10-100 ns, and the cycle can be driven repeatedly a very large number of times with a nanosecond laser such as the DTEM's sample drive laser. These materials are widely used in optical storage devices such as rewritable CDs and DVDs, and they are also applied in a novel solid state memory technology - phase change memory (PCM). PCM has the potential to produce nonvolatile memory systems with high speed, extreme density, and very low power requirements. For PCM applications several materials properties are of great importance: the resistivities of both phases, the crystallization temperature, the melting point, the crystallization speed, reversibility (number of phase-transformation cycles without degradation) and stability against crystallization at elevated temperature. For a viable technology, all these properties need to have good scaling behavior, as dimensions of the memory cells will shrink with every generation. In this LDRD project, we used the unique single-shot nanosecond in situ experimentation capabilities of the DTEM to watch these transformations in GST on the time and length scales most relevant for device applications. Interpretation of the results was performed in conjunction with atomistic and finite-element computations. Samples were provided by collaborators at IBM and Stanford University. We observed, and measured the kinetics of, the amorphous-crystalline and melting-solidification transitions in uniform thin-film samples. Above a certain threshold, the crystal nucleation rate was found to be enormously high (with many nuclei appearing per cubic {micro}m even after nanosecond-scale incubation times), in agreement with atomistic simulation and consistent with an extremely low nucleation barrier. 
We developed data reduction techniques based on principal component analysis (PCA), revealing the complex, multi-dimensional evolution of the material while suppressing noise and irrelevant information. Using a novel specimen geometry, we also achieved repeated switching between the amorphous and crystalline phases.
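
    The PCA-based reduction mentioned above projects noisy, high-dimensional measurement frames onto their leading principal components. A small stdlib-only sketch that recovers the first component by power iteration (the actual DTEM analysis pipeline is not described in this abstract; this only shows the core computation):

```python
# Stdlib-only sketch of PCA-style reduction: find the dominant principal
# component of a set of data frames by power iteration on the sample
# covariance matrix.

def first_pc(rows, iters=200):
    """Dominant eigenvector of the sample covariance of rows (list of lists)."""
    dim, n = len(rows[0]), len(rows)
    means = [sum(r[j] for r in rows) / n for j in range(dim)]
    centered = [[r[j] - means[j] for j in range(dim)] for r in rows]
    cov = [[sum(a[i] * a[j] for a in centered) / (n - 1)
            for j in range(dim)] for i in range(dim)]
    v = [1.0] * dim
    for _ in range(iters):                      # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v


# Points that vary mostly along x: the first PC should point along (+/-1, 0).
pts = [[float(x), 0.1 * (x % 2)] for x in range(10)]
pc = first_pc(pts)
```

    Projecting each frame onto a handful of such components keeps the dominant physical evolution while discarding most of the noise dimensions.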

  5. Recovery Act: Integrated DC-DC Conversion for Energy-Efficient Multicore Processors

    SciTech Connect (OSTI)

    Shepard, Kenneth L

    2013-03-31

    In this project, we have developed the use of thin-film magnetic materials to improve the energy efficiency of digital computing applications by enabling integrated dc-dc power conversion and management with on-chip power inductors. Integrated voltage regulators also enable fine-grained power management by providing dynamic scaling of the supply voltage in concert with the clock frequency of synchronous logic to throttle power consumption at periods of low computational demand. The voltage converter generates lower output voltages during periods of low computational performance requirements and higher output voltages during periods of high computational performance requirements. Implementation of integrated power conversion requires high-capacity energy storage devices, which are generally not available in traditional semiconductor processes. We achieve this with integration of thin-film magnetic materials into a conventional complementary metal-oxide-semiconductor (CMOS) process for high-quality on-chip power inductors. This project includes a body of work conducted to develop integrated switch-mode voltage regulators with thin-film magnetic power inductors. Soft-magnetic materials and inductor topologies are selected and optimized with the intent of maximizing the efficiency and current density of the integrated regulators. A custom integrated circuit (IC) is designed and fabricated in 45-nm CMOS silicon-on-insulator (SOI) to provide the control system and power train necessary to drive the power inductors, in addition to providing a digital load for the converter. A silicon interposer is designed and fabricated in collaboration with IBM Research to integrate custom power inductors by chip stacking with the 45-nm CMOS integrated circuit, enabling power conversion with current density greater than 10 A/mm2.
The concepts and designs developed from this work enable significant improvements in performance-per-watt of future microprocessors in servers, desktops, and mobile devices. These new approaches to scaled voltage regulation for computing devices also promise significant impact on electricity consumption in the United States and abroad by improving the efficiency of all computational platforms. In 2006, servers and datacenters in the United States consumed an estimated 61 billion kWh or about 1.5% of the nation's total energy consumption. Federal Government servers and data centers alone accounted for about 10 billion kWh, for a total annual energy cost of about $450 million. Based upon market growth and efficiency trends, estimates place current server and datacenter power consumption at nearly 85 billion kWh in the US and at almost 280 billion kWh worldwide. Similar estimates place national desktop, mobile and portable computing at 80 billion kWh combined. While national electricity utilization for computation amounts to only 4% of current usage, it is growing at a rate of about 10% a year with volume servers representing one of the largest growth segments due to the increasing utilization of cloud-based services. The percentage of power that is consumed by the processor in a server varies but can be as much as 30% of the total power utilization, with an additional 50% associated with heat removal. The approaches considered here should allow energy efficiency gains as high as 30% in processors for all computing platforms, from high-end servers to smart phones, resulting in a direct annual energy savings of almost 15 billion kWh nationally, and 50 billion kWh globally. The work developed here is being commercialized by the start-up venture, Ferric Semiconductor, which has already secured two Phase I SBIR grants to bring these technologies to the marketplace.
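
    The efficiency argument above rests on the fact that dynamic CMOS switching power scales roughly as C·V²·f, so lowering supply voltage together with clock frequency saves power superlinearly. A back-of-the-envelope model with purely illustrative constants:

```python
# Why scaling voltage along with frequency saves energy: dynamic CMOS
# switching power goes roughly as C_eff * Vdd^2 * f, so a modest drop in
# both terms compounds. All constants below are illustrative assumptions.

def switching_power(c_eff, v_dd, freq):
    """Dynamic switching power, P = C_eff * Vdd^2 * f (watts)."""
    return c_eff * v_dd ** 2 * freq


nominal = switching_power(1e-9, 1.0, 2.0e9)   # 2.0 W at 1.0 V, 2 GHz
scaled = switching_power(1e-9, 0.8, 1.5e9)    # reduced V and f at light load
savings = 1.0 - scaled / nominal              # fractional power saved
```

    Here a 20% voltage reduction combined with a 25% frequency reduction cuts switching power by about half, which is the leverage a fast, fine-grained integrated regulator makes available.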

  6. Final Report: Phase II Nevada Water Resources Data, Modeling, and Visualization (DMV) Center

    SciTech Connect (OSTI)

    Jackman, Thomas; Minor, Timothy; Pohll, Gregory

    2013-07-22

    Water is unquestionably a critical resource throughout the United States. In the semi-arid west -- an area stressed by increase in human population and sprawl of the built environment -- water is the most important limiting resource. Crucially, science must understand factors that affect availability and distribution of water. To sustain growing consumptive demand, science needs to translate understanding into reliable and robust predictions of availability under weather conditions that could be average but might be extreme. These predictions are needed to support current and long-term planning. Similar to the role of weather forecast and climate prediction, water prediction over short and long temporal scales can contribute to resource strategy, governmental policy and municipal infrastructure decisions, which are arguably tied to the natural variability and unnatural change to climate. Change in seasonal and annual temperature, precipitation, snowmelt, and runoff affect the distribution of water over large temporal and spatial scales, which impact the risk of flooding and the groundwater recharge. Anthropogenic influences and impacts increase the complexity and urgency of the challenge. The goal of this project has been to develop a decision support framework of data acquisition, digital modeling, and 3D visualization. This integrated framework consists of tools for compiling, discovering and projecting our understanding of processes that control the availability and distribution of water. The framework is intended to support the analysis of the complex interactions between processes that affect water supply, from controlled availability to either scarcity or deluge. The developed framework enables DRI to promote excellence in water resource management, particularly within the Lake Tahoe basin. In principle, this framework could be replicated for other watersheds throughout the United States. 
Phase II of this project builds upon the research conducted during Phase I, in which the hydrologic framework was investigated and its development initiated. Phase II concentrates on practical implementation of the earlier work, with an emphasis on applications to the hydrology of the Lake Tahoe basin. The Phase I efforts have been refined and extended by creating a toolset for geographic information systems (GIS) that can be used with disparate types of geospatial and geo-referenced data. The toolset is intended to serve multiple users across a variety of applications. The web portal for internet access to hydrologic and remotely sensed product data, prototyped in Phase I, has been significantly enhanced. The portal provides high-performance access to LANDSAT-derived data using techniques developed during the course of the project. It is interactive and supports the geo-referenced display of hydrologic information derived from remotely sensed data, such as the vegetative indices used to calculate water consumption. The platform can serve both internal and external constituencies using interoperating infrastructure that spans both sides of the DRI firewall; it is intended to grow its supported data assets and to serve as a template for replication in other geographic areas. An unanticipated development during the project was the use of ArcGIS software on a new computer system, IBM PureSystems, and the parallel use of those systems for faster, more efficient image processing. Additional data, independent of the portal, were collected within the Sagehen basin and provide detailed information on the processes that control hydrologic responses within mountain watersheds. The newly collected data include elevation, evapotranspiration, energy balance, and remotely sensed snowpack data. A Lake Tahoe basin hydrologic model has been developed, in part to help predict the hydrologic impacts of climate change. 
The model couples both the surface and subsurface hydrology, with the two components having been independently calibrated. Results from the coupled simulations involving both surface water and groundwater processes
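
The vegetative indices mentioned above, derived from LANDSAT bands and used as inputs when estimating water consumption, can be illustrated with the standard NDVI formula. This is a minimal sketch of the general technique, not the project's actual processing chain, and the reflectance values below are invented for illustration.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectance: (NIR - Red) / (NIR + Red). Dense vegetation gives values
    near 1; bare soil or water gives values near or below 0."""
    return (nir - red) / (nir + red)

# Hypothetical per-pixel (NIR, Red) reflectances, not project data:
pixels = [(0.50, 0.10), (0.40, 0.30), (0.20, 0.25)]
values = [round(ndvi(nir, red), 3) for nir, red in pixels]
print(values)  # vegetated, sparse, and water-like pixels
```

In practice an index like this is only one input; estimating actual evapotranspiration additionally requires energy-balance terms such as those collected in the Sagehen basin.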

  7. Greenhouse Gas Mitigation Options in ISEEM Global Energy Model: 2010-2050 Scenario Analysis for Least-Cost Carbon Reduction in Iron and Steel Sector

    SciTech Connect (OSTI)

    Karali, Nihan; Xu, Tengfang; Sathaye, Jayant

    2013-12-01

    The goal of the modeling work carried out in this project was to quantify long-term scenarios for future emission reduction potentials in the iron and steel sector. The main focus of the project is to examine the impacts of carbon reduction options in the U.S. iron and steel sector under a set of selected scenarios. To advance the understanding of carbon emission reduction potential on the national and global scales, and to evaluate the regional impacts of potential U.S. mitigation strategies (e.g., commodity and carbon trading), we also examined carbon reduction scenarios for China's and India's iron and steel sectors. For this purpose, a new bottom-up energy modeling framework, the Industrial Sector Energy Efficiency Modeling (ISEEM) framework (Karali et al. 2012), was used to provide detailed annual projections from 2010 through 2050. We used the ISEEM framework to carry out detailed, country-by-country analyses of the U.S., Chinese, and Indian iron and steel sectors. The ISEEM model for the iron and steel sector, called ISEEM-IS, was developed to estimate and evaluate carbon emission scenarios under several alternative mitigation options, including policies (e.g., carbon caps), commodity trading, and carbon trading. The projections help us better understand emission reduction potentials and their technological and economic implications. The input database of the ISEEM-IS model consists of data and information compiled from various sources, such as the World Steel Association (WSA), the U.S. Geological Survey (USGS), China Steel Year Books, the India Bureau of Mines (IBM), the Energy Information Administration (EIA), and recent LBNL studies on bottom-up techno-economic analysis of energy efficiency measures in the iron and steel sectors of the U.S., China, and India, including long-term steel production in China. 
In the ISEEM-IS model, production technologies and manufacturing details are represented, in addition to the extensive data compiled from recent studies on bottom-up representation of efficiency measures for the sector. We also defined various mitigation scenarios, including long-term production trends, to project country-specific production, energy use, trading, carbon emissions, and mitigation costs. Such analyses can provide useful information to assist policy-makers in considering and shaping future emission mitigation strategies and policies. The technical objective is to analyze the costs of production and CO{sub 2} emission reduction in the U.S., Chinese, and Indian iron and steel sectors under different emission reduction scenarios, using ISEEM-IS as a cost optimization model. The scenarios included in this project correspond to various CO{sub 2} emission reduction targets for the iron and steel sector under different strategies, such as simple CO{sub 2} emission caps (e.g., specific reduction goals), emission reduction via commodity trading, and emission reduction via carbon trading.
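
ISEEM-IS solves a full cost-optimization problem over technologies, trade, and time. As a much simpler illustration of the least-cost idea behind an emission cap, the sketch below selects abatement options in merit order (cheapest first) until a reduction target is met. All option names, potentials, and costs here are hypothetical, not ISEEM data.

```python
def least_cost_plan(options, target_mt):
    """Greedy merit-order abatement: pick the cheapest options first until
    target_mt of CO2 reduction is covered.

    options: list of (name, potential_mt, cost_usd_per_tonne) tuples.
    Returns (plan, total_cost_usd), where plan maps name -> Mt used.
    """
    plan, total_cost, remaining = {}, 0.0, target_mt
    for name, potential, cost in sorted(options, key=lambda o: o[2]):
        if remaining <= 0:
            break
        used = min(potential, remaining)
        plan[name] = used
        total_cost += used * 1e6 * cost  # convert Mt to tonnes
        remaining -= used
    return plan, total_cost

# Hypothetical iron-and-steel abatement options (name, Mt CO2, $/t):
options = [
    ("scrap-based EAF shift", 30.0, 20.0),
    ("top-gas recovery turbine", 10.0, 5.0),
    ("coke dry quenching", 15.0, 12.0),
]
plan, cost = least_cost_plan(options, target_mt=40.0)
print(plan, cost)
```

A real model like ISEEM-IS goes well beyond this: it co-optimizes across countries and years and lets commodity or carbon trading substitute for domestic abatement when that is cheaper.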

  8. A Measurement Management Technology for Improving Energy Efficiency in Data Centers and Telecommunication Facilities

    SciTech Connect (OSTI)

    Hendrik Hamann, Levente Klein

    2012-06-28

    Data center (DC) electricity use is increasing at an annual rate of over 20% and is a concern for the Information Technology (IT) industry, governments, and society. A large fraction of the energy is consumed by compressor cooling to maintain the recommended operating conditions for IT equipment. The most common way to improve DC efficiency is to optimally provision cooling power to match the global heat dissipation in the DC. At a more granular level, however, the large range of heat densities of today's IT equipment makes it much more challenging to provision cooling power optimally at the level of individual computer room air conditioning (CRAC) units. Distributed sensing within a DC enables new strategies to improve energy efficiency, such as hot spot elimination through targeted cooling, matching power consumption at the rack level with the workload schedule, and minimizing power losses. The scope of Measurement and Management Technologies (MMT) is to develop a software tool and the underlying sensing technology to provide critical decision support and control for DC and telecommunication facility (TF) operations. A key aspect of MMT is the integration of modeling tools to understand how changes in one operational parameter affect the overall DC response. It was demonstrated that reduced-order models can generate, in less than 2 seconds of computational time, a three-dimensional thermal model of a 50 kft{sup 2} DC. This rapid modeling enables real-time visualization of DC conditions and 'what if' scenario simulations to characterize the response to 'disturbances'. One such example is thermal zone modeling, which matches cooling power to locally generated heat by identifying the DC zones cooled by a specific CRAC unit. Turning off a CRAC unit can be simulated to understand how the utilization of the other CRAC units changes and how server temperatures respond. 
Several new sensing technologies were added to the existing MMT platform: (1) air contamination (corrosion) sensors, (2) power monitoring, and (3) a wireless environmental sensing network. All three are built on cost-effective sensing solutions that increase the density of sensing points and enable high-resolution mapping of DCs. The wireless sensing solution enables Air Conditioning Unit (ACU) control, while the corrosion sensor enables air-side economization and can quantify the risk of IT equipment failure due to air contamination. Validation data from six test sites demonstrate that leveraging MMT energy efficiency solutions combined with industry best practices yields an average 20% reduction in cooling energy, without major infrastructure upgrades. As an illustration of the unique MMT capabilities, a data center infrastructure efficiency (DCIE) of 87% (industry best operation) was achieved. The technology is commercialized through IBM System and Technology Lab Services, which offers MMT as a solution to improve DC energy efficiency. Estimates indicate that deploying MMT in existing DCs can result in savings of 8 billion kWh, and projections indicate that continued adoption of MMT can result in obtainable savings of 44 billion kWh in 2035. Negotiations are under way with business partners to commercialize/license the ACU control technology and the new sensor solutions (corrosion and power sensing) so that third-party vendors and developers can leverage the energy efficiency solutions.
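
The DCIE figure quoted above is conventionally defined as IT equipment power divided by total facility power, i.e. the reciprocal of the PUE metric. A minimal sketch of the calculation, with made-up power readings rather than the report's measurements:

```python
def dcie(it_power_kw, total_facility_kw):
    """Data Center Infrastructure Efficiency: the fraction of facility
    power that reaches the IT equipment (DCIE = 1 / PUE)."""
    return it_power_kw / total_facility_kw

# Hypothetical readings: 870 kW at the IT racks, 1000 kW at the utility feed.
efficiency = dcie(870.0, 1000.0)
print(f"DCIE = {efficiency:.0%}, PUE = {1 / efficiency:.2f}")
```

At the 87% DCIE cited in the abstract, only about 15% of incoming power goes to cooling, power distribution losses, and other overhead, which is why it represents industry-best operation.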

  9. A Novel Coarsening Method for Scalable and Efficient Mesh Generation

    SciTech Connect (OSTI)

    Yoo, A; Hysom, D; Gunney, B

    2010-12-02

    In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. It reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a manner similar to conventional brick laying, and then assigning each node of the original mesh to the appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing a simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that brick coarsening allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric. Graph partitioning is an important problem with many scientific and engineering applications in areas such as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph (V) into k disjoint groups such that each group contains a roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large-scale scientific computing, especially in mesh-based computations, where it is used as a tool to minimize the volume of communication and to ensure a well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication is easily seen, for example, in the various iterative methods for solving a sparse system of linear equations. 
Here, a graph partitioning technique is applied to the matrix, which is essentially a graph in which each edge is a non-zero entry, to allocate groups of vertices to processors in such a way that much of the matrix-vector multiplication can be performed locally on each processor, thereby minimizing communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristic. These algorithms vary in their complexity, partition generation time, and partition quality, and they tend to trade off these factors. A significant challenge we currently face at Lawrence Livermore National Laboratory is how to partition very large meshes on massive distributed-memory machines like the IBM BlueGene/P, where scalability becomes a major issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes while producing high-quality partitions. This is an extremely challenging task: to scale to that level, the partitioning algorithm must be simple yet able to produce partitions that minimize inter-processor communication and balance the load across processors. Our goals in this work are two-fold: (1) to develop a new scalable graph partitioning method with good load balancing and communication reduction capability, and (2) to study the performance of the proposed method on very large parallel machines using actual data sets and compare it to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. 
To do so, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm was developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed-size blocks called bricks. These bricks are then laid in a manner similar to conventional brick laying.
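
The edge-cut objective defined above, together with a brick-style grouping of nodes, can be sketched as follows. This is an illustrative toy, not the paper's implementation; in particular, the offset-row brick assignment is an assumption about what "laid like conventional bricks" means for a 2D node grid.

```python
def edge_cut(edges, part):
    """Number of edges whose endpoints lie in different groups: the
    quantity a k-way graph partitioner tries to minimize."""
    return sum(1 for u, v in edges if part[u] != part[v])

def brick_of(x, y, bx, by):
    """Assign a 2D mesh node at integer coordinates (x, y) to a fixed-size
    bx-by-by brick. Odd brick rows are shifted by bx // 2, mimicking
    staggered brick laying (an assumption made for this sketch)."""
    shift = bx // 2 if (y // by) % 2 else 0
    return ((x + shift) // bx, y // by)

# Toy 4-cycle graph split into two halves: exactly two edges cross the cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(edge_cut(edges, part))
```

The coarsening pay-off is that a partitioner then works on the (much smaller) graph of bricks rather than on the original mesh, and every node inherits the partition of its brick.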

  10. The CO-OP Guide

    SciTech Connect (OSTI)

    Michael, J.; /Fermilab

    1991-08-16

    You are at D0, the newest and most advanced experiment at Fermilab. Its goal is to find the 'top quark', nicknamed 'truth', theoretically one of the six fundamental building blocks of matter. Combinations of the six quarks are said to make up protons and neutrons. Your group at D0 is the cryogenic division. Its goal is to provide and maintain a cryogenic system which ultimately supplies and controls the liquid argon used in the giant cryostats for the experiment. The high-purity liquid argon is needed to keep the detector modules inside the cryostats cold, so that they will operate properly. Your job at D0 is to be a co-op for the research and development group of the cryogenics division. Your goals depend on the needs of the cryo group. D0 is where you will spend most of your time. The co-op office is located on what is known as the 3rd floor, but is actually on the ground floor. The floor directly above the 3rd floor is the 5th floor, which contains your immediate superiors and the D0 secretary. The 6th and top floor is above that, and contains the D0 secretary for official and important business. On the other side of the D0 assembly building is the cryo control room. This is where the cryogenic piping system is remotely monitored and controlled. Other important sites at D0 include the trailer city in the north parking lot, which has the D0 secretary who handles all the payroll matters (among other duties), and the portakamp in the south parking lot. Besides D0, which is named for its location on the particle accelerator ring, the most important place is Wilson Hall. That is the large building shaped like a big Atari symbol. It contains various important people, such as the safety group, the personnel department (which you have already encountered, being hired), the minor stock room, the cafeteria, the Fermi library, Ramsey Auditorium, etc. 
Behind Wilson Hall is the Booster Ring, which accelerates particles before they are injected into the main ring. Inside the booster ring are the East and West Booster towers, which contain cryogenic support groups. The D0 cryo group offices used to be in the West Booster portakamps. Away from Wilson Hall, various buildings strewn about the Fermilab property serve important functions for D0. One such example is Lab A. This is where the now-unused bubble chamber resides, which was used to take pictures of particle motion. Many in our group came from the bubble chamber, and occasionally stories from the 'bubble chamber days' can be heard as someone waxes nostalgic. Lab A has a machine shop and many technicians. All three of the cryostats used in the D0 experiment went through Lab A for preparation and installation work. Lab A is located directly up the road from the front of Wilson Hall (north-east). Its unmistakable dark geodesic dome makes it easy to find. The Feynman Computer building, located east and just a little north of Wilson Hall, houses the computer repair people. If any of the computers used in our group crash and burn, we must take them to the third floor of Feynman to be fixed or exchanged. On one side is the Prep department, which handles the VAX mainframe computers, and on the other is personal computer repair, which handles Fermi Macs and IBMs. Directly north of Wilson Hall is Site 38. This site is the location of many important Fermilab facilities, such as the Fermi fire department, the carpenter's shop, the Fermi gas pumps, the main stock room, and shipping and receiving. Lastly, but perhaps most significantly, there is the Fermilab Village. In addition to the machine shops, the cut shop, welding facilities, and the garishly painted physicist dorms, there are such things as a gym, a pool, and other facilities to take the edge off a weary mind. The village is located just north off Batavia Road on the east side of Fermilab. 
The village barn is the first and most notable building as one approaches.